Bioinformatic Analysis of Roquin Family Reveals Their Potential Role in Immune System

The Roquin family is a recognized RNA-binding protein family that plays vital roles in regulating the expression of pro-inflammatory target gene mRNAs during the immune process in mammals. However, the evolutionary status of the Roquin family across metazoans remains elusive, and studies in fish species are limited. In this study, we discovered that the RC3H genes underwent a single round of gene duplication from a primitive ancestor during evolution from invertebrates to vertebrates. Furthermore, there were instances of species-specific gene loss events or teleost lineage-specific gene duplications throughout evolution. Domain/motif organization and selective pressure analysis revealed that Roquins exhibit high homology both among members of the family within the same species and across species. The three rc3h genes in zebrafish displayed similar expression patterns in early embryos and adult tissues, with rc3h1b showing the most prominent expression among them. Additionally, the promoter regions of the zebrafish rc3h genes contained numerous transcription factor binding sites similar to those of mammalian homologs. Moreover, analyses of the Roquin interaction protein network and of the potential binding motif in the 3'-UTRs of putative target genes both indicated that Roquins have the potential to degrade target mRNAs through mechanisms similar to those of their mammalian homologs. These findings shed light on the evolutionary history of Roquin among metazoans and suggest a role for Roquins in the immune system of zebrafish.

Introduction

Precisely regulating gene expression is crucial for all organisms, especially for the development and immune responses of eukaryotes. As a result, abnormal gene expression leads to developmental defects and imbalanced immune responses [1][2][3]. Gene expression is regulated through various mechanisms, such as epigenetic modifications, transcriptional regulation, and post-transcriptional regulation, which involve multiple regulators. RNA-binding proteins (RBPs) recognize specific RNAs through their RNA-binding domains, thereby regulating RNA splicing, localization, stability, and transport. They play crucial roles in various processes, including organ development, disease occurrence, and immune system homeostasis [1,[4][5][6][7][8][9][10]. Several RBP families, such as TTP, AUF1, KSRP, TIA-1/TIAR, Roquin, Regnase, HuR, and Arid5a, have been identified as important regulators of immune systems at the post-transcriptional level [5]. These proteins typically act as trans-acting factors that bind to specific cis-elements located within the untranslated regions of mRNAs. In mammals, Roquin and Regnase are two prominent RBP families known to regulate their targets, thereby controlling the immune system and preventing excessive immune responses.
The structures of the ROQUIN family: The mammalian ROQUIN family includes ROQUIN-1 and a paralog, ROQUIN-2, which are encoded by RC3H1 and RC3H2, respectively [8,11]. RC3H1 was initially discovered during a screening for autoimmune regulators in mice. It is the causative mutation in the sanroque mouse strain, which exhibits symptoms characteristic of lupus, an autoimmune disease [11]. ROQUIN proteins are highly conserved and contain multiple domains. Roquin-1 and Roquin-2 share similar domain organizations, including a RING finger domain at the N-terminus, an RNA-binding ROQ domain, and a C3H1 zinc finger domain. The C-terminal sequence features several intrinsically disordered regions, containing a PRR (Proline-Rich Region) with several PxxP motifs, a glutamine/asparagine-rich (Q/N-rich) region, and a CC (coiled-coil) domain [8,12]. The RING finger domain is a distinctive feature of E3 ligases, suggesting that ROQUIN is a potential E3 ubiquitin ligase. The ROQ domain, a characteristic feature of the Roquin family, was identified through sequence homology analysis. Subsequent research confirmed that the ROQ domain is embedded in the HEPN domain, with HEPNN and HEPNC flanking either side of the ROQ domain. Further studies determined the crystal structure of the ROQ domain of human ROQUIN-1, revealing a helical fold bearing a winged helix-turn-helix (wHTH) motif responsible for binding to stem-loop mRNAs carrying constitutive decay elements (CDEs) [13]. The C3H1 zinc finger domain is also a common feature of RNA-binding proteins and plays a role in RNA binding [7]. The PRR generally interacts with proteins containing an SH3 domain [8]. The glutamine/asparagine-rich region is responsible for the subcellular localization of Roquin [14]. The CC domain is a prevalent and structurally versatile folding motif with diverse functions depending on its specific structure [15].

Roquin is located in P-bodies and regulated by various transcription factors and cytokines: Roquin proteins are predominantly found in the cytoplasm and are enriched in P-bodies, with the ability to translocate to stress granules in response to stress. The ROQ domain plays a key role in localizing Roquin to stress granules [16]. The expression of Roquin family members is regulated by various transcription factors and cytokines. The transcription factors STAT1, STAT3, GATA2, and c-Rel can activate the RC3H promoter, whereas IKZF2 represses RC3H expression. The immunosuppressive cytokine interleukin-10 (IL-10) enhances the activity of these transcription factors, leading to increased Rc3h1 expression [17]. Studies of Il10−/− mice have shown reduced Roquin-1 levels, further indicating that IL-10 plays a role in regulating the expression of Rc3h1 [18].
The role of the ROQUIN family in immunity: Roquin-1 and Roquin-2 have important and redundant roles in both innate and adaptive immunity. They regulate common mRNA targets, such as ICOS, Ox40, IFN-γ, and TNF-α, but also have many distinct targets. They bind to the 3′-UTRs of target mRNAs, destabilizing the mRNAs and, thus, preventing immune cell over-activation. ICOS, a costimulatory receptor for follicular T helper cells, was the first identified target of Roquin-1 [11]. Roquin promotes ICOS degradation. In sanroque mice, ICOS is overexpressed, leading to aberrant Tfh cell accumulation. Roquin inhibits TH17 cell differentiation by repressing target mRNAs encoding the TH17 cell-promoting factors IL-6, ICOS, c-Rel, and IRF4. Upon activation of T cell receptor and co-stimulatory signaling, MALT1 cleaves and inactivates Roquin-1 and Roquin-2, thereby enhancing TH17 cell differentiation, humoral autoimmunity, and organismal defense [19][20][21]. The first identified sequence recognized by Roquin-1 is a conserved class of stem-loop RNA degradation motifs located in the 3′-UTR, known as the CDE (constitutive decay element). This motif contains an AU-rich consensus sequence, 5′-NNNNNUUCYRYGAANNNNN-3′. Roquin-1 directly binds to this element and represses the expression of target mRNAs involved in innate and adaptive immunity. In addition to the CDE, an alternative decay element (ADE), a U-rich hexaloop motif, has been identified using the SELEX assay. This ADE and the previously identified CDE cooperate in the repression of Ox40 by Roquin [22]. Indeed, the mechanism of Roquin regulation is quite complex. Roquin targets mRNAs in their 3′-UTRs through diverse modes of regulation, interacting with stem-loop (SL) structures as well as linear sequence elements [23]. Most targets are degraded by mRNA decay; however, a small subset also experiences translational inhibition [23]. Recognition of the target by Roquin is mediated by the ROQ domain, which has two separate RNA-binding sites (A and B), the A site binding stem-loop RNA and the B site binding double-stranded RNA [24].

In addition to mammalian Roquin, there have been several studies of Roquin in other species. The C. elegans homolog of RC3H1, RLE-1 (regulation of longevity by E3), was initially identified as an E3 ubiquitin ligase of the transcription factor DAF-16, regulating aging [25]. Later, it was shown that RLE-1 post-transcriptionally regulates ETS-4, the master transcriptional regulator of diverse effectors [26]. The Drosophila melanogaster homolog of Roquin recruits the CCR4-NOT complex through a CAF40-binding motif and represses the expression of its target mRNAs [27]. It was subsequently found that Roquin negatively regulates the STING-dependent immune response in Drosophila [28]. However, the phylogenetic evolution of the Roquin family remains unclear, and its homologs remain undetermined in teleost species. In this study, we explored the phylogenetic evolution of the Roquin family in metazoans; we then examined the expression profiles and regulation of Roquins in zebrafish.
Roquin Genes in Metazoans

Taking advantage of extensive genomic data, we acquired RC3H sequences of various animal species at different evolutionary positions from the NCBI and Ensembl databases, using human RC3H1 as a query. We observed the presence of RC3H genes in all the species chosen, although the exact numbers varied between species (Table 1). Notably, both RC3H1 and RC3H2 were identified in humans, mice, and chickens. It is worth mentioning that Rc3h2 was not detected in Xenopus tropicalis; we therefore searched amphibian species and, interestingly, found that only one Rc3h1 was present in all Anura species, excluding Xenopus laevis, which is a tetraploid and contains two duplicated Rc3h1 genes. However, both Rc3h1 and Rc3h2 are present in Gymnophiona species. These data suggest that Rc3h2 was specifically lost in Anura species. In teleost species, such as the spotted gar and torafugu, only two members (rc3h1 and rc3h2) have been identified. Rainbow trout exhibit six members, consisting of four copies of rc3h1 (rc3h1aa, rc3h1ab, rc3h1ba and rc3h1bb) and two copies of rc3h2 (rc3h2a and rc3h2b). Zebrafish have three family members, rc3h1a, rc3h1b and rc3h2. Most cartilaginous fishes have both rc3h1 and rc3h2. In invertebrates, only one member, rc3h1, is represented (the homologous gene in nematodes is referred to as rle-1). We also conducted an analysis of the Roquins in zebrafish. The encoded Roquin-1a consisted of 1078 amino acids, with a molecular weight of approximately 120.12 kDa and an isoelectric point (pI) of around 7.87. Roquin-1b was similar to Roquin-1a, comprising 1111 amino acids, with a molecular weight of approximately 122.61 kDa and a pI of approximately 7.41. Roquin-2 was slightly shorter than Roquin-1a and Roquin-1b, containing 1028 amino acids, with a molecular weight of approximately 113.96 kDa and a pI of around 6.56 (Table 2).

Evolutionary Relationship of ROQUINS

To gain a deeper understanding of the evolutionary relationships of Roquin homologs in metazoans, we aligned the retrieved Roquin protein sequences using the ClustalW algorithm. Additionally, we analyzed their phylogenetic evolution by constructing two phylogenetic trees using the Neighbor-Joining (NJ) and Maximum-Likelihood (ML) methods available in MEGA10. Remarkably, the trees generated by both methods were generally consistent (Figure 1A,B). In vertebrates, Roquin-1 and Roquin-2 were found to belong to two distinct clades, indicating their evolutionary divergence. Interestingly, all the invertebrate Roquins were located at the base of the vertebrate clades. This suggests that ROQUIN-1 and ROQUIN-2 originated from a single, primitive Roquin-1 in invertebrates. Notably, in teleost species, Roquin-1 was divided into two branches, namely, Roquin-1a and Roquin-1b. These observations suggest that these teleost-specific branches originated from an extra genome duplication in teleosts. In rainbow trout, Roquin-1aa and -1ab, Roquin-1ba and -1bb, and Roquin-2a and -2b clustered together within distinct sub-branches, suggesting that these homologs are likely derived from a fourth whole-genome duplication (WGD) event in rainbow trout.
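For readers who wish to reproduce the tree-building logic outside MEGA, the sketch below shows a minimal Neighbor-Joining workflow with Biopython. The input file name and the choice of the BLOSUM62 distance model are illustrative assumptions, not the exact parameters used in this study (which relied on ClustalW alignments and MEGA with the JTT model and 1000 bootstrap replicates).

```python
# Minimal sketch of a Neighbor-Joining step using Biopython, assuming a
# pre-computed Roquin protein alignment in "roquin_aln.fasta" (hypothetical
# file name; the study itself used ClustalW alignments and MEGA).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("roquin_aln.fasta", "fasta")   # aligned Roquin sequences

# Pairwise distances from a protein substitution model (BLOSUM62 here).
calculator = DistanceCalculator("blosum62")
distance_matrix = calculator.get_distance(alignment)

# Build and display an NJ tree from the distance matrix.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)
Phylo.draw_ascii(nj_tree)
```

Bootstrap support and the Maximum-Likelihood tree would require additional steps (or dedicated tools such as MEGA or IQ-TREE); the sketch only covers the distance-based NJ topology.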
Genomic Structure and Synteny of Rc3h Genes

We analyzed and visualized the genomic structure of the Rc3h genes in humans, mice, and zebrafish. The genomic structure of the Rc3h genes was conserved among genes and across different species (Figure 2). Interestingly, the exon-intron organization of the Rc3h1 genes was generally conserved in each species, consisting of 20 exons and 19 introns. The only exception was rc3h1b, which had only 19 exons and 18 introns. The exon composition of Rc3h2 varied between the three species: there were 19 exons in zebrafish, 22 exons in mice, and 21 in humans. In addition, it is noteworthy that the length of these genes varied, with shorter rc3h genes in zebrafish (Figure 2).

We also analyzed the synteny of the Rc3h genes and found no synteny between the invertebrate rc3h1 genes and the vertebrate Rc3h genes. Additionally, we constructed a synteny map of rc3h genes in zebrafish and humans (Figure 3). The results revealed that rc3h1a in zebrafish exhibits well-conserved collinearity with RC3H1 in humans. Downstream of rc3h1a and RC3H1, two neighboring genes, serpinc1 and zbtb37, were identified. However, in zebrafish rc3h1b, the genomic region underwent significant rearrangement, and no copies of these two neighboring genes were found. Combined with the phylogenetic analysis, these findings suggest that rc3h1a and rc3h1b are orthologs of RC3H1 and originated from the third WGD event in zebrafish. For rc3h2, two neighboring genes, strbp and rabgap1, were identified in both zebrafish and humans. Although these two genes share the same orientation, their location differs between zebrafish and humans, indicating a reorganization of the neighboring genes.
Domain and Motif Organization of Roquins

To better understand the functional diversification of Roquins, we analyzed the domains and motifs of the Roquins of invertebrate amphioxi, vertebrate zebrafish, and humans using the MEME motif and SMART protein domain prediction programs combined with Megalign analysis. The results revealed that all Roquins possess a RING finger domain, a ROQ domain, and a C3H1-ZNF domain located at the N-terminus (Figure 4). The C-terminal sequence of all the Roquins exhibited multiple prominent intrinsically disordered fragments. These fragments contained a PRR with several PxxP motifs, as well as a glutamine/asparagine-rich (Q/N-rich) region, although the lengths of these regions varied between paralogs and across species. Additionally, a CC (coiled-coil) domain was present in all Roquins except Roquin-2 in zebrafish, although the length of the CC domain differed between paralogs and across species (Figure 4). These results indicate that the domains of vertebrate ROQUIN proteins are highly conserved, implying that their functions are also somewhat conserved. We identified ten motifs and labeled them motifs 1-10 based on their level of conservation, with motif 1 being the most conserved (Figure 5). All Roquin proteins contained these ten motifs and exhibited similar motif organization. The N-terminal to C-terminal motif organization was as follows: motifs 1, 10, 7, 3, 5, 2, 4, 8, 6, and 9, with respective amino acid lengths of 50, 30, 27, 50, 29, 50, 50, 29, 25, and 42. The nine motifs located at the N-terminus corresponded to the RING finger domain, the ROQ domain, and the C3H1-Znf domain. Notably, these nine motifs were clustered together and relatively conserved, while the last motif at the C-terminus was dispersed and not conserved in its position (Figure 5). The first motif, located at the N-terminus of the sequence, exhibited the highest degree of conservation and spanned 50 amino acids, resembling the RING finger domain found in Roquin-1. Motifs 7, 3, 5, 2, 4, and 8 collectively formed the ROQ domain, while motif 6 contained the C3H1-Znf domain. The conserved domain/motif organization of these Roquins suggests that they have similar functions.

Selective Pressure of rc3h Genes

To investigate whether the duplicated Roquin genes underwent selection pressure, we calculated nonsynonymous (Ka) and synonymous (Ks) substitution rates and Ka/Ks ratios for the rc3h gene pairs in vertebrate zebrafish and invertebrate amphioxi. The Ka/Ks ratio is a measure of selective pressure on protein-coding genes. The results demonstrated that the Ka/Ks ratios of all the rc3h gene pairs ranged from 0.1089 to 0.2738. Interestingly, the rc3h gene pairs within zebrafish exhibited slightly lower ratios compared with those between different species. Furthermore, all the Ka/Ks ratios were much lower than 0.5, indicating that all gene pairs had experienced purifying selection and lower evolutionary pressure (Table 3). In summary, the above analysis suggests that Roquin-1 and Roquin-2 are evolutionarily conserved and that their functions may be redundant to some extent.
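To illustrate how such ratios are interpreted, the sketch below classifies gene pairs by Ka/Ks once Ka and Ks have already been computed (for example, with KaKs_Calculator or codeml). The gene-pair names and numbers are placeholders for demonstration, not the values reported in Table 3.

```python
# Illustrative sketch of Ka/Ks interpretation, assuming Ka and Ks were
# computed elsewhere (e.g., KaKs_Calculator or PAML's codeml). The pairs and
# values below are placeholders, not the published Table 3 values.
def classify_selection(ka: float, ks: float) -> str:
    """Classify selective pressure from a Ka/Ks ratio."""
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if ratio < 1:
        return f"purifying selection (Ka/Ks = {ratio:.4f})"
    if ratio > 1:
        return f"positive selection (Ka/Ks = {ratio:.4f})"
    return "neutral evolution (Ka/Ks = 1)"

example_pairs = {
    "rc3h1a vs rc3h1b (placeholder values)": (0.05, 0.40),
    "rc3h1a vs rc3h2 (placeholder values)": (0.08, 0.35),
}
for pair, (ka, ks) in example_pairs.items():
    print(pair, "->", classify_selection(ka, ks))
```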
Expression Profile of rc3h Genes in Zebrafish

We analyzed the expression profile of rc3h genes during early development and in adult tissues of zebrafish. All three genes exhibited similar expression patterns in early embryos, with rc3h1b showing the strongest expression level among the three. Rc3h1a showed modest expression both maternally and zygotically throughout all zebrafish developmental stages. Rc3h1b exhibited robust expression at all the tested stages, with the highest expression level observed from 2.25 h post-fertilization (hpf) to 8 hpf. On the other hand, rc3h2 showed only weak expression during embryonic development (Figure 6A). Collectively, these data suggest that rc3h genes are involved in the early development of zebrafish, with a key role attributable to rc3h1b.

The expression profile of rc3h genes in adult tissues revealed similar expression patterns with slight differences. Rc3h1a exhibited modest expression in all tissues, with the highest expression in the testes, followed by the skin, brain, and kidneys. Rc3h1b showed the strongest expression among the three genes, with the highest levels observed in the testes, followed by the brain, kidneys, and intestines. Rc3h2 displayed weak expression compared with rc3h1a and rc3h1b, with the highest expression in the brain, followed by the intestines, skin, and kidneys (Figure 6B). These data indicate that the function of Roquin is redundant, with slight divergence.
Potential Transcription Factors in the Promoters of rc3h Genes

The transcription factors (TFs) Stat1, Stat3, Gata2, and c-Rel can upregulate Roquin expression, while Ikzf2 can downregulate it. Therefore, we predicted potential TF-binding sites within the 2 kb promoter regions of the rc3h genes. The analysis revealed abundant TF-binding sites in the promoters of rc3h genes in both zebrafish and amphioxi (Figure 7). Interestingly, some of these TF-binding sites overlapped. In detail, in the zebrafish rc3h1a gene, five Stat1β-binding sites, seven Gata2-binding sites, and twenty-three Ikzf2-binding sites were identified. In zebrafish rc3h1b, binding sites for all five TFs were found, including one Stat3, eleven c-Rel, two Stat1β, one Ikzf2, and twelve Gata2 sites. Zebrafish rc3h2 contained TF-binding sites including three Gata2, seven Ikzf2, and nineteen Stat1β sites. In amphioxus rc3h1, there were fifteen Ikzf2-, nineteen Gata2-, one c-Rel-, and three Stat1β-binding sites. These findings highlight the abundance and potential functional importance of these TF-binding sites in regulating rc3h genes. Despite variations in the location and number of each TF-binding site across different genes, it can be inferred that there is a conserved pattern of TF regulation of rc3h genes between paralogs and across species.

Interaction Protein Network and Potential Binding Motif of Roquins in Zebrafish

According to the STRING protein-protein interaction database, zebrafish Roquin-1a can interact with Cnot1 and Cnot9, both of which are members of the CCR4-NOT complex. Roquin-1a can also interact with Roquin-2 and shows co-expression with Zc3h12a and Roquin-2 (Figure 8A). Zebrafish Roquin-1b, on the other hand, interacts with Cnot1, Cnot9, Cnot11 and Cnot3a, all of which are components of the CCR4-NOT complex. Additionally, Roquin-1b also exhibits co-expression and interaction with Roquin-2 (Figure 8B). Zebrafish Roquin-2 can interact with Cnot1, a member of the CCR4-NOT complex. Interestingly, Roquin-2 can bind Helz and shows co-expression with Arid5a (Figure 8C), both of which are involved in post-transcriptional immune homeostasis. These data suggest that all Roquins have the potential to promote target mRNA decay by recruiting the CCR4-NOT complex.
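The interaction data above were retrieved from the STRING database. The sketch below shows one way such a query could be scripted against STRING's public REST API; the endpoint and parameter names follow STRING's documented API as I understand it, and the identifier "rc3h1a" and taxonomy ID 7955 (Danio rerio) are assumptions that may need to be resolved against the live database before use.

```python
# Hedged sketch: querying the STRING REST API for interaction partners of a
# zebrafish Roquin gene. Endpoint and parameters are based on STRING's public
# API documentation; the identifier and limit are illustrative assumptions.
import requests

url = "https://string-db.org/api/tsv/interaction_partners"
params = {
    "identifiers": "rc3h1a",  # query protein (assumed STRING-resolvable name)
    "species": 7955,          # NCBI taxonomy ID for Danio rerio
    "limit": 10,              # top partners by combined score
}
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# The TSV response lists partner names and interaction scores, one row per partner.
for line in response.text.splitlines()[1:]:
    print(line.split("\t"))
```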
It has been confirmed that Roquin degrades target mRNAs by binding to the constitutive decay element (CDE) in the 3'-UTR of the mRNA, thereby limiting the accumulation of harmful inflammatory factors. Through NCBI alignment, we identified a putative 13-nucleotide CDE-like motif in zebrafish, which is shorter than the 17 nt CDE found in mammals (Figure 9A,B). We found this CDE-like motif in at least 12 genes in zebrafish, most of which are closely related to immune and inflammatory processes, such as tnfα, smarca2, prkca, and stk10. These findings further indicate that the mechanism of mRNA degradation mediated by ROQUIN is highly conserved across vertebrates.
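The published screen used NCBI nucleotide BLAST with the mammalian TNFα CDE (UGUUUUCUGUGAAAACA, see Methods) as the reference. As a simpler illustration of the same idea, the sketch below scans a 3'-UTR sequence for near-matches to that reference motif; the mismatch tolerance and the toy UTR sequence are assumptions for demonstration only.

```python
# Illustrative sketch: scanning a 3'-UTR for near-matches to the mammalian
# TNFα CDE reference motif used in this study's BLAST search. The mismatch
# tolerance and the toy sequence are illustrative, not part of the published
# pipeline.
CDE_REF = "UGUUUUCUGUGAAAACA"

def count_mismatches(window: str, motif: str) -> int:
    return sum(1 for a, b in zip(window, motif) if a != b)

def scan_utr(utr: str, motif: str = CDE_REF, max_mismatches: int = 3):
    """Yield (position, window, mismatches) for near-matches to the motif."""
    utr = utr.upper().replace("T", "U")  # accept DNA or RNA input
    for i in range(len(utr) - len(motif) + 1):
        window = utr[i:i + len(motif)]
        mm = count_mismatches(window, motif)
        if mm <= max_mismatches:
            yield i, window, mm

# Toy example sequence (not a real zebrafish UTR):
example_utr = "AAUGCUGUUUUCUGUGAAAACAGGCUUA"
for pos, window, mm in scan_utr(example_utr):
    print(f"hit at {pos}: {window} ({mm} mismatches)")
```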
Discussion

The Regnase and Roquin RNA-binding proteins collaboratively control the degradation of mRNAs and maintain homeostasis of the immune system. Previously, we elucidated the evolution of the Regnase-encoding Zc3h12 gene family and uncovered the immunomodulatory role of Regnases in zebrafish [29]. However, information regarding the evolution of Roquins in zebrafish is lacking. In this study, we aimed to explore the evolutionary relationships of the Roquin family in metazoans and examined its expression profiles, regulation, and potential role in zebrafish.

Genome duplication increases gene numbers and drives the expansion of gene families, which is important for genome evolution and genetic robustness [30][31][32][33][34]. We observed that invertebrates and jawless vertebrates possess one Rc3h gene, whereas mammals, owing to WGD, have two Rc3h genes (Rc3h1 and Rc3h2). Interestingly, in Anura amphibians, the ortholog of Rc3h2 has been lost. In teleosts, which experienced third and fourth WGD events, the number of rc3h genes varies from two to six. These findings suggest that during evolution from invertebrates through the jawless vertebrate lamprey to jawed vertebrates, only one round of rc3h gene duplication occurred. In teleosts, however, additional gene duplication events took place. Moreover, there were species-specific gene loss events throughout this evolution. Across invertebrates, jawless vertebrates, and jawed vertebrates, there are generally one, two, and four Zc3h12 genes, respectively. Notably, in bony fish, which experienced three or four rounds of WGD events, the number of zc3h12 genes varies greatly [29]. Zc3h12 underwent two rounds of gene duplication events, along with exceptional species-specific gene duplication or loss events during teleost evolution [29]. Both the Roquin and Regnase families participate in the post-transcriptional regulation of mRNAs and are involved in immune responses. The rc3h and zc3h12 gene families in metazoans were generated through WGD and lineage-specific gene duplication/loss events. However, the rc3h gene family experienced one fewer round of duplication compared with zc3h12 during evolution. As a result, the rc3h gene family is much simpler across species compared with zc3h12.

Roquins share significant homology and generally exhibit a high level of conservation, both among family members within the same species and across species, although there are slight variations in domain and motif composition. Our analysis of Ka/Ks ratios for the rc3h gene pairs of the vertebrate zebrafish and the invertebrate amphioxus indicates that purifying selection and lower evolutionary pressure have shaped their evolution. The putative TF-binding sites in the promoters of invertebrate amphioxus rc3h1 and teleost zebrafish rc3h genes suggest that rc3h genes are regulated by similar TFs, which is consistent with previous studies in mice [17]. These bioinformatic data suggest that Roquins perform similar functions across species and exhibit redundant roles among different members within the same species. Roquins in humans, as well as in invertebrate flies and nematodes, exhibit similar subcellular localization: they localize mainly to P-bodies in the cytoplasm, concentrate in stress granules in response to stress, function as RNA-binding proteins, and trigger mRNA decay [16]. In mice, both Roquin-1 and Roquin-2 can bind nucleic acids and demonstrate functional redundancy [35,36]. Roquins and Regnases can cooperate or act independently to regulate mRNA silencing post-transcriptionally [19,37]. Nematode Rle-1 (Roquin-1) has conserved functions, and both Rle-1 and REGE-1 work collaboratively yet independently to regulate ets-4 mRNA silencing [26]. Drosophila Roquin regulates innate immune responses by inhibiting STING signaling [28]. Rc3h1b expression is the most prominent among the three paralogs in zebrafish, suggesting that Roquin-1b is the primary player in zebrafish.
Our analysis of domain and motif organization implies that the C-terminal regions of Roquins are variable compared with the highly conserved N-terminal region, which is consistent with a previous report [38]. In humans, zebrafish, and amphioxi, only one motif consisting of 42 amino acids was identified in the C-terminal region of Roquin, but its function is not yet defined. Despite the low similarity of the C-terminal region, Roquins exhibit multiple intrinsically disordered fragments there, indicating a conserved function. Human Roquins are known to promote RNA degradation by recruiting the CCR4-NOT deadenylase complex, thus preventing autoimmunity [39]. Interestingly, Drosophila Roquin can also interact with the CCR4-NOT complex and mediate the degradation of bound mRNA targets [27]. It has been demonstrated that the Roquin C-terminal region is responsible for interaction with the CCR4-NOT complex [27]. Consistently, the protein-protein interaction network of Roquins suggests that all zebrafish Roquins are capable of interacting with the CCR4-NOT complex, which mediates target mRNA decay. Notably, zebrafish Roquin-2 can bind Helz, which is an interaction partner of the CCR4-NOT complex and acts as a mediator of mRNA decay [40]. Furthermore, zebrafish Roquin-2 exhibits co-expression with Arid5a, a well-known RNA-binding protein involved in multiple immune pathways. Arid5a is known to regulate several IL-17 mRNA targets by promoting their stability and/or translation [41]. Mammalian ROQUIN degrades target-gene mRNAs by binding to the CDE motif [38]. In our study, we identified a similar CDE motif in many immune/inflammatory genes of zebrafish. Therefore, it can be speculated that the roles and mechanisms of Roquins are conserved across vertebrates.

This study investigated the evolutionary history of rc3h genes among metazoans. Additionally, we analyzed the expression profiles and regulation of rc3h genes in zebrafish. Furthermore, we hypothesized that zebrafish Roquins play a role in post-transcriptional immune responses. Our findings provide insights into the evolution of rc3h genes and the potential involvement of Roquins in the immune system of zebrafish.

Sequence Retrieval and Characterization

To obtain Roquin family genes in the selected species, we conducted searches of GenBank (http://www.ncbi.nlm.nih.gov, accessed on 18 May 2024) and Ensembl (http://www.ensembl.org, accessed on 18 May 2024) by BLAST, using Homo sapiens RC3H1 as the query sequence. Redundant transcript sequences of the same gene were removed, and the candidate RC3H genes were further verified by predicting the conserved domains of the encoded proteins using the SMART program v9.0 (http://smart.embl-heidelberg.de, accessed on 28 April 2024) combined with Megalign analysis. The domain organization of the Roquins was then drawn with IBS v1.0 software (Changsha, China; http://ibs.biocuckoo.org/, accessed on 5 May 2024).

Subsequently, we analyzed the characteristics of these proteins. Specifically, the molecular weight and isoelectric point (pI) of the Roquin proteins were calculated with the online pI/Mw tool v3.0 (https://web.expasy.org/compute_pi, accessed on 16 April 2024). Sequence similarity and divergence were analyzed using Megalign.
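As an offline alternative to the Expasy pI/Mw calculation used here, Biopython's ProtParam module computes the same two quantities from a protein sequence. The sequence in the sketch below is a short placeholder, not an actual zebrafish Roquin protein.

```python
# Offline alternative to the Expasy pI/Mw calculation, using Biopython's
# ProtParam module. The sequence below is a placeholder for illustration,
# not a real Roquin sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
analysis = ProteinAnalysis(placeholder_seq)

print(f"Length: {len(placeholder_seq)} aa")
print(f"Molecular weight: {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"Isoelectric point (pI): {analysis.isoelectric_point():.2f}")
```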
We further analyzed the protein motifs of the Roquins with the Multiple EM for Motif Elicitation (MEME) tool v5.5.5 [42] (https://meme-suite.org/meme/tools/meme, accessed on 11 May 2024): we selected the "Classic mode" option and the "DNA, RNA or Protein" option, entered the amino acid sequences to be analyzed, and set the number of expected motifs to 10.

Phylogenetic Analysis

Using human and mouse ROQUIN-1 and ROQUIN-2 protein sequences, a BLAST search for ROQUIN-1 and ROQUIN-2 protein homologs in other species was performed against GenBank and Ensembl. We then removed redundant sequences of the same proteins and checked them one by one to obtain all ROQUIN-1 and ROQUIN-2 protein sequences. We performed a multiple alignment of the Roquin proteins from all the selected species. To construct phylogenetic trees, we utilized both the Maximum-Likelihood and Neighbor-Joining methods of MEGA11. The trees were generated with 1000 bootstrap replicates, and the evolutionary distances were computed using the JTT matrix-based method. We used FigTree v1.4.4 to refine the phylogenetic trees.

Expression Analysis of rc3h Genes Using Online Data

The mRNA expression profiles of zebrafish rc3h genes during different stages of embryonic development and in various adult tissues were retrieved from online data [45,46].

Putative Transcription Factor Binding Sites

The 2 kb promoter sequences of rc3h1 and rc3h2 were retrieved from the NCBI database. The PROMO database (https://alggen.lsi.upc.es/, accessed on 3 May 2024) was utilized to identify putative TF-binding sites [47], with the maximum matrix dissimilarity rate set to less than 15. Visual representations based on the locations of the predicted transcription factor binding sites were created using Adobe Illustrator 2020.

Analysis of the Protein Interaction Network and Potential Binding Motif of Roquins in Zebrafish

The protein-protein interactions (PPIs) of Roquins were analyzed using the STRING database (https://cn.string-db.org/, accessed on 21 April 2024). Using the UGUUUUCUGUGAAAACA motif of mammalian TNFα as a reference, we performed a sequence BLAST against the zebrafish genome to identify potential CDE motifs. Nucleotide BLAST was selected, the query sequence was entered, and the Reference RNA sequences (refseq_rna) database was selected. Subsequently, we retrieved candidate mRNAs containing this motif.

Figure 2. The exon-intron structure of the RC3H1 and RC3H2 genes in different species. Rectangles represent exons, while lines signify introns; green rectangles denote untranslated regions; yellow rectangles indicate coding sequences. The scale bar at the bottom provides a reference for gene length.
Figure 3. Collinearity analysis of the RC3H1 and RC3H2 genes between humans and zebrafish. All genes are represented by arrows, with the direction of the arrow indicating the gene's orientation. Arrows of the same color indicate homologous genes between the different species. Chr, chromosome.

Figure 4. Schematic diagrams of the domain composition of the ROQUIN-1 and ROQUIN-2 proteins of various species. RING, ROQ, C3H1-Znf, PRR, Q/N-Rich, and CC domains are shown as colored boxes on a grey background representing the full length of the proteins. Each domain is depicted with a unique color for visualization. Hs, Homo sapiens; Dr, Danio rerio; Bf, Branchiostoma floridae.

Figure 5. Analysis of the protein motifs of ROQUIN-1 and ROQUIN-2 from various species. The motifs are illustrated as colored boxes. The letters within each motif stand for the abbreviations of amino acids. Larger letters signify higher conservation, indicating a greater probability of the amino acid appearing at the same position within the motif across species. Hs, Homo sapiens; Dr, Danio rerio; Bf, Branchiostoma floridae.

Figure 6. Heatmap displaying the expression levels of the zebrafish rc3h1a, rc3h1b, and rc3h2 genes across different stages and tissues. All expression levels across different stages (A) and tissues (B) are derived from RNA-seq data. The colors range from dark blue to dark red, reflecting low to high expression levels.

Figure 7. Analysis of transcription factor binding sites in the promoter regions of the rc3h1 and rc3h2 genes in zebrafish and amphioxi. The 2 kb promoter regions are shown, with arrows indicating the transcription start site. Different-colored dots represent the binding sites of different transcription factors. Note that there are sequences in the promoter of the zebrafish rc3h2 gene that contain multiple overlapping binding sites for Gata2.
Figure 9. Schematic diagram showing the 3'-UTR regions of multiple genes containing CDE-like motifs. (A) Secondary structure model of the CDE-like motif. (B) Genes containing the CDE-like motif identified in zebrafish; this motif is present in the 3'-UTR of the mRNAs. Red nucleotides represent 100% conservation, marked with * at the bottom. N refers to an undefined nucleotide.

Table 1. The numbers of RC3H genes among different species.

Table 2. Summary of the characteristics of rc3h genes in zebrafish.

Table 3. The Ka/Ks ratios of the rc3h1 and rc3h2 genes in zebrafish and amphioxi.
Tribological Behavior of Bioinspired Surfaces

Energy losses due to various tribological phenomena pose a significant challenge to sustainable development. These energy losses also contribute toward increased emissions of greenhouse gases. Various attempts have been made to reduce energy consumption through the use of surface engineering solutions. Bioinspired surfaces can provide a sustainable solution to these tribological challenges by minimizing friction and wear. The current study focuses on recent advancements in the tribological behavior of bioinspired surfaces and bioinspired materials. The miniaturization of technological devices has increased the need to understand micro- and nano-scale tribological behavior, which could significantly reduce energy wastage and material degradation. Integrating advanced research methods is crucial in developing new aspects of the structures and characteristics of biological materials. Depending upon the interaction of the species with its surroundings, the present study is divided into segments depicting the tribological behavior of biological surfaces inspired by animals and plants. The mimicking of bioinspired surfaces has resulted in significant noise, friction, and drag reduction, promoting the development of anti-wear and anti-adhesion surfaces. Along with the reduction in friction through bioinspired surfaces, a few studies providing evidence for the enhancement of frictional properties are also discussed.

Introduction

Nature exhibits outstanding evolutionary abilities [1][2][3][4]. It has developed quite diverse and complex structures. Based on its evolutionary characteristics, nature has developed optimal solutions to adapt different life forms to their local environments. Mimicking nature helps in solving many complex problems [5,6]. The foremost example of biomimicry is probably the design of flying machines by Leonardo da Vinci, inspired by birds [7]. Although there are numerous instances, one of the most exciting areas where biomimicry has made a substantial contribution is the creation of superhydrophobic surfaces [8,9]. Superhydrophobicity is necessary to obtain functional characteristics such as self-cleaning, non-wettable, and anti-icing surfaces, to lower drag in submarines and other vessels, and for the self-propulsion of liquids in micro-channels [10]. In addition to the low adhesion and friction characteristics offered by superhydrophobic materials, nature has also devised other modulation strategies, such as SLIPS (Slippery Liquid-Infused Porous Surfaces) and anti-wear surfaces, for sustaining extreme tribological challenges [11]. Superhydrophobic surface designs have been influenced by the surface structures of plants and insects, such as the lotus leaf, rose petal, and water strider's feet [12,13]. Figure 1 shows the hierarchical structures that make up such biological surfaces. These structures' ability to repel water is enhanced by their hierarchy, which is exhibited both in form and in length scale [14]. As seen in Figure 1a,b,e, the leaves of the taro (Colocasia esculenta) and the lotus, respectively, are composed of nanoscale wax structures with the shapes of platelets and tubules that are overlaid on papillae epidermal cells [1]. Similar to this, the papillae epidermal cells of the leaves of the Asteraceae plant family and the petals
The global energy dilemma of the twenty-first century is an increasingly critical issue. Numerous forms of transportation utilize a significant amount of energy. A major fraction of this energy is used to overcome friction [21]. For conventional ships and aircraft, surface friction resistance accounts for around 50% of the total resistance. Additionally, most of a pumping station's power is utilized to overcome surface friction throughout long-distance pipeline conveyance operations. Energy significantly restricts underwater robots' operational range and duration [22]. Research on bio-inspired drag reduction has been a priority for energy saving since 1970 [23,24]. Nature offers numerous sources of inspiration which can be employed effectively for sustainable future advancement. For instance, fast-swimming sharks have special micro-grooves in their skin that aid in friction reduction [25]; the surface of a lotus leaf exhibits a water-repellent effect [26]; and gecko feet have a smart-adhesion function that allows them to climb even the smoothest surfaces [27]. It is well acknowledged that friction is reduced with surface smoothness, but an investigation in 1982 revealed that shark skin has a micro-groove structure that can significantly minimize friction in some turbulent situations [28]. The rib pattern of shark skin is efficient for drag reduction [29]. Numerous advancements in biomimetic drag reduction have been made, and they can be categorized into three groups: non-smooth surfaces [30], surfaces that are highly hydrophobic [31,32], and surfaces that use water jets [23]. Table 1 depicts the various biomimetic surfaces utilized for drag reduction. Drag reduction by bionic surfaces is a prerequisite for conserving energy in air travel, entailing surface area reduction. Similar characteristics are depicted in the turtle body, contributing to drag reduction [33]. Furthermore, modifications in the surface morphology and topography of bioinspired surfaces also contribute towards a further reduction in friction, which saves energy consumption.

Table 1. Biomimetic surfaces utilized for drag reduction (entries recovered from the flattened table):
- Shark-skin denticles (3D-printed shark-skin foils): flow rate, denticle size, and hydrodynamic characteristics were investigated; drag reduction of around 35% [35].
- Heidarian et al., riblets: the impact of various riblet types was examined using computational fluid dynamics; drag reduction of around 11%.
- Barchan dunes: a non-smooth surface with barchan-dune-like contours was designed and simulated; drag reduction of around 33.63% [37].
- Wen et al., denticles: a flexible, synthetic shark-skin membrane was created and tested in water; drag reduction of around 5.9% [38].
- Han et al., denticles: a biomimetic surface created via exact duplication of shark skin was tested in a water tunnel; drag reduction of around 8.25% [39].
- Rastegari et al., riblets: direct numerical simulation (DNS) examined the general mechanism by which superhydrophobic longitudinal micro-grooves and riblets reduce turbulent drag; drag reduction of around 61% [40].
- Khan et al., dragonfly: experimental evaluation of 3D-printed models in a wind tunnel at different angles and speeds; higher angle and low speed gave suitable drag reduction.
- Shark scales: a vortex model resembling shark scales applied to a NACA 0015 airfoil revealed a reduction in drag [42].
- Yakkundi et al., rear wing spoiler: automobile models with a rear wing spoiler were tested at 70 km/h and showed a drag reduction of around 8.2%.
- Golf ball with tiny grooves instead of dimples: the drag coefficient of the micro-grooved surface was higher than that of dimpled surfaces.
- Box-fish: computational analysis of a box-fish-inspired texture showed that the bluff geometry achieved the most appropriate drag reduction [45].
- Fish-skin array: a collection of fish skin acted as a transition to the turbulent boundary layer, forming an overlapping fish-array structure; a 27% drag reduction was observed [51].
- Ibrahim et al., riblets: riblets motivated by shark-skin denticles applied to marine vessel structures; a 3.75% reduction in drag was observed [52].
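The drag-reduction percentages collected in Table 1 are typically reported as the relative decrease in drag (or drag coefficient) of the bioinspired surface with respect to a smooth reference. A minimal sketch of that arithmetic follows; the coefficient values are illustrative only and are not measurements from the cited studies.

```python
# Minimal sketch of how drag-reduction percentages of the kind quoted in
# Table 1 are typically defined: relative decrease with respect to a smooth
# reference surface. The numbers below are illustrative, not measured data.
def drag_reduction_percent(cd_smooth: float, cd_textured: float) -> float:
    """Percentage drag reduction relative to the smooth reference surface."""
    return (cd_smooth - cd_textured) / cd_smooth * 100.0

cd_smooth = 0.0080    # illustrative skin-friction coefficient, smooth plate
cd_textured = 0.0072  # illustrative value for a riblet-textured plate
print(f"Drag reduction: {drag_reduction_percent(cd_smooth, cd_textured):.1f}%")
```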
The morphological modulation of the surface is an intriguing approach that finds immense use in the field of tribology [53]. For lubricated contacts, this strategy has proven to be quite successful; for instance, a reduction in friction by over 80% was achievable for a unidirectional steel-on-steel contact with circular dimples [54]. Attempts have been made to generate bio-inspired surface morphologies and to understand their potential to reduce friction forces in both lubricated and unlubricated interfaces. Different strategies for translating biological solutions have been created. To retain the essence of the biological solution, which is the outcome of a protracted evolutionary adaptation process, great caution is required [55]. Nature offers a wide range of low-friction surfaces as an alternative [56,57]. Researchers have emphasized the skin of several reptiles, such as the sandfish skink, which exhibits anti-friction and anti-wear characteristics owing to its intense interactions with the ground during locomotion [58,59].
More importantly, the surface morphology, along with the scale size and the location of the scales on the snake's body, is of significant concern and needs to be addressed while mimicking snake-inspired surfaces [60]. The literature indicates that the individual scales that make up a snake's skin overlap each other and have frequent tooth-shaped protrusions to reduce wear and friction [60][61][62]. However, the role of such scale-like surface topographies on metal surfaces in lowering friction forces due to changes in structural stiffness, and whether this holds for lubricated interactions, has not been explored yet. Some bio-inspired approaches have been studied, and interesting results for polymer surfaces have recently been realized [53,[63][64][65]. The ventral scales of the snake Python regius and the sand skink lizard served as inspiration for developing surface morphologies promoting friction reduction [66]. Both creatures exhibit surface patterns of varying sizes as well as the usual scale-like pattern found on the skin [61]. The skin of the sandfish has been extensively studied and is renowned for its low friction and high resistance to wear against sand [30,53,67,68]. This characteristic has been used to produce surfaces with strong resistance to wear. In terms of low friction and high wear resistance, micro- and nano-scale hierarchical patterns were also considered a beneficial approach [69,70]. At the nano- and micro-scales, the surface-area-to-volume ratio increases considerably, which causes surface forces to have a strong impact on the functionality of nano- and micro-scale structures [71]. The intermolecular forces of the interacting phases define the final surface forces [72]. These intermolecular forces govern the tribological (friction and adhesion) and wetting behavior of micro- and nano-scale systems [73]. Low friction and adhesion enhance the endurance and effectiveness of many micro- and nano-electromechanical systems (MEMS/NEMS). It is typically advisable to employ low-surface-energy materials and texturing to reduce adhesion and friction between interacting surfaces. Numerous textural geometrical shapes inspired by nature have been used to significantly improve the tribological behavior of MEMS/NEMS [74,75]. Hierarchical patterns have demonstrated superior performance compared to their purely micro- and nano-scale counterparts [73,76,77]. Understanding the function of the micro- and nano-scale aspects of hierarchical patterns in tribological and wetting behaviors is necessary to achieve superior performance [73]. The surface chemistry and the geometric parameters of the micro- and nano-features, such as pitch (the distance between pillars), height, and diameter, are the factors defining the performance of different surface textures/patterns [78]. The tribological and wetting behavior of these features is also influenced by their shape [79]. In contrast to surface chemistry, the link between geometric factors and friction and adhesion is highly complex [80]. The mechanical reliability of the pattern geometry is partly responsible for this intricate relationship. High stresses can cause deformation, which can lead to erratic behavior of the patterns. The level of deformation for a particular pattern depends on the material and geometrical characteristics [81]. Many studies on hairy attachment mechanisms have led to a new field of study addressing the gecko adhesion effect [82].
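Since pitch and feature diameter are cited above as governing parameters, a small sketch relating them to the solid area fraction of a square array of cylindrical pillars may help fix ideas; this is standard geometry with illustrative numbers, not data from any of the cited studies.

```python
# Relating two geometric parameters mentioned above (pitch and pillar
# diameter) to the solid area fraction of a square array of cylindrical
# pillars. Standard geometry with illustrative values only.
import math

def pillar_area_fraction(diameter_um: float, pitch_um: float) -> float:
    """Fraction of the projected surface occupied by pillar tops."""
    if pitch_um <= 0 or diameter_um > pitch_um:
        raise ValueError("pitch must be positive and at least the diameter")
    return math.pi * diameter_um**2 / (4.0 * pitch_um**2)

# Example: 10 µm pillars on a 25 µm pitch.
phi = pillar_area_fraction(diameter_um=10.0, pitch_um=25.0)
print(f"Solid area fraction: {phi:.3f}")  # ~0.126
```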
Contrarily, smooth attachment mechanisms have received much less attention, which calls for more research given their equally remarkable characteristics, such as excellent resistance to slippage. Focusing on the smooth contact pads that amphibians, insects, and mammals have developed to improve the ability of their feet to cling to objects can lead to exciting applications [83]. Research studies have shown that such surfaces have different surface micropatterns that act in the presence of fluid secretion, such as an oil-in-water emulsion in the case of insects [84]. Additionally, some of the animals with lubricated, smooth-patterned pads jump, which involves a lot of friction when pushing off and landing [85]. The contact pads of these creatures have one of the most stunning surface textures ever seen. It is based on a hexagonal pattern found in bush crickets, tree and torrent frogs, and mushroom-tongued salamanders, as shown in Figure 2a-d. The hexagonal surface pattern was recognized as a friction-oriented characteristic capable of decreasing stick-slip and hydroplaning while enabling friction adjustment [86]. Besides these hexagonal structures, bioinspired biomaterials have also paved the way for developing low-friction surfaces [87,88]. Dopamine, for example, belongs to a category of green, oil-soluble additives that provide low friction and has been discussed in detail in the literature. Modern bio-inspired functional materials can be designed for solid particle erosion resistance. For instance, the body coverings of the scorpion and the tamarisk, which have a unique surface structure that is present everywhere, can withstand sand erosion very well [90]. By altering solid particle erosion parameters, such structures can increase the resistance of naturally created surfaces against solid particle erosion [91]. The cuticle of a lobster and the nacre of a shell have unique interior structures. Because the unique interior structure can increase the fracture toughness of the natural material, nacre is employed for safeguarding internal soft tissue, as is the cuticle [92]. The two-layer structure, which includes a hard layer and a soft layer, has a buffering effect, which helps the skin of desert lizards and sandfish endure wind-blown sand quite effectively [78,93]. The internal vascular system of skin or bone can deliver healing agents to injury sites for self-healing, repairing the damage and mitigating further damage [94]. Therefore, precise and accurate solutions to solid particle erosion could be developed by mimicking natural materials with these unique architectures. Furthermore, surface texturing has also been considered a beneficial means of strengthening the surface properties of bioinspired surfaces [54,[95][96][97][98]. Surface textures such as dimples, grooves, or convex features, produced on the surfaces of friction units using mechanical or chemical processing technologies, have gained popularity as a means to enhance the tribological performance of machinery [99]. Surface texturing has been used extensively in engineering since the idea of fabricating microstructures on mechanical friction pairs as textures was first proposed in the 1960s. Examples include minimizing frictional resistance and side leakage by arranging textures on mechanical seals, and reducing abrasion and energy consumption by manufacturing micro-grooves on automobile piston rings [100].
Recent decades have seen an increase in the development of surface texturing techniques to enhance material performance and features, which can be attributed to the rising demand for materials in various applications. Owing to its capacity to regulate exterior qualities in specific applications, such as self-cleaning surfaces in medicine and anti-biofouling, surface texturing has emerged as a crucial field in materials science [94,101]. By carefully evaluating the effect of texturing on materials under various tribological conditions, including cavitation wear, adhesive wear, and lubricated wear, numerous studies have demonstrated improved tribological performance. Significant studies have explored surface texturing to address the need for better tribological characteristics such as wear and friction, and surface texturing is frequently used to improve the mechanical characteristics of components [102]. It has also been found that surface texturing can enhance not just tribological characteristics but also light absorption in solar cells, the performance of biological implants, and the ability to create super-hydrophobic coatings, and laser surface texturing can dramatically increase the wettability of materials [103].
Surface texturing can also lead to super-hydrophobic coatings and improved lubricating coatings. It involves creating micro-grooves, micro-dimples, and micro-channels, among other surface modifications [104], and surface textures can be generated during the manufacture of the material itself. Inverted pyramids, micro-dimples, micro-grooves, nano-dots, micro-pits, and other surface texturing structures have all been created and studied [103]. For tribological applications, textured surfaces of various sizes and shapes have been investigated, as has the influence of different texturing characteristics, such as spacing, dimensions, geometries, distance, width, area fraction, and the depth of the micro/nanostructures, on tribological properties [103,[105][106][107]]. Microscopic features on textured surfaces hold great potential for improving tribological properties by reducing friction. In addition to reducing friction, surface texturing can also be used to purposefully increase friction in applications that depend on friction to function properly. Recently, laser texturing has been used to enhance the tribological properties of surfaces, drawing more attention to the lubrication regime [108,109]. Under dry conditions, surface texturing reduces the contact area and stores the wear debris [110]. One study showed that multi-scale LST (laser surface texturing) imparted self-cleaning and water-repellent behavior to the surface, as shown in Figure 3a [111]. In addition, a rib-shaped structure inspired by shark skin and formed via the LST approach reduced skin-friction drag and wall shear stress on the solid surface under turbulent conditions, as shown in Figure 3b [111]. Work on textured surfaces has mainly focused on reducing friction, but a few studies have also shown that increasing the friction of a textured surface can be beneficial while maintaining a low wear rate. Xiang et al. [112] analyzed an Al2O3/TiC composite textured with linear and zig-zag-like grooves formed by laser surface texturing, with variable periodicity but the same width and depth. Regardless of groove periodicity, sliding speed, and geometry, texturing increased the coefficient of friction while keeping the wear rate low. With respect to friction, zig-zag texturing with low groove periodicity increased friction, attributed to rough ceramic particles micro-cutting the groove edges [113], while wear debris entrapped in the grooves reduced the wear rate. Similar behavior was observed with a few other materials; for example, maskless electrochemical texturing of a steel working surface produced higher friction under boundary lubrication together with a 39% reduction in wear rate (through entrapment of wear debris) [114][115][116]. Surface textures involving micro-holes, grooves, and dimples were effectively produced by LIPSS (laser-induced periodic surface structures), as shown in Figure 3e,f [117]. Experimental results from various studies on LIPSS-based surface texturing are summarized in Table 2. Beyond the LIPSS approach, Wang et al. [118] formed micro-grooves of variable periodicity in steel by femtosecond laser processing while keeping groove depth and width constant throughout.
The outcomes revealed that the increase in COF at small periodicity was attributed to the reduced capacity to store wear debris. Dunn et al. [119] produced a high-friction surface (COF > 0.6) by varying the pulse energy and pulse overlap during surface texturing of steel, as depicted in Figure 3d, and obtained the highest enhancement in friction, by a factor of four, at a pulse energy of 0.8 mJ and a pulse overlap of 50-95%. In more detail, the enhancement in friction on the textured surface correlated with an improvement in surface hardness. Schille et al. [120] formed hemispherical textures and deep welding dots using q-switched nanosecond and continuous-wave (CW) laser processing on a 42CrMo4 steel surface, as depicted in Figure 3c,g. Hemispherical texturing increased the friction factor by a factor of 1.8, whereas for the welding dots (diameter: 330 µm, height: 70 µm) the factor was only 0.8. Hence, surface texturing is also a viable means of increasing friction. The current review identified different bio-inspired surfaces playing a key role in tribological interactions, both solid-solid and solid-liquid, resulting in exquisite behavior, i.e., anti-wear, drag reduction, self-cleaning, super-hydrophobicity, and reduction in friction. The review aims to include studies investigating the tribological behavior of various species in nature. The primary focus has been to identify the strategies used by these species to address tribological issues, which can lead to characteristics such as drag reduction, low friction, anti-wear, and anti-adhesion surfaces, all of which are covered in this review paper. The role of bioinspired surface morphologies in tribological applications is also discussed in detail. Surface texturing is paving the way toward creating low-friction surfaces, which is also covered in the current work. Various surface textures have been identified and listed in the paper, entailing both reduction and improvement in friction.
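As a quick numerical reading of the figures quoted above (a hedged illustration, not additional data from [119]): if texturing raises friction by a factor of four and the final COF exceeds 0.6, the untextured baseline must have been roughly 0.15. The sketch below also recalls how a COF is obtained from measured friction and normal forces; the 3 N / 20 N values are placeholders.

```python
# Hedged illustration of the friction figures discussed above; the baseline
# COF is inferred from the quoted factor, not reported in the cited work.
def coefficient_of_friction(friction_force_n, normal_force_n):
    """COF = tangential (friction) force divided by the applied normal force."""
    return friction_force_n / normal_force_n

textured_cof = 0.6          # "COF > 0.6" reported for the textured steel surface
enhancement_factor = 4.0    # highest enhancement reported by Dunn et al. [119]
baseline_cof = textured_cof / enhancement_factor
print(f"Implied untextured baseline COF of roughly {baseline_cof:.2f}")

# Placeholder example: a 3 N friction force measured under a 20 N normal load
print(f"Measured COF = {coefficient_of_friction(3.0, 20.0):.2f}")
```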
The different studies have been discussed under sub-headings based on examples from plants (mushroom-like structures, super-slippery surfaces, and tree-like bifurcation network textures) and animals (snake scales, sandfish, shark skin, scaly textures, an oil-soluble additive, laminated structures, frogs, and the cheetah), owing to their different interactions with their surroundings. Animals undergo significant motion, and as a result friction and wear become a prime concern. This review paves the way for developments in the field of biomimetics that enhance the tribological properties of materials, and it mainly concentrates on providing ways to reduce friction.

Biomimetic Surfaces Inspired by Animals

The capability of animals to survive in extremely harsh conditions is enabled by the structural surface of their bodies [125]. Mimicking surface textures inspired by different animals offers noise and drag reduction, supports the development of anti-wear and anti-adhesion surfaces, and enriches the surface's water-capturing ability. Wear (catastrophic failure) and tear of the body surface caused by sandy wind makes life challenging for animals surviving in the desert [126][127][128]. Beyond animal survival in the desert, wear is undesirable for many industrial applications because it reduces the lifespan of components and hinders their recyclability [129]. Nevertheless, the surface texture of various animals, including the ground beetle, dung beetle, earthworm, mole cricket, centipede, and ant, prevents soil from adhering to the body surface and restricts soil-induced wear [129][130][131]. In marine biological applications, whelks and seashells with corrugated shells can effectively withstand highly abrasive slurry environments [131,132]. Tian et al. [133] found that the unequal lattice geometry of three typical shells of the ark shell (Scapharca subcrenata) gives rise to excellent anti-wear characteristics. Tong et al. [134,135] identified micro-cracking and micro-shoveling as the mechanisms behind the abrasive wear of different mollusk shells. Erosion is regarded as a major problem leading to equipment failure and material damage, a phenomenon widely seen, e.g., in rocket engine nozzles, helicopter rotors, turbine blades, and other mechanical parts/components [91,136]. A few animals in nature have skin that has evolved erosion resistance, chiefly scorpions and desert lizards [137,138]. These animals can survive in solid/gas mixed media, i.e., sand, exhibiting high erosion resistance due to their biological functionality and unique surface texture/morphology [91,[136][137][138]]. Hang and Zang et al. [91,136,139] identified the anti-erosion functionality of the scorpion's back and the outcomes of multi-coupling effects. Several research studies identified surface morphology as one of the most critical factors in resisting erosion; the scorpion, for instance, resists erosion without damage thanks to its surface morphology [136,[140][141][142]]. Some studies found that the scorpion's back carries a special arrangement of grooves (an evolutionary adaptation to its living environment) that can alter the boundary-layer flow over the surface and thereby help resist erosion [143,144].
Beyond erosion resistance, a few animals possess surface roughness and hierarchical morphology that impart superhydrophobicity (static contact angle > 150°) [143][144][145][146][147]. The water strider is one of them, being able to walk and stand on the water surface without getting wet [147]. The research group of Jiang and Gao [148] analyzed the structural morphology of the strider (especially its legs), which are covered with a waxy cuticle and hairs bearing nano-grooves, and attributed the strider's superhydrophobicity to this structure. Regarding superhydrophobicity, the surface morphology of the butterfly reveals scales on the wings with overlapping edges resembling roof tiles, which promote directional super-hydrophobicity on the butterfly's wings [149][150][151]. Figure 4a-l depicts surface textures of the ground beetle, dung beetle, pangolin, and scorpion (the dorsal surface obtained via laser scanning, together with the convex hulls and grooves embedded in the scorpion's back), respectively. The anti-wear behavior of the scorpion's surface arises from the rotation of air within the groove channel depicted in Figure 4g, which creates a low-speed reverse-flow zone; the movement of the pond skater on the water surface is also shown. By exploiting the inherited advantages of these animal species (mobility, surface topography, skin) in response to their surroundings, the resulting surface morphologies are desirable for anti-wear, anti-adhesion, and low-friction surfaces, and more such species can be identified for creating these surfaces.
For high (dry) adhesion, the gecko is an eminent example: its adhesion supports its weight and allows it to move against gravity [20]. This is possible due to the complex hierarchical morphology built into the skin of the gecko's toes and feet. The skin morphology comprises branches, setae, spatulae, and complex fibrillar lamellar structures that account for the attachment and detachment phenomena [154][155][156][157][158][159]. Further studies identified that the gecko adapts extremely well to surface roughness and acquires a large contact area between the foot and the counter-surface owing to the split ends of the setae [155,160]. Further investigation of the gecko's surface morphology identified that the high adhesion is attributable to the adaptability and compliance of the setae, as depicted in Figure 5a,b [161]. Friction between dry, hard, macroscopic materials typically decreases during sliding, and as velocity increases friction continues to fall because the interfacial contact is reduced [162]. However, gecko setae do not exhibit a decrease in friction or adhesion when transitioning from static to kinetic contact [163]. Geckos therefore owe their excellent stickiness to the millions of dry, hard setae on their toes. The requirement for low adhesion against soil has been addressed by investigating the morphology of soil-burrowing animals, which can move through soil without any soil sticking to their bodies [164]. Beyond gecko morphology, underwater animals are also suitable models for drag reduction [126]; these include the surface morphologies of sharks and carp [3,128,165]. The sector-like scales of the carp, surrounded by micro-papillae and exhibiting superoleophilicity in water and air, serve the function of drag reduction [3,166]. Shark-skin surface morphology is another example of a bioinspired structure contributing to drag reduction. The surface of shark skin is covered with small, individual tooth-like scales (dermal denticles) textured with longitudinal grooves [167][168][169]. These longitudinal grooves align parallel to the direction of water flow, and the groove-like textured surface reduces the vortices formed over the (smooth) surface, allowing effective movement through the water [170,171].
Following the principle of morphological computation, the interaction between a non-smooth substrate and a passive, anisotropic, scale-like material (shark skin) enhanced the locomotion efficiency of a robot walking on an inclined surface, resulting in low energy consumption. A significant example of a scaled surface texture is the Galapagos shark, whose surface provides a considerable reduction in drag [126,172]. Research studies have shown that some owl species, e.g., the eagle owl, can fly quietly [128]. The wings of the eagle owl carry a feather morphology associated with low-frequency flight noise and low sound intensity, making them suitable for sound absorption, as depicted in Figure 5c-h [173]. Microscopic analysis shows that the feather structure enhances the pressure fluctuations around the turbulent boundary layer, which reduces vortex noise. To generate high friction forces on a wide range of substrates, the granular media friction pad (GMFP), inspired by the biological smooth attachment pads of cockroaches and grasshoppers, uses passive jamming [174]. The flexible membrane surrounding the pad's granular medium adapts to the substrate profile on contact [175]. Under load, the granular medium passes through the jamming transition, switching from fluid-like to solid-like behavior [175]. High friction forces are produced on various substrate topographies by the jammed granular medium together with the deformation of the enclosing elastic membrane [174,175]. From these studies, gecko surface morphology is highly recommended as a reference when creating anti-adhesion surfaces, shark and carp surface morphologies assist in obtaining drag-reducing surfaces, and studying the wings of the eagle owl provides a noise-reduction advantage.
Biomimicking Surfaces Inspired by Snake Scales and Sandfish

In past decades, scientists identified life thriving in harsh desert conditions and drew inspiration from nature, notably from the sandfish [176]. Its evolved locomotion shows that the sandfish can move below the surface, diving into the sand with a swimming-like motion [177]. The sandfish can swim at speeds of 10-30 cm/s and move several centimeters laterally [178]. The scales on the sandfish's surface reduce the coefficient of friction (COF) [68,179]; the COF of the sandfish scale was observed to be lower than that of PTFE particles, smooth flat glass, polished steel, and high-density nylon surfaces [176,180]. As a result, the scaly texture of the sandfish shows hardly any wear marks when abraded against sand. With friction reduction as the priority, researchers identified that the arrangement and shape of the scales are vital to enhancing the tribological performance of a material surface [181,182]. Accordingly, a group from the Karlsruhe Institute of Technology mimicked two different surface textures (ball python and sandfish) on a steel bearing surface and investigated the tribological behavior under dry and lubricated regimes [53]. Figure 6 shows the scale textures of the sandfish and of snakes. The outcomes revealed around a 40% reduction in COF for the sandfish-inspired texture and a 22% reduction for the ball-python texture compared with the untextured surface. When mineral oil was used as a lubricant on the textured surfaces, a further reduction of three times was observed for the ball-python texture and of 1.6 times for the sandfish texture relative to the unlubricated condition [176]. From a future perspective, such surfaces could reduce COF in sensors embedded in car anti-lock braking systems, artificial hips, computer hardware, and machines running in vacuum environments. It follows that the surface morphologies of the sandfish and of snakes (ball pythons) are beneficial for research advancement in the field of tribology [53,176]. Considering the surface morphology of snake scales (hexagonal scales), this self-lubricating surface geometry is a prudent choice for creating a low-friction regime. Furthermore, an ultra-low-friction regime can be obtained with snake scales by creating micro- and nano-structures on the surface. In addition, the sandfish shows no abrasion under sand, which helps extend the ultra-low-friction regime, making sandfish morphology a suitable inspiration for developing anti-wear surfaces.
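As a quick illustration of how the reported reductions combine (a hedged sketch; the untextured baseline COF of 0.5 is a placeholder, not a value from the cited studies), the snippet below applies the quoted dry reductions and the further lubricated reduction factors to a hypothetical baseline.

```python
# Hedged illustration of combining the reported reductions; the baseline COF
# is a placeholder and not taken from the cited studies [53,176].
baseline_cof = 0.5

textures = {
    "sandfish-inspired": {"dry_reduction": 0.40, "lubricated_factor": 1.6},
    "ball-python-inspired": {"dry_reduction": 0.22, "lubricated_factor": 3.0},
}

for name, p in textures.items():
    dry_cof = baseline_cof * (1.0 - p["dry_reduction"])   # textured, dry sliding
    lubricated_cof = dry_cof / p["lubricated_factor"]      # textured, mineral oil
    print(f"{name}: dry COF about {dry_cof:.2f}, lubricated COF about {lubricated_cof:.2f}")
```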
Biomimetic Surfaces (Shark Skin) Revealing the Riblet Effect

The surface morphology of shark skin is the primary evidence of the riblet effect in bioinspired surfaces [25,46]. Riblets consist of fine structures carrying consecutive longitudinal grooves [184]. This microscopic scale-like surface texture lets water flow freely along the grooves of the shark-skin surface without swirling [46], which reduces the drag force acting on the surface. Shark-skin surface textures applied to various materials have mainly been used in automotive, aircraft, and naval applications [185,186]. A wing skin applied to Airbus aircraft exploited the riblet effect, leading to a 6% drag reduction and corresponding fuel conservation [187]. Beyond the scale texture, a new composite material (polyurethane) was also derived from the inherited surface morphology and characteristics of shark skin [188]; this polyurethane composite was used in the BMW Z4 car model (hood, body components, and roof), contributing to energy conservation [176]. Riblet effects were also utilized in swimsuits made by Speedo International Limited [178]. In underwater photography, the fabric and advanced design of the swimsuit trapped air bubbles that kept the suit dry; as a result, a reduction in water drag was observed, as depicted in Figure 7 [189]. At the 2008 Beijing Olympics, more than 60% of swimmers used Speedo swimsuits incorporating the riblet effect, and new world records were established by the swimmers [189]. Research studies indicated that the riblet structure and dermal denticles present on the shark's surface were responsible for the superior drag reduction that allows fast swimming [184,190]. Furthermore, Miyazaki et al. [47] prepared a bioinspired riblet surface reproducing the non-uniform morphology of shark denticles, which paved the way for controlling local turbulent flow; such control is suitable for fluid machinery and marine-vehicle applications. Ibrahim et al. [52] developed a bio-inspired surface based on shark denticles, enabling hydrodynamic improvements in marine vessel design through macro-scale modifications of the hull. Considering shark skin and denticle morphology, Lu et al. [191] prepared a bioinspired surface that reduces water resistance. It can be concluded that the micro-grooves on the skin allow the shark to move faster underwater, pointing the way toward improved performance for swimmers wearing bio-mimicked swimsuits that reduce drag in water. Along with shark skin, shark denticles are also crucial for developing drag reduction, and mimicking the surface morphology of shark skin on flexible surfaces can be considered a promising route toward obtaining further drag reduction.

Figure 7. Shark skin's low hydrodynamic surface drag is an inspiration for the design of high-performance swimwear with an antibacterial effect. The surface drag of water is significantly reduced by nature's distinctive microscale design (riblet effect). Arrows depict anti-microbial traits that resemble the micro-topography of a shark's skin [192]. Copyright permission from Elsevier, 2012.
Biomimetics Surfaces Inspired by Scaly Texture

Scaly structures inspired by the pangolin and the loach have been used for surface texturing to reduce friction between solid/liquid surfaces and bio-surfaces [193]. As seen in Figure 8a, loach scales are stacked up like falling dominoes, and the loach exudes mucus as a lubricant to lessen wear and friction between its bio-surface and the solid or water it contacts. Scaly surface textures reproducing this stack-up structure and its lubricating capability were fabricated on a metal surface through micro 3D metal printing, as depicted in Figure 8b. The geometric parameters of Figure 8b are provided in [193], and the SLM (selective laser melting) approach was used to fabricate the textured surfaces [194], with steel used as the processing material in the 3D metal printing [195]. For tribological testing, the specimen was fixed on a rotary platform (8 mm radius) for 20 min, and white pharmaceutical oil was used as the lubricant [193]. The coefficient of friction was analyzed on surfaces textured at different tilt angles and along different circumferential directions and compared with the bare specimen, as depicted in Figure 8c,d. Compared with standard textures featuring dimples or grooves, 3D structures with relatively deep layers allow more lubricant to be squeezed out and stored, magnifying the secondary lubrication effect to minimize wear and friction [196].
Therefore, the research studies concluded that under severe lubrication conditions of relatively low speed and high load, the effect of scaly textures on friction control was more substantial. The secondary lubrication amplification effect for textures with a relatively high tilt angle, close to 90°, is comparatively mild [197]. As the tilt angle decreases, the deformation of the cantilever-beam-like scaly structures and the associated contact stress concentration become increasingly significant, leading to increased friction. The negative impacts of relatively high (70°) or low (40°) tilt angles may be countered by textures with a medium tilt angle (45°) [198]; accordingly, textures with a medium tilt angle (45°) had a lower friction coefficient than those with high (70°) or low (40°) tilt angles. Moreover, pre-lubrication entraps lubricant in the sliding contact region, which further enhances the tribological behavior of the textured surfaces [193,198].

Figure 8. Geometrical characteristics of a bionic scaly texture and bionic scaly textures placed on a disc (b); coefficient of friction analyzed on the textured surface at different angles and different circumferential directions, compared with the bare specimen (c,d) [193]. Copyright permission from Elsevier, 2022.

Meanwhile, to reduce resistance in water, a biomimetic surface resembling fish scales was formed on the surface of FKM (binary fluorine rubber) by hot pressing at 150 °C followed by template replication using a 2800-mesh screen [199]. Self-cleaning and droplet-bouncing behavior were observed on the 2800FKM surface during rolling-angle and contact-angle measurements [199]. Compared with plain FKM, the bio-inspired 2800FKM surface showed heat-retention ability (98.89%) and a contact angle of 143.5°, along with self-cleaning behavior, illustrating its de-wetting performance. Under grease lubrication, no sign of wear was seen on the surfaces, and anti-friction and anti-wear behavior was observed on the 2800FKM surface under dry friction [109,199]. Therefore, using the fish-scale structure for surface texturing of different materials is advisable for obtaining superior tribological properties, and the bio-inspired fish scale can aid the development of anti-wear surfaces.

Bio-Inspired Green Dopamine Oil Soluble Additive

To ensure the effective and long-term operation of equipment, lubricants are used in mechanical systems to lower the friction between the friction pair and reduce wear; at the same time, lubricating additives can dramatically increase lubricant performance.
Most lubricant additives currently in use contain sulfur- and phosphorus-based compounds and other highly ecotoxic elements, are poorly biodegradable and persistent in the environment, and quickly contaminate soil and water resources [200]. Therefore, researchers are keen to develop new ways of protecting environmental resources through green lubricating additives. Efficient green lubricant additives are needed that are oil-soluble, have a good affinity for steel alloys, reduce friction, and improve anti-wear performance for steel/steel contacts. It has been demonstrated that dopamine derivatives, a new class of chemicals based on amino and cholesterol hydroxyl modification, offer high adhesion, oil solubility, and good lubricity [201]. All mammals, including dogs, have a pleasure center in the brain that is stimulated by dopamine, producing a feeling of happiness [202]. Including the N element from dopamine in the additive molecule, which can adhere to various organic and inorganic surfaces, improves boundary adsorption and tribological performance [201,203]. Tribo-chemistry is essential for enhancing the effectiveness of lubrication. Samples of DA (dopamine) were dissolved in PAO 10 (a poly-alpha-olefin base oil) at concentrations of 0.5%, 1%, 2%, 3%, and 4%, as indicated in Figure 9 [201]. DA evidently has excellent oil solubility, because it does not precipitate in PAO 10 and the oil sample remains clear [204,205]. Further investigations examined the physicochemical and tribological characteristics of the synthetic DA as a PAO 10 additive [201]. The viscosity and thermal stability of PAO 10 increase as the DA content rises. In addition, as the DA concentration rises, the adsorption efficiency of PAO 10 improves and its contact angle with the metal surface decreases, demonstrating that DA has a strong affinity for the metal substrate and lowers the surface energy of PAO 10 at the metal interface. This enhances the tribological characteristics of PAO 10 and its lubricating efficacy, as indicated in Figure 9. Figure 9 also suggests that a concentration of 3% DA in PAO 10 is the optimum value in terms of friction reduction and anti-wear performance: 3% DA in PAO 10 exhibits the best tribological performance compared with neat PAO 10, as displayed in Figure 9 [201]. Through electrostatic interaction, the hydroxyl and amide bonds in the DA molecules first create a physically adsorbed coating [206]. The active N and O components in the DA molecule simultaneously react with the surface of the friction pair, forming a protective tribo-film made of nitrate, cyanide, and iron oxide [201]. The tribological characteristics of PAO 10 are enhanced by this additional tribo-film separating the friction pairs. As a PAO 10 lubrication additive, DA is thus essential, especially as a green lubricating additive employed in obtaining low-friction surfaces.
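To illustrate how an optimum additive concentration such as the 3% value above might be selected from a concentration sweep, the sketch below picks the concentration with the lowest wear rate; the wear-rate values are placeholders for illustration, not data from the cited study [201].

```python
# Hedged sketch of selecting an optimum additive concentration from a sweep;
# the wear-rate values below are placeholders, not data from [201].
wear_rate_by_da_concentration = {   # wt.% DA in PAO 10 -> wear rate (arbitrary units)
    0.5: 1.00,
    1.0: 0.80,
    2.0: 0.55,
    3.0: 0.40,   # the review reports 3% as the optimum concentration
    4.0: 0.45,
}

optimum = min(wear_rate_by_da_concentration, key=wear_rate_by_da_concentration.get)
print(f"Lowest wear rate observed at {optimum}% DA in PAO 10")
```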
Figure 9. SEM morphology of worn samples with different concentrations of DA in PAO 10, along with the corresponding wear rates. The 3% DA in PAO 10 is the optimum value in terms of reduced friction and anti-wear performance [201]. Copyright permission from Elsevier, 2022.

Biomimetic Structures Inspired by a Laminated Structure

Layered materials have shown exquisite low-friction properties due to weak interlayer bonding [207][208][209]. Laminated structures inspired by natural biological materials such as bone, shells, and spider silk can contribute excellent tribological properties [210,211]. Graphene is one of the most suitable bio-inspired materials for obtaining laminated structures [212][213][214][215]. Nano-indentation and nano-scratch approaches were used to evaluate the tribological behavior of a bio-inspired laminated aluminum matrix composite (BAMC) reinforced with graphene [216]. Compared with pure Al, the friction resistance was improved by 28%, and adhesion and ploughing were reduced by about 32% and 16%, respectively [216]. Upon nanoindentation of the biomimetic laminated structure, heterogeneous deformation at the graphene interface intensified strain hardening and improved the hardness, wear resistance, and frictional resistance of the BAMC. Nacre, often called nature's armor, has been used as a model for creating stronger and more durable bioinspired materials [217]. In nacre, hard aragonite bricks and soft biopolymer layers are arranged by nature in a brick-and-mortar pattern [217][218][219]. Even so, it has proven difficult to replicate all of nacre's reinforcing mechanisms in synthetic materials. To recreate the structure and reinforcing effects of nacre in aluminum composites, hybrid graphene/Al2O3 platelets with surface nano-interlocks act as the hard bricks, serving as the main load bearer and providing mechanical interlocking, while aluminum laminates act as the soft mortar [217]. The bioinspired graphene/Al2O3 doubly reinforced aluminum composite outperformed even nacre, showing improvements in strength (223%), hardness (210%), stiffness (78%), and toughness (30%) compared with aluminum. Along with the mechanical properties, the tribological behavior also improved in comparison with aluminum.
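To avoid ambiguity in how the percentage figures above are read, the short calculation below interprets them as gains over the pure-aluminum baseline (so a 223% improvement corresponds to roughly 3.2 times the baseline strength); this is one reading of the quoted numbers, not additional data from [217].

```python
# Interpreting the reported improvements of the graphene/Al2O3 doubly reinforced
# aluminum composite as gains over the pure-aluminum baseline (assumed reading).
improvements = {"strength": 223, "hardness": 210, "stiffness": 78, "toughness": 30}

for prop, pct in improvements.items():
    multiplier = 1.0 + pct / 100.0
    print(f"{prop}: +{pct}%  ->  about {multiplier:.1f}x the aluminum baseline")
```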
Research studies have also shown that laminated ceramic materials are highly reliable for improving the tribological performance of material surfaces [220][221][222]. Song et al. [223] investigated the friction behavior of an Al2O3/MoS2-BaSO4 laminated material under reciprocating motion and reported a 20% to 40% reduction in friction compared with alumina. Hadad et al. [224] investigated the frictional properties of a Si3N4-TiN laminated material and found no further improvement in its friction properties; however, adding hBN to the Si3N4-TiN laminate yielded lower friction than both the Si3N4-TiN laminate and monolithic Si3N4. From these studies, it is evident that layered materials inspired by bone, spider silk, and shells improve the tribological behavior of surfaces; composite materials incorporating graphene, particularly with hybrid reinforcement, have obtained the best results. For future advancement, laminated structured materials are a good route to creating low-friction surfaces.

Biomimetics Surfaces Inspirations for Improved Traction

Strong traction between solids with rough surfaces occurs if at least one of the solids is elastically soft. Some spiders and lizards can achieve dry adhesion and move on rough vertical surfaces thanks to compliant layers on the surface of their attachment pads [225]. Flies, grasshoppers, bugs, and tree frogs have less compliant layers on their attachment pads, and adhesion to rough surfaces occurs because these animals inject a wetting liquid into the pad-substrate contact area, generating a relatively long-range attractive interaction through the formation of capillary bridges [226]. The surface layer on cheetah paws, in contrast, has more compliant layers, providing strong traction on rough surfaces [227]. These surface morphologies of attachment pads and cheetah paws, which provide strong traction, are quite beneficial for industrial applications, since varying the morphology offers a way to optimize the energy distribution between road and tire [176,178,226,227]. Tires serve various purposes, including providing high sliding resistance during braking (shortening the stopping distance) and low rolling resistance during driving (saving fuel) [228,229]. In designing a bioinspired tire surface morphology based on the cheetah, its characteristic behaviors of stalking prey slowly and then reaching high speed for a short duration were incorporated [176]. The cheetah has flat, narrow paws that give low friction in ground contact during running, keeping energy consumption low, as shown in Figure 10a. However, the paws broaden during changes of direction and while slowing down, enlarging the contact area with the ground; transmitting force over a larger surface area enhances stability. Thus, the morphological changes of cheetah paws are vital in optimizing stability on sharply curved paths, effectiveness in changing direction, and acceleration [230]. Reproducing similar characteristics, the summer tire Continental ContiPremiumContact™ was developed: its width is similar to that of conventional tires, but, analogous to cheetah paws, the tire widens during braking [178]. The widening of the tire was credited with reducing the stopping distance by around 10% to 12%. The tire profile and the selection of material used are therefore critical for saving energy [178].
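As a hedged illustration of what the quoted 10-12% shorter stopping distance implies, the sketch below uses the idealized braking relation d = v^2 / (2 µ g) at constant speed; the speed and baseline friction coefficient are placeholders, not values from the cited work.

```python
# Hedged illustration relating a 10-12% shorter stopping distance to the implied
# gain in effective tyre-road friction, using the idealized relation
# d = v^2 / (2 * mu * g); speed and baseline mu are placeholder assumptions.
g = 9.81           # gravitational acceleration, m/s^2
v = 27.8           # m/s (about 100 km/h), placeholder speed
mu_baseline = 0.8  # placeholder effective friction coefficient

d_baseline = v ** 2 / (2 * mu_baseline * g)
for reduction in (0.10, 0.12):
    d_new = d_baseline * (1.0 - reduction)
    mu_required = v ** 2 / (2 * d_new * g)
    gain = (mu_required / mu_baseline - 1.0) * 100.0
    print(f"{reduction:.0%} shorter stop -> effective friction up by about {gain:.0f}%")
```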
Further evidence of biomimetic surfaces contributing to energy savings comes from the morphology of frog species [231]. A hexagonal pattern on the tire, similar to that of the tree frog and the torrent frog, gives winter tires better performance (shorter stopping distance and optimized grip) in wet conditions, as shown in Figure 10b-e [176]. Tree frogs (which live in trees and are known for climbing) and torrent frogs (known for climbing wet surfaces near waterfalls) offer two bio-inspired surface morphologies with hexagonal patterns suited to high-performance tires [232,233]. In addition, the V-shaped tread pattern of a tire promotes quick evacuation of water from the contact surface, reducing the risk of aquaplaning [176]. From the above studies, the morphology of cheetah paws and the hexagonal pattern of frog species are efficient at saving energy; widening the tire contributed to a reduction in stopping distance, so varying the cheetah-paw-inspired morphology significantly affects energy consumption. Hexagonal patterns also act as low-friction surface structures, although their performance depends on optimizing the parameters while mimicking the bio-inspired structure, and adding micro- and nanostructures to the hexagonal patterns should yield further improvements in tribological performance. Hence, these surface morphologies have led to anti-sticking surfaces. Table 3 summarizes studies exploring the tribological behavior of various biomimetic surfaces and materials.

Table 3. Studies exploring the tribological behavior of various biomimetic surfaces and materials.
Scaly surface texture (pangolin/loach): friction between bio-surfaces and the contacted solid/water is decreased by a scaly surface [193].
Junya et al., surface modification by bio-inspired nanoparticles: to improve interfacial adhesion, polyethyleneimine (PEI), dopamine (DA), and SiO2 nanoparticles were co-deposited onto the surface of a Basalt/PTFE fabric; CaF2 and Si3N4 were added to improve the tribological performance of the fabric composites [235].
Granular media friction pad inspired by the cockroach and grasshopper: under load, the granular medium passes through the jamming transition, changing from fluid-like to solid-like behavior; the jammed medium, together with the deformation of the encasing elastic membrane, produces high friction forces on a variety of substrate topographies [174].
Yi et al., colloidal hydrogel system of aluminum hydroxide nanosheets (AHNS): the colloidal hydrogel develops excellent stiffness and elasticity (elastic modulus > 10 MPa) and works well as a lubricant and an anti-corrosive [236].
Tian et al., ark shells: the unequal lattice geometry of three typical shells of the ark shell (Scapharca subcrenata) is credited with excellent anti-wear characteristics [133].
Xiang et al., laser-textured Al2O3/TiC composite: regardless of groove periodicity, sliding speed, and geometry, texturing enhanced the coefficient of friction while maintaining a low wear rate [112].
Tong et al., mollusk shells: micro-cracking and micro-shoveling are the mechanisms of abrasive wear of different mollusk shells [134,135].
Biomimetic Surfaces Inspired by Plants

In scientific theories and technical applications that rely on reproducing the chemical properties and morphology of natural surfaces, such as self-cleaning, liquid repellency, energy harvesting, and droplet manipulation, a wide variety of biomimetic surfaces are provided by the plant kingdom [237]. One well-known example of a liquid-repelling material is the lotus effect, which refers to the waterproofing properties of the sacred lotus plant's surface [185]. Water rapidly beads up and rolls over the leaf owing to the combination of hierarchical morphology and wax-based surface chemistry, which imparts a high contact angle with low adhesion and friction [238]. Nature also offers examples of surfaces that can pin water droplets while maintaining a high contact angle (essential for preserving the droplet shape); rose petals do so through high adhesion, a behavior known as the petal effect, which relies on a semi-wetting state of their hierarchical morphology between the Cassie-Baxter and Wenzel states [239]. However, these surfaces do not exhibit a persistent de-wetting condition or a potent repellency toward liquids with low surface tension (omni-repellency) [240]. The cuticles of springtails, a common arthropod, have been found to use another promising strategy that deflects most fluorinated fluids and maintains cutaneous respiration in very moist conditions [241]. Such cuticles exhibit mushroom-like morphological geometries, ranging from singly re-entrant to triply re-entrant topologies [242]. In addition to static repellency, artificial liquid-repelling surfaces require repellency against impacting droplets, including a minimal droplet-surface contact time so that droplets rebound [243]. Droplets impinging on a super-repellent surface normally spread, retract, and bounce off with circular symmetry, so the contact time is bounded by an inertia-capillarity limit [237]. Symmetric spreading can be avoided through interfacial features on macroscopic curvatures; the resulting asymmetric droplet dynamics at impact increase impalement resistance and decrease contact time. Recently, natural leaves and wings have inspired another strategy that abandons the conventional requirement of rigidity [237]: flexible surfaces promote kinetic repellency by damping impact loads through their oscillations, extending the related research from statics to dynamics, in contrast to the asymmetry-based approach used by rigid surfaces [243]. The different bio-inspired surfaces are described in detail in the sections below.

Bio-Inspired Mushroom-like Structures

Using three-dimensional projection micro-stereolithography, a water-repellent biomimetic surface was created from singly re-entrant mushroom-like basic units, each comprising a mesoscale head and a microscale spring set [244]. The study showed that a singly re-entrant mushroom-like structure repels impinging droplets from the surface; flexible surfaces (FS) were therefore combined with low-energy particles as a chemical modification to provide kinetic repellency under impact conditions [245]. The mushroom-like flexible structure deforms downward when the head is compressed in the normal direction but returns to its original state when the load is released [246].
The mushroom-like structure showed recovery capacity even when the head was under shear loading, suggesting good mechanical robustness, with flexible support under both shear and normal compression, as shown in Figure 11 [247]. The tribological behavior of the mushroom-like flexible structure was studied for mechanical robustness at normal loads of 1, 2, or 4 N and a speed of 1 mm/s, and compared with rough surfaces (RS) [237]. Structural damage influenced the coefficient of friction as a function of load: at 4 N the coefficient of friction initially dropped to zero, which was not the case at 1 and 2 N. The structural damage comprised head fragmentation and breakage at the pillar-bottom connections. Under the 4 N condition, fragmentation was not directly visible on heads with irregular patterns, but shearing the heads with tweezers revealed some breakages at the spring-head connections. Nonetheless, the fraction of damaged units on the FS was lower than on the RS, even though both the FS and RS began to show structural damage at the same typical load of 4 N [237]. A previous study reported structural damage at 0.04 N/mm, whereas the mushroom-like flexible structure withstood loads of up to 0.44 N/mm without failure and showed high recovery after widespread normal and shear compression, indicating better mechanical robustness against tribological friction and bringing such surfaces closer to real-world applications [248]. In terms of enhancing the impalement barrier and reducing contact time, the flexibility of the underlying spring sets was demonstrated to improve the kinetic repellency against droplet infiltration, with structural tilting movements contributing an improvement of up to 80% [242]. The flexibility gradient obtained by assigning different flexibilities to individual mushroom-shaped units was demonstrated to manipulate droplets directionally, opening the door to droplet transport [237]. This is a primary example of flexible interfacial structures that can effectively lower friction and improve water repellency: flexible surfaces with low-energy particles provide kinetic repellency under impact and withstand normal loads of up to 0.44 N/mm without failure. The development of flexible structures is therefore a suitable option for improving the tribological behavior of a surface.

Figure 11. Mushroom-like flexible structures are preferable for better tribological properties [237]. Copyright permission from ACS, 2021.

Biomimetic Tree-like Bifurcation Network Texture

The tribological characteristics of a biomimetic tree-like network texture, together with its liquid-spreading flow properties, were analyzed in order to secure and extend the service life of the titanium alloy/ultrahigh-molecular-weight polyethylene artificial joint [249].
Three different surface textures (a cross-shaped network, a T-shaped network, and a Y-shaped network) were created with various branch numbers and branch angles [249][250][251]. Each type of tree-like network was prepared at texture ratios of 10%, 15%, and 20% [252]. All three types of textured surfaces show strong anti-friction characteristics, reduce the instantaneous contact angle, and achieve complete liquid spreading within a specific time frame [249,250]. The Y-shaped network texture exhibits the best liquid-spreading behavior: at a 10% texture ratio, the instantaneous contact angle of a 2L liquid is 23°, and the liquid can spread completely within 0.95 s [250,253]. At a 15% texture ratio, the friction coefficient of the T-shaped network texture drops to 0.077, a 38% reduction relative to the original surface [250]. Self-lubricating artificial joints benefit from this strategy for friction reduction [254][255][256][257]. Therefore, the Y-shaped network has been recommended to improve friction/tribological performance. Such cross-linked network textures are likely to inform future advancements in the tribological behavior of different bioinspired surfaces. More generally, surface texturing is regarded as a beneficial route to improving surface tribology, depending on the application. Furthermore, bifurcation network textures can be patterned onto a surface to reproduce the morphological advantage of the Y-shaped network texture.
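As a quick arithmetic check of the friction figures quoted above, the snippet below back-calculates the friction coefficient of the untextured reference surface implied by a 38% reduction down to 0.077; the baseline value is inferred here only for illustration and is not reported in the text.

```python
# Back-calculate the implied baseline friction coefficient (illustrative check only).
mu_textured = 0.077   # T-shaped network texture at a 15% texture ratio
reduction = 0.38      # quoted reduction relative to the original surface

mu_baseline = mu_textured / (1.0 - reduction)
print(f"Implied baseline friction coefficient: {mu_baseline:.3f}")           # about 0.124
print(f"Check: reduction = {(1.0 - mu_textured / mu_baseline) * 100:.0f}%")  # 38%
```

This kind of sanity check is useful when friction reductions are reported against different, unstated baselines.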
Plant-Based Super Slippery Surfaces Typically, a light coating of a low-surface-energy lubricant, such as a fluorinated oil, is applied to textured surfaces to create slippery surfaces, often referred to as lubricant-infused surfaces [258,259]. Fluorinated materials are mainly used to infuse lubricants that produce a low-friction condition over the surface [260]. Although lubricant-infused surfaces are smooth enough for liquids to slide with low contact angle hysteresis, liquids, including water and organic solvents, do not necessarily exhibit a high contact angle on them [261]. These slippery surfaces use micro-textures to retain the lubricant; as a result, it is the lubricant layer that repels the liquid, as opposed to surfaces that rely on re-entrant shapes [262]. Liquid-repellent surfaces thus include both omniphobic surfaces and lubricant-infused slippery surfaces [263]. Pitcher plants are a prime natural example of super-slippery surfaces and have helped in producing low-friction surfaces. The pitcher plant, in which insects slip into the pitcher and are then digested for food, served as the inspiration for the slippery surfaces depicted in Figure 12 [264]. Figure 12 shows the microscopic and macroscopic grooves on the surface, separated by ridges, which hinder the lateral spread of water while enhancing radial spreading, thereby creating a slippery surface [265,266]. A new class of liquid-repellent surfaces with self-cleaning capabilities can be created by reproducing the surface morphology of pitcher plants, with their microscopic and macroscopic grooves [267]. The pitcher plant surface carries a thin coating of lubricant, and its surface features exhibit natural durability both chemically and mechanically [268]. Such slippery surfaces are used for fouling-resistant coatings and fluid-handling equipment in harsh environments [269], and they have paved the way for improvements in surface tribological properties [270]. Another example in this category is the lotus plant, regarded as an emblem of purity. The lotus leaf has the unique characteristic of staying clean and dry when exposed to rain and dust [271]. Droplets roll over the lotus surface, picking up dirt and leaving the leaves clean and dry after rain, as depicted in Figure 13a-c. This happens because the adhesion between dust particles and water is greater than the adhesion between the dust and the leaf surface [272]. In the same regard, Barthlott et al. [273] evaluated the self-cleaning characteristics of the lotus. Further analysis of lotus-inspired surfaces at higher magnification showed dust particles accumulating only at the asperity peaks [274][275][276]. Therefore, when droplets fall onto the surface, the dust particles are washed away, demonstrating the self-cleaning behavior of the lotus leaf, as depicted in Figure 13c [176,178,275,276]. Figure 13 depicts biomimetic surfaces resembling the morphology of a lotus leaf and a computer-graphics-embedded lotus leaf, as well as self-cleaning phenomena on painted surfaces mimicking the lotus effect. In the case of a superhydrophobic surface, the extreme water-repellent state is formed owing to the effective entrapment of air, contributing to lower friction and adhesion. Lu et al. [277] mimicked the surface morphologies of silver ragwort and lotus leaves to develop superhydrophobic fibrous mats.
The resulting mats showed stable superhydrophobicity, with a contact angle of 160° for the lotus-leaf-inspired morphology and 147° for the silver-ragwort-inspired morphology. Beyond lotus and silver ragwort leaves, pitcher-plant-inspired surfaces providing low adhesion are particularly effective in marine applications [278][279][280]. Wang et al. [281] developed a SLIPS aluminum surface that mimics the pitcher plant and has anti-biofouling properties suitable for marine applications. Furthermore, studies reveal that higher regularity and shorter feature lengths provide suitably low adhesion at the surface, together with higher SLIPS stability [269,281,282]. Various biological surfaces have also inspired the design of robust air-retaining surfaces [283]. The lotus leaf, the pitcher plant, and Salvinia are air-infused liquid-repellent surfaces [283]; all of them use rough surfaces to trap air pockets. In air, the layered roughness combined with the surface's hydrophobic wax provides the lotus leaf with a stable and durable air-filled repellent layer [284,285]. The air layer can also be preserved or replenished during the transition from air to water [283]. For example, Salvinia uses hydrophilic patches on a superhydrophobic whisk-like substrate to strongly anchor the air-water interface, which helps establish low-friction, low-adhesion surfaces, as depicted in Figure 13d. Figure 13e shows another air-infused liquid-repellent surface discussed above. The surface morphologies of the pitcher plant and Salvinia are ideal for developing slippery surfaces, while lotus, taro, and rice leaves create anti-wetting surfaces. Table 4 summarizes studies of various bioinspired surfaces drawing upon plants. Figure 12. (a) Bioinspired pitcher plant surface paving the way for slippery surfaces, highlighting the micro- and macroscopic channels over the surfaces, (b) macroscopic channels allowing the flow of water into the pitcher plant, providing a slippery pathway, and (c) microscopic channels stabilizing water films and trapping insects [286]. Copyright permission from Elsevier, 2021.
Figure 13. Biomimetic surfaces resembling the morphology of (a,b) a normal lotus leaf and a computer-graphics-embedded lotus leaf, (c) self-cleaning phenomena on a painted surface mimicking the lotus effect, and (d,e) the air-infused liquid-repellent surfaces of Salvinia and the pitcher plant [176,283]. Copyright permission from ACS, 2022.
Table 4. Studies of various bioinspired surfaces drawing upon plants.
- Mushroom-like flexible structure: structural damage occurred at 0.04 N/mm on a reference surface, whereas the mushroom-like flexible structure withstood normal loads up to 0.44 N/mm without failure and showed a high recovery potential in response to widespread normal and shear compression, indicating better mechanical robustness against tribological friction for real-world applications [237].
- Superhydrophobic copper meshes: prepared by etching and modification with 1-dodecanethiol; the resulting copper foam removes organic solvents both below and above water, with a static contact angle of 153° ± 3°, making the copper cloth a good tool for oil-spill cleanup and oily wastewater treatment [288].
- Li et al., lotus and pitcher plant: transformable liquid-repellent fabric surfaces formed using a simple one-pot approach; the PDMS@Fe3O4 fabric shows lotus-leaf-like characteristics that retain slipperiness, while the lubricant-infused state with a continuous coating resembles the rim of a pitcher plant [289].
- Jiang et al., cactus spine: fog-collection characteristics of cluster-distributed trichomes and their surface structural characteristics were discovered [290].
- Labonte et al., pitcher plant: bioinspired surfaces from pitcher plants possess omni-repellent characteristics that grant a non-stick nature to the surface; neither polar nor non-polar liquids stick to it [286].
Conclusions and Future Outlook The present work focuses on understanding the different strategies devised by nature for modulating tribological interactions with the surroundings.
Different tribological scenarios involving solid-solid, solid-liquid, and liquid-liquid interactions are discussed with a view to reducing friction, adhesion, and wear. Water-repellent superhydrophobic surfaces possess low adhesion and friction, leading to exceptional properties such as self-cleaning and anti-fouling, and helping with drag reduction for submarines and vessels. Various examples of bioinspired surfaces that modulate friction and adhesion are discussed, i.e., the lotus leaf (water repellency), gecko feet (directional adhesion), micro-grooves on shark skin (fast swimming), eagle owl wings (noise reduction), snake scales and lizard skin (low-friction surfaces), sandfish bodies (high wear resistance), and the pitcher plant and Salvinia (super-slippery surfaces). Furthermore, evidence of drag reduction was observed by mimicking the turtle's surface morphology. The reviewed work revealed that bio-inspired approaches with tailored stiffness show better outcomes in terms of low friction. The development of micro-grooves inspired by shark skin through surface texturing can significantly minimize friction. The flexible mushroom-inspired surface withstood higher mechanical loads without failure than its rigid counterpart. It also showed a high recovery potential in response to widespread normal and shear compression, indicating better mechanical robustness along with improved kinetic impalement resistance. Multi-scale laser surface texturing is considered a suitable approach for imparting self-cleaning and water-repellent behavior to a surface. Straight and zig-zag structures formed by laser surface texturing with variable periodicity, width, and depth have been widely explored. Laminated/layered structures were also identified in the current study as leading to the formation of low-friction surfaces; graphene and similar 2D materials are suitable bio-inspired materials for obtaining laminated structures. From the perspective of future studies, an ultra-low-friction regime can be pursued by considering the surface morphology of sandfish skin, which is renowned for its low friction and high resistance to wear against sand. In the same regard, the surface morphologies of scorpions and tamarisk withstand sand erosion exceptionally well and are appropriate inspirations for ultra-low-wear surfaces. Since future advancements are strongly tied to reducing energy consumption, ultra-low-friction surfaces serve this purpose well. Furthermore, surface texturing that introduces dimples, grooves, or convex features onto friction units using mechanical or chemical processing technologies is attracting research attention for improving tribological performance. The surface textures of animals such as the ground beetle, dung beetle, earthworm, mole cricket, centipede, and ant restrict soil wear and can be explored further. In the same context, pangolin- and loach-inspired structures that help reduce friction between solid/liquid surfaces and bio-surfaces are discussed in this review. In particular, flexible interfacial structures can effectively resist tribological friction and encourage water repellency. Bioinspired hierarchical structures should be considered for the development of low-friction surfaces. The surface morphology of the snake scale can inspire the development of surfaces with directional friction properties.
The overlapping scales of the snake's skin, with tooth-shaped protrusions, help control wear and friction. However, such surfaces have not yet been explicitly explored on metal substrates. The possibility of utilizing snake-inspired textures to reduce friction and wear, and how these surfaces perform in the presence of lubricants, can be evaluated in future work. Although favorable impacts of hierarchical patterns on adhesion and friction have been reported, the effect of the pitch of nano-scale features has not been thoroughly studied. Creating efficient green lubricant additives that are oil-soluble, have a good affinity towards steel for reducing friction, and improve anti-wear performance for steel/steel contacts is also attracting research attention. Funding: This research was funded by the Shiv Nadar Institution of Eminence, Greater Noida.
19,093.2
2023-02-02T00:00:00.000
[ "Materials Science", "Engineering", "Environmental Science" ]
Interview with Joe Freidhoff: A Bird's Eye View of K-12 Online Learning Welcome to the interview portion of this special issue of the OLC Online Learning journal. Our intent is to introduce our long-time Online Learning readership to the field of K-12 online learning while also providing direction for our K-12 online learning scholars about where the field is going or should be going in terms of meeting the needs of K-12 stakeholders. We recently sat down with Dr. Joe Freidhoff, executive director of the Michigan Virtual Learning Research Institute. [...] states have both. Sometimes students take online courses at their local school building, and sometimes they engage in online learning away from campus. The history of K-12 online learning mostly has centered on high school and, to a degree, middle school, and predominantly for supplemental contexts. As full-time options have become more available, the K-6 and K-8 enrollments have started to increase. These are just a few examples of how place plays out in K-12. K-12 online learning also looks at how students learn and are taught in online environments. From course design aspects to content-specific pedagogies, the field works to better understand how to efficiently and effectively design and deliver high-quality instruction to students. This has tended to include work around personalized learning and competency-based education. The interaction of technology, pedagogy, and content has necessitated professional learning for teachers and other school staff who work to support online learners and online learning programs.

On the policy side, frequent state issues include teacher credentialing and reciprocity, the number and size of full-time statewide cyber schools, funding models for online learning, and a student's right to choose online courses, often referred to as course choice or course access. Each state views these issues differently, which results in a differentiated set of rules and requirements across the United States. Districts also have their own sets of policies governing online learning and learners. These policies, both state and local, shift constantly, making it a career just to keep up with this area of K-12 online learning.

If you were to recommend three or four seminal pieces in the field, what would they be and why are these so important to the field?

Cathy Cavanaugh was the lead author on two meta-analyses from the early 2000s that compared K-12 distance and online learning with traditional K-12 schooling (Cavanaugh, 2001; Cavanaugh, Gillan, Kromrey, Hess, & Blomeyer, 2004). These works were seminal in that they provided evidence that online or distance delivery methods could be as effective as traditional methods for K-12 students. While both works provided validation to the field, they also presented a challenge: the number of existing studies was and still remains small, and little was known about why or under what conditions K-12 students succeeded or failed when they moved into online environments. In the mid-2000s, Kerry Rice (2006) published a comprehensive literature review on K-12 distance education that addressed the aforementioned comparison studies and field policy while also delving into areas such as learner characteristics, learner supports, and the affective domain. It remains a good primer on the challenges and opportunities that K-12 online research offers.
My third recommendation groups together several publications under the heading of iNACOL-related works. iNACOL is the International Association for K-12 Online Learning and is one of the key trade organizations in this field. iNACOL publishes reports that are important to K-12 online learning in that they identify trends and directions for the field by covering topics such as access and equity, at-risk learners and online education, national standards, blended programs, quality assurance, and competency-based education. One example of an iNACOL-related work is the Keeping Pace report released yearly at the annual iNACOL conference by John Watson and his Evergreen Education Group. Although not an iNACOL publication, this report is one of the most heavily cited pieces in our field, especially when it comes to documenting the size and growth of K-12 online learning. It provides state-by-state profiles, updated on an annual basis, and identifies key trends in the field of K-12 online learning. Keeping Pace has documented much of the history of K-12 online learning; it will release its 12th edition in November 2015.

Lastly, recent publications like Rick Ferdig and Kathryn Kennedy's (eds.) Handbook of Research on K-12 Online and Blended Learning and Tom Clark and Michael Barbour's (eds.) Online, Blended, and Distance Education in Schools: Building Successful Programs have addressed a range of K-12 online learning topics by bringing together in one collection key works written by a who's who of key researchers. Additionally, the Michigan Virtual Learning Research Institute maintains the Research Clearinghouse for K-12 Blended and Online Learning (http://k12onlineresearch.org), where interested readers can find many more publications from the field of K-12 online learning.
Considering research in K-12 online learning, what can we say we know about the field?

K-12 online enrollment has increased dramatically over the last decade. Michigan went from about 185,000 online enrollments in 2012-13 to over 319,000 one year later. Nationally, there is likely a total of 5,000,000-7,000,000 online enrollments in full-time and supplemental programs combined, if not more. K-12 online learning is growing at a rate that clearly outpaces the research we have about it. I think language from a recent Institute of Education Sciences grant (U.S. Department of Education, 2015) sums up the research need quite succinctly: Given the omnipresence of technology in modern life, it may be that the most pertinent research questions have less to do with the effectiveness of online and blended learning relative to traditional (i.e., nontechnological) modes of instruction and more to do with understanding how to improve delivery so that more students derive greater benefit. (p. 11)

The idea of improving delivery to increase student benefit is so vital, since students enroll in online courses for a variety of reasons ranging from face-to-face course scheduling conflicts to retaking courses in an online format to recover credits from failing grades. We know that students who take online courses to resolve scheduling or unavailability issues with face-to-face courses tend to have higher pass rates than students who enroll in online courses out of learner preference and credit recovery, yet the students who struggle in face-to-face courses tend to be viewed as the prime candidates for online learning. In my mind, there is a disconnect between what we know and what we practice. We know that online students tend to be more successful when they have strong time management skills, know how to set goals, have regular attendance, and enter a course having a solid foundation in the prerequisite knowledge, skills, and attitudes of the subject, et cetera; traits similar to successful face-to-face learners. The challenge is that schools often enroll online students who have significant weaknesses in one or more of these areas, so the learner traits we directly relate to high success are not necessarily those of the students typically being enrolled.

We know online learning can work. Many programs succeed by combining disciplined student selection and preparation with systems of local support, engaging parents in the online learning process, and choosing or creating high-quality course content taught by skilled online learning instructors; however, we can also point to many programs that fail despite using these same combinations. A key challenge for K-12 online learning researchers is to effectively tease out the traits of successful programs for replication with fidelity at other institutions.

Finally, we know that teacher preparation programs do not adequately prepare teachers to teach in online environments, and very few offer preservice practicum teaching experiences housed in fully online environments. Clearly, high-quality teachers are one of the key leverage points for improving online delivery, and we need more research and better training for those who teach online.

What challenges do K-12 online learning scholars face?
Many researchers in our field are faculty at institutions of higher education who must navigate the tenure and promotion process, which includes publishing. Most publications end up in highly ranked, peer-reviewed journals, many of which do not offer open access. A challenge facing all researchers is publishing high-quality research that is accessible and applicable by K-12 online learning practitioners. We need to help practitioners steer the students best suited for online learning and also help them create educational systems that afford students the most applicable and appropriate support systems for their learning needs and contexts. Doing this involves reducing the amount of time it takes to move from theory into practice given the large number of students being educated through online learning. Researchers need to use more widely accessible forums to expand the theories and knowledge of our research base in the field. Publishing in open journals such as this one would be a start.

Our field is one of big data and rich description. We increasingly capture an abundance of metadata about student interactions and pathways through the learning management systems that deliver online courses, but there are critical data points that go uncaptured and will remain uncaptured by these systems. The challenge here for scholars lies in developing proficiency in mixed methods approaches to research. Developing the skill set to analyze millions of data points competently while simultaneously providing rich, qualitative analysis of the off-line contexts and the online interactions takes time and a commitment to ongoing professional development.

In this same vein, another challenge for scholars in our field is developing relationships with researchers from other disciplines, including those who focus on online learning and adults, or online learning and higher education. Developing more interdisciplinary research teams will benefit our field and others by generating new questions, applying new methods, discovering new insights, and guarding against groupthink.

The central challenge we have as K-12 online learning researchers is improving student learning. We need research that moves us closer to the potential that online learning advocates proclaim. Proponents see online learning as a way to bring highly skilled teachers to students so that zip codes cease to define educational opportunities, a way to educate students anytime and anywhere. We need research to help make that a reality. Despite notions of anytime-anywhere learning, time and routine are critical factors in student success, and the location and settings in which they work matter. We need more research to better inform time-and-place decisions. I mentioned earlier that online learning is seen as having the potential to help students recover credits which they have failed, but it also is expected to help close the gaps for low-income students. In Michigan last year, 64% of the K-12 online enrollments came from students in poverty, but only 53% of them were successfully completed. We need more research on better serving at-risk populations, who make up a large percentage of current online learning students.

Where do you see K-12 online learning in 20 years, and how should research help shape this vision?
I doubt I have a good vision of what technologies are going to exist in 10 years let alone 20, but I'm willing to speculate. Today, I think schools, students, and parents turn to online learning when something goes wrong with the traditional model: a course the student wants to take is not offered at the school, the student can't take the course at the hour it is offered, or the student has fallen behind and needs to make up the credit. To that extent, I think we are still at the stage where K-12 online learning occurs at the fringe.

Over the next 20 years, preferably much sooner, I think schools (some because of educational beliefs and others because of parent and student pressure) will integrate online learning as one of multiple flexible-learning options provided to best serve the needs of diverse student populations. Many "traditional" courses will blend substantial online content into their delivery, and schools will continue to offer à la carte versions of online courses to supplement local offerings. Full-time online schools will continue to grow and educate significant numbers of students, but the majority of students who participate in online learning will do so in a supplementary manner. I think we will have moved beyond the face-to-face versus online learning mindset and will be talking more about how the two complement rather than compete against each other.

In twenty years, I believe that longitudinal data systems at the local and state levels will be better; I doubt this will happen at the national level. Statewide systems will provide much richer detail and
2,804.2
2015-09-22T00:00:00.000
[ "Education", "Computer Science" ]
Phylomorphometrics reveal ecomorphological convergence in pea crab carapace shapes (Brachyura, Pinnotheridae) Abstract Most members of the speciose pea crab family (Decapoda: Brachyura: Pinnotheridae) are characterized by their symbioses with marine invertebrates in various host phyla. The ecology of pea crabs is, however, understudied, and the degree of host dependency of most species is still unclear. With the exception of one lineage of ectosymbiotic echinoid-associated crabs, species within the subfamily Pinnotherinae are endosymbionts, living within the body cavities of mollusks, ascidians, echinoderms, and brachiopods. By contrast, most members of the two other subfamilies are considered to have an ectosymbiotic lifestyle, sharing burrows and tubes with various types of worms and burrowing crustaceans (inquilinism). The body shapes within the family are extremely variable, mainly in the width and length of the carapace. The variation of carapace shapes in the family, focusing on pinnotherines, is mapped using landmark-based morphometrics. Mean carapace shapes of species groups (based on their host preference) are statistically compared. In addition, a phylomorphometric approach is used to study three different convergence events (across subfamilies; between three genera; and within one genus), and to link these events with the associated hosts. It is worth noting that also in these subfamilies, host specificity is understudied and some species now considered to be free-living might have unknown host associations (McDermott, 2009). Pinnotherids have evolved a wide range of ecomorphological adaptations that could be linked to their presumed host choice (described for the subfamily Pinnotherinae in de Gier & Becker, 2020). These include: (A) several types of setae on the walking legs and claws used for feeding, swimming, or camouflaging; (B) asymmetry and widening of the walking legs' segments for feeding purposes and/or grip within the host (or host tube/burrow); and (C) various ornamentations, setation and colouration patterns, shape differences, and variation in carapace thickness in order to fit inside their hosts or to blend with their hosts' colouration (de Gier & Becker, 2020, and references therein for examples of adapted species). In addition, in sexually dimorphic species, females have evolved an enlarged pleon to carry eggs, making them almost immobile and very vulnerable to predation if they ever leave their host (Baeza, 2015). This is likely a consequence of living hidden within a host (de Gier & Becker, 2020). Although the variation in the abovementioned characters is most diverse in the speciose pea crab family, similar adaptations to endo- or ectosymbiotic lifestyles can be found in other brachyuran ("true" crab) taxa (Castro, 2015; Serène, 1961). Most of the ecomorphological adaptations mentioned above have only briefly been described in the taxonomic and phylogenetic literature (Campos, 1996a, 2016; Manning, 1993), and were not directly studied with respect to the crabs' host choice. In a large-scale study, Laughlin (1981) mentioned that various pinnixine pea crabs (mainly including species then attributed to Pinnixa White, 1846) have a much wider carapace than the studied pinnotherines (in this case, species of Pinnotheres Bosc, 1801), and linked this character to their biology. More recently, Hultgren et al.
(2022) analyzed the relationship between the host choice of a wide range of pea crabs and their carapace size ratios, considering also their phylogenetic positions. In this way, convergence in carapace shapes could be studied. They did this by testing the aspect ratios of 149 species, 59 of which had known phylogenetic positions (see Palacios Theil et al., 2016). The present study elaborates on the analyses by Hultgren et al. (2022) by using additional morphometrics to investigate the relationship between the adult female carapace shape and the host choice, focusing on all currently included pinnotherine members. A phylomorphospace approach will be used, including both symbiotic and free-living outgroup species from the two other subfamilies. This projection of the phylogeny should reveal clusters and convergence patterns in the data (Stayton, 2015), indicating that the colonization of similar host phyla has led to analogous carapace shapes in the evolution of pea crabs. | Selection of illustrations Similarly to the methods described by Hultgren et al. (2022), published illustrations of 181 pea crab species (in particular Pinnotherinae; see below) were collected through an extensive literature search. Available dorsal views of adult female carapaces (independently of the number of pereiopods depicted in the illustration) were selected for the analyses because adult females often live in obligatory symbiosis and are restricted to remaining inside their host. By contrast, males often leave their host, and juveniles have been found to switch hosts multiple times before reaching their adult stages (de Gier & Becker, 2020). In addition, one rarely figured species and two species without previously published illustrations were photographed using a Leica M165c stereo microscope with a Leica camera. From all the currently recognized species of Pinnotherinae (excluding two genera discussed below), 168 species could be included in the study. Thirty-five species were excluded due to the absence of illustrations of the dorsal view of adult female crabs (Appendix S1). To cover the morphological variation of the non-pinnotherine pea crabs, outgroup illustrations of ten species from the other pea crab subfamilies were selected: nine from Pinnixinae and one from Pinnixulalinae. Representatives of the genera Sakaina Serène, 1964 and Parapinnixa Holmes, 1895 were also added but considered as outgroups, although they are still within the subfamily Pinnotherinae (WoRMS Editorial Board, 2022), as the phylogeny of Palacios Theil et al. (2016) suggests that this placement is rather questionable. The subfamily classification of pea crabs seems to be unstable and in need of further research, and therefore the subfamily status for these species will be annotated as "Pinnotherinae?". In addition, one species with a tentative placement basal to the three subfamilies is included as an outgroup: Tetrias fischerii (A. Milne-Edwards, 1867). This species has no subfamily status (WoRMS Editorial Board, 2022).
Five of the outgroup species were chosen based on their recorded or presumed endosymbiotic lifestyle: Tetrias fischerii is thought to be associated with bivalves (Milne-Edwards, 1873), the pinnixines Pinnixa barnharti Rathbun, 1918 and Pinnixa tumida Stimpson, 1858 are thought to be internal symbionts of holothurians (Dai & Yang, 1991; Zmarzly, 1992), and adults of the pinnixines Scleroplax faba (Dana, 1851) and Scleroplax littoralis (Holmes, 1895) are commonly found in bivalve hosts (Zmarzly, 1992). The latter two have been suggested to be morphotypes of the same species (Zmarzly, 1992) but are treated as separate species in the analyses. Scleroplax faba has been reported from various other host types (gastropods, ascidians, and holothurians) in juvenile specimens (Zmarzly, 1992). | Landmark selection and morphometrics Collector bias is a common problem in morphometric studies when selecting landmark data (e.g., Percival et al., 2019), as is the use of nonhomologous and inconsistent datapoints (e.g., nonuniform orientations of specimens; Collins & Gazley, 2017). These problems could be avoided thanks to the uniformity in the orientations of illustrations used in taxonomic pea crab publications: only uniform (dorsal) orientations with visible ocular carapace ridges (cavities for the eyes) were used (with the exception of anteriorly ornamented species). Landmark (LM) selection was done to digitize the right half of the pea crabs' carapace shape, with the inclusion of three landmarks (LM 1, 3, 22), one semi-landmark (LM 2), and 18 sliding semi-landmarks (curve) (LM 4 to 21) along the lateral and caudal margin of the carapace (see Figure 2). Because of the lack of homologous anatomical features on the lateral curvature of pea crab carapaces, sliders were used to capture the shape variation. Landmark data were gathered in tpsDig2 (v. 2.31) (Rohlf, 2017) and analyzed using R v. 4.2.1 and RStudio v. 2022.07.0 (R Core Team, 2022; RStudio Team, 2022), using the packages geomorph v. 4.0.4 and ggplot2 v. 3.3.6 (Adams et al., 2022; Baken et al., 2021; Wickham, 2016). A generalized Procrustes analysis was performed to scale, translate, and rotate all images, or a subset of the images for morphospace analyses. In this way, the scale was set to "uniform", also to take into account potential uniform swelling due to preservation in ethanol. A Procrustes pairwise (M)ANOVA with a residual randomization permutation procedure (1000 permutations, RRPP) was performed to find significant differences in the mean shape data based on the host associations of the specimens. The Procrustes pairwise ANOVA test compares the mean shapes of two specified groups by checking the relative distance between these two shapes (Goodall, 1991). In this way, it indicates whether the differences between the groups are large enough (i.e., significant) in comparison to the variation within the groups. The full dataset, including outgroup species, as well as a subset only including the "true" Pinnotherinae (i.e., excluding the members of Parapinnixa and Sakaina), was tested and compared. All species were labeled considering their host association: bivalve-, gastropod-, holothurian-, ascidian-, echinoid-, and brachiopod-associated, and tube/burrow-dwelling. The echinoderm associates were separated based on the external or internal nature of their symbiosis (ecto- or endosymbiotic). In addition, two outgroup species are labeled as "free-living", although they could have been dislodged from their hosts. A further 21 species with unknown or questionable host associations were retained in the dataset.
These 21 species are often rarely caught, poorly described, or have a very questionable host association. Two of these species (Hospitotheres powelli Manning, 1993 and Pinnotheres taichungae Sakai, 2000) were previously recorded as burrow-dwelling or free-living. It has been argued that they may have been dislodged from their host or that their host was destroyed during collection (de Gier & Becker, 2020; McDermott, 2009). Despite this, the choice was made to include them in the dataset, in order to test whether there is a predictive value in the analysis (see Discussion). | Phylomorphospace analyses In order to include the available phylogenetic information of 33 species in the analyses, the phylogeny was projected onto the morphospace (Revell, 2012). Three convergence events were highlighted in the phylomorphospace using ggplot2. The phylogeny reconstruction is treated as an overlay on the presented morphospace. The branches were also used to statistically test three potential convergence events in the phylomorphospace plot. Similarity-based measures (C1 to C4, and corresponding p-values) were calculated using the R package convevol v. 1.3 (Stayton, 2018) as described by Stayton (2015). Examples of their uses are presented by Serb et al. (2017), Zelditch et al. (2017), Stange et al. (2018), and Grossnickle et al. (2020). A custom R script (Zelditch et al., 2017; Zelditch, pers. comm.) was used to run 1000 replicates to check the results from the convevol package. For the calculation of the C-values, PC values from PC1 to PC3 were used (84.5% of the explained variation). A phylogenetically informed ANOVA (Phylogenetic Generalized Least Squares; PGLS) was performed to investigate the impact of host choice on the shape variation in the data while accounting for the phylogenetic non-independence of the residuals (Adams & Collyer, 2018; Mundry, 2014). This was done for 33 species, using the procD.pgls() command in geomorph, with Pagel's lambda (λ) (Pagel, 1999) set at 1.0 (a high phylogenetic signal, i.e., a Brownian motion model). For comparison, a regular Procrustes ANOVA/regression, without the implementation of a phylogenetic framework, was performed for the 33 species (similar to the pairwise test explained above). In both analyses, a similar RRPP approach was used as mentioned above (1000 permutations). | Morphospaces and mean shapes The morphometric analyses of the scored landmarks revealed the overall variation in carapace shapes (for a morphospace plot with numbers indicating the species, see Appendix S2 (list) and S3 (figure)). When including the outgroup (non-pinnotherine) species, the first two of 43 principal components (PCs) explain 74.4% of the variation in the data (Figure 3). PC1 to PC11 together explain 99% of the variation, meaning that the remaining 32 PCs explain less than 1%. Along these first two axes, the mean shapes change mainly in width and length: along PC1, the carapace changes from an elongated and rounder shape (PC1min) to a widened, angular shape with more defined ocular cavities in dorsal view (PC1max). Along PC2, the widest point of the carapace seems to shift slightly from the anterolateral side (PC2min) to the posterolateral side (PC2max), meaning that an intermediate shape would have its widest point in the middle of the carapace. In addition, the rostrum seems to be much wider and more defined in specimens from the upper side of the plot (PC2max) (Figure 3). This also means that a perfectly round species would approximately be found in the center of this plot (0,0).
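The study performed these steps with the geomorph package in R (generalized Procrustes alignment followed by ordination); purely as an illustration of the underlying idea, the NumPy sketch below aligns a set of 2D landmark configurations with a toy generalized Procrustes procedure and then extracts tangent-space principal components. The array layout (specimens × landmarks × 2) and the unconstrained rotations (reflections not excluded) are simplifying assumptions of this sketch, not properties of the published pipeline.

```python
import numpy as np

def generalized_procrustes(shapes, iters=10):
    """Toy generalized Procrustes alignment: centre each configuration, scale it to
    unit centroid size, then iteratively rotate it onto the running mean shape.
    `shapes` is an array of shape (n_specimens, n_landmarks, 2)."""
    X = (shapes - shapes.mean(axis=1, keepdims=True)).astype(float)   # remove translation
    X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)                # remove scale
    mean = X[0].copy()
    for _ in range(iters):
        for i in range(len(X)):
            u, _, vt = np.linalg.svd(X[i].T @ mean)                   # orthogonal Procrustes fit
            X[i] = X[i] @ (u @ vt)                                    # rotate onto the mean
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X, mean

def tangent_space_pcs(aligned):
    """PCA on the flattened aligned coordinates; returns PC scores and the
    proportion of variance explained by each component."""
    flat = aligned.reshape(len(aligned), -1)
    flat = flat - flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    return u * s, s ** 2 / np.sum(s ** 2)

# Usage with random stand-in data (181 specimens, 22 landmarks):
shapes = np.random.default_rng(0).normal(size=(181, 22, 2))
aligned, consensus = generalized_procrustes(shapes)
scores, explained = tangent_space_pcs(aligned)
print(f"PC1 + PC2 explain {100 * explained[:2].sum():.1f}% of the variation")
```

Sliding semi-landmarks, as used in the study, require an extra step in which the semi-landmarks slide along their curve (minimising bending energy or Procrustes distance) before each alignment iteration; that refinement is omitted from this sketch.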
There is a clear separation between the ingroups and the outgroups, except for one outgroup species: Tetrias fischerii. Outgroup species with a host association similar to that of ingroup members (namely endosymbionts of holothurians or bivalves; see Appendix S2, S3) are closer to the ingroup than to the rest of the outgroup species, which are burrow- and tube-dwelling (Figure 3). The significant difference between the point cloud of this host type and the rest of the host categories was confirmed by the pairwise Procrustes ANOVA (p < .01; Table 1). All species of the ingroup are covered by a vast cloud of bivalve-associated points (Figure 3; Appendix S2, S3). Although the groups overlap, significant differences were found by the pairwise ANOVA, taking all 43 PCs into account (Table 1). The five ascidian-associated species group on the left side of the plot. Ascidian-associated species were found to be significantly different in shape from gastropod-associated (p = .010) and externally echinoid-associated species (p = .027). In addition, a nearly significant difference was found between the mean shape of the ascidian-associated species and those of the bivalve associates (p = .056) and internal holothurian associates (p = .069). Bivalve-associated species were significantly different from gastropod associates (p = .044), internal associates of holothurians (p = .047), and external echinoid associates (p = .007). Between these last two groups, a significant result was found (p = .006). Lastly, internal holothurian associates were shaped significantly differently from gastropod associates (p = .015). The actual morphological differences between the carapace shapes are explained in detail below. When excluding the outgroup from the analyses, the bivalve-associated convex hull overlaps all but five specimens with a known host association (Figure 4). These species (the external echinoid associate Dissodactylus latus Griffith, 1987, the holothurian-associated Holothuriophilus trapeziformis Nauck, 1880, and the three gastropod-associated Mesotheres unguifalcula (Glassel, 1936), Orthotheres bayou Ho, 2016, and Orthotheres turboe Sakai, 1969) have a broader body shape than the rest of the ingroup (i.e., a higher carapace aspect ratio, AR; Hultgren et al., 2022). Running the analysis with only pinnotherine members lowers the p-values of the pairwise Procrustes ANOVA for all comparisons that were significant in the analysis including the outgroup species (Table 1), and the first two [...]. Figure 3. Morphospace plot showing the total variation of dorsal carapace shapes of both the in- and outgroups. Warps show extreme shape variation along the first and second PCs. Colors of points and convex hulls correspond to host association type, and shapes give an indication of whether the species have an unknown, free-living, or generalist symbiotic lifestyle. Diamonds show the non-pinnotherine outgroups. Illustrated species correspond to linked datapoints in the morphospace: top, Fabia tellinae Cobb, 1973 (after Campos, 1996b); right, Glassella floridana (Rathbun, 1918); bottom, Durckheimia lochi Ahyong & Brown, 2003 (after Ahyong & Brown, 2003); left, Austrotheres pregenzeri Ahyong, 2018 (after Ahyong, 2018) (setae in illustrations omitted; crabs not to scale). Lastly, the rostrum of ascidian-associated species is slightly broader and more pronounced.
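The pairwise comparisons reported above were run with geomorph's RRPP-based pairwise Procrustes ANOVA; the snippet below is only a schematic of the general permutation logic behind such tests (distance between two group mean shapes compared against a label-shuffling null), not a re-implementation of the RRPP procedure, and the group labels in the example call are placeholders.

```python
import numpy as np

def mean_shape_distance(a, b):
    """Euclidean distance between two mean landmark configurations
    (a simple stand-in for the Procrustes distance once shapes are aligned)."""
    return np.linalg.norm(a - b)

def pairwise_shape_permutation_test(aligned, labels, g1, g2, n_perm=1000, seed=1):
    """Permutation test comparing the mean shapes of two host-association groups.
    `aligned` holds Procrustes-aligned landmarks, shape (n_specimens, n_landmarks, 2);
    `labels` is a NumPy array of group names, one per specimen."""
    rng = np.random.default_rng(seed)
    sel = np.isin(labels, [g1, g2])
    X, lab = aligned[sel], labels[sel]
    observed = mean_shape_distance(X[lab == g1].mean(axis=0), X[lab == g2].mean(axis=0))
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(lab)                       # shuffle group labels
        d = mean_shape_distance(X[perm == g1].mean(axis=0), X[perm == g2].mean(axis=0))
        exceed += d >= observed
    return observed, (exceed + 1) / (n_perm + 1)          # permutation p-value

# Example call with placeholder group names:
# dist, p = pairwise_shape_permutation_test(aligned, host_labels, "bivalve", "ascidian")
```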
| Phylomorphospace approach A pruned phylogeny tree was projected onto the morphospace. The three above-mentioned convergence events were statistically tested for their significance (Table 2), indicated by their similarity-based measures (C-values) and corresponding p-values. A PGLS analysis was performed to test the dependency of the morphometric data, taking the phylogenetic history of the included species into account. The PGLS resulted in a non-significant p-value (p = .823; R² = .151). Thus, the landmark (shape) data and the placement within the morphospace of the 33 included species are not associated with a host group once phylogenetic non-independence is taken into consideration (Figure 5). The comparative Procrustes ANOVA test, excluding the phylogenetic framework, [...]. Figure 4. Morphospace plot showing the total variation of dorsal carapace shapes of the ingroup. Mean shapes of the major host-associated groups are plotted (black) and compared with the mean shape of the entire ingroup (gray/white). Colors of points and convex hulls correspond to host association type, and shapes give an indication of whether the species have an unknown or generalist symbiotic lifestyle. Note that the y-axis (PC2) is flipped compared with Figure 3, and the aspect ratio is reduced to 0.5 for comparison with other plots. Figure 5. Phylomorphospace plot and the projected ultrametric phylogeny reconstruction. Specimens not included in the phylogeny are omitted from the morphospace for better readability. Colors of points and convex hulls correspond to the species' host association. PCs and the corresponding shape changes along these axes are the same as in Figure 3. Three convergence events are highlighted with arrows in both the phylomorphospace (I to III) and the tree, of which three species (A to C) are illustrated next to the tree: A, Dissodactylus latus Griffith, 1987 (after Griffith, 1987) [...]. | Shape differences Although the pairwise Procrustes ANOVA shows significant differences between the mean carapace shapes of the host-associated groups (Table 1), the differences appear to be very inconspicuous (Figure 4). The mean shape of all bivalve-associated pinnotherine species included in this study was very similar to that of the entire set of analyzed ingroup species. Therefore, assigning a particular pea crab carapace shape to a specific host group seems impossible. Hultgren et al. (2022) already suggested that two bivalve-associated species (Scleroplax faba and S. littoralis; both included in the present analyses as outgroups) have evolved from having a wide carapace shape (as can be seen in the tube- and burrow-dwelling outgroup species) to having a relatively round carapace. The mean AR (carapace aspect ratio) of these two species was found to be significantly higher than that of the pinnotherine bivalve associates, but [...] (Table 2; following Stayton, 2015). Although this is a rather low value due to the distance between the three species, under a Brownian motion model this result is significant (p < .001), indicative of "true" convergence between these members of the in- and outgroup. Similarly, the two holothurian-associated outgroup species (Pinnixa barnharti Rathbun, 1918 and Pinnixa tumida Stimpson, 1858) seem to have undergone a similar presumed convergence event, shifting away from the rest of the outgroups towards the ingroup (Figure 3). This possible convergence might be related to the endosymbiotic host choice of these two species, as is also discussed for S. faba and S. littoralis by de Gier and Becker (2020) and Hultgren et al.
(2022). However, whether these two Pinnixa species are phylogenetically related is unknown, although P. barnharti is found in Californian waters (Zmarzly, 1992), whereas P. tumida is known from Japan and China (Dai & Yang, 1991), which might suggest they are not very closely related. DNA analyses are needed to investigate this further [...] (de Gier & Becker, 2020). Table 2. Similarity-based measures of convergence for three presumed convergence events in pea crab species combinations (1000 replicates, PC1 to PC3; 84.5% of the data explained). Note: Because Zaops ostreum (Say, 1817) is the ingroup species with the shortest overall distance to the two Scleroplax species, this species was chosen to represent the ingroup in this (III) calculation. p-values indicating the probability that the degree of convergence exceeds what would be expected from a randomly evolving lineage are in bold if significant (p < .05). A potential correlation between the shape of the host bivalve and the carapace shape of the symbiont cannot be tested with the current datasets, but more symbionts of elongated bivalves can be found among the pinnotherines (some of which take wide and/or otherwise aberrant shapes: e.g., Raytheres (Campos, 2004), Serenotheres Ahyong & Ng, 2005, and Visayeres Ahyong & Ng, 2007 (Ahyong & Ng, 2007; Campos, 2002; Ng & Meyer, 2016)). | Convergence events and host specificity The C1 value of this event shows an average of 84.5% convergence, with a highly significant probability (p < .001; Table 2). There are several other species with a wide carapace for which no phylogenetic information was available (see Figure 4). In the convergence-measure analysis for these two species, a C1 value of 81.6% was found, with a significant p-value of .004 (Table 2). | Ancestral reconstructions and ecomorphological trends Besides analyzing convergence events, the phylomorphospace approach allows for a close examination of the evolution of shape in the deeper branches of the phylogeny (e.g., Ford et al., 2016). The currently presented phylogeny reconstruction "starts" with a somewhat widened outgroup species, Tetrias fischerii, and the much more widened species Parapinnixa cortesi (Figure 5). This first bivalve-associated species is plotted between the large cloud of ingroup species and the main tube- and burrow-dwelling outgroups (including its currently designated sister species P. cortesi). [...] | Future perspectives A problem posed by the currently presented data was the large number of species with unknown hosts, presumed free-living lifestyles, or generalist host associations. Not all of these species will truly be free-living; some might have been dislodged from, or wandering away from, their host organism when sampled (McDermott, 2009). Using the present morphospace plots, the association type (endo- or ectosymbiotic) might be inferred by looking at the data clouds (e.g., the outgroup species Pinnixulala heardi Felder & Palacios Theil, 2020b, whose carapace shape is perfectly in line with the other tube- and burrow-dwelling outgroups, as was already speculated by Felder and Palacios Theil (2020b); Figure 3).
Within the ingroup, "predicting" a specific host type for species without a known association may be more difficult, and unexpected host associations might influence the shape of the convex hulls in the morphospaces and consequently influence the p-values of the pairwise (M)ANOVA (Table 1). Researchers have been using the methods presented here for years to study the evolution of shape as a result of ecological factors, mainly in vertebrates (Claverie & Wainwright, 2014; Curth et al., 2017; Dugo-Cota et al., 2019; Kulemeyer et al., 2009; Sherratt et al., 2019) and less so in invertebrates (Bush et al., 2006; Malcicka et al., 2017). This is due to the sampling limitation of selecting homologous 2D or 3D landmarks in all samples (Zelditch et al., 2012). Firm homologous structures like skeletons seem to be easier to compare, but soft-bodied invertebrate taxa pose a problem in this respect. In the current study, the phylomorphospace approach already shows multiple presumed convergent evolutionary pathways within a limited phylogenetic framework. The presented data suggest that host-switching events could have had an important role in the evolution of carapace shapes of non-pinnotherine pea crabs, moving from a tube/burrow-dwelling biology to a strictly endosymbiotic lifestyle within bivalves in adult crabs. Within the Pinnotherinae, however, host switches between phyla seem to have had almost no effect on the evolution of carapace shape. This suggests that a shift in lifestyle from ecto- to endosymbiotic could be the driver for carapace (and overall body) shape diversification, rather than between-phyla host switches. This might also be the case in other symbiotic crustacean taxa with similar evolutionary switches in their lifestyles. For example, various palaemonid shrimp lineages have evolved from a free-living lifestyle to a life in symbiosis with an invertebrate host (e.g., Frolová et al., 2022). In addition, some lineages have had multiple between-phyla host switches, some resulting in a shift from ecto- to endosymbiosis (Chow et al., 2021; Horká et al., 2016). These evolutionary pathways resulted in a wide range of morphological adaptations, including changes in the morphology of the walking legs, the eyes, and the overall carapace shape (Dobson et al., 2014; Fransen, 1994, 2002). Although not studied in detail, symbiotic amphipods from the family Leucothoidae (inhabiting coral rubble, but also bivalve, ascidian, and sponge hosts) might also have diversified in a similar manner (e.g., White, 2011). Lastly, the extremely speciose copepod order Harpacticoida has had multiple shifts from a free-living life to a commensal or parasitic ecto- or endosymbiosis. A wide range of vertebrate and invertebrate hosts are utilized by these copepods, which is possibly the driver for their body shape diversification (e.g., Huys, 2016). ACKNOWLEDGMENTS [...] FUNDING INFORMATION This project was funded by Naturalis Biodiversity Center (Leiden, The Netherlands). DATA AVAILABILITY STATEMENT The data that support the findings of this study are available in the supplementary material of this article.
5,677
2023-01-01T00:00:00.000
[ "Biology", "Geography" ]
VR Toolkit for Identifying Group Characteristics Visualising crowds is a key pedestrian dynamics topic, with significant research efforts aiming to improve the current state-of-the-art. Sophisticated visualisation methods are commonly used within modern commercial models, and can improve crowd management techniques and sociological theory development. These models often define standard metrics, including density and speed. However, modern visualisation techniques typically use desktop screens. This can limit the capability of a user to investigate and identify key features, especially in real-world scenarios such as control centres. Virtual reality (VR) provides the opportunity to represent scenarios in a fully immersive environment, granting the user the ability to quickly assess situations. Furthermore, these visualisations are often limited to the simulation model that has generated the dataset, rather than being source-agnostic. This paper presents the implementation of an immersive, interactive toolkit for crowd behaviour analysis. This toolkit was built specifically for use within VR environments and was developed in conjunction with commercial users and researchers. It allows the user to identify locations of interest, as well as individual agents, showing characteristics such as group density, individual (Voronoi) density, speed, and flow. Furthermore, it can be used as a data-extraction tool, building individual fundamental diagrams for all scenario agents, and predicting group status as a function of local agent geometry. Finally, this paper presents an evaluation of the toolkit made by crowd behaviour experts. Introduction Crowd simulations have become increasingly important over the last decades in multiple applications (e.g., building evacuation, entertainment and surveillance systems). Crowd analysis software [1] exists for architects and engineers to visually and quantitatively analyse crowd movement datasets to ensure that public spaces and buildings have efficient evacuation plans and enable a smooth flow of pedestrians. Furthermore, crowd analysis is also useful for monitoring interactions between individuals in a crowd [2], helping to advise officials on ways to mitigate the spread of viruses, particularly in dense cities. With so many different use cases for analysing crowds, there is a growing need for applications to visualise this data to make it easy for a user to gain valuable insights. Crowd data is a type of spatio-temporal data consisting of trajectories for multiple pedestrians. While software [3] and techniques exist to visualise crowds, there are two main limitations. First, these techniques are developed with the goal of displaying data on 2D media like desktop screens. Second, modern crowd analysis software packages have produced high quality visualisation engines, but these are typically limited to displaying the outputs from their own simulation model, rather than being source-agnostic. Virtual Reality (VR) provides the opportunity to mitigate against these limitations, providing a fully immersive visualisation environment, granting users greater spatial perception and unrestricted screen space, which is a fundamental requirement when analysing spatio-temporal data. VR can visualise any generic trajectory dataset, providing a sourceagnostic tool for visualisation. Visualising spatio-temporal data in a virtual environment using VR also provides new opportunities for the intuitive interaction with data. 
Natural gestures, such as grabbing, holding and pointing, provide a more interactive and simple method of manipulating data than conventional 2D media. Furthermore, VR stereoscopic technologies also facilitate improved immersion [4] and task involvement [5], two elements that resemble the state of flow described by [6]. Essentially, VR stereoscopic technologies capture the user's whole attention by eliminating the perception of external stimuli and concentrating the user's efforts on the task at hand. VR has also not been extensively studied [7] as a medium for comparative data visualisations. In the context of crowd analysis, it can provide a new method for comparing crowds of people in a way that preserves spatio-temporal relationships between them whilst still making it easy to identify subtle differences in the crowds. In addition to this, VR has been proven to give a better perception of depth compared to desktop screens [8]. Therefore, visualising 4D data (such as the movement of a virtual crowd) in VR allows users to better appreciate all dimensions of the data than if they were viewing it on a desktop screen. In the context of crowds, this improved depth perception can lead to better evaluation of distances between features within the crowd. Given a specific crowd to analyse, whether it is from collected crowd data or a simulation based on a model, it is beneficial to use a range of methods to gain valuable insights into the crowd. Some common features of crowd analysis include tracking individuals or groups of people in a crowd, as well as analysing statistics about the crowd such as its flow or density. Therefore, this study implemented several tools for crowd analysis: • Voronoi Density Visualisation Tool The toolkit provides the option to visualise an animated Voronoi diagram on the environment's floor that changes as the pedestrian simulation progresses. Each polygon represents the pedestrian's local Voronoi cell and each cell's tonality reflects its Voronoi density. The formulation for Voronoi density is provided in Section 2.2. • Area Analysis Tool Users can define rectangular areas on the environment's floor, generating a full range of statistics such as the average Voronoi density. The tool can also generate visualisations of these statistics, such as a graph showing the changes in average speed through time, or an animated and interactive density heat map. This density heat map can be used to give a finer-grained view of how density changes within the defined area. The toolkit produces these graphs on individual panels, and allows the user to re-position them anywhere in the scene. Examples of these panels are shown in Figure 5. VR thus provides the opportunity to completely remove the limitation in how much data a user can observe at any time, while simultaneously allowing the user to reorient and choose which data streams to observe. In evacuation planning, this tool can be used to analyse choke points such as doorways or corridors to see if at any point in the simulation these areas are subjected to dangerous levels of density or if the crowd is able to flow smoothly. • Pedestrian Analysis Tool Users can select pedestrians in the simulation, creating a subset of the data that only contains the trajectories of those pedestrians. The toolkit generates relevant statistics about the subset such as the minimum, average and maximum speeds and Voronoi densities of each pedestrian at each frame in the simulation. 
Similar to the area analysis tool, the pedestrian analysis tool can generate visualisations for these statistics, providing the same immersive benefits. In a density analysis task for event planning, this could be used to measure the maximal pedestrian density for all points in the simulation, ensuring it does not exceed a predefined value. • Social Group Identification Finally, the toolkit provides the option to identify and examine groups of pedestrians by providing a method to determine group affiliation, and alter the tonality of the pedestrians accordingly. These groups (social or familial) are initially identified based solely on proximity and shared movement direction, however any algorithm can be implemented. The toolkit provides a useful tool for surveillance tasks where the goal is to find a particular group. For such a task, this tool is used to identify all groups in the crowd, changing their tonality so they stand out, making for easier identification. This toolkit was developed to provide functionality to researchers and practitioners, who could then adapt and implement their own required models. As such the algorithm developed by this study was not tested against existing algorithms (such as [9]), or against existing datasets (such as [10]). Future work will develop this toolkit to include state-of-the-art theories surrounding social identification. Literature Review This section looks at current methods of analysing crowds, focusing on tracking people within crowds, analysing the density of crowds, and identifying social groups in crowds. First, the section describes methods for tracking and identifying social groups in crowds in terms of their benefits and how they are implemented. Second, this section illustrates both classical and Voronoi density, focusing on the benefits and drawbacks of each method, emphasising the justification for using Voronoi density over classical density. Finally, the visualisation in VR is evaluated in terms of its advantages over regular visualisation on 2D desktop screens. Tracking and Identifying Social Groups Tracking pedestrians is crucial for a wide range of applications such as surveillance or evacuation planning. In surveillance, tracking could be used to monitor the actions of a potentially dangerous individual in a crowded scene, and in evacuation planning tracking could be used to analyse the route a pedestrian takes from the top floor of a building to the evacuation exit. In most cases, this sort of analysis is influenced heavily by techniques in computer vision [11] [12], such as segmenting human figures or through clustering people walking close together into 'group tracks', from which individual people can be tracked. These techniques, however, are limited by the resolution and dimensionality of the data, which usually consist of 2D frames in a video that are often taken from poor quality CCTV footage. To represent a 3D scene in 2D, the scene has to be projected into 2D, removing the depth component resulting in occluded details and depth perception is also removed. The benefit of working with data in a 3D spatial context is it avoids the depth ambiguity that comes with 2D images, making it easier for a user to interpret. Whilst image segmentation techniques can be used for tracking, another approach is to identify pedestrians based on their spatial relationships to other pedestrians. X. 
Liu [13] illustrates this approach by considering the binary spatial relationships between individuals in a dense crowd and how these are usually preserved over time. An example of a binary spatial relationship between person A and person B might be that A is in front of B, or A is to the left of B. The nature of these relationships through time allow the identification and segmentation of individuals in frames. From these relationships, a probabilistic framework is used to help with tracking pedestrians in 3D scenes. While techniques like this are needed to work with raw data, the data used for the toolkit was labelled, with each pedestrian having a unique identifier, reducing the technical complexity of the tracking algorithm. Examining pair-wise relationships between individuals can be helpful in tracking individuals as well as identifying groups in crowds. In this case, groups are meant as collections of people such as friends, families or acquaintances walking together in a crowd. Usually, characteristics like gender, age and height help establish whether a group of pedestrians is a family or independent commuters. Humans can often identify these relationships subconsciously [14]. However, this becomes challenging when analysing real-world data. It requires rich datasets consisting of more than just trajectories of pedes-trians, and it is difficult to augment surveillance footage with these characteristics using traditional image-based methods. Social groups can also be predicted by observing whether pedestrians remain near each other for an extended period of time. The benefit of this method is that it only requires trajectory data, from which the positional and directional relations between individuals can be extracted and used to identify social groups. Yücel et al. [9] approach this by creating positional models based on interpersonal distance and directional models based on relative rotation throughout the entire pedestrians' trajectory to determine the likelihood of each pedestrian being in a group with another. They use this method to identify groups of pedestrians with approximately 85% accuracy. Density Analysis Methods When designing a building, the density of a crowd is studied to identify possible locations of bottlenecks which may pose a risk during an emergency evacuation. Classical density is defined as 'people per square metre' [15], and this has been used in planning building developments for analysing pedestrian comfort [16]. The main criticism of classical density is that it does not necessarily give an accurate depiction of density. In reality, classical density suffers from two main issues, outlined by Steffen et al. [17]: • The definition of whether a pedestrian is inside or outside of an area, as they are not a single discrete point. In general, this problem is solved by using the position of the centroid of the pedestrian's head to determine whether they are inside the measurement area. • Small measurement areas can lead to large spikes in density, and large measurement areas can lead to unrealistically low estimations of density. This is particularly prevalent when the measurement area is only large enough to fit one pedestrian, resulting in the density changing from 0 to maximal density each time a pedestrian enters the area. In contrast to classical density, Voronoi density [17] uses a Voronoi diagram [18] to find a density distribution for each of the pedestrians. 
A Voronoi diagram is defined by a set of sites; every point in the plane is assigned to the site it is closest to, partitioning the plane into a set of Voronoi cells, one for each site. In the context of density calculations, each site represents a pedestrian's position $\vec{x} = (x, y)$ at a particular moment in time, and the surrounding cell $A_i$ represents the free area surrounding the pedestrian. The density distribution for each pedestrian is then given by the inverse of the cell size $|A_i|$,

$p_i(\vec{x}) = 1/|A_i|$ for $\vec{x} \in A_i$, and $0$ otherwise,

and the density distribution for all persons is then given by:

$p(\vec{x}) = \sum_i p_i(\vec{x}).$

Following this, the density for a separate measurement area $A$ can be defined as:

$\rho_v = \frac{\int_A p(\vec{x})\, d\vec{x}}{|A|}.$

The above formulation can be implemented by finding which cells intersect with the measurement area and computing a weighted average of the densities of each Voronoi cell, based on how much of the area is taken up by each cell. When compared to classical density, Voronoi density has less scatter and is scale-invariant (i.e., it is not dependent on the measurement area) [17]. The resulting density is intuitive and takes into account the pedestrian's local conditions. Visualisation in Virtual Reality Crowd scenes can be represented as spatio-temporal data. A crowd occupies a space that could be 2D or 3D, and there is the extra dimension of time. Therefore, depending on how the space is defined, the spatio-temporal data for crowds can be considered 3D or 4D. Studies [19] [20] have been conducted on visualising data on screens through 3D rendering. These studies generally show that users find it particularly difficult to judge the size, position and depth of objects, leading to a poorer understanding of the data being shown. These difficulties arise because the 3D data has to be flattened in one dimension, depth, in order to display it on 2D media. With VR, users can have a more immersive experience as they are no longer restricted to a computer screen but can view data in what appears to be a 3D space with six Degrees of Freedom (DoF). This extra dimension of space leads to users having improved depth perception of 3D objects [21], which in turn improves the user's understanding of the data. Additionally, 3D virtual environments provide further benefits by giving users the complete freedom to create 2D interfaces around themselves in a way that can improve multitasking and analytic reasoning. Examples of this can be seen in Figure 5. Toolkit Features The toolkit was developed within a standalone Unity environment, which provided a way for the pedestrian trajectory data to be visualised as pedestrian objects moving through a virtual environment. To develop the toolkit, a variety of features were implemented to help with crowd analysis. The project was developed using the Unity game engine [22] (version 2019.4.0f1) and can be compiled into an Android application package (APK) that can be run on a standalone Oculus Quest head-mounted display (HMD). Voronoi Density Visualisation The density visualisations in this toolkit use Voronoi density over classical density as their preferred method of calculation. Figure 1 shows a visualisation of the local density of each pedestrian, displaying an environment-wide Voronoi diagram whose cells are shaded by density. Area Analysis Tool Users can extract subsets of the trajectory dataset spatially by defining a rectangular area on the ground. After selecting an area, users can view statistics specific to that area, such as the flow, average speed or Voronoi density in that area, as well as visualisations of these statistics.
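As a small illustration of this weighted-average computation (a sketch only, not the toolkit's Unity implementation), the following assumes each pedestrian's Voronoi cell is already available as a polygon; the library choice (shapely) and the example values are illustrative assumptions.

```python
from shapely.geometry import Polygon, box

def voronoi_density(measure_area: Polygon, cells: list[Polygon]) -> float:
    """Voronoi density of a measurement area: integrate p(x) = 1/|A_i| over the
    area (i.e. sum each cell's intersection fraction) and divide by the area size."""
    total = 0.0
    for cell in cells:
        inter = measure_area.intersection(cell)
        if not inter.is_empty:
            total += inter.area / cell.area  # contribution of this pedestrian's cell
    return total / measure_area.area

# Hypothetical example: a 2 m x 2 m measurement square and two pedestrian cells.
area = box(0.0, 0.0, 2.0, 2.0)
cells = [box(-1.0, -1.0, 1.0, 1.0), box(1.0, 0.0, 3.0, 2.0)]
print(voronoi_density(area, cells))  # persons per square metre
```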
For this study, the speed of a pedestrian was determined as the magnitude of the difference in position over the previous 1 second. The average velocity of a subset of pedestrians at a particular time was calculated from the total number of pedestrians in the subset and the sum of their instantaneous speeds. Finally, the flow was calculated by multiplying the average velocity by the total number of pedestrians. Figure 2 shows the features that are available for analysing area subsets. Users can see how statistics (Panel 1) such as the number of people or Voronoi density change in that area in real time as the simulation runs. The toolkit also visualises various 2D relationships in the data (Panel 2), such as how the speed or (Voronoi) density change with time. Each graph comes with an additional side panel displaying relevant aggregated statistics throughout the course of the simulation. Currently, the toolkit only supports speed-time and density-time visualisations for area subsets. However, the graphing functionality has been designed to work with any form of 2D data. This means that it is easy to extend the tool to support additional relationships such as flow-density and speed-flow, with the user only having to provide the 2D data that they wish to visualise. To see how the density changes within the area subset throughout the simulation, users can view an interactive and animated Voronoi density heat map (Panel 3) for the area subset. They are also able to specify the time interval of the simulation that they wish to investigate. The Voronoi density for the area subset is found by first determining which Voronoi cells intersect with the area subset. The overall density is then calculated by averaging the densities of the intersecting cells, weighted by the area of their intersection with the defined area subset. The density heat map, as seen in Figure 3, splits the measurement area into square cells with a width of 0.5 m. The method for calculating Voronoi density for an area subset is then applied to each square cell to find a Voronoi density for that cell. For efficiency, only the Voronoi cells that intersected with the original area subset are checked to see if they intersect with the cell, rather than all Voronoi cells in the environment. Pedestrian Analysis Tool Subsets of the trajectory data can also be created by selecting groups of pedestrians, with no restrictions on how many pedestrians can be selected. Similarly to the area analysis tool, Figure 4 shows the various data panels available to the user to aid with analysing the pedestrian subset. Users can see real-time statistics for the subset as the simulation plays. As each pedestrian has their own Voronoi density, the density-time graph for pedestrian subsets plots the minimum, average, and maximum densities, aggregated for the pedestrians in the subset, over the course of the simulation. A fundamental diagram is also available to users wishing to examine speed-density relationships. This is computed for the pedestrian subset, plotting the average speed of the pedestrians across the observed range of densities.
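To make these definitions concrete, the sketch below (illustrative only; the toolkit itself is implemented in Unity) computes per-pedestrian speeds from sampled positions, the flow of a subset, and a binned fundamental diagram. Array shapes, the bin count, and function names are assumptions.

```python
import numpy as np

def speeds(positions: np.ndarray, dt: float, window: float = 1.0) -> np.ndarray:
    """Per-frame speed: displacement magnitude over the previous `window` seconds.
    `positions` has shape (frames, 2)."""
    lag = max(1, int(round(window / dt)))
    disp = positions[lag:] - positions[:-lag]
    return np.linalg.norm(disp, axis=1) / (lag * dt)

def flow(mean_speed: float, n_pedestrians: int) -> float:
    """Flow as defined in the text: average speed multiplied by the pedestrian count."""
    return mean_speed * n_pedestrians

def fundamental_diagram(density, speed, n_bins=20):
    """Average speed observed in each density bin (speed-density relationship)."""
    density, speed = np.asarray(density), np.asarray(speed)
    edges = np.linspace(density.min(), density.max(), n_bins + 1)
    idx = np.clip(np.digitize(density, edges) - 1, 0, n_bins - 1)
    mean_speed = np.array([speed[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), mean_speed
```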
Social Group Identification The social group identification tool, as seen in Figure 5, provides an easy-to-understand visualisation of the estimated social groups in the crowd. These social groups are precomputed from the dataset by considering entire pedestrian trajectories and whether multiple trajectories are similar (i.e., whether their interpersonal distance and relative rotation stay below certain thresholds throughout the duration of the simulation). A limit of 5 was set on the number of people that can exist in a single social group, although this can be varied by the user. If a social group exists, the pedestrians in that group are highlighted in a unique colour, enabling the user to distinguish between different social groups and non-group pedestrians. The toolkit was designed to be modular in this respect, so that alternative grouping algorithms can be substituted. Currently, the toolkit uses Algorithm 1, which shows how two pedestrians are grouped, initially assuming that none of the pedestrians are in groups. The algorithm iterates through each pedestrian, checking whether they form a social group with any of their neighbours. This is done by finding the square of the interpersonal distance between both pedestrians and their relative rotations. If these are less than the predefined maximum distance (3 metres) and rotation (15°), then a counter recording the number of times this condition has been satisfied is incremented. If the two pedestrians are classified as a group for the majority of their trajectories (in this instance, for at least the duration of their entire trajectory minus 2 seconds), then they are considered to exist in the same social group. The neighbours of a pedestrian are found by checking the edges that make up that pedestrian's cell in the Voronoi diagram. A slightly different algorithm exists for the case where the given pedestrian is already in a group. If this is the case, the interpersonal distance is compared to the group's centre, and the relative rotation is compared to everyone in the group.
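A minimal sketch of the pairwise test behind Algorithm 1 is given below (an interpretation of the description above, not the toolkit's actual code); it assumes the two trajectories are sampled at the same frames, with headings in radians.

```python
import numpy as np

MAX_DIST = 3.0             # metres, interpersonal distance threshold
MAX_ROT = np.deg2rad(15)   # relative rotation threshold
MIN_MARGIN = 2.0           # seconds; must be grouped for all but at most 2 s

def same_group(pos_a, pos_b, head_a, head_b, dt):
    """Return True if two pedestrians are close and aligned for (almost) their whole
    shared trajectory. pos_* have shape (frames, 2); head_* have shape (frames,)."""
    dist_sq = np.sum((pos_a - pos_b) ** 2, axis=1)
    diff = head_a - head_b
    rel_rot = np.abs(np.arctan2(np.sin(diff), np.cos(diff)))  # wrap to [-pi, pi]
    together = (dist_sq < MAX_DIST ** 2) & (rel_rot < MAX_ROT)
    duration = len(pos_a) * dt
    return together.sum() * dt >= duration - MIN_MARGIN
```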
Toolkit Performance and Scalability The aim for this toolkit is to be as widely available and usable as possible. As such, this project investigated the requirements for listing the toolkit on the Oculus Store. For an app to be listed on the Oculus Store, Oculus has created a set of performance targets that an app should meet before it can be released. To be applicable, the baselines for the app are the following: • 72 FPS (frames per second) required • 150-175 maximum draw calls per frame • 750,000-1,000,000 maximum triangles per frame. The toolkit meets the targets for draw calls and triangles per frame; however, it does not meet the recommended FPS. For typical usage of the app, which includes running the pedestrian simulation and using each of the toolkit features, the toolkit achieves an average FPS of 60. The most performance-intensive component of the VR app is rendering the pedestrian simulation, which the toolkit has been built on top of. In terms of the toolkit features, the most performance-intensive feature is the area subset selection tool, which can cause the toolkit to momentarily freeze if the defined area is particularly large. The main cause of this is the heat map generation, which is computationally expensive. This was optimised in Unity by utilising Unity's Job System and the Burst compiler. Unity's Job System [23] is Unity's approach to multi-threading, which ensures that the created threads do not interfere with Unity's main thread. When using the Job System, the developer can also enable the Burst compiler [24], a special 'math-aware' compiler that can produce highly optimised native code. The combination of the Job System and the Burst compiler brought the time to create a heat map for a 30 m x 20 m area down from 18 s to 0.8 s. To evaluate the scalability of the area subset tool, the changes in FPS were observed after the generation of area subsets of increasing size, from 30 m² to 600 m². Figure 6 shows the average FPS of the app, during which four area subsets were created with areas: 1 - 30 m², 2 - 60 m², 3 - 300 m², 4 - 600 m². The performance impact of creating the first three subsets is not large, but for subset 4 a considerable amount of time is spent creating the subset, causing the app to freeze. This limitation can be removed by pre-calculating the heat maps. Expert study An expert study was carried out to understand the potential applications and usability of this toolkit. Six industry professionals and researchers were contacted and shown videos of the use of this toolkit, before being asked to respond to a template survey. They provided their responses to several questions surrounding the applicability of the toolkit for modern uses, as well as any desired areas of improvement. The questions asked the participants to rate the toolkit using a 5-point Likert scale (1: Not at all useful, 3: Moderately useful, 5: Very useful) in its ability to perform work in: • Research • Industry • Visualisation The roles of the participants and their initial scores are detailed in Table 1. Further comments and suggested uses are detailed in Table 2. The output from the expert study shows that there is a large potential demand for this type of toolkit. However, there were mixed responses, with no clear pattern. For example, the scores for the toolkit's impact on visualisation ranged from 2 to 5, while the scores for its impact on research varied only from 3 to 3.5. This variation may be a result of the fact that the toolkit was shown using 2D videos, rather than an HMD headset or other VR equipment. This was done as a result of logistical and time limitations due to the COVID-19 pandemic, and may have limited the potential to showcase the opportunities of the toolkit. There was also feedback suggesting improvements, such as incorporating Fruin LOS (Level of Service) [25] categories, or the potential to reduce the complexity of the tool. Several responses revolved around the use of this toolkit for visualising different conditions, such as variable densities for large-scale infrastructure, as well as the potential for its use in providing commercial clients a better experience using real-world data. One respondent commented: "The option to overlay graphs within the software itself is a new feature and may be useful for live monitoring or reproduction of live environments (i.e., prediction of crowd flows using real time data). From an analysis perspective, I have seen similar types of analyses from other commercial software, albeit the production of these simulations is likely to take longer than a Unity based product." Conclusion and future work This paper has detailed the development and evaluation of a VR toolkit that focuses on visualising and analysing the movement of pedestrians. This toolkit was developed to provide an environment in which the user can investigate aspects of crowd motion such as density, speed, or social groups. This approach builds on previous research that visualises the output of simulation models (often as part of the simulation toolkit itself), and
combines it with the potential for visualising real-world datasets in an immersive and intuitive manner. This is particularly relevant given the modern requirements for crowd density management, as well as the growth of the VR commercial market. The toolkit was shown to industry and academic experts, who provided feedback on its implementation and potential usage. This showed mixed results, with participants identifying high potential for the toolkit to be used commercially, while also commenting on the need for it to be low complexity. The authors welcome any feedback and proposals for collaborations for use of this toolkit, with a future aim to provide the environment as an open-source resource and with further possible extension into AR visualisations. Acknowledgment This work has been supported by H2020 EU project RISE, grant agreement No 739578.
6,360.2
2022-02-03T00:00:00.000
[ "Computer Science", "Sociology" ]
High-Resolution Reconstruction of the Maximum Snow Water Equivalent Based on Remote Sensing Data in a Mountainous Area Currently, the accurate estimation of the maximum snow water equivalent (SWE) in mountainous areas is an important topic. In this study, in order to improve the accuracy and spatial resolution of SWE reconstruction in alpine regions, Sentinel-2 (MSI) and Landsat 8 (OLI) satellite data with spatial resolutions of tens of meters are used instead of Moderate Resolution Imaging Spectroradiometer (MODIS) data, so that the pixel-mixing problem is avoided. Meanwhile, geostationary-satellite-based and topographically corrected incoming shortwave radiation is used in the restricted degree-day model to improve the accuracy of the radiation inputs. The seasonal maximum SWE accumulation of a river basin in the winter season of 2017–2018 is estimated. The spatial and temporal characteristics of SWE at a fine spatial and temporal resolution are then analyzed, and the results of the reconstruction model with different input parameters are compared. The results show that the average maximum SWE of the study area in 2017–2018 was 377.83 mm and that the accuracy of the snow cover, air temperature and radiation parameters all affect the magnitude of the maximum SWE and its distribution with elevation and aspect. Although the accuracy of the other forcing parameters still needs to be improved, the estimation of the local maximum snow water equivalent in mountainous areas benefits from the application of high-resolution Sentinel-2 and Landsat 8 data. The joint usage of high-resolution remote sensing data from different satellites can greatly improve the temporal and spatial resolution of snow cover mapping and the spatial resolution of SWE estimation. This method can provide more accurate and detailed SWE for hydrological models, which is of great significance to hydrology and water resources research. Introduction Seasonal snowmelt in mountainous areas affects the lives of billions of people around the world [1]; it provides water for the snowmelt season in a basin and is also used for soil and cropping purposes in the later stages of snowmelt [2]. This is especially true in the arid areas of the high and middle latitudes of China. The geographical and climatic characteristics of the study area, and the data used in this study, are described in Section 2. In Section 3, the models and methodologies are described in detail. In Section 4, the spatial-temporal distribution characteristics of snow cover and cumulative SWE are analyzed from the perspectives of time and space, respectively. The discussion of the data and results is given in Section 5. Study Area The study area (47–49° N, 86–90° E) is located in the Altay region of Xinjiang, China. The Altay region, as the region with the most abundant snow in winter in China, is known as the "snow capital of China". The Caiertes river, which originates from the Altay mountains in the north of the Altay region, is one of only two main sources of the Irtysh river in China, which flows into the Arctic Ocean (Figure 1a). There is no glacier in this region, and the river water source is not affected by glacier melt. It mainly depends on the supply of snowmelt runoff in the basin from spring to early summer [3].
The Caiertes river basin (47–48° N, 89–90° E) is located on the southern slope of the Altay mountains in China. It is oriented in the northeast-southwest direction and belongs to the northwest arid region of China. The elevation is 1150–3856 m (Figure 1b); it is high in the northeast and low in the southwest. Snowmelt begins in early March and ends in early August in this basin. The westerly circulation in summer brings sufficient moisture from the Atlantic Ocean and forms abundant precipitation [28]. Due to the blocking of the Siberian High in winter [29,30], it is difficult for the water vapor transported by the westerly wind to reach the study area, and the precipitation is mainly affected by the cold air from Siberia and the air flow from the Arctic Ocean [30]. During the study period, water vapor in the study area originated from the northwest direction. Remote Sensing Data In this paper, Landsat 8 and Sentinel-2 were used to obtain the snow cover variations in time and space. Landsat 8 is the 8th satellite in the Landsat series, launched by the National Aeronautics and Space Administration (NASA) on February 11, 2013, carrying the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) [31,32]. The OLI sensor covers 9 bands, the swath is 185 × 185 km, the revisit interval is 16 days, and the spatial resolution is 15–30 m. In this study, a total of 10 images covering 10 days of Landsat 8 OLI L1T (Level 1 Terrain-corrected) data were downloaded from the USGS website (http://www.usgs.gov/). Among these 10 images, three are completely covered by cloud and were excluded from this study. Sentinel-2 comprises two high-resolution multispectral imaging satellites, Sentinel-2A (S2A) and Sentinel-2B (S2B), both of which carry a Multi-Spectral Imager (MSI) with 13 bands. The two satellites were launched on June 23, 2015 and March 07, 2017, respectively. The revisit interval of one satellite is 10 days; the two satellites complement each other, and the revisit interval is 5 days when both satellites are available.
The spatial resolution is 10–60 m, and the swath is 290 km [33]. In this study, 284 scenes covering 71 days of Sentinel-2 MSI L1C (Level-1C) multispectral data were used, downloaded from the ESA website (https://scihub.copernicus.eu/). A total of 27 days of these Sentinel-2 data are completely obscured by cloud, and 176 scenes covering 44 days were used subsequently for snow mapping. Two days of Sentinel-2 data and Landsat 8 OLI data were duplicated, so a total of 49 days of data sources could be used to obtain snow cover information during the study period. The spectral information used for image preprocessing and for snow and cloud recognition in the two data sources is shown in Table 1. Band 3 (30 m) and Band 6 (30 m) of Landsat 8 OLI, and Band 3 (10 m) and Band 11 (20 m) of Sentinel-2, are used for snow mapping, respectively. Preprocessing of the Landsat 8 OLI and Sentinel-2 data was performed before snow and cloud mapping. The resolution of the Landsat 8 OLI images was improved to 15 m by applying Gram-Schmidt Pan Sharpening in ENVI. The Sen2Cor tool was applied to the L1C-level data of Sentinel-2 for atmospheric correction, to obtain bottom-of-atmosphere corrected reflectance of Sentinel-2 for snow mapping. The L1C-level data of Sentinel-2 were also used for cloud mapping in Section 3.3.2. MODIS data were also used in this study for comparison with Sentinel-2 and Landsat 8. The snow cover fraction of MODIS is from the MOD10A1 product, which is available from the National Snow and Ice Data Center (NSIDC) Distributed Active Archive Center (DAAC). Air Temperature Data The air temperature used in the reconstruction model was from the Global Land Data Assimilation System (GLDAS) data. The data were developed by the National Aeronautics and Space Administration (NASA) Goddard Earth Sciences Data and Information Services Center (GES DISC) [34], with a temporal resolution of 1 day. The spatial resolution reaches a maximum of 1/4° in the study area, which cannot truly reflect the air temperature variation caused by elevation change in a small watershed. Therefore, the SRTM digital elevation model was used to downscale the GLDAS air temperature to a finer resolution of 1/10° (Figure 2). The absolute vertical error of the SRTM DEM is less than 16 m [35]. Before elevation correction, the SRTM elevation data were spliced, clipped, resampled, and Gaussian-filtered to 1/10°, and the original 1/4° resolution of the GLDAS air temperature data was then increased to 1/10° through the Gaussian filter. According to a certain lapse rate of air temperature, elevation correction of the filtered GLDAS air temperature data was carried out to obtain the final air temperature with a resolution of 1/10°.
In the Snowmelt-Runoff Model (SRM) and other air temperature elevation corrections, the air temperature lapse rate is usually set to a fixed value of −6.5 °C/km [12,36]. In fact, however, the lapse rate changes with the season. In this study, linear fitting of the original GLDAS air temperature data against elevation was performed for each day to calculate an optimized lapse rate k. When the determination coefficient R² was greater than or equal to 0.8, the optimized lapse rate k was used in the air temperature downscaling; when R² was smaller than 0.8, the effective optimized lapse rate of an adjacent day was used instead. In Figure 3, we can see that the rate before April is greater than −6.5 °C/km, while after April it is almost always less than −6.5 °C/km, and the determination coefficient is mostly larger than 0.8. Section 5.2 describes in detail the influence of the dynamic estimation of the air temperature lapse rate on the maximum SWE accumulation.
Figure 3. The air temperature lapse rate after linear fitting in different seasons. The red points represent the air temperature lapse rate after linear fitting on a given day, and the black points represent the determination coefficient, which is between 0 and 1.
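A minimal sketch of this per-day lapse-rate fitting and elevation correction is given below; it assumes the Gaussian regridding to 1/10° has already been done, and the array names, units (°C per metre), and fallback behaviour are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def fit_lapse_rate(t_coarse, z_coarse, fallback=-6.5e-3, r2_min=0.8):
    """Fit a daily lapse rate k (°C per metre) of air temperature against elevation;
    fall back to a default (e.g. an adjacent day's value) when R^2 < r2_min."""
    t, z = np.ravel(t_coarse), np.ravel(z_coarse)
    k, b = np.polyfit(z, t, 1)
    r2 = 1.0 - np.sum((t - (k * z + b)) ** 2) / np.sum((t - t.mean()) ** 2)
    return (k, r2) if r2 >= r2_min else (fallback, r2)

def elevation_correct(t_fine_grid, z_coarse_on_fine, z_fine, k):
    """Shift the regridded coarse temperature from the coarse-grid elevation to the
    fine-grid (SRTM) elevation using the fitted lapse rate k."""
    return t_fine_grid + k * (z_fine - z_coarse_on_fine)
```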
Radiation Data The net radiation index is the difference between the total incident energy of sunlight (the sum of the long-wave and short-wave energy) and the total reflected energy from the ground, that is, downward radiation minus upward radiation. In this study, the net long-wave radiation is also taken from GLDAS, with a temporal resolution of 1 day and a spatial resolution of 1/4°. Studies on the spatial scale of snow processes have shown that solar and thermal radiation inputs are biased at resolutions coarser than 1/4°, while a 1/10° grid of the energy flux produces results equivalent to 30 m and sufficient for the prediction of snowmelt runoff [37]. Therefore, we downscaled the net long-wave radiation from GLDAS to 1/10°, which requires setting a Gaussian filter with a size of 401 × 401 centered on each 1/10° pixel; that is, the net long-wave radiation values of each pixel and the other pixels within a 20 km neighborhood are weight-averaged to obtain the pixel's downscaled radiation value. Considering the influence of mountain topography on solar radiation and the accuracy of shortwave radiation, we used Himawari-8 to calculate the hourly Shortwave Downward Radiation (SWDR) at a 90 m resolution after terrain correction. Himawari-8 estimates SWDR at a 10-min scale, and its high temporal resolution makes it more sensitive to cloud-radiation interactions and variations in surface radiation over short periods of time. First, the components of direct and diffuse solar radiation were derived from the 10-min Himawari-8 data [38]; then a shortwave topographic radiation model (SWTRM) was applied [39]. Combined with the MOD10A1 and MYD10A1 albedo products of MODIS, the upward solar radiation energy was calculated to obtain the net short-wave radiation index. In Section 5.3, the maximum SWE accumulations calculated using net short-wave radiation from GLDAS and from Himawari-8/MODIS with terrain correction are compared. The SWE Reconstruction Model Previous studies have shown that the reconstruction method can be used to estimate SWE at multiple spatial scales in complex terrain areas (mountain areas, river basins, etc.) [40,41]. Rittger et al. verified the reconstructed SWE with snow cover survey data and showed that the model could accurately estimate SWE values in different wet and dry years in different terrain environments [12]. It was also found that the model was very comparable to the measurement results of the NASA Airborne Snow Observatory (ASO) when measuring the snow cover area and improving the SRM model [2]. The purpose of the reconstruction is to calculate the maximum snow water equivalent for the whole season. The core idea is to carry out an inverse time-series accumulation of the daily snowmelt amount from the time of complete snowmelt back to the time of the beginning of snowmelt [12]:

$SWE_n = SWE_0 + \sum_{j=1}^{n} M_j$    (1)

In Equation (1), $SWE_n$ is the snow water equivalent at the beginning of melting and the maximum value of a pixel over all time points. It serves as the total solid precipitation during the accumulation and stable periods of the snow season, excluding water losses such as evapotranspiration and soil infiltration. $SWE_0$ is the snow water equivalent on the Nth day after the maximum SWE occurs. The reverse calculation starts from $SWE = SWE_0$, taking one day as the time step; $M_j$ is the snowmelt amount on day j, and $SWE_n$ is the snow water equivalent on day n + 1.
If n refers to the number of days between the date of the maximum SWE and complete snowmelt, then $SWE_0 = 0$ and $M_1 = 0$. Generally, drier years have an earlier maximum SWE, while wetter years have a later one. Considering the actual situation of the region and the acquisition times of the remote sensing images, March 2nd is taken as the peak of SWE in the 2017–2018 snow season of the research area. The Restricted Degree-Day Model In the simple degree-day snowmelt model [23,42], the daily snowmelt amount $M_j$ of each hydrological response unit (HRU, in pixels) is only related to air temperature:

$M_j = \alpha \,(T_a - T_{mlt})$ for $T_a > T_{mlt}$, and $M_j = 0$ otherwise    (2)

In Equation (2), $M_j$ (mm/day) is the ablation amount of one pixel on day j in the inverse time series, and the depth of snowmelt water is used to represent the amount of snowmelt. $\alpha$ (mm day⁻¹ °C⁻¹) is the degree-day factor; $T_{mlt}$ refers to the temperature required for snowmelt, which is generally considered to be the temperature of an ice-water mixture under standard atmospheric pressure, namely 0 °C; and $T_a$ represents the average daily air temperature, generally the average of the maximum and minimum daily air temperature values [36]. When $T_a$ is less than or equal to $T_{mlt}$, the snowmelt amount is 0. When $T_a$ is greater than $T_{mlt}$, air temperature values are converted into snowmelt by the degree-day factor. Considering that the amount of snowmelt is related to the energy balance of the snow surface, in addition to the temperature, we added the daily net radiation value to the daily snowmelt model [43], and a restricted degree-day coefficient was used. The amount of ablation in each HRU in the restricted degree-day snowmelt model depends on the contribution of two parts: one is the air temperature value multiplied by the restricted degree-day factor ($\beta_r$), and the other is the ablation amount proportional to the net radiation index $R_d$ (W m⁻²):

$M_j = \beta_r \, T_a + m_Q \, R_d$    (3)

In Equation (3), $m_Q$ ((mm day⁻¹)/(W m⁻²)) is similar to $\alpha$ in Equation (2); it is a physical constant used to convert energy into water depth, and its value is usually set at 0.26 [12]. $\beta_r$ (mm day⁻¹ °C⁻¹) is the restricted degree-day factor, with a value of 1.5 [40,43]; it is not equal to $\alpha$ in Equation (2), but both values are multiplied by the air temperature value $T_a$. In Equations (2) and (3), the two snowmelt models calculate the snowmelt amount when the snow cover of each pixel is 100%. In practical conditions, pure snow pixels do not always exist. Therefore, we multiplied the snowmelt amount of a single pixel $M_j$ by the snow cover fraction of the pixel $f_{SCA,j}$ (0%–100%) to obtain the daily snowmelt amount $M'_j$ at the pixel scale:

$M'_j = M_j \cdot f_{SCA,j}$    (4)

The maximum SWE of the whole snow season was then calculated by substituting the daily snowmelt amount obtained from Equation (4) into Equation (1). Compared with the simple degree-day snowmelt model in Equation (2), net radiation is introduced in Equation (3) to better describe the energy balance of the snow. The spatial heterogeneity of snow cover and the characteristics of snow cover in the watershed are considered in Equation (4) in a more objective and detailed way; thus, snowmelt can be estimated with higher accuracy.
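The reverse accumulation of Equations (1)–(4) can be sketched numerically as follows; this is an illustrative simplification (array names, the clipping of negative melt contributions, and the 0–1 fraction convention for f_SCA are assumptions, not the paper's exact code).

```python
import numpy as np

M_Q = 0.26    # (mm/day)/(W m^-2), converts net radiation to melt depth
BETA_R = 1.5  # mm/(day °C), restricted degree-day factor
T_MLT = 0.0   # °C, melt threshold temperature

def reconstruct_max_swe(t_air, r_net, f_sca):
    """Maximum SWE per pixel (mm), obtained by summing daily melt from the date of
    maximum SWE to complete melt-out. Inputs have shape (days, rows, cols); f_sca in [0, 1]."""
    melt = BETA_R * np.maximum(t_air - T_MLT, 0.0) + M_Q * np.maximum(r_net, 0.0)  # Eq. (3), negatives clipped
    melt_pixel = melt * f_sca                                                      # Eq. (4)
    return melt_pixel.sum(axis=0)                                                  # Eq. (1) with SWE_0 = 0
```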
Snow Detection There are many methods for the snow identification of remote sensing images, including the snow index method [44,45], the MODIS Snow Covered Area and Grain Size (MODSCAG) algorithm [46], and machine-learning-based decision tree classification [33]. These methods have different adaptabilities to remote sensing images from different sources and different regions, among which the SNOWMAP algorithm based on the snow index method is the most widely used. The SNOWMAP algorithm was developed and tested using Landsat TM data [45], prior to the launch of MODIS [47]. According to the results of manual visual interpretation, the algorithm is also applicable to the Sentinel-2 satellite images in the Caiertes river basin. Its physical basis is as follows: both snow cover and cloud have a high reflectance in the visible bands, while snow cover absorbs strongly in the SWIR band, but most clouds still have a high reflectance in the SWIR band. With such differences in spectral characteristics, snow pixels can easily be separated from cloud by setting a threshold for the Normalized Difference Snow Index (NDSI). The NDSI is calculated as

$NDSI = \frac{Green - SWIR}{Green + SWIR}$    (5)

In Equation (5), Green is the reflectance of snow in the green band and SWIR is the reflectance of snow in the SWIR band. For Sentinel-2, band 3 (Green, 0.56 µm) and band 11 (SWIR1, 1.61 µm) are used, and for Landsat 8 OLI data, band 3 (Green, 0.525–0.600 µm) and band 6 (SWIR1, 1.560–1.651 µm) are used. Studies of snow cover in the Sierra Nevada in California, USA [44] and in the Indian Himalayan basin [48] have shown that NDSI ≥ 0.4 can be used as a criterion for identifying snow in Landsat 7 images. Negi et al. also found that this criterion could ignore the impact of slope and aspect changes caused by topography on the threshold [48]. In addition, Kulkarni et al. found that the threshold value of NDSI would not be affected by mountain shadow [49]. Based on these studies, the NDSI threshold of the SNOWMAP algorithm was set as 0.4; when NDSI ≥ 0.4, the pixel was defined as snow. However, due to different observation times and multiple sources of images, NDSI thresholds could not be fixed in this study. In the process of snow identification, different thresholds were set according to the remote sensing image and sensing time. The thresholds for Sentinel-2 were between 0.35 and 0.45, and those for Landsat 8 were between 0.3 and 0.4. Snow and water also have similar spectral characteristics in the visible and SWIR bands. In order to further separate snow from water, we used the strong absorption of water in the NIR band, where the absorption of snow is weaker than that of water [47]. Therefore, another discriminating criterion was added to SNOWMAP:

$NIR \geq 0.11$    (6)

In Equation (6), NIR refers to the reflectance of the near-infrared band, corresponding to band 8 of Sentinel-2 (0.842 µm) and band 5 of Landsat 8 OLI (0.845–0.885 µm). When NDSI ≥ 0.4 and NIR ≥ 0.11, the pixel is recognized as snow cover. In addition, when the vegetation coverage is relatively high, the signal of the snow cover observation is attenuated, so some pixels with NDSI lower than 0.4 can also be considered as snow cover. In this study, vegetation coverage products [42] were adopted to identify snow pixels affected by vegetation. When the vegetation coverage is more than 40% and the conditions NDSI ≥ 0.1 and NIR ≥ 0.11 are met, the pixel is also considered as snow.
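A compact sketch of this thresholding is shown below; the function signature and the optional vegetation-cover relaxation are illustrative, and the thresholds are the baseline values quoted above (they were tuned per scene in this study).

```python
import numpy as np

def snowmap(green, swir, nir, fvc=None, ndsi_thresh=0.4, nir_thresh=0.11):
    """SNOWMAP-style snow mask from band reflectances (Equations (5) and (6))."""
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)   # Eq. (5)
    snow = (ndsi >= ndsi_thresh) & (nir >= nir_thresh)       # NDSI test + Eq. (6)
    if fvc is not None:
        # relaxed NDSI threshold under dense vegetation, as described above
        snow |= (fvc > 0.4) & (ndsi >= 0.1) & (nir >= nir_thresh)
    return snow
```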
Cloud Detection and Interpolation In snow cover extraction, cloud cover and the cloud discrimination algorithm are the main factors affecting data continuity and accuracy. Before the snow cover classification based on NDSI and NDVI is carried out, an important preprocessing step is the cloud removal of images. Due to the long period of this study, the significant differences in climate and environment, and the diverse cloud forms, we found that a single cloud detection algorithm could not be used for all images. Therefore, according to their morphology and spectral characteristics, clouds were classified into cirrus clouds, dense clouds, and ice clouds. Based on the threshold values of existing studies, Sentinel-2 was taken as an example to conduct an adaptive analysis for this research area, and a new combined identification algorithm is proposed. Cirrus clouds are high-altitude clouds, typically between 4500 and 10,000 m, and can be classified as ice clouds. They are composed of fine ice crystals which are relatively sparse in the upper air, so the clouds are relatively thin and have good light transmission. This type of cloud is too thin, transparent, or translucent to be easily recognized. Hollstein et al. used a machine learning algorithm [50] to effectively identify cirrus clouds [33]. By training on a given data set, it returns an optimized decision tree of a given depth. The purpose of this method is to find an optimal classification without additional parameters. When the given depth is set to four layers, more than 91% classification accuracy can be achieved. In this study, we found that this algorithm is very suitable for cirrus removal in the Caiertes river basin. The identification indices, given in Equations (7) and (8), combine band reflectances, band ratios, and band differences: B, R, and S refer to the reflectance, ratio, and difference of the corresponding bands, respectively. For example, B3 represents the reflectance of band 3 of Sentinel-2 MSI, R(2,10) represents the ratio of band 2 to band 10, and S(11,10) represents the difference between band 11 and band 10. When Equations (7) and (8) are satisfied simultaneously, the pixel is considered to be cirrus. Dense cloud, also known as opaque cloud, is identified by the cloud mask algorithm of the Sentinel-2 L1C product [51]. Dense clouds and snow both have a high reflectance in the blue band (B2, 0.49 µm), but snow has a much lower reflectance than dense clouds in the SWIR bands (B11 and B12, 1.61 µm and 2.19 µm), so they can be separated by setting thresholds in the SWIR bands. Additionally, some ice clouds show ice crystals at the cloud top due to their high altitude, resulting in a low reflectance similar to snow in SWIR bands B11 and B12, which is difficult to distinguish. Therefore, a threshold in band B10 (1.375 µm) is needed to identify ice clouds at higher altitudes. In this step of ice cloud recognition, cirrus clouds at the same altitude will not be recognized, as they are transparent in the blue band B2 [51]. After obtaining the snow and cloud cover data for the study period, we interpolated the cloud pixels and the pixels without effective data according to the continuous characteristics of snow cover in time and space. The linear interpolation method was used for the time interpolation, which is robust and efficient. A three-dimensional Gaussian filter was used for the spatial interpolation to make the snow cover more continuous in time and space [52]. By performing interpolation on the binary snow and cloud mapping results, we can obtain the daily fractional snow cover ($f_{SCA,j}$).
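A minimal sketch of this temporal and spatial gap filling is given below; the per-pixel loop, the filter width, and the clipping to [0, 1] are illustrative choices rather than the exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_snow_series(snow, valid, sigma=(1.0, 1.0, 1.0)):
    """Gap-fill a binary snow series (days, rows, cols) where `valid` marks cloud-free
    observations: per-pixel linear interpolation in time, then 3-D Gaussian smoothing,
    which yields a fractional snow cover f_SCA in [0, 1]."""
    days = np.arange(snow.shape[0])
    filled = snow.astype(float)
    for r in range(snow.shape[1]):
        for c in range(snow.shape[2]):
            ok = valid[:, r, c]
            if ok.any() and not ok.all():
                filled[:, r, c] = np.interp(days, days[ok], filled[ok, r, c])
    f_sca = gaussian_filter(filled, sigma=sigma)
    return np.clip(f_sca, 0.0, 1.0)
```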
Figure 4 shows the interpolation results, in which the data loss caused by cloud cover is filled. The characteristics of snow cover changing with topography can also be identified from Figure 4. The interpolation method is also applicable to MOD10A1 snow cover products. Blue represents the area with complete snow cover (f_SCA = 100%), and red represents the area without snow (f_SCA = 0). White in (a) represents the area covered by cloud (i.e., the area to be interpolated), while the colors between blue and red in (b) represent areas covered by incomplete snow (f_SCA between 0 and 100%). Temporal Variation Time series of the snow cover fraction for every 300 m elevation band of the whole basin are shown in Figure 5. They show that the snow cover in the whole basin began to melt on March 2, 2018 and had nearly disappeared by late July. Snow cover was close to 100% on March 2 and almost zero in late July. The snow cover gradually decreases with time, but there are also sudden increases of snow cover, which are caused by new snowfall after the beginning of the ablation period. There are differences in the starting and disappearing times of snow cover at different elevations: the higher the altitude, the later the melting starts and the later the snow disappears. In the early stage of snowmelt (March), the snow at low altitude (less than 2200 m) begins to melt first; the higher the altitude, the more slowly the snow area shrinks. In the middle period of snowmelt (April and May), the air temperature gradually increases, and the shrinking rate of the snow cover slightly accelerates. In the later period of snowmelt (June and July), the low-altitude snow cover completely disappears, while the high-altitude snow cover (over 2800 m) begins to disappear, ending in early August. The ablation process of each elevation zone also has some similar characteristics: the area shrinkage rate is basically similar, and the ablation takes about 2 months. The reasons for this phenomenon will be explained in Section 4.2.1. Spatial Variation In order to show the snow cover variation spatially, distribution maps of the snow cover on the first day of each month in the ablation period are shown in Figure 6. The accumulated peak of snow cover is in early March, when almost the whole basin is covered with snow. However, some areas at the valley bottom were affected by human activities, rising air temperature, radiation enhancement, and other factors, resulting in snowmelt and incomplete snow cover. From March to June, the snow cover gradually decreases, and the snow shrinks from the valley bottom to the peak. The time of the beginning and completion of melting at low altitude is earlier than that at high altitude, which is consistent with the conclusion in Section 4.1.1. Compared with Figure 5, it is found that the snowmelt is basically completed in early July, after which only a small part of the snow above an altitude of 2500 m remains. In early August, the snow cover is almost 0.
Figure 6. (a-f) show the snow cover on the first day of March-August, respectively. Blue represents snow-covered areas, red represents snow-free areas, and the other colors represent areas with partial snow cover (fSCA between 0 and 100%); yellow represents areas with a snow cover fraction of about 0.5, and light blue represents areas with almost full snow cover.

Temporal Variation

We also divided the SWE into 300 m elevation bands and computed statistics of its change over time. Figure 7 shows that the mean maximum SWE accumulation at the start of the snowmelt period over the whole basin is 377.83 mm. The SWE decreases gradually to 0 over a total span of about 5 months. The mean maximum SWE accumulation of each elevation band increases with altitude, and within each band the SWE decreases with time during the ablation period. The higher the altitude, the longer it takes for the SWE to reach 0. In contrast to the retreat rate of the snow cover area in Section 4.1.1, the higher the altitude, the faster the snow ablation rate. This is because the ablation rate is governed by air temperature and net radiation at the time of ablation, whereas the retreat of the snow cover area depends not only on the ablation rate but also on the SWE. Even though the melt rate in high-altitude areas accelerates in the later stage of snowmelt, the shrinkage rate of the snow area there remains close to that at low altitude because the SWE at high altitude is much larger. This also explains why, although the maximum SWE accumulation at the beginning of ablation differs between elevation bands, the ablation process of every band takes about 2 months.
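A minimal sketch of the elevation-band aggregation behind the Figure 5 and Figure 7 style statistics is given below. The function name and the handling of empty bands are assumptions for illustration.

```python
import numpy as np

def band_statistics(value_map, dem, band_width=300.0):
    """Aggregate a per-pixel field (e.g., fSCA or SWE in mm) into 300 m
    elevation bands for one date.

    value_map : 2-D array of the quantity (np.nan = no data).
    dem       : 2-D array of elevation (m), same shape as value_map.
    Returns (band_lower_edges, band_means).
    """
    z_min = np.floor(np.nanmin(dem) / band_width) * band_width
    z_max = np.ceil(np.nanmax(dem) / band_width) * band_width
    edges = np.arange(z_min, z_max + band_width, band_width)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dem >= lo) & (dem < hi) & ~np.isnan(value_map)
        means.append(np.nan if mask.sum() == 0 else value_map[mask].mean())
    return edges[:-1], np.array(means)
```

Running this for every date of the ablation season produces the per-band time series used for the temporal analysis.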
Spatial Variation

Figure 8 shows the distribution of the maximum SWE accumulation in the study area. The SWE gradually increases from the valley to the peaks, which is consistent with the conclusion in Section 4.2.1. Figure 11a shows the scatter density plot of maximum SWE accumulation against elevation. The maximum SWE accumulation increases with elevation below 2800 m, but there is a turning point at 2800 m: above 2800 m the SWE no longer increases with elevation and instead decreases markedly. This is consistent with the results of Grünewald and Rittger et al. [12,53] and is caused by the topographic effect [12]. The distribution of the maximum SWE accumulation calculated from the Sentinel-2/Landsat snow cover at the local scale is shown in Figure 9b,e.
The application of high-resolution remote sensing data in the SWE reconstruction gives a detailed description of the local spatial distribution of SWE over mountain terrain. In contrast, the distribution of maximum SWE accumulation calculated from the MOD10A1 snow cover product, shown in Figure 9c,f, hardly reflects the distribution characteristics of snow shaped by local topography. In some local areas, as shown in Figure 9c, the snow cover area is even underestimated because of the low spatial resolution of the MODIS images, leading to inaccurate SWE estimates.

Figure 8 shows that the maximum SWE accumulation occurs at high altitude (above 2800 m), where the slope, aspect, and other topographic characteristics favor the storage of snow. In Figure 9, the areas of large SWE accumulation are produced by the local topography, which means that snow can be stored in such local basins, including blowing snow, whereas it is difficult for snow to be retained on steep slopes. For example, the high SWE accumulation area in the valley in Figure 9a-c has low-lying, flat terrain, which is conducive to the accumulation of snow, while the yellow region has a steeper slope, which is not conducive to snow retention. In Figure 9d-f, the high SWE area lies below steep slopes, which is probably caused by snow sliding down from the upper slopes. There is, however, another possibility: the red area is located on the northern slope of the mountain, and the SWE of the northern slope is larger than that of the southern slope because of the influence of the water vapor source [30] and the wind direction.

Snow Cover of Sentinel-2/Landsat and MODIS

The time series of the snow cover fraction and SWE of the whole basin derived from Sentinel-2/Landsat and from MOD10A1 are shown in Figure 10. The snow cover fraction from MOD10A1 is significantly lower than that from Sentinel-2/Landsat: over the whole snowmelt season it is underestimated by 37.64% relative to the Sentinel-2/Landsat snow cover fraction. The maximum snow cover on March 2 was underestimated by 32.28%, and the underestimation from March to May did not change significantly, remaining at about 30%.
Figure 10b also shows that the SWE reconstructed from the MOD10A1 snow cover area is lower than the SWE calculated with the snow cover obtained in this study; the SWE is underestimated by 58.72% on average over the whole snowmelt season, and from March to May the underestimation of SWE was about 31.31% on average.

Air Temperature

The maximum SWE accumulation was calculated using an optimized air temperature lapse rate and a fixed lapse rate. The relationship between elevation and the maximum SWE accumulation calculated by the two methods is shown in Figure 11a,b, respectively. For both results, the maximum SWE accumulation increases with elevation below 2800 m and then decreases with elevation above 2800 m. The difference between the two methods is that the maximum SWE accumulation obtained with the optimized lapse rate is smaller than that obtained with the fixed lapse rate for elevations above 2300 m (Figure 11c). This is because the optimized air temperature lapse rate fitted for the ablation season is lower than the fixed lapse rate for most of the period.
Therefore, the air temperature above the mean elevation calculated with the optimized lapse rate is lower than that calculated with the fixed lapse rate, while the air temperature below the mean elevation calculated with the optimized lapse rate is higher. In addition, the snowmelt period in low-elevation areas is short and insensitive to the lapse rate, so the reconstructed SWE at high elevation is more strongly affected by the lapse rate used in the air temperature downscaling.

Figure 11. (a,b) The relationship between maximum SWE accumulation and elevation in mountainous areas: (a) SWE calculated using the optimized air temperature lapse rate; (b) SWE calculated using a fixed lapse rate of −6.5 °C/km. (c) Scatter plot of the difference between the SWE calculated with the optimized lapse rate and with the fixed lapse rate of −6.5 °C/km.
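The lapse-rate downscaling discussed above amounts to a simple linear adjustment. The sketch below assumes a single reference temperature and reference elevation per time step, which is a simplification of the actual downscaling scheme.

```python
def downscale_temperature(t_ref, z_ref, dem, lapse_rate=-6.5):
    """Distribute a reference air temperature over the DEM with a linear lapse rate.

    t_ref      : reference air temperature (°C) at elevation z_ref (m),
                 e.g., from a reanalysis grid cell.
    dem        : 2-D array of elevation (m).
    lapse_rate : °C per km; either the fixed value of -6.5 or a value fitted
                 per time step for the ablation season.
    Returns a 2-D air temperature field (°C) on the DEM grid.
    """
    return t_ref + lapse_rate * (dem - z_ref) / 1000.0
```

With a more negative (steeper) fitted lapse rate, temperatures above the reference elevation come out lower, which is why the optimized-lapse-rate SWE is smaller at high elevations.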
Solar Radiation

Figure 12a,b shows the relationship between aspect and the maximum SWE accumulation calculated with the terrain-corrected Himawari-8 solar radiation and with the uncorrected GLDAS short-wave radiation. The maximum SWE accumulation from the two radiation data sets shows the same trend with respect to aspect: the SWE on the north slope (0-45° and 315-356°) is larger than that on the south slope (135-225°), and the SWE on the east and west slopes is similar. However, the SWE calculated with terrain correction is smaller than that without terrain correction in all directions, to varying degrees. Figure 12c shows that the difference between the two results on the north slope is greater than that on the south slope. We also calculated the RMSE between the SWE estimates derived from the two radiation data sets on the north and the south slope to quantify their difference: the RMSE is 182.67 on the south slope and 269.01 on the north slope, so the two estimates agree closely on the south slope, while the deviation on the north slope is large. Therefore, the difference in SWE between the north and south slopes is smaller after terrain correction than without correction. The terrain-corrected solar radiation is weaker on the north slope than on the south slope because of mountain shadows. Hence, not only natural factors such as topography but also the way the solar radiation is calculated affects the spatial distribution of the maximum SWE accumulation with aspect.
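A minimal sketch of the aspect-sector statistics and per-slope RMSE comparison described above is given below; the sector boundaries and function names are illustrative assumptions.

```python
import numpy as np

def aspect_sector_mean(swe, aspect, lo, hi):
    """Mean maximum SWE accumulation within one aspect sector (degrees from north)."""
    mask = (aspect >= lo) & (aspect < hi) & ~np.isnan(swe)
    return swe[mask].mean() if mask.any() else np.nan

def slope_rmse(swe_corrected, swe_uncorrected, aspect, lo, hi):
    """RMSE between the two SWE estimates within one aspect sector."""
    mask = ((aspect >= lo) & (aspect < hi)
            & ~np.isnan(swe_corrected) & ~np.isnan(swe_uncorrected))
    diff = swe_corrected[mask] - swe_uncorrected[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Example usage: compare north (315-360 degrees) and south (135-225 degrees) slopes.
# rmse_north = slope_rmse(swe_cor, swe_uncor, aspect, 315, 360)
# rmse_south = slope_rmse(swe_cor, swe_uncor, aspect, 135, 225)
```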
Figure 12. (a,b) The relationship between the maximum SWE accumulation and aspect in mountainous areas at the beginning of snowmelt: (a) SWE calculated with terrain correction; (b) SWE without terrain correction. (c) Scatter plot of the SWE difference between the terrain-corrected and uncorrected SWE.

Error Analysis

Wind plays a key role in sublimation [54], but wind speeds in the Altay mountains are low, so sublimation is negligible compared with the snowmelt driven by air temperature and radiation. The errors of the reconstruction mainly come from errors in the input parameters of the restricted degree-day model and from the model itself. Snow cover is one of the most important parameters affecting the accuracy of the SWE reconstruction. Section 5.1 showed that the MOD10A1 snow product is underestimated, owing to serious omission errors caused by the 500 m resolution of MODIS in areas with mountain shadows and at the edges of the snow cover [55]. Conversely, because cloud may not always have been separated from snow during snow recognition, snow cover may be somewhat overestimated in the Landsat 8 and Sentinel-2 images. By combining Sentinel-2 and Landsat images, a revisit time of about 3-4 days can be achieved, which makes the retrieval of snow cover variation at fine spatial and temporal resolution possible. To obtain the snow cover under cloud, linear interpolation in time and Gaussian filtering in space are performed. However, when a pixel is covered by cloud for a long period during which the snow cover changes significantly, linear interpolation is not accurate; fortunately, this situation is rare. When more satellite images with tens-of-meters resolution are included, such as those from the GF-1 and GF-6 satellites, this problem can be effectively solved. The degree-day factor in the restricted degree-day model depends on the air temperature data, which can only be derived from reanalysis data. The seasonal variation of the degree-day factor is large, which requires a large amount of field data for correction. Brubaker et al. found that, when long- and short-wave net radiation is used in addition to air temperature, the model depends less on the restricted degree-day factor and is more general, so it can be extended to areas lacking measurements [43]. Marks and Dozier confirmed that, in the Sierra Nevada, California, USA, solar radiation contributes more to snowmelt than air temperature, although air temperature also plays a role in the region studied [56].
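To make the role of the input parameters concrete, the following is a minimal, hedged sketch of a restricted degree-day SWE reconstruction for a single pixel: daily potential melt is the sum of a degree-day term and a radiation term, and the maximum SWE is recovered by summing the melt weighted by the fractional snow cover over the ablation season. The coefficient values are generic illustrations (0.26 mm per W m-2 per day follows roughly from the latent heat of fusion), not the calibrated values of this study.

```python
import numpy as np

def reconstruct_max_swe(t_air, net_radiation, fsca, ddf=2.0, m_q=0.26):
    """Reconstruct the maximum SWE (mm) of one pixel over the ablation season.

    t_air         : daily mean air temperature (°C), 1-D array over the season.
    net_radiation : daily mean net radiation (W m-2), same length.
    fsca          : daily fractional snow cover of the pixel in [0, 1].
    ddf           : restricted degree-day factor (mm °C-1 day-1), assumed value.
    m_q           : conversion from W m-2 to mm day-1 of melt, assumed value.

    The maximum SWE at the start of ablation is the sum of daily melt while
    the pixel is (partly) snow covered.
    """
    # "Restricted" degree-day: only positive temperature and radiation drive melt.
    melt = ddf * np.maximum(t_air, 0.0) + m_q * np.maximum(net_radiation, 0.0)
    return float(np.sum(melt * fsca))
```

Errors in any of the three inputs (snow cover, temperature after lapse-rate downscaling, or net radiation after terrain correction) propagate directly into the reconstructed maximum SWE, which is the point made in this section.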
In Section 5.2, the SWE difference between the estimates obtained before and after fitting the linear air temperature lapse rate reaches at most 38.26 mm, at the elevation of the SWE maximum (2800 m), which corresponds to about 7.01% of the maximum estimated SWE at that elevation. According to Mahat et al. [57] and Brubaker et al. [43], the conversion factor of the radiation parameter is the coefficient that converts energy (W/m2) into snowmelt (mm/day) when the snow temperature equals 0 °C, and it is fixed in theory. In Section 5.3, the SWE calculated with GLDAS is larger than that calculated with the terrain-corrected SWDR, so the GLDAS net short-wave radiation is larger than the net short-wave radiation calculated from SWDR and the MOD10A1 albedo product. This may be due to a high bias in GLDAS, or to a low bias in SWDR together with a high bias in albedo. The GLDAS long-wave radiation data, which are not terrain corrected, also affect the estimate of the maximum SWE accumulation to some extent. In mountainous areas with high precipitation uncertainty, accumulating the daily snowmelt can only capture new snow in snow-free areas, not in areas already covered by snow; new snow there is treated as part of the original snowpack, so the maximum SWE accumulation is overestimated. Nevertheless, compared with other estimation methods, the reconstruction approach is more consistent with the actual situation [58]. The tree cover fraction is very low in the study area, so the attenuation of radiation by trees is not considered in this study; in further work, canopy attenuation should be accounted for by calculating the radiation under the tree canopy.

Conclusions and Outlook

In this study, the seasonal maximum SWE accumulation in a mountainous area was reconstructed using multi-source remote sensing images with resolutions of tens of meters. Combining Landsat 8 and Sentinel-2 images improved the accuracy of the snow cover parameter of the reconstruction model, which greatly improved the retrieval accuracy and spatial resolution of the SWE in mountainous areas; complex pixel-unmixing efforts can thus be avoided when estimating the fractional snow cover. Snowmelt in the Caiertes river basin lasted about five months, with the snow disappearing in late July. The onset and disappearance dates of snowmelt differ with altitude, but ablation at every elevation lasts about 2 months. Spatially, snowmelt starts and finishes progressively later from the valley to the mountain top. The area shrinkage rate of each elevation zone is broadly similar; however, the higher the altitude, the faster the snow melts. The average maximum SWE accumulation of the whole basin at the beginning of snowmelt is 377.83 mm. The maximum SWE accumulation increases with elevation, but above 2800 m it decreases with elevation. Compared with the MOD10A1 snow cover product, the reconstruction based on high-resolution snow cover describes the distribution of the maximum SWE accumulation in mountainous areas with higher accuracy. In addition to elevation, the slope, aspect, and other topographic features all affect the accumulation and storage of snow.
The errors of the reconstruction mainly derive from the air temperature and radiation inputs, as well as from the estimated snow cover area, so the accuracy of each model parameter is critical to the accuracy of the results. In future research, air temperature and radiation data of higher accuracy are expected to improve the SWE estimates. Other freely available remote sensing data with a spatial resolution of tens of meters (such as the Chinese Gaofen-1 and Gaofen-6 satellite data) could also be added to the snow cover mapping scheme, which would improve the temporal resolution of the snow maps. Furthermore, to validate the absolute values of the reconstructed maximum SWE, snow depth and SWE ground stations on the mountains of this basin will be needed in the future.
13,914.6
2020-02-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Argumentation Mining on Essays at Multi Scales

Argumentation mining on essays is a new and challenging task in natural language processing, which aims to identify the types and locations of argumentation components. Recent research mainly models the task as a sequence tagging problem and deals with all argumentation components at the word level. However, this task is not scale-independent. Some types of argumentation components, which serve as the core opinions of essays or paragraphs, are at the essay level or paragraph level. Sequence tagging methods reason over local context words and fail to mine these components effectively. To this end, we propose a multi-scale argumentation mining model, in which different types of argumentation components are mined at their corresponding levels. In addition, an effective coarse-to-fine argumentation fusion mechanism is proposed to further improve performance. We conduct a series of experiments on the Persuasive Essay dataset (PE 2.0). Experimental results indicate that our model outperforms existing models on mining all types of argumentation components.

Introduction

Argumentation mining (AM) is a challenging task in natural language processing. Recent research mainly involves independent sentences (Bar-Haim et al., 2017; Niven and Kao, 2019; Reimers et al., 2019) and also essays (Levy et al., 2014; Habernal and Gurevych, 2017; Chernodub et al., 2019; Petasis, 2019). In this paper, we focus on argumentation mining on essays, which aims to identify the types and locations of argumentation components in essay text. Typically, there are three argumentation types, namely major claims (MC), claims (C), and premises (P). Previous research (Levy et al., 2014) takes sentences as the smallest argumentative unit and handles this task in a rough way: the essay is first split into sentences, a sentence classification model selects and keeps sentences likely to contain argumentation components, and the exact boundaries of the argumentation components are then identified within those sentences. These pipeline approaches fail to conduct effective argumentation mining, since they ignore the argumentation structure of the essay and handle the task only at the sentence level. Recent research (Chernodub et al., 2019) focuses on end-to-end neural models. It casts the task as a sequence tagging problem and handles it at the word level instead of the sentence level. Typically, a neural network is employed as the encoder for text representation, and a Conditional Random Field (CRF) is employed as the decoder to make the final prediction. This word-level sequence tagging method can simultaneously identify the types and locations of all argumentation components. However, as shown in Figure 1, different types of argumentation components are at different levels:

• Major claims serve the whole essay as its core opinions. They can be proposed directly at the beginning of the essay or summarized at the end. They are at the essay level.

Figure 1 example essay (excerpt). Title: International tourism is now more common than ever before. The last decade has seen an increasing number of tourists traveling to visit natural wonder sights, ancient heritages and different cultures around the world. While some people might think that this international tourism has negative effects on the destination countries, I would contend that it has contributed to the economic development as well as preserved the culture and environment of the tourist destinations [MC].
Firstly, international tourism promotes many aspects of the destination country's economy in order to serve various demands of tourists [P]. Take Cambodia for example, a large number of visitors coming to visit the Angkowat ancient temple need services like restaurants, hotels, souvenir shops and other stores [P]. These demands trigger related business in the surrounding settings which in turn create many jobs for local people, improve infrastructure and living standard [P]. Therefore tourism has clearly improved lives in the tourist country [C]. Secondly ... To conclude, as far as I am concerned, international tourism has both triggered economic development and maintained cultural and environment values of the tourist countries [MC]. In addition, the authorities should adequately support these sustainable developments.

• Claims serve specific paragraphs as their core statements. They can appear anywhere in a paragraph: proposed at the beginning, summarized at the end, or given in the middle. They are at the paragraph level.

• Premises serve as evidence of all kinds that gives reasons for major claims and claims. They can be logical statements, survey results, typical examples, public opinions, expert suggestions, etc. They are at the word level.

Moreover, the sequence tagging method uses a classical CRF model to capture sophisticated dependencies in a word-by-word manner. Such a method is appropriate for integrating local word-level information but unsuitable for inference over long-distance text at the essay or paragraph level. We therefore argue that different types of argumentation components should be handled at different levels. In this paper, we propose a multi-scale argumentation mining model. To mine major claims, we design an essay-level argumentation extraction submodule based on a multi-span extraction strategy. To mine claims, we design a paragraph-level argumentation extraction submodule based on a randomized extraction strategy. For premises, we follow the word-level sequence tagging method. Finally, a coarse-to-fine argumentation fusion mechanism is proposed to further improve the performance. We carry out a series of experiments on the Persuasive Essays dataset (PE 2.0). The experimental results indicate that our model significantly improves over state-of-the-art models: it achieves an 8.92% absolute improvement in overall performance, a 14.89% absolute improvement on mining major claims, and an 11.05% absolute improvement on mining claims. Moreover, we compare the performance of (i) multi-span extraction and randomized extraction and (ii) argumentation extraction and argumentation tagging, which allows us to validate the effectiveness of processing different types of argumentation components at their corresponding levels.

The organization of this paper is as follows. We first give a detailed explanation of our multi-scale argumentation mining model in Section 2. In Section 3, we introduce our experiments. The detailed experimental results are presented and analyzed in Section 4. In Section 5, we give a brief overview of related work on argumentation mining on essays. Finally, we draw our conclusion in Section 6.

Multi-scale Argumentation Mining Model

An overview of our multi-scale argumentation mining model is shown in Figure 2.
For major claims, in Section 2.1 we design an essay-level argumentation extraction submodule based on a multi-span extraction strategy, where the whole essay is taken as the input of the BERT encoder and a pointer network is used to score each word and thus all candidate spans. With these scores and a set of reasonable rules, we rank and filter the candidate spans to select the result spans. For claims, in Section 2.2 we design a paragraph-level argumentation extraction submodule based on a randomized extraction strategy, where each paragraph is taken separately as the input of the BERT encoder to mine result spans, and the result spans of each paragraph are gathered as the result spans of the corresponding essay. For premises, in Section 2.3 we design a word-level argumentation tagging submodule, where the whole essay is taken as the input of the BERT encoder and a CRF is used as the decoder to obtain the tag sequence with the highest sequence score. Finally, the coarse-to-fine argumentation fusion mechanism in Section 2.4 is used to obtain the final results, since the result spans of different argumentation types may overlap.

Essay-level Argumentation Extraction for Major Claim

Major claims are at the essay level. For each essay, let E = {w_1, ..., w_{l_es}} denote the essay; to mine major claims, the whole essay is used as the input sequence, where l_es is the length of the essay. The sequence is encoded with the BERT encoder (Devlin et al., 2019). Through the multi-head self-attention mechanism, BERT can perceive and more heavily weight the salient words in the essay, allowing the model to capture essay context through multi-layer transformers. Then, inspired by pointer networks (Vinyals et al., 2015), for each word w_i in the essay, its embedding H_i is passed through a linear layer to score the word, where score_i^s is the start score for the word to be the start of a major claim span and score_i^e is the end score. The cross-entropy losses of the start and end positions are calculated separately, and their sum is used as the final loss, where y_i^s is the start label of w_i (1 for a gold start word, 0 otherwise) and y_i^e is the end label. Moreover, as shown in Figure 1, an essay may contain more than one major claim span. In fact, each essay has at least one major claim span, and usually two: one proposed directly at the beginning and one summarized at the end. Hence, we adopt a multi-span extraction strategy during training, where all major claims in an essay are admitted; this means that the start label y^s and end label y^e may be multi-one-hot labels. At prediction time, all candidate spans are ranked according to their probability; the probability of a span starting at w_i and ending at w_j is defined in Equation 6. We then propose a set of reasonable, common-sense rules to filter clearly wrong and overlapping candidate spans; the rules are explained in detail in Appendix 1. Finally, we keep the top K spans as the result spans for each essay.
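A minimal sketch of the span ranking and top-K selection described above is given below, assuming the per-word start/end logits are already available; the full filtering rules of Appendix 1 are reduced here to a simple overlap check.

```python
import numpy as np

def extract_spans(start_scores, end_scores, top_k=2, max_len=60):
    """Rank candidate spans by p_start(i) * p_end(j) and keep the top K
    non-overlapping ones (a simplified stand-in for the Appendix 1 rules).

    start_scores, end_scores : 1-D arrays of per-word logits from the
                               pointer-style linear layer over BERT embeddings.
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    p_start, p_end = softmax(start_scores), softmax(end_scores)
    candidates = [(i, j, p_start[i] * p_end[j])
                  for i in range(len(p_start))
                  for j in range(i, min(i + max_len, len(p_end)))]
    candidates.sort(key=lambda c: c[2], reverse=True)

    selected = []
    for i, j, p in candidates:
        if all(j < s or i > e for s, e, _ in selected):  # keep only non-overlapping spans
            selected.append((i, j, p))
        if len(selected) == top_k:
            break
    return selected
```

For the paragraph-level submodule of the next subsection, the same routine can be used with top_k = 1 and a probability threshold for paragraphs without any claim span; that thresholding detail is an assumption, not stated in the original text.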
Paragraph-Level Argumentation Extraction for Claim

Claims are at the paragraph level. For each essay, we first mine claims from each paragraph separately and then gather the results for the subsequent argumentation fusion on the essay. Specifically, for each paragraph, let P = {w_1, ..., w_{l_pa}} denote the paragraph, which is used as the input sequence, where l_pa is the length of the paragraph. The sequence is also encoded with the BERT encoder to obtain contextualized embeddings. Then, as in the submodule for major claims in Section 2.1, the start and end scores of a word are computed from its embedding, and the sum of the cross-entropy losses of the start and end positions is adopted as the final loss. As shown in Figure 1, a paragraph may contain one claim span or none, and there are very few cases in which a paragraph contains more than one claim span. Taking this into account, we adopt a randomized extraction strategy: if a paragraph contains more than one claim span, then in each training epoch only one span, chosen at random, is admitted and the other spans are ignored. Thus the start label y^s and end label y^e are one-hot labels for paragraphs with at least one claim span and all-zero labels for paragraphs that contain no claim span. Similarly, during prediction, all candidate spans are ranked according to the span probability, and the filtering rules in Appendix 1 are applied to remove clearly wrong and overlapping candidate spans. Finally, we keep the top k spans as the result spans for each paragraph and gather them as the result spans of the corresponding essay.

Word-Level Argumentation Tagging for Premise

Premises are at the word level. We adopt word-level argumentation tagging with a BERT-CRF sequence tagging model to mine premises. For each essay, let E = {w_1, ..., w_{l_es}} denote the essay, which is used as the input sequence and encoded with the BERT encoder to obtain contextualized embeddings. The embedding of each word is then passed through a linear layer to score the word for the different tags, where k is the number of tag types and score_i^j (j ∈ {1, 2, ..., k}) is the score of word i being marked as tag j. In our research, we adopt the same tag configuration as Chernodub et al. (2019), which combines BIO labels and argumentation types. We also adopt a Conditional Random Field (CRF) model (Lample et al., 2016) as the decoder. Specifically, for a predicted tag sequence t, where t_i is the predicted tag of word w_i, the corresponding sequence score is computed with a trained one-step tag transition matrix A. The final loss is defined over y^t, the tag sequence label (1 for the ground-truth tag sequence and 0 for others), and T, the set of all possible tag sequences. During prediction, the Viterbi algorithm is used for decoding to obtain the tag sequence with the highest sequence score, which is taken as the submodule prediction.
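A minimal sketch of the Viterbi decoding used at prediction time is shown below, assuming per-word emission scores and a one-step transition matrix A; this is a generic implementation, not the exact TensorFlow code of the paper.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence.

    emissions   : (seq_len, num_tags) per-word tag scores from the linear layer.
    transitions : (num_tags, num_tags) one-step tag transition matrix A,
                  transitions[a, b] = score of moving from tag a to tag b.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                 # best score ending in each tag
    backpointers = np.zeros((seq_len, num_tags), dtype=int)

    for t in range(1, seq_len):
        # score[a] + transitions[a, b] + emissions[t, b] for all (a, b) pairs
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers[t] = total.argmax(axis=0)
        score = total.max(axis=0)

    best_last = int(score.argmax())
    path = [best_last]
    for t in range(seq_len - 1, 0, -1):
        path.append(int(backpointers[t, path[-1]]))
    return path[::-1]
```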
Coarse-to-fine Argumentation Fusion

As mentioned above, we obtain the result spans of the different argumentation types at their corresponding levels. However, the result spans of different argumentation types may overlap, so we propose a coarse-to-fine method for fusing them. Specifically, let priority_x denote the priority of argumentation type x, where x ∈ {MC, C, P}. Following the coarse-to-fine principle, we set the highest priority for major claims, a lower priority for claims, and the lowest for premises: priority_MC > priority_C > priority_P. For each essay, we keep three sets containing the result spans of major claims, claims, and premises, respectively. If a result span from one set overlaps a result span from another set according to Algorithm 1 in Appendix 1, we keep the span from the set with higher priority and remove the other span from its set. In this way, the sets no longer share any overlapping spans and the fusion procedure is complete.

Experiments

In this section, we first introduce the dataset and describe our experimental setup, then introduce the evaluation metrics, and finally list the baselines adopted for comparison.

Dataset

The PE 2.0 dataset (2017), which is based on the PE 1.0 dataset (2014), is one of the most classical and widely used datasets for argumentation mining on essays. PE 2.0 annotates three kinds of argumentation components, namely major claims (MC), claims (C), and premises (P), and has been used by many previous studies (Persing and Ng, 2016; Chernodub et al., 2019).

Experiment Setup

We implement our model with TensorFlow 1.14.0 and conduct our experiments on a computation node with an NVIDIA RTX 2080 GPU. In our experiments, the pre-trained uncased BERT-base model is adopted as the encoder. We use the BERTAdam optimizer with an initial learning rate of 5e-6 and a batch size of 4 to avoid out-of-memory problems, since BERT is extremely memory-intensive. We also tune the dropout probability over {0.1, 0.2, 0.3}. In each case, we train for 20 epochs and choose the model parameters with the best performance on the development set.

Evaluation Metrics

To accurately evaluate the performance of our model on mining all types of argumentation components, we employ the following span-based evaluation metrics. For a given argumentation type, a predicted span of an essay is regarded as correct only if it exactly matches a ground-truth span of the essay. We calculate the mean precision P, mean recall R, and mean F1 score F over the essays of the test set, and employ the macro F score defined in Equation 20 as the overall evaluation metric, where n is the number of essays on the test set. In addition, as in previous research (Chernodub et al., 2019), we also report the micro F score of Persing and Ng (2016); the detailed definition of this metric is available in Appendix 2.
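A minimal sketch of the exact-match span evaluation is given below, under the assumption that the macro F of Equation 20 is the mean of the per-essay F1 scores; the micro F of Persing and Ng (2016) is not reproduced here.

```python
def span_prf(pred_spans, gold_spans):
    """Exact-match precision, recall, and F1 for one essay and one type.

    pred_spans, gold_spans : sets of (start, end) word offsets.
    """
    tp = len(pred_spans & gold_spans)
    p = tp / len(pred_spans) if pred_spans else 0.0
    r = tp / len(gold_spans) if gold_spans else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f

def macro_f(all_pred, all_gold):
    """Macro F over essays: mean of the per-essay F1 scores."""
    scores = [span_prf(p, g)[2] for p, g in zip(all_pred, all_gold)]
    return sum(scores) / len(scores) if scores else 0.0
```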
Submodule Performance

Experimental results on mining the different argumentation types before fusion are summarized in Table 3. Our essay-level argumentation extraction submodule for major claims shows the best performance, with the highest F1 score as well as the highest precision and recall on mining major claims. As pointed out above, major claims are at the essay level, so BERT-CRF with the essay as input performs best among the sequence tagging models; however, it still reasons word by word through the CRF. Compared with the CRF, the pointer network in our submodule can capture long-distance context information over essays, so the submodule significantly outperforms the word-level sequence tagging models. Our paragraph-level argumentation extraction submodule for claims obtains the best performance on mining claims, with the highest F1 score and near the best precision and recall. Claims are at the paragraph level, and BERT-CRF with the paragraph as input performs best among the sequence tagging baselines. Compared with it, our submodule uses a pointer network to reason over paragraphs and thus shows clear advantages on mining claims. However, its F1 score of 53.54%, though the highest among all models, is relatively low compared with the other argumentation types. This may be because the submodule ignores information from the other paragraphs of the same essay; the ablation studies in Section 4.3 show that this is a challenging trade-off. Moreover, our word-level argumentation tagging submodule for premises achieves the best performance, with the highest F1 score and the highest precision on mining premises, indicating that the pre-trained language model BERT also transfers well to this task.

We also verify the effectiveness of our coarse-to-fine argumentation fusion mechanism in Table 4. For major claims, the performance remains the same after fusion, since we set the highest priority for major claims and do not remove any such spans. For claims, the performance clearly improves, with a higher F1 score resulting from a significant increase in precision and a relatively slight decrease in recall. For premises, the performance is also slightly improved. The overall performance also improves after fusion, with absolute increases of 1.18% and 1.20% in the micro F and macro F scores, respectively. All these improvements indicate that our coarse-to-fine argumentation fusion mechanism is effective.

Multi-span Extraction or Randomized Extraction

We mine major claims and claims at different levels with different extraction strategies. The results are summarized in Table 5. For major claims, under the same extraction strategy, extraction on essays significantly outperforms extraction on paragraphs. The situation is exactly the opposite for claims: under the same extraction strategy, extraction on paragraphs performs better. This shows that different types of argumentation components should be handled at their corresponding levels. Moreover, regardless of the argumentation type, randomized extraction outperforms multi-span extraction on paragraphs, whereas multi-span extraction is better on essays. This suggests that the multi-span strategy is appropriate for essay-level extraction and the randomized strategy is appropriate for paragraph-level extraction. Indeed, an essay usually contains more than one major claim span, for which multi-span extraction is more appropriate, whereas in most cases a paragraph has at most one claim span, or none, for which randomized extraction is more appropriate. The results therefore confirm the effectiveness of the strategies chosen for the different types of argumentation components.

Argumentation Extraction or Argumentation Tagging

We also try to mine premises with the argumentation extraction method. The results are compared in Table 6. Our word-level argumentation tagging submodule for premises obtains the best performance, with the highest F1 score as well as the highest precision and recall. This indicates that premises are at the word level, and argumentation tagging is more appropriate than argumentation extraction for mining them.

Related Work

Earlier work modeled AM on essays as a sentence-level feature-based classification task, where each sentence is classified using a set of linguistic features. Subsequent work first proposed a sequence tagging model to distinguish argumentation components from non-argumentation components, and employed a joint ILP (Integer Linear Programming) model to identify the types of argumentation components.
However, they reported the performance of the different subtasks without an overall performance. Potash et al. (2017) utilized a pointer network to identify the types of argumentation components under the assumption that all argumentation components have already been identified, i.e., that the exact boundaries of all argumentation components are already available. Later work further proposed a new end-to-end sequence tagging model that first employs compound labels of BIO and argumentation types and simultaneously identifies the types and exact locations of the different argumentation components. Chernodub et al. (2019) built an application interface called TARGER, a BiLSTM-CNN-CRF sequence tagging model, for convenient argumentation mining on essays. Besides, recent research (Petasis, 2019; Spliethover et al., 2019) also aims to distinguish argumentation components from non-argumentation components via text segmentation based on sequence tagging models. Other work (Peldszus et al., 2016; Skeppstedt et al., 2018) focuses on the arg-microtext corpus, which contains 112 independent short texts, each of which can be considered one paragraph and contains about 5 argumentation components on average.

Conclusion

We propose a multi-scale argumentation mining model for argumentation mining on essays. Our model mines different types of argumentation components at their corresponding levels, and a coarse-to-fine argumentation fusion mechanism is adopted to further improve the results. The experimental results on the PE 2.0 dataset indicate that our model achieves state-of-the-art performance, with significantly improved performance on mining major claims and claims. The results reveal the importance of mining different argumentation types at different levels. In the future, we will try to mine the different argumentation types with multi-task learning.

Error analysis results are displayed in Figure A.1. Different argumentation types show diverse error modes. For all types, None-out is the dominant error, which may be because argumentation components are few compared with non-argumentation ones. For major claims, None-out, Type-in, and None-in errors are serious; it may be somewhat difficult for the model to distinguish major claims from non-argumentation components. Claims come with critical None-out, Type-in, and Type-out errors, which may indicate that the model tends to mistake claims for non-argumentation components and to confuse claims with other argumentation types. As for premises, None-out, Boundary, and Type-out errors dominate: the model struggles to identify the exact boundaries of premises and to distinguish premises from non-argumentation components, and it also tends to mistake premises for other argumentation types.

A.4 Word-based Sequence Tagging Results

Word-based sequence tagging results of the different models are compared in Table A.1. Among all these models, BERT-CRF with essays as input shows the best word-based performance on all tag types. However, even for this model, the F1 scores of major claims and claims are still low: the F1 scores of B-MC and I-MC are both below 70%, and the F1 scores of B-C and I-C are both below 60%. Moreover, for major claims, the minimum of the F1 scores of B-MC and I-MC can be regarded as an upper bound of the corresponding span-based F1 score; the situation is similar for claims.
That is to say, for these sequence tagging models, the span-based F1 scores of major claims and claims will be less than 64.58% and 58.51%, respectively. Therefore, sequence tagging models show very limited performance on mining major claims and claims.

A.5 Machine Reading Comprehension Framework

Inspired by previous work, we try to handle the task under the Machine Reading Comprehension (MRC) framework to further improve the performance on mining major claims and claims. As shown in Figure 1 of our paper, the title of an essay is a condensed summary of the essay, which explicitly points out the topic and may even directly propose the core opinion. Hence, we adopt the essay title as query and guide information, and employ new MRC inputs for the submodules in Section 2.1 and Section 2.2 to mine major claims and claims. More specifically, to mine major claims, for each essay, let T = {w_1, w_2, ..., w_{l_t}} denote the title and E = {w_1, w_2, ..., w_{l_es}} denote the essay; we concatenate the title and the essay text as the MRC input and encode the concatenation with the BERT encoder. Similarly, to mine claims, for each paragraph, let T = {w_1, w_2, ..., w_{l_t}} denote the essay title and P = {w_1, w_2, ..., w_{l_pa}} denote the paragraph; these two are also concatenated as the MRC input and encoded with the BERT encoder. The subsequent argumentation extraction remains the same. The results are compared in Table A.2. The MRC framework with the essay title as query leads to worse performance. Essay titles are diverse: they can be a statement, e.g., "International tourism is now more common than ever before", a question, e.g., "Can technology alone solve the world's environmental problems?", or a phrase, e.g., "Living and studying overseas". It may be quite difficult for the model to understand the role of the essay title as a query, so the title acts as a disturbing factor rather than as guiding information for argumentation mining. Hence, the MRC framework with the essay title as query fails to improve performance.
5,818.4
2020-12-01T00:00:00.000
[ "Computer Science" ]
A Modified Floor Field Model Combined with Risk Field for Pedestrian Simulation

Microscopic evacuation models are of great value in both scientific research and practical applications. The floor field (FF) model is one of the most widely used models in previous research. However, the repulsive effect of a hazard and the interaction between evacuees have not been considered simultaneously. This paper proposes a modified floor field model combined with a risk field and an extended dynamic field to depict these features. The whole evacuation process is validated through a series of numerical simulations implemented in C++. In addition, two different renewal mechanisms, namely synchronous and asynchronous renewal, are compared to validate the model parameters. The results show that the proposed model is able to partly reveal typical pedestrian behaviors and the impacts of a hazard on the evacuation process.

Introduction

In recent years, the number of accidents caused by crowding or emergencies has increased year by year. These accidents not only result in huge economic losses to society but also cause serious damage to human life and property. However, it is well known that evacuation exercises under emergency conditions may cause unnecessary casualties and are ethically problematic in real life. To address this problem, computer modeling and simulation have become substitutes with the development of computer science and technology.

Evacuation models can be split into two categories: macroscopic models and microscopic models [1]. Among macroscopic models, the representative one is the fluid-dynamic model [2,3], in which pedestrians are described with fluid attributes; such models are usually applied to large crowds without considering individual behaviors. Microscopic models can be further divided into two groups: continuous models, represented by the social force (SF) model [4-9], and discrete models, represented by the cellular automata (CA) model [10-15]. The floor field (FF) model is one of the most widely used CA models in emergency evacuation research and was first proposed by Burstedde et al. [10]. The proposed model consists of two fields: a static field and a dynamic field. In that work, fermions are introduced to describe pedestrian movement and bosons represent pheromone particles. Since then, many extended FF models have been proposed. Liao et al. [15] proposed an FF model with a main static floor field and an extra sub-static floor field to study how the spatial distances from evacuees to the exits and the occupant densities around the exits affect the exit-selection process. Zhao and Li [16] introduced game theory into the FF model to study inertia effects on strategy updating during emergency evacuation from a room with multiple exits. Hu et al. [17] proposed a novel three-dimensional cellular automata model with a ladder factor. Wei et al. [18] put forward the idea of a "virtual reference point" and proposed a new method of building the static floor field to solve the problem of insufficient utilization of the exit region, while Xu et al. [19] focused on two cognitive coefficients, exit width and congestion degree around the exits, to simulate pedestrian evacuation in a room with multiple exits.

The above-mentioned models address evacuation under normal conditions. Scholars have also studied emergency evacuation considering disaster factors.
[20] proposed an agent-based model to study a subway station fire and simulate the emergency evacuation process. Smoke and fog effects have also been studied by many researchers in various kinds of models [21][22][23]. Lei et al. [24] studied fire evacuation from the factory of the Jilin poultry company using the floor field model; in their paper, the fire location is regarded as a static obstacle and the influence of the shut-off of exits is illustrated. In one of our previous studies [25], we proposed a dynamic field model to preliminarily investigate the effect of guiding information during evacuation. However, that FF model was designed for normal situations and is insufficient to describe the effects of a disaster on evacuees' exit choice behavior and the impact of the disaster location on the total evacuation efficiency. Therefore, in this paper, we propose a risk field model to describe the disaster, and the above-mentioned problems are studied through numerical simulation. In addition, the Moore neighborhood is introduced to discretize the evacuation space (see Figure 1) and two different renewal mechanisms are also taken into account. The remainder of this paper is organized as follows. First, in Section 2, we formulate our risk field model. Then, in Section 3, several simulations of an enclosed scenario based on the proposed methods are presented and the results are analyzed. Finally, in Section 4, we summarize the results and point out future research. Static Floor Field Principles. According to the geometry of the room and the door location, each cell is assigned a constant value representing its distance to the door. The closer the cell is to the exit, the smaller the value it has. The static floor field of cell (i, j) is denoted as S(i, j), shown in Figure 2, and can be calculated as Varas et al. [26] proposed: (1) The room is divided into a rectangular grid. The exit door is assigned a value of "0." (2) Then all cells adjacent to the previous ones (a "second layer" of cells) are assigned a value according to the following rules: (2.1) If a cell has value "n," the adjacent cells in the vertical or horizontal direction are assigned "n + 1" and adjacent cells in diagonal directions are assigned a value of "n + 1.5." This is a simple attempt to represent the fact that the distance between two diagonally adjacent cells is larger than that in the horizontal or vertical direction. (2.2) If there are conflicts in the assignment of a value to a cell, because it is adjacent to cells with different floor field values, then the minimum possible value is assigned to the cell in conflict. (3) Then the third layer is calculated based on the second layer instead of the first layer. (4) The process is repeated until all cells are evaluated. (5) Cells representing walls and obstacles are given very high floor field values. This ensures that pedestrians will never attempt to occupy one of those cells. In this paper we choose 1000 as the value of the walls and fixed obstacles. In the proposed model, the basic principles for pedestrian movement and collision avoidance are based on the following rules: (6) Each pedestrian chooses one of the eight adjacent cells at the next time step depending on the transition probability algorithm (here the cell with the largest probability is chosen).
(7) When two or more evacuees choose the same cell as their target, this cell will be randomly assigned to one of these pedestrians. The dynamic floor field D(i, j) is inspired by the pheromone in chemotaxis initially proposed by Ben-Jacob [27]. Ben-Jacob proposed a dynamic floor field to translate a long-ranged spatial interaction into an attractive local interaction, but with memory, similar to the phenomenon of chemotaxis in biology. Kirchner and Schadschneider [28] proposed a dynamic floor field which records a virtual trace, in the form of bosons, left by the pedestrians. At the initial time step the dynamic field is zero for all cells of the discrete space; that is, D(i, j) = 0. Whenever a pedestrian jumps from cell (i, j) to one of the neighboring cells, D(i, j) at the original cell is increased by one. In other words, whenever a person moves from cell (i, j) to another cell (l, m), he drops a boson at cell (i, j); that is, D(i, j) is the number of bosons at cell (i, j). The boson has its own dynamics, namely, diffusion and decay, which lead to broadening, dilution, and finally vanishing of the trace. In every time step, the bosons of cell (i, j) diffuse and decay with fixed probabilities, which defines the diffusion process, the decay process, and the extended dynamic floor field after their combination. Figure 3 sketches the risk field: the 5 * 5 red cells represent the hazard area with the maximum risk field value of "0"; the outside blue cells are assigned a value of "−1"; in a similar way, the green cells are assigned a value of "−2," and so on. It is worth noting that we adopt the Moore neighborhood to describe the diffusion process, which is different from Kirchner and Schadschneider's work [28]; see Figure 1. In this research, the dynamic floor field is abstracted as the creation, diffusion, and effect of evacuation information between evacuees. The evacuation information incorporates voice, gestures, and any other body language pedestrians make, but not emergency exit signs and broadcasts, which differs from our previous research [25]. Risk Field. Lei et al. [24] studied fire evacuation from the factory of the Jilin poultry company using the floor field model, with the fire location regarded as a static obstacle. However, in our view, the repulsive effects of a disaster and of an obstacle on pedestrians cannot be considered equivalent in most emergency scenarios. Therefore, for the sake of evacuation authenticity, we establish a risk field instead of considering the disaster as a fixed obstacle. The rules for calculating the dynamic risk field are listed as follows: (1) The value of the risk field is "0" until the hazard occurs. (2) In the research of Yamamoto et al.
[29], a burning area is introduced and pedestrians are assumed to evacuate while keeping a constant distance (0.4 m, 1.6 m, and 2.8 m, namely, 1 cell, 4 cells, and 7 cells) away from this area. Inspired by that work, we introduce a new concept called the hazard area in this research. (3) Cao et al. [30] propose a fire repulsive field which is calculated to be inversely proportional to the distance from the fire location. However, we consider that the repulsive effect of the disaster on an evacuee is the same everywhere inside the hazard area; therefore, we define that every cell in the hazard area acquires the maximum risk field value of "0." The value then decreases by one per row or column away from the hazard area; see Figure 3. (4) If there are conflicts in the assignment of a value to a cell, because it is adjacent to cells with different risk field values caused by other accident points, then the maximum possible value is assigned to the cell in conflict. Transition Probability. The transition probability of a pedestrian is decided by the interaction of the static floor field, the dynamic floor field, and the risk field. The transition probability of a pedestrian at time t moving from cell (i, j) to a neighboring cell (l, m), with (l, m) ≠ (i, j), is calculated by weighting the three fields as described below. A pedestrian will move to the cell with the largest transition probability. If all eight neighboring cells are occupied or the target cell is located in the hazard area, the transition probability is 0. If a person is situated in the hazard area when the accident occurs, he has top priority to move outside. The three weight parameters correspond to the static field, the dynamic field, and the risk field, respectively, and the risk term is the difference in risk field value between the current cell and the target cell. An occupancy indicator equals 1 if a cell is occupied by a person and 0 otherwise; similarly, an obstacle indicator equals 0 if a cell is occupied by a fixed obstacle and 1 otherwise. Simulations The proposed model is mainly used to simulate pedestrian dynamics in a room with multiple exits and multiple accident points. It is known that regular rooms are very common in many pedestrian facilities, such as classrooms, office buildings, and laboratories. When an emergency occurs, the room is the first place we need to escape from. Therefore, modeling and simulation of pedestrian dynamics in such scenarios are of great significance. The simulation flow chart is shown in Figure 4. Firstly, in order to test our model, we consider a room of 16 * 20 cells with 50, 100, 150, and 200 persons initially distributed at random, which is equivalent to the setup in previous research [26]. The exit is placed at the center of the left wall and its width ranges from one cell to 14 cells. The program is then run repeatedly, and the evacuation time is calculated for different exit widths; see Figure 5. For each setting, 500 runs are averaged because the initial distribution of evacuees differs between runs. In Figure 5 we examine the correlation between average escape time and exit width. The results show that escape time decreases with increasing exit width, eventually tending towards stability. The statistical convention for the average escape time needs to be explained here. When a pedestrian is located in the exit cell at time step t: (1) in our model, the pedestrian is removed at the beginning of time step t + 1 and the state of the exit is refreshed to empty at the same time; that is, this exit can be selected by other pedestrians at t + 1; (2) Varas et al. [26] consider that the pedestrian leaves the room at t + 1 and the exit can be selected as a target at t + 2. In general, the two statistical conventions differ by an offset related to the evacuee number; taking this into account, our results are consistent with [26].
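To make the layered assignment of the static floor field (rules (1)-(5) above) concrete, the sketch below gives an illustrative Python version; the paper's own simulations were written in C++, and the function name, the relaxation-style implementation, and the wall constant are our assumptions, not the authors' code.

```python
import numpy as np

WALL_VALUE = 1000.0  # very high value for walls and fixed obstacles (Section 2)

def static_floor_field(obstacles, exits):
    """Static floor field via the layered assignment of Varas et al. [26].

    obstacles : 2-D bool array, True where a cell is a wall or fixed obstacle
    exits     : list of (row, col) exit cells, assigned the value 0
    """
    rows, cols = obstacles.shape
    field = np.full((rows, cols), np.inf)
    for r, c in exits:
        field[r, c] = 0.0
    # Sweep until stable: +1 from horizontal/vertical neighbours, +1.5 from
    # diagonal ones, keeping the minimum whenever assignments conflict.
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if obstacles[r, c]:
                    continue
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if dr == 0 and dc == 0:
                            continue
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and not obstacles[nr, nc]:
                            step = 1.0 if dr == 0 or dc == 0 else 1.5
                            if field[nr, nc] + step < field[r, c]:
                                field[r, c] = field[nr, nc] + step
                                changed = True
    field[obstacles] = WALL_VALUE
    return field
```

Because conflicting assignments always keep the minimum value, sweeping repeatedly until no cell changes yields the same field as the explicit layer-by-layer procedure described in the text.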
The validation is conducted with the synchronous renewal mechanism and the disaster factor is not yet considered. In the next section, we carry out experiments based on the proposed risk field model to study the impact of the renewal mechanism on evacuation efficiency. Renewal Mechanism. Two different renewal mechanisms are introduced in this research, namely, the synchronous and asynchronous renewal mechanisms. In the synchronous renewal mechanism, every evacuee simultaneously chooses an empty cell as their target position. If two or more evacuees choose the same cell as their target in the same time step, the one with the lowest static floor field value moves and the other competitors stay in their current cells; furthermore, those surrounded by pedestrians stay as well; see Figures 6(a) and 6(b). In the asynchronous renewal mechanism, the time step is divided into several substeps on the basis of the number of evacuees. The evacuee with the lowest static floor field value is processed in the first substep and moves to the target cell chosen according to the transition probability. In the second substep, the cell just vacated is considered empty and can be occupied once again; see Figures 6(c)-6(e). In this scenario, the room is divided into 30 * 30 cells with different positions and numbers of doors. The parameters are set as follows: one field weight is fixed at 1, a second takes the values 1 and 2, and the third is varied over 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. The two renewal mechanisms are tested in the discrete square room and 500 simulations are averaged for each data point; see Figure 7. In this simulation, the relationship between evacuation time and the varied parameter under the different renewal mechanisms is studied. Firstly, the scenario of 300 people with one exit in the middle of the upper wall is simulated and the coordinate of the hazard point is (14,14). As Figure 7(a) shows, the two renewal mechanisms give almost the same result. On one hand, when the varied parameter changes from 1 to 3, the number of total time steps increases noticeably. On the other hand, as the parameter increases from 3 to 10, the simulation time tends to be stable. In the situation of 4 doors, one placed on each wall, there is a large difference between these two kinds of renewal mechanisms. In Figure 7(b), a series of simulations are tried, and the results show that the synchronous mechanism is more suitable for the low-density circumstance. More specifically, as the parameter increases, the simulation time for 100 evacuees changes more obviously than that for 300 and 600 evacuees. However, in the identical simulation scenario, the asynchronous renewal mechanism gives a contrary result. From Figure 7(c), we find that the simulation time ends higher after initially falling, and this phenomenon becomes more apparent as the number of evacuees increases. In summary, for the multi-exit circumstance, the synchronous renewal mechanism is more suitable for low pedestrian density and the parameter is most sensitive from 0 to approximately 3; moreover, the asynchronous renewal mechanism performs better at high density and the parameter is sensitive from 0 to approximately 6. Nevertheless, for the one-exit scenario, the two renewal mechanisms behave similarly. Distribution of Risk Field.
In this section, we run several simulations using the synchronous renewal mechanism to study the relationship between the risk point position and the evacuation time. The three field weights are set to 1, 2, and 2.4, respectively, and the total number of evacuees is 100. To illustrate the effect of the risk field (see Figure 8), we first conduct a group of experiments treating the hazard as a fixed obstacle [24]. As Figure 8(a) shows, the total time step is almost the same in every case; in other words, the position of the risk point has no influence on the evacuation time. This result disagrees with real-world observations. When the risk field is introduced into the simulation, the results vary widely, possibly because of the repulsion effect; namely, when the accident occurs the evacuees prefer to avoid the hazard. As the risk point becomes closer to the exit, there is a lower probability that a pedestrian will choose to leave from that exit. In Figure 8(b), the total time step is inversely proportional to the distance between the risk point and the exit. This phenomenon is more obvious when the value of the risk point is
Figure 1: Moore neighborhood and the corresponding transition probability.
Figure 2: Distribution of the static floor field. The evacuation space contains four exits with a value of "0" and each exit is composed of several cells (the green area). The outermost gray cells represent walls and are assigned a value of "500." The black rectangle represents obstacles and is assigned a value of "500" as well. The area marked in red is the proposed "hazard area," described in detail in Section 2.2.
Figure 4: Flow chart of the simulation process.
Figure 5: Correlation between the average escape time and exit width for a 16 * 20 room with a door located in the left wall, for different numbers of evacuees initially distributed at random.
Figure 6: (a) The initial condition with 9 evacuees in 25 cells (t = 0). (b) In the synchronous renewal mechanism, 8 evacuees marked with black circles move to their target cells in time step 1 (t = 1) and the middle, red triangular one stands still. (c) In the asynchronous renewal mechanism, one person moves to the upper cell in the first substep. (d) In the second substep, the red triangular person moves to the cell vacated in the previous substep. (e) When all 9 evacuees have moved to their target cells after 9 substeps, the total time step increases by one (t = 1).
Figure 9: Snapshots of the simulation from t = 0 to t = 37. Green and red cells represent the exits and the disaster location, respectively; black cells depict the walls and obstacles; blue cells are evacuees.
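As an illustration of the synchronous renewal mechanism described above (targets proposed simultaneously; conflicts resolved in favour of the evacuee with the lowest static floor field value), a minimal Python sketch of one update step is given below. The data structures and the choose_target callback are our own illustrative assumptions, not the paper's C++ implementation.

```python
def synchronous_step(positions, static_field, choose_target):
    """One synchronous update of all evacuees.

    positions    : dict mapping pedestrian id -> (row, col)
    static_field : 2-D array of static floor field values S(i, j)
    choose_target: function mapping a cell to the proposed (empty) target cell
    """
    proposals = {}
    for pid, cell in positions.items():
        target = choose_target(cell)           # e.g. neighbour with largest probability
        proposals.setdefault(target, []).append(pid)

    new_positions = dict(positions)
    occupied = set(positions.values())
    for target, pids in proposals.items():
        if target in occupied:                 # target not empty at step start: nobody moves
            continue
        # Conflict resolution: the evacuee with the lowest static field value wins.
        winner = min(pids, key=lambda p: static_field[positions[p]])
        new_positions[winner] = target
    return new_positions
```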
4,561.2
2016-02-15T00:00:00.000
[ "Engineering", "Mathematics" ]
Likely Common Coronal Source of Solar Wind and 3He-enriched Energetic Particles: Uncoupled Transport from the Low Corona to 0.2 au Parker Solar Probe (PSP) observations of a small dispersive event on 2022 February 27 and 28 indicate scatter-free propagation as the dominant transport mechanism between the low corona and greater than 35 solar radii. The event occurred during unique orbital conditions that prevailed along specific flux tubes that PSP encountered repeatedly between 25 and 35 Rs during outbound orbit 11. This segment of the PSP orbit exhibits almost stationary angular motion relative to the rotating solar surface, such that in the rotating frame, PSP's motion is essentially radial. The time dispersion often observed in impulsive solar energetic particle (SEP) events continues in this case down to velocities including the core solar-wind ion velocities. Especially at the onset of this event, the 3He content is much larger than the usual SEP abundances seen in the energy range from ∼100 keV to several MeV for helium. Later in the event, iron is enhanced. The compositional signatures suggest this to be an example of an acceleration mechanism for generating the seed energetic particles required by shock (or compression) acceleration models of SEP events to account for the enrichment of various species above solar abundances in such events. A preliminary search for similar orbital conditions over the PSP mission has not revealed additional such events, although the favorable conditions (isolated impulsive acceleration and well-ordered magnetic field connection with minimal magnetic field fluctuation) that would be required are infrequently realized, given the small fraction of the PSP trajectory that meets these observation conditions. Introduction Solar energetic particle (SEP) events observed in interplanetary space have been associated both with flares and with coronal mass ejections (CMEs). Broadly speaking, the acceleration of SEPs is often associated with one or more of four processes: reconnection, compressions, shocks, and wave-particle heating/acceleration, with reconnection generally describing the acceleration of particles in solar flares (e.g., Zharkova et al. 2011) and diffusive shock acceleration describing the case of CME-driven shocks (Reames 1999; Melrose 2009). Low in the corona, compressions likely serve as significant accelerators even prior to the development of shocks. Characteristic broken power-law spectra provide observational signatures of particle acceleration from the compressive structures in the low corona (Schwadron et al. 2015). It has been widely known that energetic particle seed populations are often rich with nearly scatter-free electrons and species such as 3He, known to be flare associated (Mason et al. 1986; Reames 1999; Mason et al. 2002; Desai et al. 2003). The enhancements in energetic particle seed populations of 2019 April 18-24 (Schwadron et al. 2020) demonstrate how the early evolution of CMEs enhances the fluxes of energetic particle seed populations, which precondition the particle-acceleration process at distances farther from the Sun where compressions can steepen into shocks. The IS☉IS observations from Schwadron et al. (2020) below 1 MeV show a very hard energy spectrum, indicating that it is likely a superposition of particles from multiple flares. The spectrum is close to the E−1.5 limit of possible stationary-state plasma distributions out of equilibrium (Livadiotis & McComas 2009, 2010).
The compositional features observed for some of the events associated with SEP shock and compression acceleration processes rely on the prior production of energetic particle seed populations. The subsequent acceleration of seed populations by compressions and shocks occurs as structures propagate throughout the interplanetary medium. A key question remains as to how exactly energetic particle seed populations are formed to begin with. IS☉IS data from Parker Solar Probe (PSP) close to the Sun provide essential new information by disentangling isolated events close to the Sun that feed into the seed populations of energetic particles. Raouafi et al. (2023a) demonstrated that ubiquitous magnetic reconnection at small scales generates tiny jets of hot plasma, known as jetlets (Raouafi & Stenborg 2014). They argued that these jetlets are the source of both regimes of the solar wind. Bale et al. (2023) proposed a model for the acceleration of solar wind in the low corona through reconnection between open field and closed loops. This model also results in a power-law tail in the energetic particles (protons and helium in the test cases), which may account for the transient energetic particle events seen, especially close to the Sun. Another mechanism has been invoked in an attempt to account for events with enhanced 3He/4He ratios, involving resonant acceleration by waves generated by outward-streaming beams of energetic electrons (Temerin & Roth 1992; Roth & Temerin 1997). This mechanism, though referred to in more recent literature (e.g., Reames 2015), has not gained broad acceptance because of perceived deficiencies in explaining the associated heavy-ion enrichment in the 3He-rich events. However, Mitchell et al. (2020) argue that those deficiencies can be overcome, and suggest that current instabilities combined with parallel electric fields can account for the generation of seed populations with enrichment of particular ranges of charge-to-mass ratios depending on the wave-power frequency distribution. As the event observed by PSP on 2022 February 27 stands out both for its dispersion and for its high 3He content, it may be that both features are a consequence of reconnection combined with wave-particle heating in the source. Only a small subset of the jets/jetlets suggested by Raouafi et al. (2023a) as the source of the solar wind is thought likely to produce transient SEPs as seen in this event. SEP Event of 2022 February 27 The angular velocity of the orbital motion of the PSP (Raouafi et al.
2023b) spacecraft exceeds the angular rotation velocity of the surface of the Sun at perihelion, while farther out in PSP's orbit the Sun's surface angular velocity exceeds that of PSP. For about 2 days during both the inbound and outbound legs of the orbit, PSP's angular velocity roughly matches that of the Sun's surface. During these limited segments of the orbit, PSP effectively corotates over a fixed longitude in the rotating Carrington longitude system. Therefore, during these unique orbital segments, the source for the solar wind measured at PSP remains approximately fixed at the longitude directly beneath the spacecraft. This means that variations in the solar wind (to this approximation) represent time variations in source-region behavior, rather than spatial variations, which often dominate as PSP quickly changes source locations during other segments of its orbit. On 2022 February 27, PSP was outbound at a distance of about 0.11 au from the Sun, and was just entering a 2 day interval of matching the angular rotation rate of the solar surface (remaining fixed in Carrington longitude to within ∼±2.5° over the ensuing interval). This geometry is illustrated in Figures 1(a) and (b). Figure 1(a) shows the PSP orbit in inertial coordinates, with an inset in Carrington coordinates. That Carrington system is expanded in Figure 1(b), showing that the PSP orbital motion in (rotating) Carrington coordinates is primarily radial. In Figure 2, we present an overview of the event. At about 0830UT on February 27, energetic particles were first detected in the IS☉IS (McComas et al. 2016) EPI-Hi sensor. Although the EPI-Hi protons remained at background, a distinct onset was seen in 4He at energies as high as 20 MeV. Subsequently, the IS☉IS EPI-Lo sensor measured 4He as high as 2 MeV total energy, with lower energies following in a typical time-dispersed sequence characteristic of sudden-onset SEP events. EPI-Lo also measured the dispersive event in protons (weakly) and 3He (see Figures 4 and 5; the 3He fraction relative to 4He was the highest of the PSP mission for this otherwise rather minor event). The IS☉IS data types that most clearly show the event time history are the EPI-Hi 4He (inset, plotted in energy/nucleon) and the EPI-Lo ion energy/nucleon derived from measuring their velocities (both in panel (c)). The remaining panels include the PSP solar-wind energy flux in panel (d) (SWEAP; Kasper et al. 2016); the magnetic field and its RTN components in panel (a); |B|, elevation, and azimuth in panel (e) (FIELDS; Bale et al. 2016); and radio emissions in panel (b) (Pulupa et al. 2017). An unusual aspect of this event was the observation of dispersed ions all the way down to just above solar-wind energies, as discussed in Alnussirat et al. (2023). The magenta curve overlaid on the particle data traces the loci of the expected time of arrival at PSP as a function of energy/nucleon for an impulsive injection in the corona occurring at the time of the first in the series of Type III events seen in the radio wave spectrogram. The time dispersion in the energetic particles accelerated in the impulsive event stands out clearly. Also clearly seen are dropouts in the energetic particle intensities, the most prominent being from 1030UT to 1150UT.
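For orientation, the sketch below computes such an arrival-time locus under the stated assumption of scatter-free, field-aligned propagation: a particle of kinetic energy per nucleon E arrives a delay L/v(E) after the injection time. The injection time and the path length used here (0.12 au, roughly the 0.11 au radial distance plus some allowance for field-line curvature) are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

C = 2.998e8        # speed of light [m/s]
AU = 1.496e11      # astronomical unit [m]
AMU_MEV = 931.494  # rest energy of one nucleon [MeV]

def arrival_time(e_per_nucleon_mev, t_inject_s, path_length_au=0.12):
    """First-arrival time for scatter-free propagation after an impulsive injection.

    e_per_nucleon_mev : kinetic energy per nucleon [MeV/nuc]
    t_inject_s        : assumed injection time at the low corona [s]
    path_length_au    : assumed magnetic path length to the spacecraft [au]
    """
    gamma = 1.0 + e_per_nucleon_mev / AMU_MEV   # Lorentz factor per nucleon
    beta = np.sqrt(1.0 - 1.0 / gamma**2)        # v/c
    return t_inject_s + path_length_au * AU / (beta * C)

# Loci of first arrival from 100 MeV/nuc down to ~1 keV/nuc
energies = np.logspace(2, -3, 200)              # MeV/nuc
times = arrival_time(energies, t_inject_s=0.0)
```

Plotting times against energies reproduces the qualitative shape of the magenta curve: fast, high-energy ions arrive first, and the locus bends toward much later arrival near solar-wind energies.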
In order to observe dispersion of this sort, the spacecraft must be magnetically well connected with the source location of the energetic particles, and the transport of the particles must be relatively scatter free. We now discuss the geometry of the magnetic field, and its relationship to the particle intensities. In Figure 3, we have reproduced the plots in Figure 2, and have added notation to call out changes in the magnetic geometry. We have reduced the large-scale fluctuations in the field angles to three basic categories: radial, oblique, and transverse. These orientations are encountered repeatedly over the course of the event, and changes in the energetic particle intensity appear to align with changes in the field orientation. We interpret this to mean that PSP repeatedly enters and leaves flux tubes with three different field orientations, presumably connected to three different coronal footpoints. This sort of transition among three flux tubes multiple times is probably unique to this segment of the PSP trajectory, when the spacecraft is matching the rotation rate of the Sun's surface and so remaining connected with approximately the same longitude over an interval of 2 days. Clearly, in the cases of the transverse and oblique orientations, this geometry cannot persist all the way back to the corona. These flux tubes must "bend" during their transport outward, since it is precisely the nonradial orientations that are populated with particles injected in this event. The solar-wind strahl electrons (not shown) exhibit unidirectional anti-sunward streaming throughout this interval, indicative of a topologically open field rooted in the low corona, consistent with the continuous connection with the acceleration site required to explain the dispersive transport signature. In addition to this apparent confinement of the event to specific flux tubes, the composition of the accelerated ions changes over the course of the event, indicating that it should not be considered simply an instantaneous energization of coronal plasma, but rather a process that begins with an impulsive onset, and evolves over a few hours such that the ongoing acceleration mechanism favors helium isotopes early in the energetic particle process and heavy ions (especially iron) later in the acceleration process. Composition In Figure 4, we present time-energy spectrograms for several major ion species. Panel (a) shows particles for which only time of flight (TOF) is measured; no solid-state detector (SSD) measurement of energy (E) is included. For these data, we know only the velocity of the ion, not its energy. So we calculate energy/nucleon, with the added assumption that the energy losses in the TOF system foils correspond to calibrated losses for protons. Generally, this is a good approximation, although in some cases helium can dominate this product. The advantage of this product is that the EPI-Lo efficiency is higher than for the TOF × E products, resulting in better statistics, and the low-energy threshold for this measurement is well below any of the TOF × E species products, since the ion need not be measured in the SSD. These data show clearly the onset for the EPI-Lo energy range, as well as the near-dropout in intensity for the radial field interval from about 1020UT to 1150UT (the intensity in this interval does not fall to zero, however).
Following the dropout, the intensity increases again. The dispersion can no longer be followed in EPI-Lo because the energy of the leading edge is below the EPI-Lo energy range. It does continue in the SWEAP SPAN-I instrument (Livi et al. 2022), as noted earlier. The ion composition, however, is quite different at this time. Whereas 3He was relatively abundant during the event onset (at higher energies/nucleon), it is no longer being energized sufficiently to be measured by EPI-Lo during this phase. Note that the ions observed at this time, being much farther in time from the leading edge of the event than any of the intensities were before the gap, represent a later phase in the event profile at the acceleration site. This indicates that, whereas 4He and 3He were efficiently accelerated at the peak of the energy/nucleon values, the later phase of the energization in the corona also enriched heavier species, and especially iron. This could reflect a wave-particle mechanism for the energization (Temerin & Roth 1992; Roth & Temerin 1997; Mitchell et al. 2020) with more wave power at frequencies that favor the helium isotopes at the peak of the energization, and either a shift or a broadening to frequencies that favor iron energization later. Figure 5 provides a more quantitative look at the evolution of the composition over the course of the event. Panel (a) is a color scatter plot of the events within the apertures that received most of the counts during the event, and also have the highest mass resolution among the EPI-Lo apertures (i.e., those apertures with longer TOF path lengths). Two intervals are called out using colored bars along the timeline, and accumulated histograms for those intervals appear in the panels below, (b) for the early portion and peak energization portion of the event, and (c) for the reintensification after the dropout. The latter interval, as discussed above at the end of Section 2, is relatively iron-rich, whereas the earlier interval, from onset through the peak of the particle energization, is relatively iron-poor. This variation in composition over the course of the event favors a wave-particle mechanism over a shock-acceleration mechanism, with higher wave power in frequencies that include resonance with lighter species early in the event, shifting later in the event to wave power in frequencies that include resonance with heavier species, especially iron. Shock acceleration would be expected to accelerate all species proportionate to their abundances in the source region, inconsistent with the observed time evolution in composition. Furthermore, no shock or abrupt increase in solar-wind speed was observed at any point throughout this event. Rather, the solar-wind speed remained between 235 and 270 km s−1 throughout the interval over which the solar wind associated with the dispersion was observed.
In interpreting the data, it is useful to keep in mind that (in the approximation of radial, noncollisional propagation) this dispersion line should delineate the first-arrival time for ions even in the energy band encompassing the solar-wind core plasma. In that spirit, and following the structure in the core plasma energy flux as a function of time, the energy flux just above the dispersion curve is markedly lower (by over an order of magnitude) than the energy flux of the solar-wind plasma at and below that line. These lower-energy ions, again under the assumption of noncollisional, radial propagation, had to have been emitted from the low corona prior to the onset of the event. This structure (low-intensity flux just above the line, high-intensity flux at the line, and even higher, by nearly an order of magnitude, below the line) persists at least until about 1030UT on day 59, and affects the distribution down to about 200 eV at that point. This would be consistent with a particularly low density for the solar wind emitted at the injection site during the impulsive event, and an even lower density for several hours following it. So while the event-associated intensities (those on and just above the white dispersion line) at energies above about 1 keV are elevated, at lower energies they are depressed relative to the previously emitted solar wind (below the line). Whether this reflects a process that has moved the energy density from lower energies into higher energies in that locale, or the event simply takes place in a low-density region of the corona, is beyond the scope of this paper. Beyond that time (1030UT), the magnetic field orientation changes, suggesting that PSP has moved on to a different flux tube, which may no longer connect with the source region for this event. This would explain the abrupt ending of the low-intensity structure lying just above the dispersion curve at 1030UT. Exploring this event a bit further, we have included two dispersion curves on the plot, the white curve based on the high-energy ion onset, and the other (purple) displaced assuming release from a low coronal source 3 hr 20 minutes after that white curve. These curves are meant to guide the eye to a couple of features suggesting that, whereas the initial acceleration of the ions to high energies must have had a very abrupt onset, the source region may have continued to energize ions for more than 3 hr following that onset. The "late" (purple) dispersion curve, displaced later in time to bound the drop in intensity of the energetic ions greater than 100 keV, includes between it and the onset (white) dispersion curve the later enhancement in energetic hydrogen, and especially in energetic iron (see Figure 4). Those particles, extending for ∼3 hr 20 minutes after the first-arriving energetic particles, indicate that the region not only continued to produce (or at least release) energetic ions for hours after the onset, but also changed with regard to the composition of the energetic particles accelerated, favoring iron far more than at onset. Whether this is a different process, or simply the evolution of the same process into a parameter range that favors iron acceleration (we favor the latter), the data indicate that the region remained an active source for at least 3+ hr past the energetic ion onset.
We have also undertaken a survey of the PSP data, restricted to the ∼2 day inbound and outbound segments of the PSP trajectory during which the Carrington longitude of the subspacecraft point on the Sun remains nearly constant, to look for additional examples of dispersive events for which we can have some degree of confidence that PSP remains connected with the event source location in the corona throughout the dispersion. This search yielded no additional events. There are, to be sure, additional events for which dispersion can be observed in the energy range just above the solar-wind core energy band, but these all occur during intervals when the PSP Carrington longitude changes by typically tens of degrees over the course of the dispersion, which raises questions about whether the spacecraft remains connected with relatively similar coronal source conditions throughout the event. The trajectory restrictions we require are an attempt to minimize such ambiguity. Discussion and Summary At IS☉IS energies (∼50 keV to 10 MeV) the initial injection was helium-rich, with distinct 3He enrichment. Protons appeared to be considerably less abundant than helium at the high energies. Later in the event, iron also was seen to be enriched relative to its usual abundances. The progression from helium-rich to iron-rich over the evolution of the source-region acceleration suggests a likely wave-particle mechanism for the energization of the energetic ions. The event profile began with an impulsive phase, with sudden generation of suprathermal electrons (not directly observed at PSP, but inferred as they are necessary for generating the Type III event) as well as energetic 3He and 4He (but very little hydrogen, and only weak heavier ions). After about another hour, conditions further evolved such that the ion energization favored heavy ions, especially iron. This time development is not a propagation effect, but rather evolution in the source-region behavior. Because of the selectivity for particular ion masses at different stages of the source-region development, we favor a wave-particle mechanism for the ion energization, with helium being efficiently accelerated/heated during the initial impulsive phase, and iron favored later, presumably due to a shift or expansion in the peak wave power to include frequencies resonant with iron. The other clue is the Type III event, which indicates the presence of energetic electron-driven instabilities in the corona. Field-aligned streaming electrons can also drive broadband electrostatic waves, which in turn can effectively heat the plasma ions, especially for charge-to-mass ratios for which the wave power matches the gyrofrequencies (e.g., Mitchell et al. 2020 and references therein). The dispersed ions were observed all the way down to solar-wind energies (down to ∼250 eV protons), an observation never seen at 1 au, observable on PSP presumably owing to its small radial distance. The continued dispersion over more than a day is also consistent with solar-wind plasma spectral features, suggestive of a flux tube containing (transient) relatively low density at solar-wind plasma energies. Taken together, these observations support a picture whereby, under favorable circumstances, particles may be transported effectively scatter free out to distances as large as ∼35 Rs, even down to solar-wind energies.
This would require little coupling of the plasma particles with each other during transport, and raises the question of what concepts like velocity, temperature, and a solar-wind frame mean under such conditions. The observed composition enhancements and time histories make this event a good candidate example of a class of events proposed (e.g., Schwadron et al. 2020) to generate seed populations that feed diffusive shock (or compression) accelerated SEP events. In this instance, there is no evidence for a shock, or even a compression, but the acceleration/heating that did lead to these compositional features may be a common precursor in larger SEP events associated with CMEs and shocks.
Figure 1. Parker Solar Probe orbit 11 perihelion. The left panel presents the trajectory in inertial coordinates (white), with a yellow arrow that covers a 2 day segment of the outbound orbit. The inset in that panel, expanded in the right panel, shows the perihelion segment of the trajectory in Carrington coordinates, where the longitude rotates according to Carrington longitude. This system tracks the subsolar Carrington longitude of PSP. The 2 day yellow segment is nearly radial, indicating the subsolar Carrington longitude remains nearly constant (to within ∼±2.5°) over that interval.
Figure 2. (a) PSP FIELDS magnetic field, RTN coordinates; (b) PSP FIELDS plasma wave data, highlighting Type III events; (c) PSP IS☉IS EPI-Lo energy/nucleon spectrogram (with EPI-Hi, small box inset) showing 4He data in E/n; (d) PSP SWEAP/SPAN solar-wind proton energy density spectrogram; (e) PSP FIELDS magnetic field azimuth and elevation. The magenta curve drawn through the particle data traces the particle first-arrival time as a function of energy assuming an impulsive injection in the low corona at the time of the first Type III event.
Figure 3. Same as Figure 2, with shading to highlight three specific ranges of magnetic field orientation (interpreted as repeated encounters with three flux tubes, each connecting with unique locations in the corona). These flux tubes are named for the orientation of the magnetic field relative to the radial direction.
Figure 4. EPI-Lo species spectrograms. The top panel is a TOF-only measurement, converted to energy assuming all ions are protons. For other species, data can be considered energy/nucleon. The other panels are labeled according to their species determined by TOF × E. These show clearly a relatively robust population of 3He early in the event, while heavy ions such as oxygen and especially iron are enriched later in the event. The gap between about 1015UT and 1145UT is attributed to magnetic connection geometry (see Figure 3 and discussion).
Figure 5.
(a) Mass scatter plot as a function of time; (b) and (c) summed histograms by epoch. The differences (3He-rich at onset, heavy-ion-rich later) may reflect the time evolution of the acceleration process(es).
Figure 6. Dispersion curves (first-arriving energetic particles, white; late, purple). This is similar to panels (c), (d), and (e) of Figure 2, but for a full 2 day time interval. Particle data are in the spacecraft frame of reference, as are the dispersion curves. Overlaid on the solar-wind plasma data is a green curve, the energy of a proton traveling at the measured solar-wind velocity, in the spacecraft frame. That characteristic energy remains below the dispersion curves until after 1030UT on day 59, consistent with the notion that the bulk of the solar-wind plasma observed over the interval of interest was emitted prior to the impulsive onset on day 58, and that from about 0200UT until 1030UT on day 59, the solar-wind plasma in the energy range above the dispersion curves (emitted after the impulsive event) is depleted relative to the pre-event solar wind.
6,017.2
2024-04-01T00:00:00.000
[ "Physics", "Environmental Science" ]
Does hedonism influence real estate investment decisions? The moderating role of financial self-efficacy Abstract The goal of this paper is to examine the tendency of investors to prioritize pleasure and enjoyment in the properties being invested in over financial returns. This research aims to determine the impact of hedonism on an individual's real estate investment decisions, with financial self-efficacy acting as a moderator. The study employs a quantitative, cross-sectional research approach, and data were collected from retail investors (homeowners and prospective home buyers) using a structured questionnaire. A total of 375 responses were obtained through snowball sampling. PLS-SEM was then used to test the research hypotheses. The study's findings indicate that an individual's hedonism value has a significant positive influence on real estate investment decisions. Moreover, we found that financial self-efficacy has a significant negative moderating effect on the relationship between hedonism and real estate investment. One possible reason is that individuals with high financial self-efficacy may be more likely to analyse the financial details of a real estate investment carefully and make decisions based on a well-informed understanding of the potential returns and risks. It has also been observed that both age and income contribute positively to the decision to invest in real estate. This means that a young person is more likely to make risky investments such as buying real estate stocks, land, etc. As individuals become older, real estate investment in the form of houses increases in order to provide a secure and comfortable living space for themselves and their families. Finally, when income rises, individuals appear to seek a comfortable life, pleasure, happiness, and social recognition, which significantly influence the real estate investment decision. PUBLIC INTEREST STATEMENT The impact of hedonism on real estate investment decisions is a subject of significant interest to both academics and practitioners. This study examines the moderating role of financial self-efficacy in the relationship between hedonism and real estate investment decisions. The study finds that financial self-efficacy significantly moderates the relationship between hedonism and real estate investment decisions. Individuals with high levels of financial self-efficacy are less likely to be influenced by hedonistic factors in their real estate investment decisions. The findings suggest that financial self-efficacy plays a crucial role in shaping investment decisions, especially when hedonistic factors come into play. The study provides insights for policymakers and investors in understanding the underlying mechanisms that drive investment decisions and highlights the importance of financial literacy and education in enhancing financial self-efficacy. Introduction What motivates individual savings and which property investment options individuals favour have drawn the attention of financial scholars and market participants. Several real estate investment strategies are available to create wealth (Tsou & Sun, 2021), including direct investments in real estate projects, such as buying land, apartments, homes, or commercial structures (Feng, 2021) for rental purposes, and indirect investments, such as buying real estate stocks, debentures, or a Real Estate Investment Trust (REIT) (Heaney et al., 2012).
Each kind of investment has unique benefits and drawbacks, including those related to rate of return, risk, and payback duration (Rattanaprichavej et al., 2020). REITs can be a more logical way to invest because they are judged by financial metrics like earnings, cash flow, and net asset value (Doug & Don, 2004; Gibilaro & Mattarocci, 2021), not by emotional factors like personal pleasure and satisfaction. Direct real estate investments, on the other hand, can be easier for small investors who value personal satisfaction more than financial returns. Also, the fact that banks use real estate as collateral security when lending money has turned it into an investment (Inoguchi, 2011; Lee & Koh, 2018). This is the main justification for why real estate is an investment rather than a consumer product. It is also a fact that the real estate industry functions in a complex environment (Studies et al., 2021). While academics have stressed the importance of rational thinking (Mydhili & Dadhabai, 2019; Zavadskas et al., 2005) and cognitive factors (Jamil, 2021; Shim et al., 2008; Waheed et al., 2020), which impact direct real estate investment, individuals look to their investment selections for "utilitarian" (capitalizing on wealth) and "expressive" (using investment as a way to express personal beliefs) advantages (Sreekumar Nair et al., 2014). So, the classic wealth maximisation hypothesis, which does not take human values into account, leaves out a key factor that affects investing decisions (Pasewark & Riley, 2010). Since their beginnings, the social sciences have placed great emphasis on understanding human values and how each person understands their own value system. Values play a significant role not only in the domains of sociology, psychology, and anthropology, but also in economics and finance. Proponents of human values (Crosby et al., 1990; Feather, 1995; Lane et al., 2015; Schwartz, 1992) argue effectively for the emotional and directing roles of values in all parts of an individual's life. Values are used to describe societies and people, to monitor development over time, and to illuminate the driving forces behind attitudes and behaviour (Agyemang & Ansong, 2016). Two scales are commonly referred to in this literature: Schwartz's (1992) Value Survey (SVS) and Rokeach's (1973) Value Survey. The 10 distinct values suggested by the SVS are theoretically drawn from the universal necessities of human life. Among all of these, hedonism is likely the most sophisticated human value discussed in the literature. The term hedone, which means "pleasure," "enjoyment," or "delight" in Greek, is where hedonism gets its name (Rutkowski, 2017). Hedonism, in the view of Schwartz (1992), is associated with a "pleasant existence" and "sensual fulfilment" for oneself. In psychology, hedonism refers to pleasure seeking, which is a key driver of individual behaviour. People who are hedonists have a favourable attitude toward pleasure and actively pursue its benefits (Veenhoven, 2003). In the context of real estate investment, a hedonistic approach might involve prioritizing the enjoyment that a particular property or location will bring, rather than solely considering more practical or financial factors. For example, if an individual considers purchasing a vacation home, they may prioritize the beauty of the location and the opportunities for relaxation and recreation it offers, rather than solely focusing on factors such as rental income potential or resale value.
We contend that, because individual investors are productive members of society, their decisions and behavioural processes may be driven by particular personal desires such as hedonism. The core premise of this argument is that values guide behaviour and influence one's own choice of action. As a result, it makes sense that people might want to incorporate these deeply held personal values into their financial choices (Agyemang & Ansong, 2016). In the past, researchers have made an effort to comprehend how human values affect individual investment decision-making in stock exchanges and investment choices (Agyemang & Ansong, 2016; Singla & Hiray, 2019). The significance of the study arises from the fact that behavioural finance researchers have begun to question the rationality (Barberis & Thaler, 2002; Bruin & Flint-Hartle, 2003; Zhang & Zheng, 2015) and market efficiency assumptions that underlie classic theories of finance and economics (Ruoxi, 2019). According to these academics, retail (non-professional) investors are not always logical and reasonable, and their decision-making is far more complicated than utility maximisation (Agyemang & Ansong, 2016; Feather, 1995; Kinatta et al., 2022; Lane et al., 2015; Pasewark & Riley, 2010; Singla & Hiray, 2019; Sreekumar Nair et al., 2014); therefore, the hedonism value may well influence their real estate investing decisions. Financial self-efficacy influences domain-specific activities, both directly and indirectly, toward the satisfactory positive outcomes that individuals often anticipate due to their higher perceived ability (Bandura, 1977, 2010; Sabri et al., 2022). Financial self-efficacy may also help people reach their goals by regulating their behaviour (Lone & Bhat, 2022; Noor et al., 2020). Thus, decision-making requires information and confidence (Danes & Haberman, 2007). Previous research has suggested that self-efficacy can influence factors such as risk-taking behavior, goal setting, and decision-making confidence (Bandura, 1977), all of which could be relevant to the relationship between hedonism and real estate investment decisions. By considering the role of self-efficacy as a moderating variable, this study hopes to shed new light on the complex interplay between hedonism, financial self-efficacy, and real estate investment decisions. However, previous researchers have paid little attention to how the hedonism value influences an individual's decision to make a real estate investment. It might be inferred that individuals with sound financial self-efficacy will be well informed and make bold investment decisions; however, there is little empirical support for this claim. Hence, the current study addresses this gap by examining the moderating contribution of financial self-efficacy to the relationship between hedonism and real estate investing decisions. Based on the literature survey, the following two questions arise: RQ1: Does hedonism have a substantial effect on retail investors' investment decision-making in real estate? RQ2: Does an individual's level of financial self-efficacy have a noteworthy impact on hedonism in making sound real estate investment decisions, either by strengthening or weakening their decision-making?
The remainder of the paper is organized as follows: Section "Related literature" discusses the theoretical background with a survey of the literature and the hypothesis development, Section "Research methods" describes the population and sample, measurements, and questionnaire, Section "Research methodology" explains the data analysis, and Section "Practical implication" highlights the managerial and societal implications of the findings of the study. Theoretical background From a theoretical perspective, the implications of hedonism for real estate investment decisions can be viewed through the lens of behavioural finance. Behavioural finance explores how investment decisions can be affected by psychological and emotional factors (Fu, 2022). One theory that may be relevant to the implications of hedonism for real estate investment decisions is prospect theory. According to prospect theory, people tend to base their decisions on the gains and losses they expect to perceive, rather than on objective probabilities (Kahneman & Tversky, 1979; Tversky & Kahneman, 2000). Concerning real estate investment, hedonistic investors may be more likely to invest in properties that offer the potential for pleasure and comfort, even if the potential returns are lower than those of other investment options (Alba & Williams, 2013). Another relevant theory is hedonic adaptation theory, which suggests that people tend to adapt to their current level of pleasure and happiness, so that the pursuit of pleasure and happiness can become a never-ending cycle aimed at sustaining a certain level of pleasure and happiness (Lyubomirsky, 2010; Yu & Jing, 2016). Hence, these theories suggest that hedonistic investors may be more likely to make real estate investment decisions based on the properties' perceived potential for pleasure and comfort, such as upscale amenities and recreational areas, rather than on the objective probabilities of financial returns, even if the potential earnings are lower than those of other investment possibilities, which can have a significant impact on the real estate market. Indian real estate market During the year 2023, the Indian real estate market was worth $200 billion and is expected to be worth $1 trillion by 2030. By 2025, it will contribute 13% to the country's Gross Domestic Product. There are three main categories in real estate, namely residential, commercial, and retail, and it is projected that the growth of nuclear families, rapid urbanization, and rising family income will continue to be the primary growth drivers of these categories (Moore et al., 2022). Although the sector has a remarkable profile, it lacks academic representation (Pandey & Jessica, 2019). Hedonism and real estate investment decision One of the values that consistently appears on scales designed to gauge value preference is hedonism. Abdolmohammadi and Baker (2006) studied the correlation between accountants' personal values and moral reasoning and found that hedonism is one of the concerns included in the list of terminal values, comprising five parameters ("comfortable life, exciting life, happiness, pleasure, and social recognition"), in a study aimed at verifying Rokeach's four-factor (RVS) model and seven-factor classification using confirmatory analysis.
Vilnai-Yavetz and Gilboa (n.d.), on the other hand, examined the impacts of instrumentality, aesthetics, and symbolism and analysed how these parameters relate to customers shopping for their dress choices. They found a significant relationship between hedonism and perceived receptiveness for all of the assumed business contexts, but an insignificant relationship between hedonism and instrumentality, revealing that hedonists live purely for enjoyment and the consumption of possessions. Agyemang and Ansong (2016) aimed to provide theoretical and practical insights into the impact of personal values on investment decisions made by shareholders in Ghana. Their research revealed that shareholders hold certain value priorities, with honesty, a comfortable life, and family security being particularly significant both in their personal lives and in investment decision-making. Sekscinska et al. (2018) investigated people's variations in time perspectives (TPs) and risky financial decisions. The research emphasises the role that TPs play in explaining why individuals make risky decisions regarding their finances. The findings indicate that chronic hedonistic TPs, both past and present, play a significant role in the selection of risky financial options. Even though the focus of the study was on TPs, it demonstrates a substantial effect of hedonism on risky investment choices, reflected in a low willingness to invest and to take financial risks. Amatulli and Donato (2019) measured attitude, willingness to buy, and consumer orientation toward luxury products to test the effect of hedonic versus utilitarian messages. Their observations suggest that luxury managers should adopt hedonistic messaging, which builds closer relationships and makes customer perceptions of the brand more attractive. Singla and Hiray (2019) examined the impact of the value of hedonism (exciting life, pleasure, comfortable life, happiness, and social recognition), age, gender, and income on preferences for investments such as the stock exchange, bullion, gold, real estate, and fixed income options in India through structural equation modelling. The study found a substantial correlation between hedonism and investment preferences such as stock and property investments. It also found that age and income affect hedonism negatively. A questionnaire study was done by A. Khan et al. (2022) to learn about customers' intention to buy and how their perceptions of currency values were affecting their shopping trips in Pakistan. The research revealed new perspectives on the nature of hedonism, repurchase intentions, and the evolution of more enticing purchasing tactics that encourage customers to fully appreciate their purchases. N. Mahalakshmi and Munuswamy (2022) carried out a questionnaire survey to determine the influence of decision-making style on the choice of stock investments and found that hedonism had a negative influence on the choice of stock investment among millennial investors. Although the studies mentioned may not be directly related to investment decision-making, the research suggests that hedonism does indeed play a crucial role in the real estate decision-making process as a cognitive mechanism. "Happiness" is the main focus of hedonism (Bramble, 2016), and having wealth helps people be happier to some extent.
Wealth creation is the goal of investment decisions. An extensive survey of the literature on hedonism measurement shows that varied interpretations have been employed in varied settings, and that studies examining the association between hedonism and real estate investment decisions are few; as a result, there is a glaring knowledge deficit in this area. Thus, this article aims to explore how an individual's value of hedonism affects the person's decision to invest in real estate in India. Based upon the literature, the hypothesis has been set as H1: Hedonism has a significant effect on retail investors' investment decisions in real estate. Financial self-efficacy The notion of self-efficacy in behavioural psychology, sometimes described as a sense of self-agency, refers to the confidence that an individual can accomplish a given task and, more generally, cope with life's demands (Lone & Bhat, 2022). Financial self-efficacy is the belief in one's ability to effectively manage one's financial affairs (Zia-Ur-Rehman et al., 2021). Individuals with higher levels of financial self-efficacy are less likely to be influenced by hedonistic desires in their real estate investment decisions. They are more likely to consider the long-term financial implications of their investments and make decisions based on their financial goals and objectives, as they are more likely to focus on the potential long-term appreciation and rental income of the property. In contrast, a person with low financial self-efficacy may be more susceptible to making investment decisions based on immediate gratification, such as the attractiveness of the property or the lifestyle it offers. Recent research has found that financial self-efficacy plays a key role in financial decisions, such as financial management behaviour (Fathul Bari et al., 2020; Noor et al., 2020; Kusairi et al., 2020), women's financial behaviour (Farrell et al., 2016), financial satisfaction (Mubarik et al., 2020; Rehman et al., 2020), financial well-being (Lone & Bhat, 2022), investment intention (Elfahmi et al., 2020) and investment decisions (N. Khan et al., 2021). Therefore, financial self-efficacy can serve as a moderating factor in the relationship between hedonism and real estate investment decision-making. On the basis of the above literature on financial self-efficacy, the hypothesis has been set as H2: Financial self-efficacy significantly weakens the influence of hedonism on retail investors' investment decisions in real estate. Control variables In previous studies, demographic factors such as age, gender, income, education, and risk tolerance had a significant impact on investment choices and decision-making (Chavali & Mohanraj, 2016; Geetha & Ramesh, 2012; Kellerman et al., 2020; Nasage, 2019; Wubie et al., 2015). Studies have also found demographic factors such as age and income to have an insignificant impact on investment choices (Singla & Hiray, 2019). This study uses age and income as control variables to evaluate how they affect respondents' real estate investment decisions. The hypothesized model is illustrated in Figure 1. Research method Researchers in the current study want to comprehend how hedonism affects individuals' real estate investment decisions in India and the part played by individuals' financial self-efficacy in strengthening or weakening the association.
Dependent variable Real estate investment decision Control variable Age Income Population and sample The study's target audience consists of Indians who prefer real estate as a form of investment. A sampling unit for the survey consisted of individuals who had invested in or were planning to invest in real estate (land, a flat, an apartment, or constructing floors for rentals, etc.). There is no reliable source in India from which information about those who invest in real estate can be accessed; therefore, no sampling frame was available for the intended population. Multistage (three-stage) stratified sampling was employed to gather the data. The study adopted the sampling technique from a previous research paper (Pandey & Jessica, 2019). The first and second stages of stratification are based on region and location, respectively, and the third stage involves contacting the respondents using snowball and purposive sampling. The sample size was determined using the sample-to-item ratio (1:10) (F. Hair et al., 2014; Memon et al., 2020). Four hundred and one real estate investors participated in the online poll and responded. Due to response issues, such as erroneous data, missing data, and incomplete polls, some responses were eliminated. Finally, 375 replies met the criteria for further investigation, a response rate of 93%. The questionnaire contains preliminary questions to determine potential participants for the study. These questions inquire whether individuals have invested in real estate or plan to do so, the type of real estate they have invested in or intend to invest in, and the timing of their investment, specifically whether they are first-time investors or not. If respondents answered affirmatively to question (i) regarding real estate investment, they were considered part of the sample for the study. Measurements and questionnaire Hedonism was measured on a 1-5 Likert scale, with 5 representing "very high importance" and 1 signifying "very low importance" in the data collection process. A self-administered questionnaire was used, adopted from Singla and Hiray (2019). The scale for investment decisions on real estate, measured on a Likert scale from "strongly disagree" to "strongly agree", was adapted from Wangzhou et al. (2021), and financial self-efficacy was measured using Lown (2011). A list of constructs with their items is given in Table 1. Methodology The analysis of the data can be segmented into two parts. The first part involves sociodemographic profiling, and the results are shown in Table 2. In the second part, structural equation modelling is conducted in two stages. The first stage involves an investigation to validate reliability, discriminant validity, and convergent validity for the measurement model using partial least squares (PLS 3.0). In the second stage, the structural model was calculated for its path coefficients and predictive ability at a significance threshold of 5% (J. F. Hair et al., 2017). Investors' sociodemographic profiling The current research segments the respondents' ages from 20 to above 60 years: the highest percentage (28%) of the respondents belongs to the 31-40 age group, 26% fall in the 41-50 years category, those between 51 and 60 years of age constitute 20% of the total sample, and only 9% of the respondents are from 20 to 30 years of age. The sample comprised 65.6% males and 34.4% females.
Government and private sector employees constitute 25% and 21%, respectively, and the remainder belong to other groups. Of the total respondents, 46% earn monthly incomes above INR 2 lakhs, 20% fall in the monthly income category of INR 150,001-200,000, and the rest belong to the other groups listed in Table 2. Reliability and validity The measurement model was assessed for indicator reliability, construct reliability, convergent validity, and discriminant validity. Indicator reliability, measured using factor loadings, ranged from 0.830 to 0.882, i.e., above the 0.70 threshold (J. F. Hair et al., 2017), which validates indicator reliability. Construct reliability can be evaluated using Cronbach's alpha and composite reliability. The composite reliability values were 0.931, 0.936, and 0.942, respectively, each exceeding 0.8 for the latent variables and thus meeting the criterion (Henseler et al., 2009). As a result, construct reliability was achieved. Next, to determine convergent validity, the average variance extracted (AVE) was considered. Convergent validity is achieved if a construct's AVE is 0.5 or above (Henseler et al., 2009). All constructs had AVE values over 0.50; hence, convergent validity was achieved. Finally, the values of the variance inflation factor (VIF) were less than 5, indicating that multicollinearity was not present. The values of CA, CR, AVE, and VIF are listed in Table 3. Discriminant validity To assess the discriminant validity of the study, the Fornell-Larcker criterion and cross-loadings were considered. Using the Fornell-Larcker criterion, which involves comparing the square root of each construct's AVE with the correlations between constructs, discriminant validity was validated. Table 4 shows the findings of the Fornell-Larcker criterion. Cross-loading is another method to ensure discriminant validity (F. Hair et al., 2014). Establishing discriminant validity at the item level requires that items related to the same construct correlate highly with it and that items related to distinct constructs correlate only weakly. On this basis, discriminant validity was justified in the model. Table 5 highlights the discriminant validity using cross-loadings. For model fit, the SRMR value for the model was found to be 0.06, which is less than 0.08, so the model was found to fit. Hypothesis scrutiny Structural model analysis was implemented to evaluate the relationships within the conceptual framework after validating the variables' accuracy and reliability. Partial least squares (PLS) was used to analyse the path coefficients and t-values at the 5% level of significance. Hypothesis H1 intends to determine whether hedonism has a substantial impact on real estate investment decision-making. The findings revealed that the influence of hedonism on real estate investment decisions is 0.215 with a t-statistic of 2.064, indicating a positive and statistically significant relationship (β = 0.215, t = 2.064, p < 0.05). Thus, hedonism has a positive and significant influence on real estate investment decisions. Further, according to the effect size (f2), the research construct displays a low effect size (Cohen, 1988) on real estate investment decisions (f2 = 0.037). The values are noted and illustrated in Table 6. Moderation effect analysis A moderator is a third variable that alters the strength of the relationship between exogenous and endogenous variables.
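To make the moderation logic concrete, the following sketch estimates a hedonism × financial self-efficacy interaction with ordinary least squares on simulated, mean-centered data. It is only an illustration: the variable names, the simulated scores, and the OLS stand-in for the PLS structural model are assumptions, not the study's data or software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 375  # same sample size as the study; the data below are simulated, not the survey responses

# Simulated standardized scores standing in for the latent variable scores
hedonism = rng.normal(size=n)
fse = rng.normal(size=n)                 # financial self-efficacy
age = rng.integers(20, 65, size=n).astype(float)
income = rng.normal(size=n)
age_z = (age - age.mean()) / age.std()

# Outcome generated with a positive main effect and a negative interaction,
# loosely mimicking the reported pattern (beta_HED > 0, beta_HEDxFSE < 0)
reid = (0.2 * hedonism - 0.15 * hedonism * fse
        + 0.3 * age_z + 0.3 * income + rng.normal(scale=0.8, size=n))

# Mean-center before forming the interaction term to ease interpretation
hed_c, fse_c = hedonism - hedonism.mean(), fse - fse.mean()
X = sm.add_constant(np.column_stack([hed_c, fse_c, hed_c * fse_c, age_z, income]))
model = sm.OLS(reid, X).fit()
print(model.summary(xname=["const", "HED", "FSE", "HEDxFSE", "Age", "Income"]))
print("R-squared:", round(model.rsquared, 2))  # analogue of the structural model's R-square
```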
In the study, it was hypothesized that financial self-efficacy moderates the relationship between hedonism and real estate investment decisions. The results showed a significant negative interaction effect of financial self-efficacy on hedonism and real estate investment decision (REID) (β = −0.153, t = 2.587, p < 0.05), indicating that the relationship between these variables is weakened by financial self-efficacy. This finding supports hypothesis H2 and is presented in Table 7. Figure 2 shows the interaction between hedonism and financial self-efficacy. Control variables on real estate investment decision The sociodemographic factors age and income were used as control variables in the study. Based on the results of the analysis, both age (β = 0.390, t = 8.840, p < 0.01) and income (β = 0.320, t = 5.995, p < 0.01) were found to have significant effects on the real estate investment decision-making of the retail investors, as shown in Table 8. The final structural research model is shown in Figure 2, as follows: Analysis of coefficient of determination (R-square) The R-square statistic reveals how much of the variation in the endogenous variable is explained by the exogenous variables. According to Hair et al. (2011), in academic research, R-squared values of 0.75, 0.50, and 0.25 for a dependent variable can be considered substantial, moderate, and weak, respectively. The calculated R-square value for individual real estate investment decisions is 0.56, which suggests that up to 56% of the variance in real estate investment decisions (REID) is explained by the predictor variables; the model is therefore moderately fit, as shown in Table 9. Discussions This study seeks to examine the impact of hedonism on the real estate investment decisions of retail investors and the moderating effect of financial self-efficacy on hedonism and real estate investment decisions. The outcomes positively answer RQ1: Does hedonism impact real estate investment decisions? The findings demonstrate that hedonism has a positive and substantial impact on real estate investment decisions, which is in line with previous research on hedonism and investment choices (Singla & Hiray, 2019). A possible reason is that real estate investment is a high-risk investment that creates the pleasure, excitement, comfortable life, and social recognition that individuals strive for, because of capital appreciation and the generation of passive income from rentals (Rattanaprichavej et al., 2020). The fundamental financial fund approach includes both risky and risk-free assets (Hu et al., 2021). This means that an individual who subscribes to hedonistic principles and prioritizes pleasure and enjoyment in their decision-making may be more likely to take risks and pursue investments that align with their personal preferences. Addressing the study question RQ2, "Is financial self-efficacy a significant factor in strengthening or weakening retail investors' investment decision-making in real estate?", the findings show that financial self-efficacy has a strong negative moderating effect on the relationship between hedonism and real estate investment decisions. Based on the research findings, it can be explained that a real estate investor's financial self-efficacy gives them greater control over their decision-making, allowing them to make better choices irrespective of pleasure, excitement, and adventure.
In addition, individuals with high financial self-efficacy are more likely to make informed decisions based on their financial goals and objectives, while those with low financial self-efficacy may be more susceptible to making impulsive decisions based on their hedonistic desires. So, financial self-efficacy can play a moderating role in this relationship. Age demographics have been demonstrated to have a significant influence on real estate investing decisions. Individuals at a young age tend to make riskier investment choices and decisions, investing in real estate stocks, land, and the like. As individuals grow older, family size and commitments increase their investment in real estate in the form of houses. In terms of income, when income increases, individuals seem to look for a comfortable life, pleasure, happiness, and social recognition, which significantly influences real estate investment decisions. Implication from real estate investors' perspective as a buyer If real estate investors, as buyers, prioritize hedonistic desires, it may result in more emotionally driven decision-making. For example, a buyer might be more likely to choose a property that offers a luxurious lifestyle, regardless of whether it is a sound financial investment. However, it is also possible that a hedonistic perspective could lead buyers to prioritize properties that offer a high degree of comfort and relaxation, which could ultimately contribute to their overall well-being and happiness. Real estate investors should be aware of the potential impact of hedonism on real estate investment decisions and seek to balance personal enjoyment with financial considerations to make informed investment decisions that align with their values and lead to greater satisfaction. Furthermore, real estate investors with high levels of financial self-efficacy may be better equipped to make investment decisions that balance personal enjoyment with financial considerations, potentially leading to better investment outcomes. Implication from real estate investors' perspective as a seller The implications of hedonism for real estate investment decisions from a seller's perspective may include the following: (1) Understanding buyer preferences: if sellers can identify which aspects of a property are most likely to provide hedonic value to buyers, they can tailor their marketing strategies to highlight those features. (2) Pricing strategy: hedonic value can affect the perceived value of a property, which may influence pricing decisions. Sellers may choose to price their property higher if they believe it offers unique hedonic benefits that are not readily available in other properties. (3) Property maintenance: maintaining the hedonic value of a property is important to attract buyers. Sellers should invest in maintaining the aesthetics, functionality, and overall ambiance of their property to ensure it continues to provide hedonic benefits to buyers. Managerial implication Managerial implications of hedonism in real estate investment decisions include prioritizing properties that offer amenities that appeal to hedonistic investors, such as luxury features and recreational facilities. One could expect an individual's values to have an impact on the investments they make. As a result, the study advises managers who provide investing services to ensure that, in addition to looking at a person's demographics and risk profile, they also consider their value system.
This will enable them to better serve and target their customers. Managers can also market properties in a way that emphasizes their hedonistic appeal, for example by highlighting a property's proximity to recreational activities or its luxurious amenities, and should be aware that hedonistic investors may be willing to pay a premium for properties that offer a high level of pleasure and comfort. Interventions should also aim to enhance FSE, that is, to raise individuals' level of financial self-efficacy. Given that financial self-efficacy relies on an individual's financial knowledge and expertise, efforts aimed at developing financial capability and strengthening investment skills should be targeted towards persons with lower levels of FSE. However, FSE may have a crippling impact when there is a misalignment between investors' perceived capacity to bear financial distress and their level of monetary understanding (Hu et al., 2021; Tang et al., 2019). Effective intervention strategies may therefore necessitate a more precise alignment of perceived financial self-efficacy with actual financial knowledge. However, it is important to note that hedonism can be a double-edged sword, as too much emphasis on pleasure and comfort can lead to overindulgence and financial problems. Therefore, it is important for real estate investors and managers to consider both the short-term and long-term consequences of their investment decisions. Societal implications Positive societal implications include increased demand for properties that offer luxury amenities and recreational facilities, leading to the development of more high-end properties and the revitalization of certain neighbourhoods; the creation of employment in the construction and hospitality sectors as a consequence of the rise in demand for luxury properties; and the potential for increased property values in areas where luxury properties are being developed, which can benefit local residents and businesses. However, there can also be negative societal implications associated with hedonism in real estate investment decisions, such as the displacement of lower-income residents and small businesses as a result of gentrification and rising property values in certain neighbourhoods; widening income and wealth inequality as luxury properties become increasingly unaffordable for most people; and overdevelopment and the erection of enormous luxury properties, which have an adverse impact on the environment and natural resources. It is important to consider these potential societal implications when making real estate investment decisions and to strive for balance and sustainable development. Conclusion, limitations, and future scope This research study seeks to examine the influence of hedonism on real estate investment decision-making while considering the moderating effect of financial self-efficacy, with age and income as control variables. The required data were gathered through an online questionnaire, and a sample of 375 individual investors was used to confirm the model using SEM. The findings indicate that all aspects of hedonism have a substantial impact on real estate investing decisions. Financial self-efficacy, which reflects an investor's self-control and financial standing, helps them sustain a positive attitude toward investment; however, financial self-efficacy weakens the correlation between hedonism and real estate investing decisions. Furthermore, age and income affect real estate investing decisions. This research has quite a few limitations.
To start with, the study was cross-sectional in nature; longitudinal research would be preferable to capture changes and differences in human behaviour over time. It is vital to note that this study was restricted to individuals who were keen to share their real estate investment details. Next, while hedonism is one potential factor that could influence real estate investment decisions, it is important to consider a variety of other factors as well, including financial goals, property features, and location. Further studies can be done by using other variables like risk absorption capacity, financial literacy as a mediating variable, and other sociodemographic variables like occupation and education.
7,956.2
2023-05-29T00:00:00.000
[ "Economics", "Business" ]
Optical Feedback Sensitivity of a Semiconductor Ring Laser with Tunable Directionality We discuss the sensitivity to optical feedback of a semiconductor ring laser that is made to emit in a single-longitudinal mode by applying on-chip filtered optical feedback in one of the directional modes. The device is fabricated on a generic photonics integration platform using standard components. By varying the filtered feedback strength, we can tune the wavelength and directionality of the laser. Beside this, filtered optical feedback results in a limited reduction of the sensitivity for optical feedback from an off-chip optical reflection when the laser is operating in the unidirectional regime. Introduction Many studies have shown that semiconductor lasers are very sensitive to optical feedback, i.e., to part of the laser light being reflected back into the laser cavity with a delay [1][2][3][4][5][6]. Such coherent optical feedback (COF) is often difficult to avoid in practical systems, as it can be caused, for example, by reflections from a fiber tip or from other boundaries between materials with different refractive indices in the optical system to which the laser beam is coupled. COF can lead to linewidth narrowing for very weak feedback [2], but for larger feedback strengths it will typically introduce unwanted instabilities in the laser output [3]. For example, it has been shown that COF can lead to linewidth broadening [4], chaotic intensity fluctuations [5] and coherence collapse [6]. In order to avoid or suppress the COF-induced instabilities, several approaches have been investigated [7][8][9]. The most straightforward way to avoid them is to place an optical isolator with a large isolation ratio at the output of the laser. This works well to avoid COF-induced dynamics, but is an expensive approach as the isolator needs magneto-optic materials that-for technological reasons-cannot easily be integrated on the laser chip. Moreover, the optical isolator needs to be accurately aligned with the laser chip to avoid propagation losses of the emitted beam. Because of the high cost of such external isolators, there is considerable interest in other approaches to achieve the goal of suppressing the COF-induced dynamics in a semiconductor laser. A laser with a ring-shaped cavity is inherently interesting for the purpose of suppressing feedback dynamics, as any externally reflected light will be re-injected in the cavity in the direction opposite to that of the initially emitted beam: imagine such a ring laser to emit in the clockwise (CW) directional mode, optical feedback will then result in part of this beam being coupled into the counterclockwise (CCW) directional mode. In [7], a weak optical isolator is integrated in the laser cavity in order to make one of the directional modes dominant, such that the COF is injected in the directional mode that is switched-off, hence reducing its destabilizing effect. But this approach requires complex components in the laser cavity to achieve the required weak optical isolation, making the laser system difficult to control. Another ring-laser based device was studied in [9], where the fabrication process of the semiconductor ring laser (SRL) is optimized to such a degree that coupling between the directional modes through backscattering is very low. 
This results in unidirectional operation (i.e., the laser emits in one of the directional modes) of the fabricated SRLs, which leads to a strong suppression of feedback-induced dynamics [8] as compared to a Fabry-Perot laser fabricated on the same chip. However, when using generic integration platforms-which are not optimized for one specific purpose-the backscattering will typically be much higher, resulting in bidirectional operation (i.e., the power in the two directional modes being roughly equal) of fabricated SRLs [10,11]. In this paper, we investigate the feedback sensitivity of an SRL that we designed and fabricated using the generic JeppiX fabrication platform [12]. Because of a substantial amount of backscattering between the directional modes, the SRL itself will typically emit bidirectionally. In this design, we included on-chip filtered optical feedback (FOF) paths that have been shown [11] to make the SRL emit in a single-longitudinal mode. Controlling the FOF also allows us to tune the emitted wavelength of the SRL. Moreover, as we will discuss in the next sections, the FOF in this SRL has as a side effect that it makes the emission (somewhat) unidirectional. Based on the above mentioned work in [8] on unidirectional SRLs, we thus expect our SRL design to be less sensitive to optical feedback from off-chip reflections. In order to check the effectiveness of this approach, we experimentally and numerically study in this paper the sensitivity of our SRL design to undesired external optical feedback. The remainder of the paper is structured as follows. In Section 2 we describe the layout of the SRL and we detail the experimental setup. The results of the experiments and numerical simulations are shown in Section 3, whereas Section 4 is devoted to the discussions of the results. Finally, we end the paper with conclusions in Section 5. Layout of the SRL The layout of the device is illustrated by the picture shown in Figure 1. It has been fabricated using the standard building blocks from the Oclaro foundry, and a detailed description of the layout is given in [11]. As can be seen in Figure 1, the SRL has a racetrack-shaped geometry and optical gain is provided by two semiconductor optical amplifier (SOA) sections that are electrically interconnected. The laser cavity also contains two 2 × 2 multi-mode interference (MMI) couplers, which each couple 50% of the light out of the cavity. The outputs of the top MMI are coupled to the edges of the laser chip such that the CW and CCW modes can be measured. The bottom MMI in Figure 1 couples to two FOF branches. Each of these branches consists of a phase shifter (PS), an SOA and a distributed Bragg reflector (DBR). These components can be electrically tuned by adapting the current injected in the attached contact pad, such that we have control over the center wavelength (by changing the DBR current), the strength (by changing the SOA current) and the phase (by changing the PS current) of the FOF. Feedback arms 1 and 2 are used to control the FOF into the CW and CCW directions, respectively. Experimental Setup To measure the static and dynamic characteristics of the SRL, we used the setup that is schematically depicted in Figure 2.
The SRL was mounted on a temperature-controlled heat sink, with which we stabilized the temperature of the laser chip at 21 °C. In principle, each of the contact pads visible in Figure 1 can be connected to a current source using electrical contact probes, but for the work presented in this paper only the laser pad and the SOA pad in feedback arm 2 were contacted. This allowed us to change the laser's injection current I laser and the current I SOA1 that controls the strength of the FOF of arm 2. It should be noted that we have obtained similar results when using FOF from feedback arm 1, with the difference being that the roles of the CW and the CCW modes are then reversed. Light emitted in the CW and in the CCW direction was collected outside the laser chip using lensed fibers. Light emitted in the CCW direction was sent through a feedback loop, and was coupled back with a time delay of about 50 ns into the CW directional mode. The COF feedback loop consisted of a circulator, an external SOA, an optical bandpass filter, a 2 × 2 single-mode splitter and a polarization controller. The circulator directed the CCW light from the laser towards the external SOA. The current I SOA2 injected in this external SOA was used to control the COF strength. Next, the amplified light was sent through a tunable bandpass filter with a bandwidth of 0.3 nm of which the center wavelength was tuned to the SRL's wavelength. This tunable filter was needed to remove the amplified spontaneous emission noise-introduced by the external SOA-from the feedback signal. The polarization controller was used to adjust the polarization of the re-injected light such that it matched the emitted polarization direction. Light was re-injected into the SRL chip using the third port of the circulator. The splitter coupled 50% of the light out of the feedback loop such that we could measure its temporal and spectral properties. The optical spectrum was measured with a scanning spectrum analyzer set at a resolution of 0.02 nm. Time traces of the intensity fluctuations were measured using a 12 GHz photo-detector coupled to a fast oscilloscope of which the input bandwidth was set at 13 GHz in the experiments discussed in Section 3.
Rate-Equation Model The behavior of the SRL under the effect of FOF and/or COF can be simulated using different models [13,14]. In this work, we used a two-directional mode rate equation model of the SRL [15], extended with Lang-Kobayashi terms, to take into account the optical feedbacks [16]. The equations of this model are:

dE_cw/dt = κ(1 + iα)[G_cw N − 1]E_cw − (k_d + ik_c)E_ccw + η_1 E_ccw(t − τ_1) + √(2D) ξ_cw(t), (1)

dE_ccw/dt = κ(1 + iα)[G_ccw N − 1]E_ccw − (k_d + ik_c)E_cw + η_2 E_cw(t − τ_2) + √(2D) ξ_ccw(t), (2)

dN/dt = γ[µ − N − G_cw N|E_cw|² − G_ccw N|E_ccw|²]. (3)

Equations (1) and (2) describe the evolution of the slowly varying complex electric fields E_cw and E_ccw of the CW and CCW directions, respectively. The number of carriers, N, is described by Equation (3). We have limited ourselves to one longitudinal mode (LM). The values of the different parameters are as follows: κ = 200 ns^−1 is the field decay rate, α = 3.5 is the linewidth enhancement factor, µ = 1.2 is the normalized injection current, and γ = 0.4 ns^−1 is the carrier inversion decay rate. The effect of the backscattering is taken into account using the dissipative backscattering parameter k_d = 0.2 ns^−1 and the conservative backscattering parameter k_c = 0.88 ns^−1, which have been used for both of the two directional modes. The differential gain functions are given by G_cw = 1 − s|E_cw|² − c|E_ccw|² and G_ccw = 1 − s|E_ccw|² − c|E_cw|², where s = 0.005 is the self-saturation and c = 0.01 is the cross-saturation between the two directions of the same LM. η_1 represents the strength of the COF. τ_1 is the delay time of the COF, which is measured in our setup to be 50 ns. η_2 represents the strength of the FOF. As the FOF couples the CW mode back into the CCW mode, we only include an FOF term in Equation (2). The bandwidth of the filter in the feedback loop is adiabatically eliminated from Equation (2) as this filter bandwidth is much larger than the bandwidth of the fluctuations in E_cw and E_ccw. τ_2 is the propagation time in the FOF section, which is integrated on the chip and is very small; therefore, we take τ_2 equal to zero in the simulations. Here it is important to mention that the feedback scheme in this study is different from the feedback scheme discussed in [17,18], where self-feedback has been investigated. The last terms in Equations (1) and (2) represent the effect of spontaneous emission noise coupled to the CW/CCW modes [18,19]. D represents the noise strength, expressed as D = D_0(N + G_0 N_0/κ), where D_0 is the spontaneous emission factor, G_0 = 10^−12 m³ s^−1 is the gain parameter, and N_0 = 1.4 × 10^24 m^−3 is the transparency carrier density. ξ_i(t) (i = cw, ccw) are two independent complex Gaussian white noises with zero mean that are delta-correlated in time. Time is rescaled to the photon lifetime τ_ph = 5 ps.
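As a rough illustration of how such a model can be simulated, the Python sketch below integrates the rate equations written above with a simple Euler-Maruyama scheme and a ring buffer for the delayed COF term, using the parameter values quoted in the text. The step size, simulation length, noise normalization, and the exact form of the feedback terms are assumptions made for the sketch, not specifications taken from the paper.

```python
import numpy as np

# Parameter values quoted in the text (time in ns)
kappa, alpha = 200.0, 3.5        # field decay rate (1/ns), linewidth enhancement factor
mu, gamma = 1.2, 0.4             # normalized injection current, carrier decay rate (1/ns)
kd, kc = 0.2, 0.88               # dissipative / conservative backscattering (1/ns)
s, c = 0.005, 0.01               # self- and cross-saturation
eta1, tau1 = 0.4, 50.0           # COF strength (1/ns) and delay (ns)
eta2 = 2.0                       # FOF strength (1/ns); the on-chip FOF delay tau2 is neglected
D0 = 2e-6                        # spontaneous-emission factor (1/ns), as estimated later in the text
G0N0_over_kappa = (1e-12 * 1.4e24) * 1e-9 / kappa   # G0*N0/kappa expressed in 1/ns (about 7)

dt = 1e-4                        # Euler step (ns); an assumption, not a value from the paper
n_steps = int(200.0 / dt)        # 200 ns of simulated time
delay_steps = int(tau1 / dt)

rng = np.random.default_rng(1)
Ecw, Eccw, N = 1e-3 + 0j, 1e-3 + 0j, 1.0
hist_ccw = np.zeros(delay_steps, dtype=complex)  # ring buffer of E_ccw for the delayed COF term
P_ccw = np.empty(n_steps)

for k in range(n_steps):
    Gcw = 1 - s * abs(Ecw) ** 2 - c * abs(Eccw) ** 2
    Gccw = 1 - s * abs(Eccw) ** 2 - c * abs(Ecw) ** 2
    D = D0 * (N + G0N0_over_kappa)
    # complex Gaussian noise increments (the normalization is a modelling assumption)
    xi_cw = np.sqrt(2 * D * dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    xi_ccw = np.sqrt(2 * D * dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    i = k % delay_steps
    Eccw_delayed = hist_ccw[i]                   # E_ccw(t - tau1), re-injected into the CW mode (COF)
    hist_ccw[i] = Eccw
    dEcw = (kappa * (1 + 1j * alpha) * (Gcw * N - 1) * Ecw
            - (kd + 1j * kc) * Eccw + eta1 * Eccw_delayed) * dt + xi_cw
    dEccw = (kappa * (1 + 1j * alpha) * (Gccw * N - 1) * Eccw
             - (kd + 1j * kc) * Ecw + eta2 * Ecw) * dt + xi_ccw   # FOF: CW coupled into CCW, tau2 = 0
    dN = gamma * (mu - N - Gcw * N * abs(Ecw) ** 2 - Gccw * N * abs(Eccw) ** 2) * dt
    Ecw, Eccw, N = Ecw + dEcw, Eccw + dEccw, N + dN
    P_ccw[k] = abs(Eccw) ** 2

steady = P_ccw[n_steps // 2:]                    # discard the transient
print("rescaled STD of the CCW intensity:", steady.std() / steady.mean())
```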
Experimental Results Using the setup of Figure 2, we first measured the static characteristics of the studied SRL. The output power of the two directional modes is shown in Figure 3 as a function of the laser injection current (without pumping the SOAs in the FOF arms). The threshold current of this device was 34 mA. For all currents not too far above threshold, the power in the two directional modes was roughly equal, showing that this SRL always operates in the bidirectional regime [13], which indicates that there was a substantial amount of backscattering in SRLs fabricated on the used platform. For some laser bias currents, the SRL emitted in a single longitudinal mode, but for most values of the laser injection current, the laser emitted multiple longitudinal modes. The longitudinal mode spacing was measured to be 0.2 nm. The DBRs in the FOF arms have a peak intensity reflection of 0.58 and a reflection bandwidth of 2 nm. In [11] we have shown that a sufficiently large amount of feedback in either of the FOF channels resulted in single longitudinal mode operation, that the wavelength of the emitted mode could be changed by changing the DBR center reflection wavelength, and that this wavelength could be fine-tuned using the phase shifters in the FOF arms.
If we only applied FOF in one of the arms, the FOF had an additional effect that made the SRL somewhat unidirectional. This is illustrated by the measurement shown in Figure 4, where we plot the power in the two directional modes as a function of the current I SOA1 injected in the SOA of FOF arm 2 in Figure 1. The laser current I laser was kept constant, as shown in Figure 4, at a value of 60 mA. For low values of I SOA1, most power was emitted in the CW direction. But as I SOA1 was ramped up, the power in the CCW direction gradually increased at the expense of the power in the CW direction. This is to be expected from the feedback configuration used in this experiment as the FOF in feedback arm 2 coupled light from the CW direction into the CCW direction. The power distribution over the two directional modes is further detailed at the right-hand side of Figure 4, where we plot the ratio between the power in the CCW direction and the power in the CW direction. This so-called directional mode suppression ratio (DMSR) increased most strongly when I SOA1 increased from 0 to 11 mA, and then continued to increase at a slower pace for still higher values of I SOA1. Based on Figure 4, we identified three interesting bias points (indicated by the black arrows) at which we wanted to investigate the sensitivity to COF. The first bias point, BP1, corresponds to I SOA1 = 0 mA, as in that case there was no FOF and we measured the feedback sensitivity of the SRL itself. The second bias point, BP2, that we would further investigate corresponds to I SOA1 = 11 mA, as in this case the FOF clearly favored the CCW directional mode. Finally, the third selected bias condition, BP3, corresponds to I SOA1 = 30 mA and in this case the directional mode suppression ratio was greatest. For BP2 and BP3, the SRL emitted a single longitudinal mode whose wavelength of 1551.555 nm was determined by the reflection spectrum of the DBR in feedback arm 2. For BP1, the output of the SRL was also single-mode but the emission wavelength of 1538.405 nm was determined by the gain maximum.
Next, we measured time traces of the intensity in the CCW direction for different values of the current I SOA2 injected in the external SOA. We first calibrated the amplification of the external amplifier by measuring the power transmitted through the external SOA as a function of its bias current (while keeping the laser current I laser and the FOF current I SOA1 constant). For small values of I SOA2, the CW intensity was rather constant with some noise-induced fluctuations around the steady state. This is illustrated in Figure 5 (left) at a setting (I laser, I SOA1, I SOA2) = (60 mA, 11 mA, 500 mA). Increasing I SOA2 eventually led to undamping of the relaxation oscillations as illustrated in Figure 5 (middle) for (I laser, I SOA1, I SOA2) = (60 mA, 11 mA, 600 mA). This marks the onset of the COF-induced dynamics. For larger values of the COF strength, the feedback-induced dynamical fluctuations became stronger and more complex as illustrated in Figure 5 (right) for (I laser, I SOA1, I SOA2) = (60 mA, 11 mA, 700 mA). In order to quantify the strength of the feedback-induced dynamics in a simple way, we used the following metric: we extracted the rescaled STD as the ratio between the standard deviation of the laser intensity fluctuations σ_laser and the mean value of the detector signal. Calculating this ratio is equivalent to rescaling the time traces such that the average value of the detector signal is equal to one. We performed this rescaling of the STD to make the extracted values independent of the average power coupled to the read-out fiber. The noise of the oscilloscope and the photo-detector are compensated for when extracting the value of σ_laser from the time traces by assuming that the noise of these sources is Gaussian and is independent from the fluctuations in the laser's intensity. To perform this compensation, we measured a time trace of the detector signal (using the same oscilloscope settings as when measuring the laser's intensity) without optical input to the detector. From this time-trace, we determined the standard deviation σ_det of the detector and oscilloscope noise (the mean value of the detector and oscilloscope noise was measured to be close to zero). Using the standard deviation σ_timetrace extracted from the intensity time trace, we estimate the standard deviation of the intensity fluctuations σ_laser to be σ_laser = √(σ_timetrace² − σ_det²). In Figure 6 we plot the value of the rescaled STD for the three bias conditions BP1, BP2 and BP3 mentioned above. The COF signal strength, plotted on the horizontal axis of Figure 6, was changed by changing I SOA2 and was obtained by measuring the optical power after the splitter in Figure 2 when the feedback loop was open. For each of the bias conditions, the STD was small for small values of the COF strength, as there were not yet any feedback-induced dynamics in the time traces. When increasing the COF strength, we can see in Figure 6 that the onset of the feedback-induced dynamics was lowest for bias condition BP1, i.e., without FOF to stabilize the laser.
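A minimal sketch of this noise-compensation step, using synthetic traces in place of the recorded oscilloscope data, could look as follows.

```python
import numpy as np

def rescaled_std(intensity_trace, dark_trace):
    """Rescaled STD with detector/oscilloscope noise compensated, assuming that
    this noise is Gaussian and independent of the laser intensity fluctuations."""
    sigma_timetrace = np.std(intensity_trace)
    sigma_det = np.std(dark_trace)                 # trace recorded without optical input
    sigma_laser = np.sqrt(max(sigma_timetrace**2 - sigma_det**2, 0.0))
    return sigma_laser / np.mean(intensity_trace)

# Illustrative use with synthetic traces (placeholders, not measured data)
rng = np.random.default_rng(0)
dark = rng.normal(0.0, 0.01, 100_000)                                  # detector + oscilloscope noise
laser = 1.0 + rng.normal(0.0, 0.03, 100_000) + rng.normal(0.0, 0.01, 100_000)
print(f"rescaled STD = {rescaled_std(laser, dark):.3f}")               # close to 0.03 after compensation
```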
When FOF was applied (see measurements for BP2 and BP3 in Figure 6), the onset of the COF dynamics was shifted to larger values of the feedback strength, but this shift was not large for BP2 and BP3: the shift in the onset when comparing BP1 to BP2 was roughly a factor of 2 and was thus rather modest as compared to the suppression of feedback dynamics in strongly unidirectional SRLs [7,8]. Moreover, when increasing the FOF strength from BP2 to BP3, we actually observed a slight drop in the onset of the COF dynamics. The experiments thus show only a limited effectiveness of the proposed FOF scheme to suppress these dynamical fluctuations, and this effectiveness is furthermore dependent on the exact value of the applied FOF strength. The reason behind these observations will be clarified based on numerical simulations of the system in Section 3.2.
Results from Numerical Simulations Using the rate-equations that have been introduced in Section 2.3, we performed a series of numerical simulations that mimic the experiments described above. In these simulations we set the normalized injection current to 1.2 and we selected particular values for the FOF and COF strengths in order to simulate time-traces of the directional powers. We remark here that we have obtained similar behavior for other values of the pump strength. From these time traces, we then extracted the STD of the intensity fluctuations in a similar manner to that used in the experiments represented in Figure 6. We show in Figure 7 (left) the simulated time traces when the strength of the COF was η 1 = 0.4 ns −1 (as this is a good setting to show the effect of the FOF on the onset of the laser dynamics). In the red time trace of Figure 7 (left), FOF was not used whereas the FOF strength was set to 2 ns −1 in the blue time trace of Figure 7 (left). Using FOF, the intensity fluctuations in the time trace became smaller as compared to the case without FOF. We also notice that the average intensity in the CCW direction increased due to the FOF, as it enhances the CCW mode (see also Figure 4). As a result, the rescaled STD was smaller for the trace in Figure 7 (left) corresponding to η 2 = 2 ns −1. The rescaled STD of the time traces was measured in the experiments to be 0.02.
We used this value to estimate D 0 to be 2 × 10 −6 ns −1 in order to find the same rescaled STD in the simulations without COF. Similarly to the experiments, we started by calculating the mean value and the STD of the time traces without FOF (η 2 = 0 ns −1). We increased the strength of the COF by increasing η 1 from 0 to 1.0 ns −1 in steps of 0.05 ns −1 while the rest of the parameters were fixed (η 2 = 0 ns −1). Next, we repeated the calculations of the mean value and the STD of the time traces, but this time with FOF by setting η 2 to 3 ns −1, 5 ns −1 and 8 ns −1, while the rest of the parameters were kept unchanged. We plot the rescaled STD from the simulations in Figure 7 (right) as a function of the COF strength η 1. At low values of the COF strength, the STD is relatively small and remains approximately constant when changing the COF strength. The onset of COF-induced dynamics is visible in these curves as the point at which the STD starts to rapidly increase with increasing COF strength. Similarly to the experiments, the onset happened first for the laser without FOF around η 1 = 0.2 ns −1. When FOF was applied, the onset first shifted to larger COF strengths, but this shift was rather limited. When further increasing the FOF strength, the onset of the dynamics shifted erratically and we did not observe a continuous increase in the onset. These numerical results thus agree qualitatively with our experimental trends and observations discussed in Section 3.1, and show that the FOF scheme presented in Figure 1 does not really help to reduce the COF-induced dynamics.
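The onset values quoted above can be read off the rescaled-STD curves automatically; the small sketch below does this for an illustrative curve by flagging the first COF strength at which the STD exceeds a multiple of its low-feedback baseline. The threshold and the synthetic curve are assumptions made for illustration only, not the criterion used in the paper.

```python
import numpy as np

def onset_of_dynamics(eta1_values, rescaled_std, factor=2.0, n_baseline=4):
    """Return the first COF strength at which the rescaled STD exceeds `factor`
    times its low-feedback baseline (mean over the first `n_baseline` points)."""
    rescaled_std = np.asarray(rescaled_std)
    baseline = np.mean(rescaled_std[:n_baseline])
    above = np.flatnonzero(rescaled_std > factor * baseline)
    return eta1_values[above[0]] if above.size else None

# Illustrative curve (placeholder values, not the simulated data of Figure 7)
eta1 = np.arange(0.0, 1.05, 0.05)
std_curve = 0.02 + 0.15 / (1 + np.exp(-(eta1 - 0.25) / 0.03))
print("estimated onset (1/ns):", onset_of_dynamics(eta1, std_curve))
```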
Due to the field nature of the equations, one exponent will always be zero. If only the maximal exponent is zero, the SRL will be emitting a continuous wave. If two exponents are zero while the others are all negative, the laser output will be periodic. If more exponents are zero, the dynamics can correspond to either periodic or quasi-periodic behavior. Once the maximal Lyapunov exponent becomes positive, the SRL operates chaotically. From Figure 7 (right) and Figure 8 (left), in the case of no filtered feedback, the increase of the STD around η1 = 0.1 to 0.4 ns−1 can be attributed to a bifurcation from continuous wave emission to periodic oscillations.
It is only later, after a regime of quasi-periodic behavior, that the laser becomes chaotic (around η1 = 0.8 ns−1). With FOF (η2 = 3.0 ns−1), in Figure 8 (middle), below η1 = 0.7 ns−1 the SRL with filtered feedback was lasing continuously except for some very small windows of periodic behavior. While this seems to indicate that the SRL would be lasing more stably, the negative Lyapunov exponents were now much smaller in amplitude, which indicates that the SRL would be much easier to destabilize, for example by noise. The bifurcation to chaotic behavior hardly moved and still appeared at feedback strengths around η1 = 0.8 ns−1. However, its accompanying positive Lyapunov exponents increased significantly, indicating a more complex and less damped chaotic behavior. For η2 = 8.0 ns−1 (Figure 8 (right)), it is clear that the large region of chaos shifted to lower values of η1 (η1 ≈ 0.4 ns−1). Around η1 = 0.2 ns−1, the laser was first destabilized as a small window of mildly chaotic behavior appeared (i.e., only one of the Lyapunov exponents was positive). This onset of chaotic oscillations corresponds to the abrupt change in the rescaled STD observed numerically in Figure 7 (right) and experimentally in Figure 6 for ISOA1 = 30 mA.

Figure 8. The five largest Lyapunov exponents: without FOF (η2 = 0 ns−1) (left), with FOF (η2 = 3 ns−1) (middle) and with large FOF strength (η2 = 8 ns−1) (right).

To conclude, with filtered feedback the dynamical behavior of the SRL was altered considerably. For some values of the filtered feedback this led to a larger but less stable continuous wave regime and to chaos that was more complex. Because of the larger continuous wave regime, the feedback sensitivity was somewhat reduced compared to the device without FOF.

Discussion

The above results show that the filtered feedback has only a marginal beneficial effect on the feedback sensitivity of the SRL. Even more, in several cases the filtered feedback leads to a further destabilization of the laser dynamics. One reason that comes to mind as to why the addition of the filtered feedback does not deliver the desired outcome is the fact that the SRL is not operating in an ideal unidirectional emission regime, i.e., the CW mode in which the COF signal is reinjected is not fully turned off. To investigate whether this might be the issue, we considered an ideal SRL with no backscattering between the two counter-propagating modes (i.e., kd = kc = 0) in the numerical simulations. In this case, the SRL without any feedback operates in a unidirectional regime with the full output power concentrated either in the CW or the CCW mode. In Figure 9, we show the results from a numerical analysis of Equations (1)–(3) for kd = kc = 0. The left-hand side of Figure 9 shows rescaled STDs obtained from time traces using the procedure described above. For all cases, we find that the STD increases at COF strengths that are even lower than in Figure 7. The right-hand side of Figure 9 shows the five largest Lyapunov exponents describing the noiseless dynamics of the SRL in the case of filtered feedback. Again, at a very low feedback strength (η1 > 0.05 ns−1), the SRL becomes chaotic. It is clear that even in the case of no backscattering, the filtered feedback actually destabilizes the SRL. This indicates that, for the device layout studied here, a feedback signal in the quiescent directional mode is coupled (through the FOF branch) sufficiently strongly to the dominant directional mode to invoke delay-induced dynamical fluctuations.
Conclusions

In this paper we studied, both experimentally and numerically, an SRL in which on-chip filtered optical feedback is used to tune the wavelength, to enforce single-longitudinal-mode operation and to enhance the directionality of the laser. More particularly, we focused on the sensitivity to coherent optical feedback from a longer off-chip delay path, and we initially speculated that the FOF might result in a higher tolerance to COF. However, our experiments and modeling show that the FOF does not result in a substantial shift of the COF-induced dynamics towards higher COF strengths. We attribute this to the fact that the COF signal, after reinjection into the SRL, is coupled back into the lasing mode via the filtered feedback. Even when the backscattering is strongly reduced, our simulations show that this will not result in a beneficial effect for the studied SRL with FOF configuration.
11,527.6
2019-10-28T00:00:00.000
[ "Physics" ]
Bit and Power Allocation in Constrained Multicarrier Systems: The Single-User Case Multicarrier modulation is a powerful transmission technique that provides improved performance in various communication fields. A fundamental topic of multicarrier communication systems is the bit and power loading, which is addressed in this article as a constrained multivariable nonlinear optimization problem. In particular, we present the main classes of loading problems, namely, rate maximization and margin maximization, and we discuss their optimal solutions for the single-user case. Initially, the classical water-filling solution subject to a total power constraint is presented using the Lagrange multipliers optimization approach. Next, the peak-power constraint is included and the concept of cup-limited waterfilling is introduced. The loading problem is also addressed subject to the integer-bit restriction and the optimal discrete solution is examined using combinatorial optimization methods. Furthermore, we investigate the duality conditions of the rate maximization and margin maximization problems and we highlight various ideas for low-complexity loading algorithms. This article surveys and reviews existing results on resource allocation in constrained multicarrier systems and presents new trends in this area. The principle of MCM is the spectrum decomposition into a set of orthogonal narrowband subchannels by utilizing complex exponentials as information-bearing carriers.Two important MCM techniques have widespread use: orthogonal frequency-division multiplexing (OFDM) [14] mainly employed in wireless applications and discrete multitone (DMT) [15] used in wireline systems.Both OFDM and DMT employ the fast Fourier transform (FFT) for spectrum decomposition, hence data transmission is performed in blocks.In order to avoid ISI and to preserve orthogonality, a cyclic prefix is introduced at the expense of a data rate loss [16].Using the cyclic prefix, the system carriers can be viewed as separate independent channels, on which different information rates can be transferred by utilizing constellations of different sizes. The allocation of bits and power to the subchannels is a fundamental aspect in the design of multicarrier systems.The allocation problem is known as bit and power loading and is based on loading algorithms, which aim to distribute the total number of bits and the available power over the subchannels in an optimal way that maximizes performance and preserves a target quality of service.In fact, the bit and power loading is a constraint optimization problem and generally two cases are of practical interest [17]: rate maximization (RM) and margin maximization (MM), where the objective is the maximization of the achievable data rate or the achievable system margin, respectively.In fact, margin maximization is equivalent to power minimization given a target data rate.The loading problem defines a set of constraints imposed either by recommendation rules and specifications [18], or by practical limitations and implementation issues [9].Such constraints include total available power budget, power spectral density (PSD) mask, integer number of bits per subcarrier, and so forth. 
Adaptive loading is possible only when channel state information (CSI) is known both at the transmitter and the receiver.In wireless applications, the channel is timevarying and therefore OFDM systems usually employ the same constellation in all carriers.On the other hand, the wireline channel is treated either as almost constant or as slow time-varying, and therefore CSI can be sent to the transmitter by a feedback link.Thus, in DMT applications, the utilization of different signal constellations per subchannel by adaptive bit and power loading is of great importance, and as the number of subchannels required in commercial applications [9,17] increases, the development of efficient loading algorithms is a challenging task. The literature contains several loading algorithms proposed for DMT-based systems.These algorithms consider either the RM or the MM problem, and two general classes can be distinguished.The first class of loading algorithms treats the allocation problem using numerical methods that employ Lagrange optimization, which in general results in real numbers for optimum bit allocation [19][20][21][22].However, for practical applications, the number of bits per subchannel is restricted to integer values, and thus, the above algorithms include a final suboptimum bit-rounding step.The integer-bit constraint imposes a combinatorial structure in the loading optimization problem.The second class of loading algorithms employs discrete greedy-type methods in order to obtain the optimum integer-bit allocation results [23][24][25][26][27]. This article aims at providing a tutorial survey on the bit and power loading in constrained multicarrier systems and at reviewing the most popular results on the loading algorithms for the RM and MM problems.We examine the singleuser communications scenario, that is, a point-to-point link between a DMT-based transmitter and a receiver.We start out with a short introductory overview of the multicarrier basics.Then, the loading problem is considered only subject to a total power constraint and the classical water-filling solution is discussed using the Lagrange multipliers optimization approach.Next, the peak-power constraint is included and the concept of cup-limited water-filling is introduced.The loading problem is also addressed subject to the integer-bit restriction and the optimal discrete solution is examined using combinatorial optimization methods.Moreover, we investigate the duality conditions of the RM and MM loading problems and we highlight some ideas for low-complexity loading algorithms.This article aims to provide the basic knowledge for more complex and challenging problems of bit and power allocation in constrained multicarrier systems under the multi-user context. 
MULTICARRIER LOADING

MCM decomposes the channel spectrum into a set of N orthogonal narrowband subchannels of equal bandwidths [1]. For each subchannel i, where 1 ≤ i ≤ N, a rate function R(P i ) is defined, which gives the number of bits b i that can be transmitted using power P i . The rate function depends on the maximum probability of error that can be tolerated and on the applied modulation and coding schemes, which we assume to be shared among all the subchannels. In addition, we assume the existence of the inverse function R −1 (b i ), namely the power function, which gives the power P i required for the transmission of b i bits. We consider practical QAM-coded MCM, where the rate function is given by the following logarithmic expression in bits per two-dimensional symbol:

b i = R(P i ) = log 2 (1 + P i g i / Γ), (1)

where g i = |H i | 2 /N i is the gain-to-noise ratio of subchannel i, H i is the channel frequency response, N i is the noise power, and Γ is the SNR gap expressing the loss, in terms of SNR, between the actual rate b i conveyed by the used transmission scheme and the theoretical capacity achieved for Γ = 1 (0 dB). The SNR gap is calculated according to the "gap-approximation" analysis [15,28], based on the target error probability P e , the applied coding gain γ c , and the system performance margin γ m . Some useful comments on the validity limits of the "gap-approximation" can also be found in [21]. When QAM transmission is employed, we can write

Γ = (γ m / γ c ) · (1/3) [Q −1 (P e /4)] 2 , (2)

where Q −1 is the inverse of the well-known Q-function defined as

Q(x) = (1/√(2π)) ∫ from x to ∞ of e^(−t²/2) dt. (3)

Note that the margin γ m in (2) expresses the SNR degradation immunity which the system designer tries to achieve, so that the MCM performance is maintained for the desired probability of error. The higher the system margin is, the more power is required for a given probability of error. On the other hand, as the coding gain γ c increases, the transmission rate approaches the system capacity.

Since OFDM and DMT systems usually require the same error rate for all subchannels [15], in the rest of this article we will consider that Γ is embedded in the g i s, that is, g i ← g i / Γ, so that b i = log 2 (1 + P i g i ). From (1), it is clear that the power function is defined by the exponential expression

P i = R −1 (b i ) = (2^(b i) − 1) / g i , (4)

while the total power and the total data rate of the multicarrier system are, respectively,

P = Σ N i=1 P i and B = Σ N i=1 b i . (5)

The loading problem aims at determining the optimal distribution of the available power over all subchannels. Using the rate and power functions, (1) and (4), respectively, the optimal distribution of the available power is transformed into an optimal distribution of the achievable data rate over the subchannels, and vice versa. The loading problem is formulated as a multivariable constrained optimization problem. The optimization objective is the maximization of the rate function (RM case) or the minimization of the power function (MM case), subject to a set of constraint functions that reflect system limitations and restrictions. In the following sections, we formulate the loading problem, initially subject only to the total power budget constraint, and afterwards when the peak-power restriction per subchannel is also included.
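A minimal sketch of the rate function (1), the power function (4) and the gap-approximation (2) is given below; the numerical constants follow the common uncoded-QAM form of the gap approximation quoted above, and the helper names are illustrative.

```python
import math
from scipy.stats import norm

def snr_gap(pe, coding_gain_db=0.0, margin_db=0.0):
    """SNR gap (linear scale) from the gap approximation, Eq. (2):
    Gamma = (gamma_m / gamma_c) * (1/3) * [Qinv(pe/4)]^2."""
    qinv = norm.isf(pe / 4.0)                 # inverse Q-function
    gamma0 = (qinv ** 2) / 3.0
    return gamma0 * 10 ** ((margin_db - coding_gain_db) / 10.0)

def rate(p, g, gamma=1.0):
    """Rate function (1): bits per two-dimensional symbol on a subchannel."""
    return math.log2(1.0 + p * g / gamma)

def power(b, g, gamma=1.0):
    """Power function (4): power needed to carry b bits on gain-to-noise ratio g."""
    return gamma * (2.0 ** b - 1.0) / g

# Example: for Pe = 1e-7, 3 dB coding gain and 6 dB margin the gap comes out
# close to the 12.8 dB quoted in the numerical example of Section 6.3:
# print(10 * math.log10(snr_gap(1e-7, coding_gain_db=3, margin_db=6)))
```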
TOTAL POWER-CONSTRAINED LOADING

Let P budget denote the total power budget and B target denote the desired data rate. The RM and MM loading problems are formulated as follows.

RM loading problem:

maximize B = Σ N i=1 b i = Σ N i=1 log 2 (1 + P i g i ), subject to Σ N i=1 P i ≤ P budget and P i ≥ 0 for 1 ≤ i ≤ N. (6)

MM loading problem:

minimize P = Σ N i=1 P i = Σ N i=1 (2^(b i) − 1)/g i , subject to Σ N i=1 b i = B target and b i ≥ 0 for 1 ≤ i ≤ N. (7)

We observe that the logarithmic expression in (6) is a strictly increasing and concave function of P i , while the exponential expression in (7) is a strictly increasing and convex function of b i [29]. As a result, both RM and MM belong to the class of convex optimization problems with convex constraint sets, and therefore a unique global solution exists. Moreover, both problems are nonlinear. The optimal solution is calculated by forming the corresponding Lagrangian function and applying the Kuhn-Tucker conditions [30].

From the MM problem formulation in (7), it is clear that margin maximization is equivalent to power minimization. In fact, given a target data rate, the MM objective is to determine, among all possible bit allocations that correspond to a data rate equal to the target one, the optimum bit allocation that requires the least total power. The additional power, that is, the difference between the available power budget and the total power of the optimum bit allocation, is used to increase the system margin γ m in (2). Note that for the MM problem we assume that the available power budget is sufficient to support the desired target rate, otherwise the problem in (7) has no solution. Based on this assumption, a total power budget constraint is not included in (7).

Rate maximization water-filling

The optimal solution to the RM problem is given by the following relation, which is known as the water-filling formula:

P i = K r − 1/g i for all used subchannels, and P i = 0 otherwise, (8)

where K r is a constant value depending on the total power budget.

The solution in (8) can best be described using Figure 1. The spectrum can be considered as a vessel whose bottom shape is determined by the inverse of the g i values. We can say that the available power is poured over the spectrum vessel, so that the subchannels covered by the water-level K r are assigned power, while the remaining subchannels are not used at all (water-filling is also referred to as water-pouring). Assuming that the subchannels are sorted in descending order of their gain-to-noise ratios, the water-level K r is

K r = (1/M r) (P budget + Σ M r i=1 1/g i ), (9)

where M r is the total number of used subchannels, determined as the largest index M for which the corresponding water-level still exceeds the vessel bottom, that is,

P budget + Σ M i=1 1/g i > M/g M . (10)

An iterative algorithm that determines the water-filling RM solution by using an initial sorting of the subchannels' gain-to-noise ratio values is described in [20]. When the subchannels are sorted, the objective of the loading algorithm is to determine the cut-off subchannel M r and the constant K r . Sorting is not a trivial task when the number of subchannels is large. In general, this task dominates the computational complexity of all practical algorithms for water-filling, so the complexity is O(N log 2 N).

The optimum bit allocation is derived from (1) and (8), which results in the following compact formula:

b i = log 2 (g i K r ) for 1 ≤ i ≤ M r , (11)

while the total data rate is

B = Σ M r i=1 log 2 (g i K r ). (12)

Remark 1. By combining (8) and (9), we derive that the optimal solution uses all the available power budget, that is, the total power constraint in (6) is met with equality. Moreover, we observe that as more power budget becomes available, the water-level in (9) becomes higher and, as a consequence, more subchannels may be turned on, as long as (10) implies a higher value of M r . Therefore, a higher power budget corresponds to a higher water-level, which generally results in the utilization of more subchannels and thus in a higher data rate.

Remark 2.
From ( 8) and ( 9), we can write the optimum power allocation using the following expression: The first term is a constant power portion, while the second term is the distance between the mean of the inverse gain-to-noise ratios of all used subchannels and the g −1 i of each subchannel.Figure 2 illustrates this remark, where the subchannels are sorted.Observe that for subchannel i the distance of g −1 i to the mean value is positive, while for subchannel j, the distance is negative.Remark 3. Figure 2 also illustrates a characteristic feature of the water-filling allocation strategy: water-filling allocates more power to the strongest subchannels. Margin maximization water-filling Using the Lagrange multipliers method and applying the Kuhn-Tucker conditions, we can derive the optimal solution to the MM problem in (7) as follows: for all used subchannels, 0 otherwise, ( where K m is constant and depends on the target data rate. Assuming that the subchannels are sorted in a descending order, K m is given by where M m is the total number of used subchannels determined according to the following criteria: The analogy between ( 9) and ( 10) of the RM problem with ( 15) and ( 16) will be evident, as soon as we calculate the power of each subchannel allocated with b i bits according to (14).Using (4), we get In (17), we observe that the optimum bit solution to the MM problem results in a power distribution that follows a water-filling power allocation as in the RM problem.Therefore, a power distribution, similar to the one shown in Figure 1, holds also for the MM problem.In this case, the constant water-level is equal to while the total power is given by Remark 4. We observe in (18), that the higher the target rate, the higher the water-level and consequently more subchannels may be used, as long as ( 16) implies a higher value for M m .As a result, a higher target rate requires a higher total power consumption. Duality conditions between RM and MM problems The RM and MM problems admit a unique water-filling solution.The following proposition holds. Proposition 1.Let (b w i , P w i ), for 1 ≤ i ≤ N, be a water-filling bit and power allocation, where N i=1 b w i = B w and N i=1 P w i = P w .Then, Proof.We have shown in Section 3.1, that a waterfilling power allocation provides the unique solution that maximizes the data rate subject to a total power constraint. In fact, the whole power budget is consumed (see Remark 1).Therefore, any other allocation (b i , P i ) with N i=1 P i = P w results in a total rate of N i=1 b i < B w .Therefore, the first part of (20) is true. Moreover, we have shown in Section 3.2, that a waterfilling bit allocation provides the unique solution that minimizes the total power subject to a target rate constraint.Therefore, any other allocation (b i , P i ) with N i=1 b i = B w results in a total power of N i=1 P i > P w .Consequently, the second part of ( 20) is also true. From the analytical expressions derived for the RM and MM problems, there exists a duality between the RM and MM problems under specific conditions.We are now in the position to define these conditions in the form of the following theorem.(21) which implies that (b r i , P r i ) is also the solution to the MM problem, when B target = B r . Similarly, for the MM solution (b m i , P m i ), we can write which implies that (b m i , P m i ) is also the solution to the RM problem, when P budget = P m . 
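A compact sketch of the two water-filling solutions discussed in this section is given below. It assumes that the SNR gap is already embedded in the gain-to-noise ratios and solves the cut-off index and water-level directly, rather than reproducing the iterative algorithm of [20]; function names are illustrative.

```python
import numpy as np

def waterfill_rm(g, p_budget):
    """Rate-maximization water-filling, Eqs. (8)-(12)."""
    order = np.argsort(g)[::-1]              # sort subchannels, strongest first
    gs = np.asarray(g, dtype=float)[order]
    inv = 1.0 / gs
    M = len(gs)
    while M > 0:                              # find cut-off subchannel M_r
        K = (p_budget + inv[:M].sum()) / M    # water-level, Eq. (9)
        if K > inv[M - 1]:
            break
        M -= 1
    p = np.zeros_like(gs)
    b = np.zeros_like(gs)
    p[:M] = K - inv[:M]                       # Eq. (8)
    b[:M] = np.log2(gs[:M] * K)               # Eq. (11)
    p_out, b_out = np.empty_like(p), np.empty_like(b)
    p_out[order], b_out[order] = p, b         # restore original subchannel order
    return b_out, p_out, K

def waterfill_mm(g, b_target):
    """Margin-maximization (power-minimizing) water-filling: bit allocation
    b_i = log2(g_i * K_m) meeting the target rate with the least total power."""
    order = np.argsort(g)[::-1]
    gs = np.asarray(g, dtype=float)[order]
    M = len(gs)
    while M > 0:
        # water-level such that the rates of the first M subchannels sum to b_target
        K = 2.0 ** ((b_target - np.log2(gs[:M]).sum()) / M)
        if gs[M - 1] * K >= 1.0:              # all used subchannels get b_i >= 0
            break
        M -= 1
    b = np.zeros_like(gs)
    b[:M] = np.log2(gs[:M] * K)
    p = (2.0 ** b - 1.0) / gs
    b_out, p_out = np.empty_like(b), np.empty_like(p)
    b_out[order], p_out[order] = b, p
    return b_out, p_out, K
```

Running both routines on the same channel illustrates the duality conditions above: the power of the MM solution at rate B equals the budget for which the RM solution achieves exactly rate B.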
TOTAL POWER AND PEAK-POWER CONSTRAINED LOADING

When introducing the peak-power constraint, the optimization problem becomes more complicated. Let P̄ i , for 1 ≤ i ≤ N, denote the maximum allowable power per subchannel. In multicarrier systems, a power spectral density (PSD) mask constraint is usually imposed by regulatory rules in order to control the level of interference into other communication systems operating in the neighborhood, for example [18]. The RM and MM problems are formulated as follows.

RM loading problem:

maximize Σ N i=1 log 2 (1 + P i g i ), subject to Σ N i=1 P i ≤ P budget and 0 ≤ P i ≤ P̄ i for 1 ≤ i ≤ N. (23)

MM loading problem:

minimize Σ N i=1 P i , subject to Σ N i=1 b i = B target and 0 ≤ b i ≤ b̄ i for 1 ≤ i ≤ N. (24)

In the RM problem, we observe that the peak-power constraint upper bounds the possible power allocation in each subchannel. In the MM problem, the peak-power constraint is transformed into a maximum bit allocation constraint, denoted as b̄ i for 1 ≤ i ≤ N, which upper bounds the possible bit allocation in each subchannel and is defined by

b̄ i = log 2 (1 + P̄ i g i ). (25)

The RM and MM problems in (23) and (24) also belong to the class of convex optimization problems with convex constraint sets, and therefore a unique global solution exists.

Algorithm 1: Cap-limited water-filling.
(1) sort the g i s in descending order
(2) set j = 1
(3) apply WATER-FILLING (8)-(10) over subchannels j, . . ., N
(4) if P j > P̄ j then
(5) set P j = P̄ j
(6) reduce the available power: P budget = P budget − P̄ j
(7) update j = j + 1
(8) go to step (3)
(9) end if

Rate maximization water-filling

By using the Lagrange multipliers approach and applying the Kuhn-Tucker conditions, we can derive the optimal solution to the RM problem in (23) as follows, using the notation [x] a c = min(a, max(c, x)) for real numbers x, a and c:

P i = [K r − 1/g i ] with lower bound 0 and upper bound P̄ i , that is, P i = min( P̄ i , max(0, K r − 1/g i )), (26)

where K r is a constant determined by the solution to the following nonlinear equation:

Σ N i=1 min( P̄ i , max(0, K r − 1/g i )) = P budget . (27)

The RM solution in (26) is again water-filling; however, in this case the spectrum vessel has a limited depth of P̄ i and is covered by a cap. When P̄ i = P̄ for all subchannels, the shape of the cap is identical to the vessel's bottom, that is, the inverse of the g i s. The concept of "cap-limited" water-filling is illustrated in Figure 3 subject to a common PSD mask for all subchannels.

In order to obtain the solution in (26), we need to determine the constant K r . In Section 3.1, an iterative algorithm for the calculation of the water-level of the total power constrained RM was presented. When the peak-power constraint is introduced, the RM problem in (23) can be treated using an iterative water-filling process [21], which is described by the pseudocode of Algorithm 1. Algorithm 1 is optimal; however, its direct implementation is not efficient and presents O(N 2 ) complexity. In order to overcome such a high computational load, an iterative algorithm of reduced complexity can be constructed by exploiting the fact that, in every new iteration of Algorithm 1, the participating subchannels are allocated more power with respect to the previous iteration.

First, consider the optimal water-filling solution in Section 3.1, which is depicted in Figure 4, where the subchannels are sorted. Denoting by P N = [P 1 , P 2 , . . ., P N ] the optimum N-point water-filling power vector, P N satisfies P i + 1/g i = K w for 1 ≤ i ≤ M and P i = 0 for M < i ≤ N, where K w is the water-level and subchannels from M + 1 to N are turned off, that is, they are loaded with zero power, and the following proposition holds. Proposition 2. Given the sorted water-filling power allocation vector P N , if one removes subchannels 1, . .
., L and reduces P budget by L i=1 P i , then the new optimal water-filling solution is the (N − L)-point power vector P N−L = P N − {P 1 , . . ., P L } = [P L+1 , . . ., P N ]. For M = M, the constant K w becomes From (28) (30), we derive that the power vector P N−1 = [P 2 , . . ., P N ] satisfies (29) and therefore P N−1 is the optimal vector.The proof for L > 1 is similar. As suggested by Algorithm 1, if the power allocated to subchannel i (staring from the one with the highest g i ) exceeds P i , then we set P i = P i , reduce P budget by P i , exclude subchannel i from the optimization problem, and perform water-filling to the remaining subchannels.Since P budget is reduced by an amount of power less than the optimal power assigned by the previous water-filling, then according to Proposition 2 and Remark 1, the new solution has higher K w and additional subchannels may be turned on.As a result, all subchannels participating in the next water-filling will be assigned additional power.Based on this remark, the new optimal algorithm is described by the pseudocode in Algorithm 2. Algorithm 2 is explained using Figure 5 subject to common PSD mask for all subchannels.Given the initial water-filling solution with cut-off subchannel index M and water-level K w , the algorithm determines the first subchannel, denoted as M, where the power assignment does not violate P.Then, it upper bounds all subchannels from 1 to M−1 with P and reduces the power budget by the total power assigned so far.At the next step, the algorithm proceeds to successive water-filling over the subchannels ranging from M to N. The new water-filling solution determines a new higher water-level, K new w , corresponding to new subchannel indexes M new > M and M new ≥ M.This procedure is repeated until the water-filling allocation does not violate P in any of the subchannels involved in the new iteration. The complexity improvement of the iterative waterfilling scheme described by Algorithm 2 compared with Algorithm 1 depends on the total number of iterations that water-filling has executed.If L ≤ N is the number of iterations, then the computational complexity of Algorithm 2 is O(N(L + log 2 N)).The lower is L compared to N, the higher is the computational complexity improvement.In [21], a suboptimum algorithm for the RM problem in ( 23) is described that uses an iterative search-secant method to determine the root of (27), by noting that (27) admits a root when N i=1 P i > P budget .The search-secant process is subject to a tolerance variable that affects the speed of convergence, as well as the accuracy of the final result.Generally, there is a tradeoff between the speed of convergence and accuracy.The method presents a computational complexity that grows linearly with L N, where L is the number of the search-secant iterations. Remark 5. Similar to Remark 1, we observe from ( 26) and ( 27) that the optimal "cap-limited" water-filling solution consumes the total available power budget. 
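A minimal transcription of Algorithm 1 is sketched below; it is an illustration of the structure of the algorithm rather than the reduced-complexity variant, and the small internal water-filling helper simply re-implements Eqs. (8)-(10).

```python
import numpy as np

def _waterfill(g, budget):
    """Plain water-filling (Eqs. (8)-(10)) over subchannels sorted by g (descending)."""
    inv = 1.0 / g
    M = len(g)
    while M > 0:
        K = (budget + inv[:M].sum()) / M
        if K > inv[M - 1]:
            p = np.zeros_like(g)
            p[:M] = K - inv[:M]
            return p
        M -= 1
    return np.zeros_like(g)

def cap_limited_waterfill(g, p_budget, p_mask):
    """Cap-limited water-filling following the structure of Algorithm 1: water-fill,
    clamp the strongest subchannel that violates its peak-power limit, remove it,
    reduce the budget and repeat over the remaining subchannels."""
    order = np.argsort(g)[::-1]                  # strongest subchannel first
    gs = np.asarray(g, dtype=float)[order]
    mask = np.asarray(p_mask, dtype=float)[order]
    p_sorted = np.zeros_like(gs)
    budget = float(p_budget)
    j = 0
    while j < len(gs):
        p_wf = _waterfill(gs[j:], budget)
        if p_wf[0] > mask[j]:                    # peak power violated on subchannel j
            p_sorted[j] = mask[j]
            budget -= mask[j]
            j += 1
        else:                                    # feasible: accept this water-filling
            p_sorted[j:] = p_wf
            break
    p = np.empty_like(p_sorted)
    p[order] = p_sorted                          # restore original subchannel order
    b = np.log2(1.0 + p * np.asarray(g, dtype=float))
    return b, p
```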
Margin maximization water-filling In order to obtain the optimal solution to the peak-power constrained MM problem, we will use the duality between the RM and MM problems developed in Section 3.3.We have shown in Section 3.2 that the optimal bit solution for the MM problem under a total power constraint results in a power allocation, which follows a water-filling distribution.Given a peak-power constraint, this power allocation should also follow the "cap-limited" water-filling concept of Figure 3.The optimal bit solution to ( 24) is therefore given by b where K m is the solution to the following nonlinear equation: It can be easily verified that Theorem 1 applies also for the total and peak-power constrained RM and MM problems. INTEGER-BIT CONSTRAINED LOADING The Lagrangian methods described in the previous sections provide the optimal loading solutions, where generally the bit assignment in each subchannel takes real values.However, due to implementation constraints, only integer bit values are of practical interest, that is, design of realistic constellation encoders and decoders.As a consequence, the proposed Lagrangian algorithms in the literature include a final suboptimal bit-rounding step with appropriate power scaling to preserve the power budget and target error rate constraints. The integer-bit constrained loading problem, also referred to as discrete loading, belongs to the class of combinatorial optimization problems.The RM and MM formulations of the previous sections apply here, along with the additional integer-bit constraint: b i ∈ Z + for 1 ≤ i ≤ N. Remark 6.The monotonicity and concave nature of the rate function in (1), along with the monotonicity and convex nature of the power function in (4), as well as of the corresponding discrete incremental (33) and decremental (34) power cost functions defined below, guarantee the existence of a unique optimum solution for each of the RM and MM discrete loading problems based on appropriate greedy algorithms.The optimality is addressed in [26,31,32], using the matroid theory. Optimum greedy algorithms The solution to the integer-bit loading problem is provided using a greedy algorithm, which defines an appropriate bit allocation cost function and iteratively assigns one bit at a time to the least cost-expensive subchannel.In general, a greedy algorithm is characterized by the following two properties [33].First, at each step, the algorithm always moves its operating point along the direction that guarantees the largest increment (decrement) to the assigned objective function to be maximized (minimized).Second, a greedy algorithm proceeds only in a forward way, that is, it never tracks back.Two greedy loading methods are used: the bit-filling [23,31] and the bit-removal [25,32]. 
Considering that subchannel i carries b i bits, the power needed to transmit one more bit in this subchannel is given by while the power saved by removing one bit from this subchannel is given by and the maximum3 number of bits that can be assigned to each subchannel is The incremental power in (33) constitutes the cost function of the bit-filling process, while the decremental power in (34) constitutes the cost function of the bit-removal process.In particular, the bit-filling algorithm starts from an initial all-zero bit allocation, b i = 0 for 1 ≤ i ≤ N, and then adds one bit at a time to the subchannel that requires the minimum additional power until the total power budget is consumed (RM case) or the target rate is achieved (MM case).On the other hand, the bit-removal algorithm starts from an initial maximum bit allocation, b i = b i for 1 ≤ i ≤ N, and then removes one bit at a time from the subchannel that saves the maximum power until the target rate is achieved (MM case).Note that if N i=1 P i ≤ P budget , the maximum bit allocation b i = b i used initially by the bitremoval algorithm, is the direct solution to the RM case.In Appendix A, the following theorem is proved. Theorem 2. Given a target rate B target , the bit-filling and bitremoval algorithms result in the same optimum bit and power allocation. The following remarks are also in order. Remark 7.For the nondiscrete RM problems formulated in the previous sections, we have noted that the optimal solution results in the consumption of the total available power budget.In the discrete RM problem, however, the optimum integer-bit solution results in total power that is generally less or equal to the power budget. Remark 8.Although bit-filling and bit-removal provide the same solution, the computational load associated with each method mainly depends on the target data rate.The complexity of the bit-filling is O(B target N), while the complexity of the bit-removal is O((B max − B target )N), where B max is the data rate corresponding to the b i bit-profile.If B target is close to B max , then bit-removal converges faster. Remark 9.If bit-filling is left free to proceed above B target , by adding one bit at a time to the least power cost-expensive subchannel, then it will terminate at the b i allocation. Also for the discrete loading RM and MM problems, there exist exact conditions for their equivalence, as in the water-filling case.Theorem 1 developed in Section 3.3 holds also for the case of discrete loading, where the only difference is that due to the integer-bit allocation, the MM solution under the duality conditions is also the RM solution with P budget ≥ P m (see Theorem 1 for details).The proof is given in Appendix B. Another approach is provided in [34]. Efficient integer-bit allocation profiles The high computational load of the greedy bit-filling and bitremoval algorithms is an important disadvantage for practical systems with large number of subchannels and high data rate demands.In [27], an efficient discrete bit allocation profile was developed by recognizing that the order of the subchannels, which participate in the single-bit incremental process of bit-filling, is specific and includes a characteristic circular repetition. 
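Before turning to that characteristic profile, the greedy loading step itself can be sketched as follows. The routine below is an illustrative, heap-based implementation of bit-filling using the incremental power cost (33) and the integer bit limits implied by a peak-power mask; the stopping rules cover both the RM case (budget exhausted) and the MM case (target rate reached), and the function names are hypothetical.

```python
import heapq
import numpy as np

def greedy_bit_filling(g, p_budget, p_mask, b_target=None):
    """Greedy bit-filling: starting from the all-zero allocation, repeatedly add one
    bit to the subchannel with the smallest incremental power cost (Eq. (33)),
    until the power budget is exhausted (RM) or the target rate is reached (MM)."""
    g = np.asarray(g, dtype=float)
    # integer bit caps implied by the peak powers (floor of Eq. (25))
    b_max = np.floor(np.log2(1.0 + np.asarray(p_mask, dtype=float) * g)).astype(int)
    b = np.zeros(len(g), dtype=int)
    used = 0.0
    total_bits = 0

    def inc_cost(i):                      # Eq. (33): power needed for one more bit
        return (2.0 ** (b[i] + 1) - 2.0 ** b[i]) / g[i]

    heap = [(inc_cost(i), i) for i in range(len(g)) if b_max[i] > 0]
    heapq.heapify(heap)
    while heap:
        cost, i = heapq.heappop(heap)
        if used + cost > p_budget:        # cheapest bit no longer fits: stop (RM)
            break
        b[i] += 1
        used += cost
        total_bits += 1
        if b_target is not None and total_bits == b_target:
            break                         # target rate reached: stop (MM)
        if b[i] < b_max[i]:
            heapq.heappush(heap, (inc_cost(i), i))
    p = (2.0 ** b - 1.0) / g
    return b, p

# For the MM case, pass the target rate and a generous budget, e.g.
# greedy_bit_filling(g, np.inf, p_mask, b_target=B_target).
```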
The characteristic bit allocation profile is calculated as follows: where i max = arg max{g i }, i min = arg min{g i }, and The allocation b i depends only on the system g i values and presents an optimum bit allocation profile of a continuous greedy bit-filling process, where any total power or peak-power constraints are at the moment ignored.In other words, if bit-filling is continuously applied, then it will reach the allocation b i after N i=1 b i steps, where N i=1 b i is the data rate that corresponds to the b i profile.From (36), we observe that b i = 0 for i = i min .Depending on the value of k i , b i may be zero for other subchannels as well.Assuming that the subchannels are sorted, that is, i max = 1 and i min = N, the following remarks are in order. Remark 10.Given allocation (36) and assuming that bitfilling is applied, then one bit has to be added to all the subchannels i : 2 ≤ i ≤ N, before we can further increase the bits in subchannel i = 1 by one.The order, in which the subchannels are assigned by one more bit, depends on the power cost function (33) of each subchannel and generally it does not coincide with the descending order of the g i values. Remark 11.If bit-removal is applied in (36), then one bit is first removed from subchannel i = 1 and then, one bit has to be removed from all the subchannels i : 2 ≤ i ≤ N * , before we can further decrease the bits in subchannel i = 1 by one, where N * corresponds to the first nonzero bit-loaded subchannel. The importance of allocation (36) in providing low complexity bit loading follows from Remarks 10 and 11 along with the next theorem.[35]: Theorem 3. The integer bit allocation Proof.This theorem is proved by substituting b i = [b i + z] bi 0 in (33) and showing that ( 37) is true, ∀i, j : Theorem 3 states that every up-or downshift of (36) corresponds to an optimum discrete bit allocation under the power minimization goal, taking also into account the low (all-zeros) and the upper (b i ) bounds of the valid bit vectors.In [27], the following theorem was proven. , where the sign of Δb determines the up-or downshift.This is illustrated in Figure 6 for the two possible cases, where the subchannels are sorted.In Figure 6(b), the b i profile violates the maximum allowable allocation b i in some of the subchannels.This is due to the fact that profile (36) does not include any power or PSD restrictions. Using Theorem 4, along with Remarks 10 and 11, we can use the bit-profile b i as an initial optimum allocation and then perform a multiple-bit addition or removal process, that converges to the optimum bit solution with no more than a single bit difference per subchannel.In the following section efficient loading algorithms for the discrete RM and MM problems are presented. 
LOW-COMPLEXITY INTEGER-BIT LOADING In the previous section, it was made clear that in each allocation step, the greedy algorithm updates the bit-profile according to a power cost function until the system constraints are met, that is, the total power budget is consumed for the RM case or the target data rate is achieved for the MM case.At the end of the greedy process, the respective objective is satisfied, that is, rate maximization or margin maximization.The system constraints define a pair of low and maximum bit allocation limits.The greedy loading process can be described as a continuous bit-by-bit allocation procedure, since at each step it updates the bit-profile by moving on efficient bit allocations, see (37), within the set of all possible bitprofiles.For the RM problem, the upper bit allocation limit is determined by the total power budget constraint, while for the MM problem the upper bound is directly calculated by (35).In fact, the upper bound forthe RM problem coincides EURASIP Journal on Advances in Signal Processing Bit-distribution with the rate maximization solution.In the rest of this section, we present efficient discrete loading algorithms by exploiting the characteristic bit-profile b i defined in (36).These algorithms are based on a multiple-bit loading process that moves the b i profile towards the optimum solution. Discrete rate maximization First, we address the total power constrained problem.In contrast to the case of a PSD mask, where the maximum allowable bit allocation is directly determined by (35), the bit upper limit in the total power constrained problem is not straightforward.However, we know from Theorem 3 that every shift of the b i bit-profile corresponds to an efficient allocation.Thus, if the available power budget is not exceeded, the new bit allocation is valid within the system constraints. Since there is no explicit bit upper limit defined, we will use notation [x] a c with a = ∞, that is, [x] ∞ 0 = max(x, 0).The data rate and total power of bit allocation b i +α, where α ∈ Z, are, respectively, Let P(0) < P budget .In order to obtain the maximum possible data rate, we want to upshift profile b i by α ≥ 0, so that P(α) ≤ P budget < P(α + 1).(39) From (38), we can write Using (40), we derive the integer solution of (39) as The difference between the total power budget and the power corresponding to the b i profile upshifted by (41) may allow the allocation of a limited number of additional bits, less than N. We can use the greedy bit-filling process to allocate these bits. If P(0) > P budget , then we want to downshift profile b i by α ≤ 0, so that (39) also holds.Note that |α + 1| < |α|, when α ≤ 0. It turns out that α is given by (41).Since the value of |α| may be greater than the smallest nonzero value of b i for 1 ≤ i ≤ N, the total power of the downshifted bitprofile [b i + α] ∞ 0 may be higher than expected and therefore additional downshifting may be necessary.In this case, the new value of α is calculated using (41), where the upper limit of the summations is replaced by N * , which denotes the total number of nonzero bit-loaded subchannels, and P(0) is replaced by the total power P(α), which corresponds to the downshifted bit profile of the previous step.At the end of the downshift process, we use greedy bit-filling to add any additional bits less than N * if there is available power.The pseudocode in Algorithm 3 describes the low-complexity discrete loading for the total power constrained RM problem. 
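The multiple-bit loading idea behind Algorithm 3 can be sketched as follows. This is only an illustration under assumptions: the efficient base profile (e.g., the profile of Eq. (36)) is taken as a given input, the shift α is found by a simple search instead of the closed-form expression (41), and the few remaining bits are left to a final greedy bit-filling pass such as the one sketched earlier.

```python
import numpy as np

def total_power(b, g):
    """Total transmit power of an integer bit allocation (Eq. (4) summed)."""
    return float(np.sum((2.0 ** np.asarray(b, dtype=float) - 1.0) / np.asarray(g)))

def shift_profile_rm(b_base, g, p_budget, b_max=None):
    """Shift a precomputed efficient bit profile b_base up or down by an integer
    alpha so that the clamped profile fits the power budget (discrete RM case).
    Returns the shifted profile and the leftover power for a final greedy pass."""
    b_base = np.asarray(b_base, dtype=int)
    g = np.asarray(g, dtype=float)
    hi = (np.full(len(g), np.iinfo(np.int32).max) if b_max is None
          else np.asarray(b_max, dtype=int))

    def clamp(alpha):
        return np.clip(b_base + alpha, 0, hi)

    alpha = 0
    if total_power(clamp(0), g) <= p_budget:
        # shift up while the next shift still changes the profile and fits the budget
        while (np.any(clamp(alpha + 1) != clamp(alpha))
               and total_power(clamp(alpha + 1), g) <= p_budget):
            alpha += 1
    else:
        # shift down until the budget is met (or the profile is all zeros)
        while total_power(clamp(alpha), g) > p_budget and np.any(clamp(alpha) > 0):
            alpha -= 1
    b = clamp(alpha)
    return b, p_budget - total_power(b, g)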
In the case of a peak-power constraint, the optimum RM solution can be calculated by directly allocating b i bits and then, if necessary, we perform bit-removal in order to discard the most power-expensive bits until the total power constraint is met. Discrete margin maximization In the MM problem, we assume that the total power budget is sufficient in order to support the desired target rate.As in the previous section, we first address the total power-constrained loading.Given the initial bit-profile b i with a total data rate of B(0), see (38), we can perform multiple-bits loading by directly calculating the allocation b i = [b i + α] ∞ 0 , where In (42), N * is the total number of nonzero bit-loaded subchannels, as defined in Remark 11, and the sign of α depends on whether data rate increase (if 0 is efficient according to Theorem 3 and optimum under the power minimization goal.However, when downshift is performed, the resulting data rate might be greater than expected due to the low (zero) bitlimit.Therefore successive, but limited, number of multiplebits loading steps may be necessary until α becomes zero.Then, according to Remarks 10 and 11, the bit-profile allocated so far differs from the target rate solution at most in a single bit per subchannel.The remaining bits can be allocated to the appropriate subchannels based on the respective cost function (33) or (34).The pseudocode in Algorithm 4 describes the low-complexity MM loading. The above results also hold for the peak-power constrained loading.However, in this case, two important points should be noted.First, an explicit bit upper limit exists and the data-rate expression in (38) becomes Second, in (42), the value of α is calculated by the difference between the desired rate and the rate of the bounded b i profile.Since the difference between b i and [b i ] bi 0 may be large, a result of a = 0 may not indicate the maximum of one bit difference convergence.In order to overcome such a situation, if b imax violates the upper limit b imax , we apply Theorem 4 and move the b i bit-profile within the bit-limits. Numerical example Figure 7 shows an example of bit and power allocation using the CSA(6) standard ADSL loop in Table 47 of ANSI T1.413-1995 [36].The system parameters are: N = 256 subchannels, subcarrier spacing 4.3125 kHz, −140 dBm/Hz additive white Gaussian noise (AWGN) plus near-end crosstalk (NEXT) generated by 20 high-rate DSL (HDSL) neighboring lines, and a 40-kHz lower band edge.The loading constraint values are: −40 dBm/Hz PSD mask, 100 mWatt total power budget, and b max = 15.We also consider 6 dB margin and 3 dB coding gain.Assuming a maximum bit-error rate of 10 −7 , the corresponding SNR gap equals 12.8 dB. 
Figure 7 shows the maximum bit allocation, b i , the initial bit allocation, b i , and the target bit allocation that corresponds to 80% of the maximum possible data rate, that is, B target = 0.80 • N i=1 b i .Figure 7 also shows the transmit PSD that corresponds to the maximum and the target rate allocation.The sawtooth shape of the PSD is common to all discrete bit-loading algorithms and is the result of the stepwise power distribution due to the integer bit constraint.Since there is a 40-kHz lower band edge, subchannels 1-9 are not used.Also, note that for the requested target rate, the loading algorithm does not utilize subchannels 28-52.The remaining power, that is, the difference between the total power of the target rate allocation and the power budget, can be used in order to increase the system margin in all subchannels. In [27], numerical results show that exploiting the efficient bit allocation profile described in Section 5.2, a computational complexity improvement of up to 6 times compared with the greedy bit-filling and the bit-removal methods is achieved.Although the results correspond to the case of total and peak-power constraints, the complexity improvement for the total power constraint is only the same or higher.Indeed, in the latter case, the algorithm experiences less differences between the actual and the expected power or rate (step 6 of Algorithms 3 and 4), thus the optimum bit-allocation is reached with less shifting operations of the b i profile.It has to be noted that according to Remarks 10 and 11, the shifting of the b i profile converges to the target rate with only one bit difference per subchannel.Therefore, the final greedy bitfilling or bit-removal steps in Algorithms 3 and 4 require only the calculation of the cost function (33) or (34) and the determination of the least or most power-expensive subchannels, respectively.As a result of Remarks 10 and 11, after each subchannel selection, there is no need to update the corresponding cost function, thus the total complexity of the final greedy process is reduced. When perfect CSI is not known The bit and power loading algorithms described in the previous sections presume that an estimation of the instantaneous CSI, that is, the subchannel gain-to-noise ratio values, is available.When the channel is constant or slow time varying, this is not a complex task.For "always on" links, such as DSL, CSI is obtained during the modems' training.For burst transmission, such as in wireless LANs, CSI can be estimated using a suitable preamble structure or inbound training information.However, in order to account for the limitations imposed by the time-varying behavior of the wireless channels, such as noisy or outdated CSI, alternative adaptive MCM schemes have gained research attention, for example, statistical adaptive MCM and adaptive MCM with partial CSI.References [37][38][39] can motivate the interested reader on this topic. 
CONCLUSIONS In this work, we surveyed the area of bit and power loading in constrained multicarrier communication systems in the single-user context.We discussed the optimal solutions to the main classes of loading problems, namely, rate maximization and margin maximization, under a set of specification and implementation constraints.We presented the water-filling power allocation policy under a total power constraint and the cap-limited water-filling concept was introduced when the peak-power constraint is included.Moreover, the loading problem was addressed subject to the integer-bit restriction and the optimal discrete solution was examined using combinatorial optimization methods.We reviewed existing loading algorithms and highlighted some ideas for low-complexity solutions. only one bit-profile that is BF-efficient for a given target rate. A similar proof can be derived for the uniqueness of the BRefficient bit solution. Next, we consider a bit distribution vector b, which is BFefficient, that is, ∀i, j : 1 ≤ i, j ≤ N: therefore b is also BR-efficient. B. PROOF OF THEOREM 1: DUALITY CONDITIONS FOR THE DISCRETE RM AND MM PROBLEMS Let (b G i , P G i ), for 1 ≤ i ≤ N, be efficient greedy bit and power allocation profiles that satisfy (37) For a given subchannel g i , the data rate function b i = log 2 (1 + P i • g i ) is strictly increasing with respect to P i , while the integer function b i is increasing with respect to P i .Therefore, which means that (b M i , P M i ) is also the solution to the integer RM problem subject to a total power of P M . Figure 1 : Figure 1: Water-filling rate maximization.The shaded area represents the total available power. Figure 6 : Figure 6: Examples of the b i bit-profile with respect to the b i upper bound. Figure 7 : Figure 7: Example of bit and power allocation using the CSA(6) standard ADSL loop. and let N i=1 b G i = B G and N i=1 P G i = P G .From the definition of the greedy bit allocation process in Section 5.1, we haveP G i = min b G i P i , ∀i : 1 ≤ i ≤ N, R i , P R i ), for 1 ≤ i ≤ N, be the optimum greedy solution of the integer RM problem, where N i=1 b R i = B R andN i=1 P R i = P R .Then according to (B.2), we can writeP R = min (b R i , P R i) is also the solution to the integer MM problem subject to a target rate of B R .Similarly, let (b M i , P M i ), for 1 ≤ i ≤ N, be the optimum greedy solution of the integer MM problem, where N i=1 b R i = B R and N i=1 P R i = P R .Then according to (B.3), we can writeB M = max
10,193.4
2008-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Quantum Talagrand, KKL and Friedgut’s Theorems and the Learnability of Quantum Boolean Functions We extend three related results from the analysis of influences of Boolean functions to the quantum setting, namely the KKL theorem, Friedgut’s Junta theorem and Talagrand’s variance inequality for geometric influences. Our results are derived by a joint use of recently studied hypercontractivity and gradient estimates. These generic tools also allow us to derive generalizations of these results in a general von Neumann algebraic setting beyond the case of the quantum hypercube, including examples in infinite dimensions relevant to quantum information theory such as continuous variables quantum systems. Finally, we comment on the implications of our results as regards to noncommutative extensions of isoperimetric type inequalities, quantum circuit complexity lower bounds and the learnability of quantum observables. The notion of influences appears naturally in many contexts ranging from isoperimetric inequalities [KMS12,CEL12], threshold phenomena in random graphs [FK96], cryptography [LMN93], etc.For these reasons, the last three decades witnessed an extensive study of their properties, which led to many applications in theoretical computer science (hardness of approximation [DS05,Hs01] and learning theory [OS07]), percolation theory [BKS99], social choice theory [Mos12,BOL85] to cite a few. Karpovsky [Kar76] proposed the sum of the influences (also called total influence), Inf j f, as a measure of complexity of a function f .This first intuition was then made rigorous in [LMN93] and [Bop97] where tight circuit complexity lower bounds in terms of the total influence were derived for the complexity class AC 0 of constant depth circuits.A simple lower bound on Inf f in terms of the variance can be derived from Poincaré inequality: For all f : Ω n → R one has [O'D14, Chapter 2] Var(f ) ≤ Inf f . (1.1) Functions on the hypercubes Ω n that take only values in {−1, 1} are of particular interest.These are the so-called Boolean functions and play important roles in social science, combinatorics, computer sciences and many other areas.See [dW08, O'D14] for more information.Note that the L p -norms, 1 ≤ p < ∞, of Boolean functions are always equal to 1, where the weighted L p -norm of a function f : Ω n → R is defined as (1.2) A Boolean function f : Ω n → {−1, 1} is said to be balanced if Ef = 0.If f is a Boolean function, the influence of the j-th variable can further be expressed as The Poincaré inequality (1.1) implies that there exists j ∈ {1, . . ., n} such that Inf j f ≥ 1/n.Note that Poincaré inequality (1.1) can be tight, e.g. for balanced Boolean function f (x) = x 1 .So it may happen that the total influence ≈ variance.Is it possible that all the influences are small simultaneously, that is, Inf j (f ) ≈ Var(f )/n for all 1 ≤ j ≤ n? Quite surprisingly, the answer is negative; a celebrated result of Kahn, Kalai and Linial [KKL88] predicts that every balanced Boolean function has an influential variable.More precisely, Kahn, Kalai and Linial [KKL88] proved that for any balanced Boolean function f on Ω n , there exists 1 ≤ j ≤ n such that where C > 0 is some universal constant.So some variable has an influence at least Ω(log(n)/n), which is larger than the order 1/n deduced from Poincaré inequality. This theorem of Kahn, Kalai and Linial (KKL in short) plays a fundamental role in Boolean analysis.It was further strengthened by Talagrand [Tal94] and Friedgut [Fri98] in different directions. 
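The classical quantities just introduced are easy to compute explicitly for small n, which makes the gap between the Poincaré bound (1.1) and the KKL bound (1.3) concrete. The brute-force sketch below evaluates the influences Inf_j f = Pr[f(x) ≠ f(x^{⊕j})] and the variance under the uniform measure; the majority function used in the example is merely an illustration.

```python
from itertools import product

def influences(f, n):
    """Influence Inf_j f = Pr[f(x) != f(x^(flip j))] of each coordinate of a
    Boolean function f: {-1,1}^n -> {-1,1}, by brute force (small n only)."""
    inf = [0.0] * n
    for x in product((-1, 1), repeat=n):
        fx = f(x)
        for j in range(n):
            y = list(x)
            y[j] = -y[j]                  # flip the j-th bit
            if f(tuple(y)) != fx:
                inf[j] += 1.0
    return [v / 2 ** n for v in inf]

def variance(f, n):
    """Var(f) = E[f^2] - (E[f])^2 under the uniform measure on {-1,1}^n."""
    vals = [f(x) for x in product((-1, 1), repeat=n)]
    mean = sum(vals) / len(vals)
    return sum(v * v for v in vals) / len(vals) - mean ** 2

# Example: the balanced majority function on n = 9 bits. One can check the
# Poincare inequality Var(f) <= sum_j Inf_j f and compare max_j Inf_j f with
# the KKL scale log(n)/n.
# maj = lambda x: 1 if sum(x) > 0 else -1
# inf = influences(maj, 9); print(sum(inf), max(inf), variance(maj, 9))
```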
In his celebrated paper [Tal94], Talagrand proved that for all n ≥ 1 and f : Ω^n → R, we have for some universal C > 0 that Var(f) ≤ C ∑_{j=1}^{n} ∥D_j f∥_2^2 / (1 + log(∥D_j f∥_2 / ∥D_j f∥_1)), (1.4) where D_j f(x) := ½ (f(x) − f(x^{⊕j})). Note that if f is Boolean, then D_j f takes values only in {−1, 0, 1}, so that ∥D_j f∥_1 = ∥D_j f∥_2^2 = Inf_j f. Therefore, this inequality of Talagrand (1.4), as an improvement of the Poincaré inequality (1.1), immediately implies the result of KKL. There are plenty of extensions of Talagrand's inequality (1.4) [OW13a, OW13b, CEL12], which has become a central tool in theoretical computer science [O'D14]. Moreover, it provides a powerful tool to study sub-diffusive and superconcentration phenomena [BKS03, BKS99, Cha14, GS15, ADH17, Sos18, Tan20] that are ubiquitous in many models studied in modern probability theory (percolation, random matrices, spin glasses, etc.); see the review articles [CEL12, Led19] and references therein for more details. Also related to the KKL theorem, Friedgut's Junta theorem [Fri98] states that a Boolean function with a bounded total influence essentially depends on few coordinates. More precisely, a Boolean function f : Ω^n → {−1, 1} is called a k-junta, for k ∈ {1, . . ., n} independent of n, if it depends on at most k coordinates. When k = 1, the function is called a dictatorship. If f is a junta, it is an immediate consequence that the total influence does not depend on n, i.e. Inf f = O(1). Friedgut's Junta theorem provides the following converse statement: for any Boolean function f : Ω^n → {−1, 1} and ε > 0, there exists a k-junta g : Ω^n → {−1, 1} such that ∥f − g∥_2 ≤ ε, with k = 2^{O(Inf f / ε)}. (1.5) Since its discovery, Friedgut's Junta theorem has found many applications in random graph theory and the learnability of monotone Boolean functions [OS07]. Judging from the range of applicability of these results, it is natural to consider their extensions to noncommutative or quantum settings. Partial results in this direction were obtained by Montanaro and Osborne [MO10a]. There, Boolean functions on the hypercube Ω^n were replaced by quantum Boolean functions on n qubits, that is, operators A ∈ M_2(C)^{⊗n} acting on the n-fold tensor product of C^2 with the additional conditions that A = A* and A^2 = 1. Here and in what follows, M_k(C) denotes the k-by-k complex matrix algebra. Then, the L_2-influence of A in the j-th coordinate is defined as Inf_j^2 A := ∥d_j A∥_2^2, where d_j := I^{⊗(j−1)} ⊗ (I − ½ tr(·) 1) ⊗ I^{⊗(n−j)} denotes the quantum analogue of the bit-flip map, with I being the identity map over M_2(C), and where the normalized L_p-norm on Ω^n is replaced by the normalized Schatten-p norm on M_2(C)^{⊗n}. The quantum influence has already found interesting applications to quantum complexity theory [BGJ + 22]. In this framework, Montanaro and Osborne [MO10a, Proposition 11.1] proved a quantum analogue of Talagrand's inequality (1.4). However, this does not yield a quantum KKL as in the classical setting, since we do not have the identity ∥d_j A∥_1 = ∥d_j A∥_2^2 for general quantum Boolean functions. In the worst case, we may even have ∥d_j A∥_1 = ∥d_j A∥_2 (j is a bad influence according to [MO10a, Definition 11.2]) and thus (1.4) will no longer help. For this reason, the problem of whether every balanced quantum Boolean function has an influential variable still remains open; see [MO10a] for some partial results and more discussions.
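To make the failure of the identity ∥d_j A∥_1 = ∥d_j A∥_2^2 concrete, the following small numpy sketch (our own illustration; the observable A below is a hypothetical example) evaluates both quantities for a genuinely quantum Boolean function and for a classical one embedded as a diagonal observable. It realizes d_j = id − E_j, with E_j the conditional expectation onto operators acting trivially on qubit j, implemented here as a Pauli twirl; this matches the definition of d_j recalled in Section 2.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(P, j, n):
    # single-qubit operator P acting on qubit j of an n-qubit register
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, P if k == j else I2)
    return out

def cond_exp(A, j, n):
    # E_j(A) = (1/4) sum_s sigma_s^(j) A sigma_s^(j) = (1/2) tr_j(A) ⊗ 1_j
    return sum(op_on(P, j, n) @ A @ op_on(P, j, n) for P in (I2, X, Y, Z)) / 4

def d_j(A, j, n):
    # quantum analogue of the bit-flip derivative: d_j = id - E_j
    return A - cond_exp(A, j, n)

def norm_p(A, p, n):
    # normalized Schatten-p norm (2^{-n} tr |A|^p)^{1/p}
    sv = np.linalg.svd(A, compute_uv=False)
    return float((np.sum(sv ** p) / 2 ** n) ** (1.0 / p))

n = 2
theta = np.pi / 4
# a genuinely quantum Boolean function: A = cos(theta) Z⊗1 + sin(theta) X⊗X, with A = A*, A^2 = 1
A = np.cos(theta) * op_on(Z, 0, n) + np.sin(theta) * np.kron(X, X)
assert np.allclose(A, A.conj().T) and np.allclose(A @ A, np.eye(4))
# a classical Boolean function embedded as a diagonal observable: f(x) = x_1 x_2 corresponds to Z⊗Z
A_cl = np.kron(Z, Z)

for name, B in (("quantum A    ", A), ("classical Z⊗Z", A_cl)):
    dB = d_j(B, 1, n)
    print(name, " ||d_1 B||_1 =", round(norm_p(dB, 1, n), 3),
          " ||d_1 B||_2^2 =", round(norm_p(dB, 2, n) ** 2, 3))

For the rotated observable A the geometric influence of the second qubit is sin(π/4) ≈ 0.707 while its L_2-influence is 1/2, so the two notions genuinely differ; for Z⊗Z both quantities equal 1, as in the classical case.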
In fact, the failure of the identity ∥d_j A∥_1 = ∥d_j A∥_2^2 is not exclusive to the quantum setting, and also arises for instance when considering extensions of the setup of Boolean functions on the hypercubes to functions on smooth manifolds, after replacing the uniform distribution on Ω^n by an appropriate finite measure, and the discrete derivatives D_j by the partial derivatives associated to the differential structure of the manifold. In this setting, analogues of the previous results were recently obtained for the L_1-influences Inf_j^1 A := ∥d_j A∥_1, which are sometimes called geometric influences for their relation to isoperimetric inequalities [KMS12, CEL12, Aus16, Bou17]. In this paper, we propose to take the above considerations as a starting point for establishing quantum analogues of (1.1), (1.3), (1.4) and (1.5) based on the L_1-influences. Our first main result (Theorems 3.2 and 3.6) states that for any self-adjoint operator A on n qubits with ∥A∥ ≤ 1 we have, for some universal C > 0, a variance bound (1.6) in terms of the geometric influences, where log^+ refers to the positive part of the logarithm. In particular, this suggests that every balanced quantum Boolean function has a variable that has geometric influence at least of the order log(n)/n. We also prove a quantum L_1-Poincaré inequality (Theorem 3.1): for any operator A on n qubits we have a bound (1.7) in terms of the total geometric influence ∑_j Inf_j^1 A. Therefore our result provides an alternative answer to the quantum KKL conjecture [MO10a, Conjecture 3 of Section 12] in terms of geometric influences (Theorem 3.9). The inequality (1.6) is inspired by some results in the classical setting; see for example [KMS12, CEL12]. Since (1.6), rather than (1.4), will be our main focus, to distinguish them in the sequel we shall refer to (1.6) as the (L_1-)Talagrand inequality, and to (1.4) as Talagrand's L_1-L_2 variance inequality, as was done in [Led19]. We also have a qubit isoperimetric type inequality and a stronger form of the L_1-Poincaré inequality (1.7); see Section 6.4 below. Our second main result is a quantum analogue of Friedgut's Junta theorem (Theorem 3.11 and Corollary 3.12): for any quantum Boolean function A ∈ M_2(C)^{⊗n} and ε > 0 there exists another quantum Boolean function B ∈ M_2(C)^{⊗n} that is supported on k subsystems such that the approximation bound (1.8) holds, where Inf^p(A) := ∑_{j=1}^{n} Inf_j^p(A) with Inf_j^p(A) = ∥d_j A∥_p^p. The proofs of Equations (1.6) and (1.8) make use of recent noncommutative generalizations of hypercontractive inequalities and gradient estimates [OZ99, MO10a, KT13, TPK14, CM17a, DR20, BDR20, WZ21, GR21, Bei21]. Moreover, the generality of these tools also allows us to further extend most of our results to the abstract von Neumann algebraic setting, which contains both our previously stated results and their classical analogues previously established in [CEL12, Bou17], but also other extensions arising in noncommutative analysis and quantum information with discrete and continuous variables. As for their classical analogues, we expect our results to find many new applications in quantum information and quantum computation.
The rest of the paper is organized as follows: in Section 2, we recall useful definitions and results from the Fourier analysis on the quantum Boolean hypercubes including Poincaré inequality, hypercontractivity, intertwining and gradient estimates.Section 3 is devoted to the statement and proof of our main results, namely a quantum L 1 -Poincaré inequality (Theorem 3.1), quantum Talagrand inequality (Theorem 3.2), and quantum KKL theorem (Theorem 3.9) and a quantum Friedgut's Junta theorem (Theorem 3.11 and Corollary 3.12).These results are then extended to the general von Neumann algebraic setting in Section 4. Finally, examples and applications to quantum circuit complexity and quantum learning theory are provided in Sections 5 and 6. Programme [ESP 156].For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.The authors want to thank Francisco Escudero Gutierrez and Hsin-Yuan Huang for helpful comments on an earlier version of the paper.They are grateful to the referees for the careful reading and helpful comments. Quantum Boolean analysis Let us start by recapitulating the framework of quantum Boolean functions from [MO10a].As a quantum analogue of functions on the Boolean hypercubes, i.e., functions of n bits, we will take observables on n qubits.In other words, our algebra of observables is M 2 (C) ⊗n ∼ = M 2 n (C) endowed with the operator norm • .In what follows, we denote by tr the trace in M 2 (C) ⊗n , and by tr T the partial trace with respect to any subset T of qubits.Following [MO10a, Definition 3.1], we say Here and in what follows, 1 always denotes the identity operator.A quantum Boolean function A is balanced if tr(A) = 0. One pillar of analysis on the Boolean hypercube is that every function f : Ω n → R has the Fourier-Walsh expansion, i.e. can be expressed as a linear combination of characters.Our quantum analogues of the characters for 1 qubit are the Pauli matrices Clearly, these are quantum Boolean functions, and they form a basis of M 2 (C).For s = (s 1 , . . 
., s n ) ∈ {0, 1, 2, 3} n , we put These are again quantum Boolean functions, and form a basis of M 2 (C) ⊗n .Accordingly, every A ∈ M 2 (C) ⊗n can uniquely be expressed as where A s ∈ C is the Fourier coefficient.Given s ∈ {0, 1, 2, 3} n , we call the set of indices j such that s j = 0 the support of s, and denote it by supp(s).Its cardinality is denoted by | supp(s)|.Similarly, the support of A is defined by and its cardinality is denoted by | supp(A)|.In analogy with the classical setting, an arbitrary operator As the Pauli matrices are orthonormal with respect to the normalized Hilbert-Schmidt inner product, the coefficients A s can be recovered by Note that whenever A is self-adjoint, the coefficients A s must be real.The quantum analogue of the bit-flip map is given by Here I denotes the identity map on M 2 (C).Note that L 0 := I − 1 2 tr satisfies L 2 0 = L 0 , so that d 2 j = d j .For p ≥ 1, we denote by Inf p j (A) := d j A p p the L p -influence of j on the operator A ∈ M 2 (C) ⊗n , and by Inf p (A) := n j=1 Inf p j (A) the associated total L p -influence, where the normalized Schatten-p norm of an operator A ∈ M 2 (C) ⊗n is defined as (|A| := (A * A) 1/2 ) The L 1 -influence is also called the geometric influence.For the L 2 -influence we have with L := n j=1 d j .The operator L is the generator of the tensor product of the quantum depolarizing semigroups (P t ) t≥0 for the individual qubits: It is a tracially symmetric quantum Markov semigroup, whose general properties are discussed in Section 4. In the Fourier decomposition, we have the following convenient expressions for the L 2 -influence: Inf and the semigroup P t : (2.6) In addition we need the following further facts: Lemma 2.1 (Poincaré inequality, see Proposition 10.9 of [MO10a]).For all A ∈ M 2 (C) ⊗n such that tr(A) = 0 and t ≥ 0, one has This inequality is equivalent to and is also equivalent to Var(A) ≤ Inf 2 (A). We close this section by remarking that classical Boolean functions are special quantum Boolean functions.In fact, the Fourier-Walsh expansions of classical Boolean functions correspond to (2.1) when restricting s ∈ {0, 3} n . Main results for quantum Boolean functions In this section we state and prove our main results in the restricted setting of the quantum Boolean cube. 3.1. A quantum L 1 -Poincaré inequality.We start with the following L 1 -Poincaré type inequality; see also [DPMRF23] for variations of this inequality and Section 6.4 for a stronger form. Theorem 3.1.For all A ∈ M 2 (C) ⊗n , one has Proof.This follows from a simple use of the triangle inequality for the L 1 -norm as well as monotonicity under the normalized partial trace. 3.2. A quantum L 1 -Talagrand inequality.We first prove a quantum L 1 -Talagrand inequality on quantum Boolean cubes that can be extended to more general von Neumann algebras; see Section 4. We will see later that on quantum Boolean cubes the estimates can be improved, so that we may deduce a sharp quantum KKL theorem for L 1 -influences. Theorem 3.2.For all A ∈ M 2 (C) ⊗n with A ≤ 1 one has for some universal C > 0. 
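The Fourier formulas recalled in this section are easy to check numerically. The following sketch (illustrative only; the random observable and the time t = 0.7 are arbitrary choices of ours) computes the Pauli coefficients A_s = 2^{-n} tr(σ_s A), verifies Parseval's identity and the expression (2.5) for the L_2-influences together with the Poincaré inequality of Lemma 2.1, and applies the depolarizing semigroup through its Fourier-multiplier form (2.6).

import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = [I2, X, Y, Z]

def sigma(s):
    out = np.array([[1.0 + 0j]])
    for si in s:
        out = np.kron(out, PAULI[si])
    return out

def fourier(A, n):
    # Pauli-Fourier coefficients A_s = 2^{-n} tr(sigma_s A)
    return {s: np.trace(sigma(s) @ A).real / 2 ** n
            for s in itertools.product(range(4), repeat=n)}

n = 2
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (H + H.conj().T) / 2                      # a generic self-adjoint observable on two qubits
coef = fourier(A, n)

# Parseval: ||A||_2^2 = sum_s |A_s|^2, and Var(A) = sum_{s != 0} |A_s|^2
norm2_sq = np.trace(A @ A).real / 2 ** n
assert np.isclose(norm2_sq, sum(c ** 2 for c in coef.values()))
var = norm2_sq - coef[(0, 0)] ** 2

# L2-influences via (2.5): Inf_j^2(A) = sum over s with j in supp(s) of |A_s|^2
inf2 = [sum(c ** 2 for s, c in coef.items() if s[j] != 0) for j in range(n)]
assert var <= sum(inf2) + 1e-12               # Poincare inequality of Lemma 2.1
print("Var(A) =", round(var, 4), " total L2-influence =", round(sum(inf2), 4))

# semigroup action (2.6): P_t(A) = sum_s e^{-t |supp(s)|} A_s sigma_s, and exponential L2-decay
t = 0.7
PtA = sum(np.exp(-t * sum(1 for x in s if x != 0)) * c * sigma(s) for s, c in coef.items())
mean_part = coef[(0, 0)] * np.eye(2 ** n)
lhs = np.trace((PtA - mean_part) @ (PtA - mean_part).conj().T).real / 2 ** n
rhs = np.exp(-2 * t) * np.trace((A - mean_part) @ (A - mean_part).conj().T).real / 2 ** n
assert lhs <= rhs + 1e-12
print("||P_t A - A_0 1||_2^2 =", round(lhs, 4), "<=", round(rhs, 4))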
Proof.Differentiating the function t → P t (A) 2 2 one gets By intertwining (Lemma 2.3) and hypercontractivity (Lemma 2.2), with p(t) = 1 + e −2t .By Hölder's inequality, For the term with the L 1 -norm we use intertwining again and L 1 -contractivity of (P t ) t≥0 to get d j P t A 1 ≤ d j A 1 .For the term with • -norm we use the bound derived from Lemma 3.4 below, which gives d j P t A ≤ (e t − 1) −1/2 .Altogether, 1 .As a consequence, Choosing T = 1, we further show in Lemma 3.3 below that, given a = d j A 1 , for some universal constant C > 0. We finish by combining (3.3) and the bound (with T = 1) 2 ) derived from the Poincaré inequality (Lemma 2.1). It remains to prove the following technical lemma: Lemma 3.4.Let n ≥ 1 and (P t ) t≥0 be the quantum depolarizing semigroup on n-qubits defined in (2.4).Then for all t > 0 and all A ∈ M 2 (C) ⊗n we have In particular, for each 1 ≤ j ≤ n, Proof.By definition of (P t ) t≥0 : A 2 1 ≥ P t (A * A) = 2 where (1) follows from the differentiation of s → P s (P t−s (A * )P t−s (A)), whereas (2) follows from gradient estimates Lemma 2.4.Now we claim that for all A and for each 1 ≤ j ≤ n we have and thus Let us first finish the proof of the lemma given (3.7).Applying (3.7) to P t A, we may proceed with the previous estimate as which proves (3.5).Now it remains to show (3.6).For this we decompose is just an index, then For any X, Y ∈ M 2 (C), a direct computation shows and which can be reformulated as with T j := I ⊗(j−1) ⊗ 1 2 tr ⊗I ⊗(n−j) .Now (3.8) follows from the Kadison-Schwarz inequality [Wol12, Chapter 5.2] and that 1 2 tr is unital completely positive (over M 2 (C)).This finishes the proof of the claim (3.6) and thus the proof of the lemma. Remark 3.5.Following the argument in [CEL12, Proof of Theorem 1], one can prove a quantum analogue of (1.4) using similar properties of quantum depolarizing semigroups.In fact, the proof of (1.4) does not even require strictly positive Ricci curvature lower bounds, i.e.Lemma 2.4 can be weakened.We will not discuss it here as (1.4) is not our main focus and a quantum analogue was already obtained in [MO10a]. The quantum Talagrand inequality Theorem 3.2 implies a quantum KKL for L 1 influences (following a similar argument in Lemma 3.8 below): for balanced quantum Boolean A on n-qubits, Recall that for classical Boolean functions the sharp order is log(n)/n, which can be captured by tribes functions [O'D14, Chapter 4].In fact, the order log(n)/n is also sharp for quantum Boolean functions, which can be seen from the following improved version of quantum Talagrand Theorem 3.2: Theorem 3.6.For every p ∈ [1, 2) there exists a constant C p > 0 such that for every n ∈ N and A ∈ M 2 (C) ⊗n with A ≤ 1 one has , where the constant can be chosen of order C p ∼ C/(2 − p) as p ր 2. In particular, for p = 1: Proof.Let T > 0 be such that p ≤ 1 + e −2T .By the Poincaré inequality we have By intertwining and hypercontractivity, By interpolation and . Now let us prove the claim (3.9) which we divide into two cases.When a ∈ [1, e], we have which are nothing but This proves (3.10) and thus the claim when a ∈ [1, e].When a ≥ e, we have 2 log a ≥ 1 + log a, and which proves the claim for a ∈ [1, e]. 
Remark 3.7.The above Theorem 3.6 (when p = 1) improves Theorem 3.2, since it gives the right order of quantum KKL as we shall see in the next.However, Theorem 3.2 can be easily extended to more general von Neumann algebras, which will be discussed in Theorem 4.3.The generalization of Theorem 3.6 is also possible but requires additional assumption which we will not discuss in the general von Neumann algebra setting. 3.3. A KKL theorem for quantum Boolean functions.Our quantum KKL theorem for geometric (L 1 -)influences follows as a simple corollary of Theorem 3.6.First we need an elementary lemma. Proof.If max 1≤j≤n a j ≥ 1/ √ n, we are done, so we can assume a j < 1/ √ n ≤ 1 for all j ∈ {1, . . ., n}.Then we have Theorem 3.9.For every 1 ≤ p < 2, there exists a constant C p > 0 such that for any n ≥ 1 and any balanced quantum Boolean function Proof.Since Var(A) = 1 for any balanced quantum Boolean function, the result follows from Theorem 3.6 with the help of Lemma 3.8. All combined, we have shown that every balanced quantum Boolean function has a geometrically influential variable.In fact, suppose that A ∈ M 2 (C) ⊗n is a balanced quantum Boolean function, then One may wonder if Inf 1 j (A) = d j A 1 ≈ 1/n for all 1 ≤ j ≤ n is possible.However, our Theorem 3.9 for p = 1 indicates that this is not the case.There exists j such that Inf 1 j (A) ≥ C log(n)/n for some C > 0. Remark 3.10.In [MO10a, Conjecture 3 of Section 12], the authors have conjectured a similar KKL-type result for the quantum L 2 -influences Inf 2 j (A).While this influence coincides with the L 1 -influence Inf 1 j (A) when A is a classical Boolean function, this is not the case in the quantum setting.Hence, this conjecture in [MO10a] remains open to the best of our knowledge. 3.4.A Friedgut's Junta theorem for quantum Boolean functions.We recall that a Boolean function g : Ω n → {−1, 1} is called a k-junta if it only depends on a set of at most k < n bits.In [Fri98], Friedgut showed that for any Boolean function denotes the total L 2 -influence of f , with Inf 2 j (f ) := D j f 2 2 .More recently, Bouyrie [Bou17] proved an L 1 version of Friedgut's junta theorem, more adapted to continuous models, based on the proof techniques developed in [CEL12] (see also [Aus16] for a previous account of the result upon which the proof of [Bou17] relies).The next theorem constitutes a quantum generalization of the L 1 Friedgut's Junta theorem; see Corollary 3.12 followed.Recall that we define k-juntas for operators that are not necessarily Boolean. Theorem 3.11.For any A ∈ M 2 (C) ⊗n and any ε > 0 small enough, there exists a Moreover, B can be taken to be 2 −|T | tr T (A) for some set T ⊂ {1, . . ., n} of n − k qubits. for any subset T of {1, . . ., n} by the (non-primitive) Poincaré inequality for the tensor product of depolarizing semigroups restricted to the subset T (see [Bar17, Example 3.1]). Let us now consider the case d > 1.Let By Plancherel's identity, Let us treat both summands on the right side separately.For the first summand, where we used formula (2.5) for Inf 2 .For the second summand, for any t ≥ 0.Here we used (2.6) for the depolarizing semigroup.Now take t = log 2. By intertwining (Lemma 2.3), hypercontractivity (Lemma 2.2) and interpolation, Therefore, where we used the elementary inequality x ≤ e x/2 for x ≥ 0 in the last step. Altogether we have shown that In the next corollary we restrict ourselves to quantum Boolean functions. 
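Before that corollary, the partial-trace construction B = 2^{-|T|} tr_T(A) appearing in Theorem 3.11 can be illustrated on a toy example. In the numpy sketch below (our own illustration), the observable A, the influence threshold 0.2 and the rule of selecting T by thresholding the geometric influences are assumptions made for the example only, intended to mimic, not reproduce, the choice of T in the theorem.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cond_exp(A, j, n):
    # E_j(A) = (1/2) tr_j(A) ⊗ 1_j, realized as a Pauli twirl on qubit j
    twirl = 0
    for P in (I2, X, Y, Z):
        Pj = chain([P if k == j else I2 for k in range(n)])
        twirl = twirl + Pj @ A @ Pj
    return twirl / 4

def norm_p(A, p, n):
    sv = np.linalg.svd(A, compute_uv=False)
    return float((np.sum(sv ** p) / 2 ** n) ** (1.0 / p))

n = 4
# an observable that depends mostly on qubits 0 and 1, plus weak terms on qubits 2 and 3
A = chain([Z, Z, I2, I2]) + 0.1 * chain([X, I2, X, I2]) + 0.05 * chain([I2, I2, I2, Z])

# geometric (L1-)influence of each qubit
inf1 = [norm_p(A - cond_exp(A, j, n), 1, n) for j in range(n)]
print("L1-influences:", [round(v, 3) for v in inf1])

# trace out the low-influence qubits T: B = 2^{-|T|} tr_T(A) ⊗ 1_T, a |T^c|-junta
T = [j for j in range(n) if inf1[j] < 0.2]
B = A
for j in T:
    B = cond_exp(B, j, n)
print("kept qubits:", [j for j in range(n) if j not in T],
      " ||A - B||_2 =", round(norm_p(A - B, 2, n), 3))

Here the two weak terms acting on qubits 2 and 3 are traced out, leaving the 2-junta B = Z⊗Z⊗1⊗1 with ∥A − B∥_2 = (0.1^2 + 0.05^2)^{1/2} ≈ 0.11.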
Corollary 3.12.For any quantum Boolean A ∈ M 2 (C) ⊗n and any ε > 0 small enough there exists a quantum Boolean k Proof.By Theorem 3.11 there exists a self-adjoint Inf 1 (A) 6 / Inf 2 (A) 5 .Let us now define C := sgn(B) as follows: Given the spectral decomposition where the sign function sgn is defined as Therefore, where in (1) we have used that A 2 = 1, whereas in (2) we used the fact that B = 2 −|T | tr T (A) for some set T of qubits, so that B ≤ A ≤ 1.Moreover, we know the size of T c from Theorem 3.11.The result then follows after rescaling of ε to ε/3.Remark 3.13.In the case of a classical Boolean function f , we know that Inf f ≡ Inf 1 f = Inf 2 f and the bound in Corollary 3.12, simplifies as We therefore recover the classical Friedgut's Junta theorem. Remark 3.14.In the classical setting, other junta-type theorems related to Fourier analysis of Boolean functions may be found in [FKN02, ADFS04, Bou02, KN06, KS02, DFKO06].While extending these results to the present quantum setting is an interesting problem, their statements do not directly involve the notion of influence that is central to our study.This interesting direction of research will therefore be considered elsewhere. Von Neumann algebraic generalizations In this section, we generalize the main results from the previous section to the general von Neumann algebraic setting.Apart from technical challenges that arise from the fact that the underlying Hilbert space can be infinite-dimensional and the operators involved can be unbounded, most proofs run parallel to the ones for qubits once the appropriate assumptions are identified.As demonstrated in the next section, these hypotheses are satisfied for a number of interesting examples besides the qubit systems treated in Section 3. We start recapitulating some basic von Neumann algebra theory.As a general reference, we refer to [Tak02,Tak03].Let H be a Hilbert space and B(H) the space of all bounded linear operators on H.The σ-weak topology on B(H) is the topology induced by the seminorms |tr( • x)|, where x runs over the set of all trace-class operators.A von Neumann algebra M on H is a unital * -subalgebra of B(H) that is closed in the σ-weak topology.A linear functional on M is called normal if it is continuous with respect to the σ-weak topology.The set of all normal linear functionals on M is denoted by M * , and the obvious dual pairing between M and M * establishes an isometric isomorphism between M and (M * ) * . A state on M is a positive linear functional ϕ : M → C such that ϕ(1) = 1.A state is called faithful if ϕ(x * x) = 0 implies x = 0.For a faithful normal state ϕ on M let H ϕ denote the completion of M with respect to the inner product and let Λ ϕ (x) denote the image of x inside H ϕ .The GNS representation is defined by π ϕ (x)Λ ϕ (y) = Λ ϕ (xy).The vector Λ ϕ (1) is a cyclic and separating vector for π ϕ (M), which is denoted by Ω ϕ .We routinely identify M with π ϕ (M). For the definition of the noncommutative L p spaces, we need some basic modular theory.The operator is a closable anti-linear operator on H ϕ .Let S denote its closure and S = J∆ 1/2 the polar decomposition of S. The operator J is an anti-unitary involution, called the modular conjugation, and ∆ = S * S is called the modular operator. 
The symmetric embedding i 2 of M into H ϕ is given by i 2 (x) = ∆ 1/4 Λ ϕ (x) and the symmetric embedding i 1 of M into M * is uniquely determined by the relation or in other words, i 1 = i * 2 Ji 2 if we view J as an isomorphism between H ϕ and H ϕ ∼ = H * ϕ .Kosaki's interpolation L p spaces [Kos84] are defined as the complex interpolation space for all x ∈ M. In particular, L 2 (M, ϕ) ∼ = H ϕ isometrically, and the definition of i 2 is consistent with the definition given before under this identification. In the case M = M n (C), every state ϕ on M is of the form ϕ = tr( • σ) for some density matrix σ.The state ϕ is faithful if and only if σ is invertible.In this case L p (M n (C), ϕ) can be identified with M n (C) with the norm tr(|•| p ) 1/p , and the embedding i p is given by i p (x) = σ 1/2p xσ 1/2p .In particular, i p (x) = tr(|σ 1/2p xσ 1/2p | p ) 1/p , which is the expression for the L p norm commonly used in quantum information theory. A quantum Markov semigroup (QMS) on M is a family (P t ) t≥0 of normal bounded linear operators on M such that • P 0 = id M , P s P t = P s+t for s, t ≥ 0, • P t (x) → x as t ց 0 in the σ-weak topology for every x ∈ M, • n j,k=1 y * j P t (x * j x k )y k ≥ 0 for all x 1 , . . ., x n , y 1 , . . ., y n ∈ M and t ≥ 0, • P t (1) = 1 for all t ≥ 0. If (P t ) t≥0 is a quantum Markov semigroup on M, then P t has a pre-adjoint (P t ) * : M * → M * for every t ≥ 0, and ((P t ) * ) t≥0 is a strongly continuous semigroup on M * .The QMS (P t ) t≥0 is called KMS-symmetric with respect to ϕ if i 2 (P t (x)), i 2 (y) = i 2 (x), i 2 (P t (y)) for all x, y ∈ M and t ≥ 0. In this case, for all p ∈ [1, ∞) and t ≥ 0 the operator extends to a contraction P (p) t on L p (M, ϕ), and (P (p) t ) t≥0 is a strongly continuous semigroup.In particular, (P t ) * = P (1) t .Occasionally we also write P (∞) t for P t .The generator of (P (p) t ) t≥0 is defined by where the limit is taken in the norm topology if p ∈ [1, ∞) and in the σ-weak topology if p = ∞.We also write L for L ∞ .Note that there are differing sign conventions for the generator; with our convention, L 2 is a positive self-adjoint operator on L 2 (M, ϕ). We make the following assumption: (H0): There exists a * -subalgebra A of D(L) which is σ-weakly dense in M and invariant under (P t ) t≥0 .We can then define the carré du champ operator as follows: We write Γ(x) for Γ(x, x). We will further use the following assumption: (H1): Bakry-Émery gradient estimate: There exists K ∈ R such that Γ(P t (x)) ≤ e −2Kt P t (Γ(x)) for all x ∈ A and t ≥ 0. To avoid case distinctions, the following notation will come in handy: Further, we write K − for the negative part of a real number K. The following result is an analog of Lemma 3.4. In particular, (H2): There exists a finite family of linear self-adjoint maps and a constant M > 0 such that max j∈J for all x ∈ A. Note that (H2-1) implies in particular that the series on the right side converges for all x ∈ A, and by polarization, for all x, y ∈ A. In this situation we define the p-influence of the j-th variable on x by Inf p j (x) = i p (d j (x)) p and the total influence of x by Inf p (x) = j∈J Inf p j (x).We say (P t ) t≥0 is primitive if P t (x) → ϕ(x)1 σ-weakly as t → ∞ for every x ∈ M. for all x ∈ A. Since (P t ) t≥0 is primitive, Now by the consequence (4.1) of (H2-1), All combined, we obtain the desired inequality. 
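In the matrix-algebra case described earlier in this section, the embeddings i_p and the associated norms can be written down explicitly and checked numerically. The following self-contained sketch (our own illustration; mpow and weighted_lp_norm are hypothetical helper names) verifies that, for a positive observable x, the weighted L_1-quantity recovers the state value ϕ(x) = tr(σx), and that the weighted L_2-norm squared agrees with the KMS-type inner product tr(σ^{1/2} x* σ^{1/2} x).

import numpy as np

def mpow(A, t):
    # power of a positive semidefinite Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None) ** t) @ V.conj().T

def weighted_lp_norm(x, sigma, p):
    # || i_p(x) ||_p = tr( | sigma^{1/(2p)} x sigma^{1/(2p)} |^p )^{1/p}
    emb = mpow(sigma, 1.0 / (2 * p)) @ x @ mpow(sigma, 1.0 / (2 * p))
    sv = np.linalg.svd(emb, compute_uv=False)
    return float(np.sum(sv ** p) ** (1.0 / p))

rng = np.random.default_rng(1)
d = 3
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
sigma = G @ G.conj().T
sigma = sigma / np.trace(sigma).real          # full-rank density matrix, i.e. a faithful state
x = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
x = x @ x.conj().T                            # a positive observable

phi_x = np.trace(sigma @ x).real
assert np.isclose(weighted_lp_norm(x, sigma, 1), phi_x)        # L1-value equals phi(x) for x >= 0

s_half = mpow(sigma, 0.5)
kms = np.trace(s_half @ x.conj().T @ s_half @ x).real
assert np.isclose(weighted_lp_norm(x, sigma, 2) ** 2, kms)     # L2-norm squared equals the KMS inner product
print("phi(x) =", round(phi_x, 4), " weighted L2 norm =", round(weighted_lp_norm(x, sigma, 2), 4))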
The proof of the following theorem follows the argument given by Cordero-Erausquin and Ledoux [CEL12, Theorem 6] in the commutative case.We refer to the appendix for the details. Again following [CEL12], we can also give a generalization of Talagrand's inequality (1.4) in this setting. Theorem 4.4.If (P t ) is a KMS-symmetric QMS on M satisfying (H0), (H2)-(H5), then Proof.By the Poincaré inequality (H3), we have Arguing as in the proof of Theorem 3.2, we get By (H4) and (H5), After the change of variables s = p(t) and application of Hölder's inequality we get From here, the claimed inequality follows from an elementary bound on the last integral (compare [CEL12, Theorem 1]). Since the p-influences for different p do not coincide in the quantum setting, this version of Talagrand's inequality does not imply a KKL bound.However, we still have the following weaker bound as consequence of Theorem 4.3.Again, the proof can be found in the appendix. Theorem 4.5.If (P t ) is a KMS-symmetric QMS on M satisfying (H0)-(H5) and the cardinality n of J is finite, then there exists C ′ > 0 depending only on the constants K, L, M, α, λ, µ such that Remark 4.6.The sharpness of the bound derived in Theorem 4.5 in the present general context was shown in [KMS12]. To prove our generalized version of Friedgut's junta theorem, we need one last assumption on (P t ).For that purpose, if I ⊂ J , let E I denote the orthogonal projection onto i∈I i 2 (ker d i ) in L 2 (M, ϕ). (H6): There exists a constant ν > 0 such that for every x ∈ A and I ⊂ J . If (P t ) is primitive, then E J (i 2 (x)) = i 2 (ϕ(x)1).Thus (H6) is a strengthening of the Poincaré inequality from (H3) in the case of primitive QMS. Proof.This follows by the spectral theorem from the scalar inequality (1 − e −tx ) 2 ≤ tx for t, x ≥ 0. A version of Friedgut's junta theorem in this setting now reads as follows.Again, the proof can be found in the appendix.Theorem 4.9.Let (P t ) be a KMS-symmetric QMS on M satisfying (H0), (H2), (H4)-(H6).There exists a constant C > 0 depending only on α and ν such that for every x ∈ A and 0 < ε ≤ 2/ν there exists a set I ⊂ J such that i 2 (x) − E I (i 2 (x)) ≤ ε and where µ − = −µ if µ < 0, and 0 otherwise. Classical case. The results in [CEL12, Bou17 ] fit into our framework by choosing the commutative von Neumann algebras, i.e. (M, ϕ) = L ∞ (X, µ) with X a probability measure space.5.2.Generalized depolarizing semigroups.We start with a simple weighted generalization of the depolarizing semigroup, also known as generalized depolarizing: given a full-rank state ω over C d , e tLω = e −t id +(1 − e −t ) tr(ω •)1 ⊗n . We verify assumptions (H0)-(H5) for the semigroup (e tLω ) t≥0 .First of all, since we are in a finite dimensional case, (H0) is directly satisfied.(H1) was proved in [JZ15] with which settles (H2-1).Condition (H2-2) with M = √ 2 follows as in Equation (3.7).Condition (H3) with λ = 1 is easy to check for n = 1, one for arbitrary n follows by tensorization.The best constant α satisfying (H4) for any n has been shown in [BDR20,Theorems 24 & 25], whereas a lower bound on α was found e.g. 
in [TPK14, Theorem 9].A direct computation shows (H5) with µ = 1.5.3.Quantum Ornstein-Uhlenbeck semigroup.Next, we consider the generator of the so-called quantum Ornstein-Uhlenbeck semigroup [FRS94,CFL00,CS08].The latter acts on the algebra B(ℓ 2 (N)) of all bounded operators on the Hilbert space ℓ 2 (N) of square-summable sequences.Denoting by a and a * the annihilation and creation operators of the quantum harmonic oscillator, which are defined by their action on a given orthonormal basis {|k } k∈N of H ≡ ℓ 2 (N) ≃ L 2 (R) as follows: the generator of the quantum Ornstein-Uhlenbeck semigroup takes the following form at least on finite rank operators: where µ > λ > 0. Denoting ν = λ 2 /µ 2 , it has a unique invariant state Here we will use the notion of a Schwartz operator [KKW16]: where S(R) denotes the set of Schwartz functions over R, Q : (x → ψ(x)) → (x → xψ(x)) is the so-called position operator and P : (x → ψ(x)) → (x → −iψ ′ (x)) is the momentum operator.We denote by S(H) the algebra of Schwartz operators.Moreover, the quantum Ornstein-Uhlenbeck semigroup can be represented by a family of quantum channels e tL modelling a quantum beam-splitter of transmissivity η = e −(µ 2 −λ 2 )t and with environment state σ µ,ν can be shown to induce the following action on characteristic functions: Appendix D] for more information.Fix an orthonormal basis (e j ) j∈J of H.In case G is finite, the index set J can always be taken to be finite.Let A be the linear span of the operator λ g , g ∈ G, and let The space A is contained in the domain of the generator L of (P t ) and Clearly, condition (H0) is satisfied.Condition (H1) is satisfied with K = 0 [WZ21, Example 3.14].For condition (H2) note that if x = g f (g)λ g ∈ A, then In particular, d j (x) * d j (x) ≤ Γ(x) for every j ∈ J .Moreover, Thus condition (H2) holds with M = 1.Condition (H3) holds with the spectral gap λ = inf g:ψ(g)>0 ψ(g) of L. Since d j P t = P t d j , condition (H5) is always satisfied with µ = 0. Condition (H4) is known to hold for certain discrete groups.For free groups, it is known that (H4) holds with α = 2 [JPP + 15, Theorem A].We refer to [JPPP17] for more examples including triangular groups, finite cyclic groups Z N , N ≥ 6, infinite Coxeter groups etc. with 0 < α < ∞. Applications 6.1.Influence and circuit complexity lower bounds.As mentioned in the introduction, Karpovsky [Kar76] was the first to propose the total influence, as a measure of complexity of a function f .This intuition was then made rigorous in [LMN93] and [Bop97] where tight circuit complexity lower bounds in terms of the total influence were derived for the complexity class AC 0 of constant depth circuits. Similar results were recently derived in the quantum setting.For instance, [BGJ + 22] show a direct link between the notion of L 2 -influence and the complexity of quantum circuits.More precisely, they showed that for a quantum circuit U, that is a unitary matrix in where the L 2 -circuit sensitivity CiS 2 (U) is defined as and where Cost(U) refers to the cost of the circuit and was introduced in a series of seminal papers by Nielsen and coauthors [Nie06, NDGD06a, NDGD06b, DN08] as a lower bound on the minimal number of one and two-qubit gates required from a given universal gate-set in order to synthesize the unitary U.More precisely, given traceless self-adjoint operators h 1 , . . ., h m that are supported on 2 qubits and normalized as h i = 1, the circuit cost of U with respect to h 1 , . . 
., h m is defined as where the infimum above is taken over all continuous functions r j : [0, 1] → R satisfying where P denotes the path-ordering operator.We start by providing a simple bound on the p-influences for p ∈ [1, 2] (for convenience we may write ⊗ 1): Proposition 6.1.For any j ∈ {1, . . ., n}, let N j ⊂ {1, . . ., m} be the minimal set of qubits such that In the case p = 2 and denoting L := max i |{j : i ∈ N j }|, we get where in the second inequality above we use that the partial trace tr j is a projection onto the algebra of operators supported on {j} c , and therefore minimized the distance to that subalgebra.The third inequality follows from the non-primitive Poincaré inequality from Equation (3.13). Remark 6.2.The assumption ⊗ 1)U * can be interpreted as a lightcone condition: let's consider for simplicity n a unitary circuit in brickwork architecture of the form U = U ℓ U ℓ−1 . . .U 1 , where for each j, where by U j r,r+1 we mean a unitary with non-trivial support on qubits r and r + 1.Hence, for any set N 1 = {1, . . ., n 1 } and any observable Hence, for n 1 = ℓ + 1, the condition holds.In other words, n 1 scales linearly with the depth ℓ of the circuit U.The above simple argument generalizes easily to higher dimensions and general local unitary circuits. In the case when p = 1 we can bound the total L 1 output influence in terms of the total L 1 input influence.Proposition 6.3.For any j ∈ {1, . . ., n}, let N j ⊂ {1, . . ., m} be the minimal set of qubits such that where in the second inequality above we used the definition of N j and that the partial trace tr j is contractive in • 1 norm.The third inequality follows from simple triangle inequality and monotonicity of the L 1 -norm under partial traces. Finally, we find a bound on the variation of the L 1 -influence through a circuit U in terms of the cost of a unitary U. We define the L 1 -circuit sensitivity of a unitary Theorem 6.4.The L 1 -circuit sensitivity of a unitary U ∈ M 2 (C) ⊗n is a lower bound on the circuit cost: Proof.Our proof follows similar steps to those leading to [BGJ + 22, Theorem 12] (see also [Eis21,MAVAV16]): we first show that, for a unitary U t = e −itH , where H acts non-trivially on a set S of k qubits, and where we denoted O(t) := U t OU * t .Back to our original problem, we take a Trotter decomposition of U such that for arbitrary small ε > 0, where V N is defined as follows: where the inequality follows from (6.1) for k = 2 and t = 1 N .Summing over η, we get Since the circuit cost is expressed as , and since the influence of UOU * can be arbirarily well approximated by that of V N OV * N as N → ∞, the result follows. 
Remark 6.5.Combined with our quantum Friedgut's Junta theorem 3.11 the above results show that for any observable O with Inf 1 (O), Inf2 (O) = O(1) and O 2 = O(1), and for any unitary U with L = O(1), the output observable UOU * can be well approximated by a k-junta with k = O(1).Taking again the simple example constructed in Remark 6.2, we recover the simple fact that, for a 1-qubit Pauli matrix evolving according to a circuit of constant depth, the output observable will still be supported on a constant size region.While it would be interesting to find some non-trivial situations where our bounds still hold, we leave this question to future work.6.2.Learning quantum Boolean functions.In this section, we use our quantum Friedgut's Junta theorem 3.11 to provide an efficient algorithm for learning quantum Boolean functions.Our argument relies on the following quantum generalization of Goldreich-Levin theorem (see [MO10a,Theorem 7.6]): Theorem 6.6 (quantum Goldreich-Levin).Given an oracle access to a unitary operator U on n qubits and its adjoint U * , and given δ, γ > 0, there is a poly n, 1 γ log 1 δ -time algorithm which outputs a list L = {s 1 , . . ., s m } such that with probability 1 − δ: Once the quantum Goldreich-Levin algorithm has been used to output a list of Fourier coefficients, the following lemma, which is also taken from [MO10a], can be used to compute them: Lemma 6.7.[MO10a, Lemma 7.4] For any quantum Boolean function A, and any s ∈ {0, 1, 2, 3} n it is possible to estimate A s to within ±η with probability 1 − δ using O 1 η 2 log 1 δ queries. Combining Theorem 6.6, Lemma 6.7 with our Theorem 3.11, we directly arrive at the following result: Proposition 6.8 (Learning quantum Boolean functions).Let A ∈ M 2 (C) ⊗n be a quantum Boolean function.Given oracle access to A, with probability 1 − δ, we can learn A to precision ε in L 2 using poly(n, 4 k , log 1 δ ) queries to A, where Moreover, by Theorem 6.6, we have that with probability 1 − δ, if s / ∈ L, then | A s | ≤ γ.Therefore we have that with that same high probability It remains to evaluate the coefficients A s for s ∈ L. This can be done within precision ±η with probability (1 − δ) using O 1 η 2 log 1 δ queries of A according to Lemma 6.7. 6.3.Learning quantum dynamics.Proposition 6.8 extends the domain of applicability of Proposition 41 in [MO10b] where the authors provided an efficient algorithm to learn the evolution of initially local observables under the dynamics generated by a local Hamiltonian.While the proof of [MO10b, Proposition 41] requires the Lieb-Robinson bounds in order to control the sets of sites of large influence in terms of the support of the initial observable and the light-cone of H, our argument has the advantage of not putting any geometric locality assumption of the quantum Boolean function A. 
To further illustrate our result, we consider the following generalization of the setup of [MO10a, Proposition 41]: let Λ be a finite set of size |Λ| = n endowed with a metric d(•, •).We suppose that there is a monotone increasing function g on [0, ∞) and constants C, D > 0 such that The constant D typically denotes the spatial dimension in the case of a regular lattice.We consider a quantum spin system on the point set Λ by assigning the Hilbert space H x ≡ C 2 to each site x ∈ Λ.For any subset T ⊆ Λ, the configuration space of spin states on T is given by the tensor product H T = x∈T H x , and the algebra A T := B(H T ) of observables on T acts on the Hilbert space H T .We consider a Hamiltonian H Λ = X⊆Λ h X , where h X ∈ B(H X ) is a local Hamiltonian, i.e. a self-adjoint operator on H X , for each X ⊂ Λ.In what follows, we denote the diameter of a set Z ⊆ Λ by diam(Z) := max{d(x, y)| x, y ∈ Z}.We further assume the following requirements [MKN17, Assumption A]: (ii) The following constant is independent of the system size n: Strictly speaking, condition (ii) only makes sense when considering a family of Hamiltonians H Λ defined on an increasing family of sets Λ all included in a countable set Σ. We will however favour simplicity over rigour here. By [MKN17, Theorem 2.1], for any two one-local Pauli operators σ s i , σ s j with j = i, and all R ≥ 1 we have that, given for any t ≥ 0, where v and C 2 are positive constants independent of Λ, t, R, i and j.Proposition 6.9.With the above assumptions, we further assume that there is R ≥ 1 such that for all i ∈ Λ, the constants can be bounded by constants independent of the size n of the system.Then, with probability 1 − δ, we can learn the quantum Boolean functions e itH Λ σ s i e −itH Λ to precision ε in L 2 using poly(n, exp(exp(ε −2 | log(ε)|)), log 1 δ ) queries to e −itH Λ and e itH Λ .Proof.In view of the dependence of k on the influences in Proposition 6.8 on the influences, it is enough to control Inf 1 (e itH Λ σ s i e −itH Λ ) and Inf 2 (e itH Λ σ s i e −itH Λ ) independently of the size of the system.We clearly have for any A ∈ M 2 (C) ⊗n that Inf 1 (A) ≤ j∈Λ d j A and Inf 2 (A) ≤ j∈Λ d j A 2 .Moreover, by the following wellknown expression for the partial trace 1 2 tr j (A) where σ s j are Pauli matrices on site j, we have that Therefore, Remark 6.10.Our result in Proposition 6.8 has the advantage that it does not assume in advance that A is (close to) a k-junta.This comes at the price that the dependence of the query complexity on the approximating parameter ε scales doubly exponentially with the latter.For the same reason, our dynamics learning method in Proposition 6.9 allows us to extend the class of Hamiltonians considered in [MO10a] to Hamiltonians satisfying a weaker power-law decay, at the cost of a much worse dependence on ε.This dependency on ε is also not new in classical setting [BT96,OS07]. 
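The well-known expression for the partial trace invoked in the proof of Proposition 6.9 above can be taken to be the standard Pauli-twirl identity ½ tr_j(A) ⊗ 1_j = (1/4) ∑_{s∈{0,1,2,3}} σ^s_j A σ^s_j. The following self-contained numerical sketch (our own illustration) verifies this identity on a random three-qubit operator.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_site(P, j, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, P if k == j else I2)
    return out

def pauli_average(A, j, n):
    # (1/4) sum_{s in {I,X,Y,Z}} sigma_s^(j) A sigma_s^(j)
    return sum(on_site(P, j, n) @ A @ on_site(P, j, n) for P in (I2, X, Y, Z)) / 4

def half_partial_trace_tensor_id(A, j, n):
    # (1/2) tr_j(A) ⊗ 1_j, with the identity re-inserted at site j
    T = A.reshape((2,) * n + (2,) * n)
    red = np.trace(T, axis1=j, axis2=n + j) / 2
    red = red.reshape((2 ** (n - 1), 2 ** (n - 1)))
    full = np.kron(red, I2).reshape((2,) * n + (2,) * n)
    perm = list(range(n - 1))
    perm.insert(j, n - 1)                      # move the identity factor into slot j
    perm = perm + [n + p for p in perm]
    return full.transpose(perm).reshape(2 ** n, 2 ** n)

rng = np.random.default_rng(3)
n, j = 3, 1
A = rng.standard_normal((2 ** n, 2 ** n)) + 1j * rng.standard_normal((2 ** n, 2 ** n))
assert np.allclose(pauli_average(A, j, n), half_partial_trace_tensor_id(A, j, n))
print("Pauli-twirl identity (1/4) sum_s sigma_s A sigma_s = (1/2) tr_j(A) ⊗ 1_j verified")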
Remark 6.11.In a recent article [CNY23], the authors provide an algorithm for learning any unitary k-junta U with precision ε and high probability which uses O k ε + 4 k ε 2 queries to U (see Theorem 29), extending a previous quantum algorithm for learning classical k-juntas reported in [AcS07].While the dependence on ε is much tighter than ours, the two results are incomparable, since we replaced the requirement that U is a k-junta by the weaker condition that it has influences Inf 1 U, Inf 2 U = O(1).6.4.Quantum isoperimetric type inequalities.Closely related to the concentration of measure phenomenon and functional inequalities, isoperimetric inequalities provide powerful tools in the analysis of extremal sets and surface measures.Given a metric space (X, d) equipped with a Borel measure µ, the boundary measure of a Borel set A in X with respect to µ is defined as [Led00, Led01, BGL14] where we recall that A r := {x ∈ X| d(x, A) < r} is the (open) r-neighbourhood of A. The isoperimetric profile of µ corresponds to the largest function I µ on [0, µ(X)] such that, for any Borel set A ⊂ X with µ(A) < ∞, In the case of the canonical Gaussian measure γ on the Borel sets of R k with density (2π) −k/2 e −|x| 2 /2 with respect to the Lebesgue measure, with the usual Euclidean metric induced by the norm |x| [Led01, Theorem 2.5]: /2 dx is the distribution function of the canonical Gaussian measure in dimension one.Moreover, equality holds in (6.3) if and only if A is a halfspace in R k .Moreover, as a → 0, we have Similar isoperimetric inequalities were also derived for hypercontractive, log-concave measures [BL96] (see also [Mil09,Mil10]).When k = 1, the boundary measure of a Borel set A can be expressed in terms of the geometric influence f ′ A 1 of a smooth approximation f A of the characteristic function of A. In other words, µ + (A) ≈ Inf 1 (f A ). This observation allows us to generalize the notion of isoperimetric inequality in the context of smooth Riemannian manifolds to discrete settings.In the context of the classical Boolean hypercube Ω n , the edge isoperimetric inequality states that for any m, among the m-element subsets of the discrete cube, the minimal edge boundary is attained by the set of m largest elements in the lexicographic order [Ber67,Har64,Har76,Lin64]. In particular, for any set A ⊂ Ω n of vertices where we recall that µ n is the uniform probability measure on Ω n , and f A corresponds to the characteristic function of set A. Here, ∂A simply corresponds to the set of vertices in the complement of A that are adjacent to A. This inequality is moreover tight when |A| = 2 d for some d ∈ N (take for instance A to be the vertices of a d-dimensional subcube).We notice the similarity with (6.4) up to the change of power in the logarithmic factor. Similarly, consider a finite graph G = (V, E) with set of vertices V and set of edges E with bounded degree d (i.e. 
each vertex has at most a fixed number d adjacent edges).The graph G is said to satisfy the linear isoperimetric inequality if Card(∂A) ≥ h Card(A) , for some h > 0 and all subsets A of V such that Card(A) ≤ 1 2 Card(V ).The socalled Cheeger constant h of the graph can be related to the spectral gap λ of the graph Laplacian via Cheeger's and Buser's inequalities.Here, Card(∂A) plays the role of µ + (A) and can be once again related to a notion of influence.The linear isoperimetric inequality can be understood as a weaker form of isoperimetry than the one derived for log-concave, hypercontractive measures, and hence only implies exponential concentration for the normalized counting measure on G.Moreover, one should not expect to recover the stronger Gaussian type isoperimetry in this setting, since the hypercontractivity constant for graph Laplacians is known to scale with the size of the graph [BT06]. Linear isoperimetric inequalities were also considered in the more general context of Markov chains over finite sample spaces.For instance, in the case of a continuous time Markov chain with transition rates Q(x, y) and unique reversible probability measure π with non-negative entropic Ricci curvature, [EF18] established that for any set A, where λ is the spectral gap of Q, Q * = min{Q(x, y) : Q(x, y) > 0} and π + (∂A) = x∈A,y∈A c Q(x, y)π(x) denotes the perimeter measure of A. We also note that extensions of such inequalities in the quantum setting were obtained in [TKR + 10]. Interestingly, such inequalities are well-known to be equivalent to an L 1 -Poincaré inequality.It is then natural to ask whether one could recover the type of isoperimetry found for the Gaussian measure and uniform measure on the hypercube in other discrete and quantum settings by further assuming hypercontractivity of the (quantum) Markov chain.This is indeed the case, as we prove by a direct appeal to Talagrand's inequality: Theorem 6.12 (Qubit isoperimetric type inequality).For any projection for some universal constant C, where τ (A) := 2 −n tr(P A ). Proof.As mentioned in [CEL12], this is a simple corollary of Talagrand's inequality Theorem 3.2 after assuming that for every j ∈ {1, . . ., n}, since otherwise the result directly holds. Remark 6.13.Similar to the quantum KKL conjecture of Montanaro and Osborne, it is reasonable to conjecture the following L 2 variant of (6.6) . 
(6.7) We end this section by remarking the following L 1 -Poincaré inequality that is stronger than Theorem 3.1.See [ILvHV18] for the discussions on the classical Boolean cubes.Theorem 6.14.For all A ∈ M 2 (C) ⊗n , one has In particular, Boolean functions of degree at most d are d2 d−1 juntas [NS94].As a main tool for the result, the authors derived a simple lower bound on the degree of the function in terms of its total influence.This observation can be used in conjunction with the Goldreich-Levin algorithm in order to devise a learning algorithm which makes poly(n) random queries to f .More efficient algorithms were proposed in the past decades [LMN93, IRR + 21, Man94].However all these algorithm have a query complexity scaling polynomially with n.In the recent article [EI22], the authors show that any low degree Boolean function can be approximated to ε precision in L 2 with probability 1 − δ from O poly 1 ε , d log n δ random queries to the function.While this result is incomparable to the ones we report in Section 6.2, it would be interesting to find a quantum extension of it.The result of [EI22] uses the so-called Bohnenblust-Hille inequalities.The study of Bohnenblust-Hille inequalities has a long history and these inequalities have found many applications in various problems.A Boolean analogue was known [DMoP19] and has led to interesting applications to learning theory [EI22].Here we formulate and conjecture a quantum analogue of Bohnenblust-Hille inequality and explain why it is useful to learning problems in the quantum setting.where the degree |s| of a string s is defined as the number of components that are different from 0. If Conjecture 7.1 holds, we expect that it can be used in a similar fashion as in [EI22] in order to devise a highly efficient algorithm for learning quantum Boolean functions of small degree in terms of query complexity. In fact, this conjecture has been resolved after an earlier version of this paper was post out.It was first resolved by Huang, Chen and Preskill [HCP23].Later on, another proof was found by Volberg and Zhang [VZ23]. Data availability statement.No data was generated as part of this work. Hence if ω ∈ M * is positive, then ω(P t (x * x) − P t (x) * P t (x)) = ω(F (t) − F (0)) This implies the first inequality.The second inequality follows from the fact that P t is unital and positive. Proposition 5. 1 . The semigroup generated by L and derivations d a := [a, •] and d a * = [a * , •] satisfy the conditions (H0)-(H5) with respect to the algebra A ≡ S(H).Proof.The set S(H) of Schwartz operators is a * -subalgebra of B(L 2 (R)) [KKW16, Lemma 3.5].Moreover for any p ≥ 1, the set S 0 (H) of finite-rank Schwartz operators is dense in the space T p (H) of Schatten-p operators [KKW16, Lemma 2.5].Therefore, since finite-rank operators are σ-weakly dense in B(H), this also holds for S(H).In order to show that S(H) is invariant with respect to the semigroup generated by L, we use tools from noncommutative Fourier analysis: given a trace-class operator x, its characteristic function is given byχ x (z) := tr(xD(z)) ,where D(z) := e za * −za , for z ∈ C, is the so-called one-mode displacement operator.By the quantum Plancherel identity, we have that for any two trace-class operators x, y [Hol11], tr(x * y) = d 2 z π χ x (z) χ y (z) . 
Schwartz function, see [KKW16, Proposition 3.18].Finally, by [KKW16, Proposition 3.14], for any x ∈ S(H), L(x) is closable with closure in S(H).Hence, (H0) is satisfied for the algebra A ≡ S(H) .Property (H1) can be easily derived from the canonical commutation relation [a, a * ] = I and gives K = (µ 2 − λ 2 )/2 (see e.g.[CM17b]).Property (H2) is satisfied for the maps d a := [a, •] and d a * := [a * , •].The Poincaré inequality (H3) follows from the characterization of the spectrum of the generator L established in [CFL00].The hypercontractivity constant in (H4) was estimated in [CS08].The intertwining relation of (H5) was found in [CM17b].5.4.Group von Neumann algebras.Let G be a countable discrete group with unit e, L(G) the group von Neumann algebra on ℓ 2 (G) generated by {λ g , g ∈ G} where λ is the left regular representation of G.We denote by τ (x) = xδ e , δ e the canonical tracial faithful state.Here and in what follows, δ g always denotes the function on G that takes value 1 at g and vanishes elsewhere.A function ψ : G → [0, ∞) is a conditionally negative definite (cnd) length function if ψ(e) = 0, ψ(g −1 ) = ψ(g) and g,h∈G f (g)f (h)ψ(g −1 h) ≤ 0 for every f : G → C with finite support such that g∈G f (g) = 0.By Schoenberg's Theorem (see for example [BO08, Theorem D.11]), to every cnd function one can associate a τ -symmetric quantum Markov semigroup on L(G) given by P t λ g = e −tψ(g) λ g .For a countable discrete group G, a 1-cocycle is a triple (H, π, b), where H is a real Hilbert space, π : G → O(H) is an orthogonal representation, and b : G → H satisfies the cocycle law: b(gh) = b(g) + π(g)b(h), g, h ∈ G.To any cnd function ψ on a countable discrete group G, one can associate with a 1 T 0 i 2 (P t (x)), i 2 (LP t (x)) dt = 2 T 0 j∈J i 2 (d j (P t (x))) d j (P 2t (x))) 2 dt, [Vö16]lence between log-Sobolev and Talagrand's inequalities.In Theorem 4.3, we derived a general noncommutative extension of Talagrand's inequality.Our proof requires the joint use of the hypercontractivity inequality (H4) with the intertwining relation (H5).It is hence legitimate to ask whether, in return, such Talagrand-type inequalities imply hypercontractivity.This question was answered in the positive in the classical, continuous setting in [BH99, Proposition 1], and later on for discrete spaces in[Vö16].It would be interesting to consider the similar problem in the quantum setting, which we leave to future work.7.2.Learning low-degree quantum Boolean functions.An alternative notion of complexity than the support condition for k-juntas is that of the degree: a bounded function f : Ω n → [−1, 1] is said to have degree at most d ∈ {1, . . ., n} if for any string s ∈ {−1, 1} n with Hamming weight |s| > d, the Fourier coefficient f (s) = 0.
14,407.4
2022-09-15T00:00:00.000
[ "Mathematics", "Physics", "Computer Science" ]
Magnetized fast isochoric laser heating for efficient creation of ultra-high-energy-density states Fast isochoric heating of a pre-compressed plasma core with a high-intensity short-pulse laser is an attractive and alternative approach to create ultra-high-energy-density states like those found in inertial confinement fusion (ICF) ignition sparks. Laser-produced relativistic electron beam (REB) deposits a part of kinetic energy in the core, and then the heated region becomes the hot spark to trigger the ignition. However, due to the inherent large angular spread of the produced REB, only a small portion of the REB collides with the core. Here, we demonstrate a factor-of-two enhancement of laser-to-core energy coupling with the magnetized fast isochoric heating. The method employs a magnetic field of hundreds of Tesla that is applied to the transport region from the REB generation zone to the core which results in guiding the REB along the magnetic field lines to the core. This scheme may provide more efficient energy coupling compared to the conventional ICF scheme. The results presented in this paper are potentially of interest to the fusion community, however they are based on a very limited number of experimental data points, which means that it is difficult to have confidence in the claimed results. At this point I cannot pass judgement on their claimed findings without additional information. The authors' claims principally rest on the results presented in figures 2 and 3. I will discuss these in detail. Figure 2: • within the margins of the error bars the green points are indistinguishable from the red, so I am ignoring these. • Differentiation of the blue points from the red, relies on the error bars of the blue points being less than that of the red. Can the authors justify these reduced error bars? • This plot is comprised of points which have huge differences in the timing of the heating beam with respect to one another. It is unclear that it is justified making a direct comparison of these points. Can the authors justify this? Figure 3: • This plot is a comparison of shots with and without magnetic fields, however the shots the authors have chosen to compare have very different heating laser energies, so it isn't at all clear whether their claims can be justified. I would like to see these plots re-made with shots of similar energy (e.g. 40541 vs 40543) and also with the K alpha emission normalised to the incident laser energy. At this point I will not detail other more minor issues I have with the paper. The manuscript describes an experiment on possible enhancement of fast ignition by applying a strong magnetic field to guide and to confine fast electrons. Although there seems to be a positive effect, the interpretation and modeling of the results are far from being sufficient for publishing in Nature. At best, the manuscript can be qualified as an internal progress report to serve as a motivation for further investigation. 1. The authors rely on previous works on magnetic field generation by the "Capacitor Coil" method. Since the criticism of the previous works is outside of the scope of this review, I will just mention a few points in this particular manuscript that seem doubtful: -From the geometry of the "coil", it seems roughly equivalent, including the lead wires, to a 2mmx0.5mm loop, that is having a cross-section of 1mm 2 . Driving a current of 250kA would create a field of 600T inside the coil. 
Assuming a field rise time of 0.5ns, that would require a voltage of about applied to the coil which is about two orders of magnitude higher than the measured fast electron temperature! -The authors speculate that using three lasers will magnify the magnetic field by a factor of 3 1/2 . However, this implies that the magnetic field energy scales proportionally to the laser energy -the scaling that has not been established in the manuscript. 2. Details on the field penetration analysis into the cone and the cone tip foil are very sketchy. In particular, a time scale of is mentioned. However, the field penetration time scale really depends on the specifics of the geometry. For example, for a cylinder (which is topologically equivalent to the cone) with a radius and a wall thickness , the penetration time scale of an axial magnetic field is for and , which is a factor of 7 longer than 2.5ns. Penetration into the cone tip, correctly analyzed, is also longer by almost the same factor. Given that the field lasers are peaked only at about 2 ns before the compression lasers, the field would not have any time to penetrate through the cone wall and the tip. 3. Fast electrons guiding and confinement has not been properly analyzed. The authors just briefly mention that "These strengths are high enough to guide the REB". I think this moment is absolutely essential to the analysis of the results and it definitely deserves more than just a single sentence. Just an example related to item #2 -if the field is low at the cone tip due to the penetration effect and it is high away from the tip then the electrons emerging at the tip would have to climb up a strong magnetic hill and it is not clear what fraction of them would reach the target. 4. Finally, it is not clear from the analysis whether the observed enhancement of the K_alpha radiation is due to the fast electrons or just to an improved confinement of thermal electrons in the compressed plasma. An experiment that could verify it would be to make shots with an applied magnetic field but not firing the short pulse REB lasers. Authors' response to Reviewer1 The authors appreciate the reviewer's time and effort for reviewing our manuscript. Your comments, suggestions, and criticisms helped us greatly to improve the quality of this manuscript. The previous manuscript was written in a Letter style of an another Nature journal. We reformatted thoroughly the manuscript to be adequate to Nature Communications also with consideration of your inputs. We hope that the revised manuscript satisfies the standard to warrant publication of our research in Nature Communications. Comment 1: This article presents experimental results on improved coupling of a relativistic electron beam to a compressed solid-density target. The improvement is due to an imposed, laser-drive, kilo-Tesla magnetic field, as well as using an initially solid target, instead of a thin shell as is commonly used for inertial fusion. The results are promising and seem valid. But I do not think this rises to the level of Nature Communications. This paper is probably appropriate for Physical Review Letters, or possibly Nature Physics. There are very few plasma-physics papers in Nature Communications, maybe several to ten a year. Is this one of the 10 best plasma-physics results of 2018? In my opinion it is not. 
Response from the authors: We suspect that the reviewer has mistaken the Nature Communications journal for a "Communications in Nature" journal, because the reviewer recommends this paper as possibly appropriate for Nature Physics instead of Nature Communications. While Nature Physics requires that published research have the highest impact within its discipline, Nature Communications is committed to publishing important advances of significance to specialists within each field. We think that our manuscript at least satisfies the standard of Nature Communications, even if we accept the reviewer's evaluation of our paper. Comment 2: The paper combines ideas that have already been proposed, and is therefore not really breaking new ground. Also the improved coupling isn't that significant, in fact close to results the authors cite from OMEGA (albeit with higher rho*r and reduced pre-plasma). The solid-density target (as opposed to a thin shell) does not appear on a path that scales to ICF ignition, so these results do not improve the prospects of fusion in an obvious or immediate way. Response from the authors: We disagree with this reviewer's comment. Realizing the proposed ideas requires long-term, continuing effort. We believe that new ground is truly broken only by performing experiments, not just by proposing ideas. While 7% coupling was obtained using a 0.3 g/cm² core at the OMEGA facility, we achieved 7.7% coupling at a core area density of 0.1 g/cm². This coupling per unit area density is three times more efficient than the OMEGA value, which clearly demonstrates the great advantage of the magnetized fast isochoric heating scheme. The solid-ball target had not been considered as an ignition target; however, we are now investigating its potential to form an ignition-scale core. The graph shows the temporal evolution of the area density calculated with a one-dimensional hydrodynamic simulation code for solid DT filling a 2-mm-diameter, 25-µm-thick plastic shell compressed by 0.35-µm-wavelength, 300-kJ laser beams. In the fast ignition scheme, the solid-ball target can be an ignition target because the hot spark is produced separately by the external energy injection. The authors appreciate the reviewer's time and effort in reviewing our manuscript. Your comments, suggestions, and criticisms helped us greatly to improve the quality of this manuscript. The previous manuscript was written in the Letter style of another Nature journal. We have thoroughly reformatted the manuscript to be suitable for Nature Communications, also taking your input into consideration. We hope that the revised manuscript satisfies the standard to warrant publication of our research in Nature Communications. 1-A) Within the margins of the error bars the green points are indistinguishable from the red, so I am ignoring these. 1-B) Differentiation of the blue points from the red relies on the error bars of the blue points being smaller than those of the red. Can the authors justify these reduced error bars? 1-C) This plot is comprised of points which have huge differences in the timing of the heating beam with respect to one another. It is unclear whether a direct comparison of these points is justified. Can the authors justify this? Response from the authors: First, we must explain the source of the error in the coupling. We used a planar highly oriented pyrolytic graphite (HOPG) crystal in the X-ray spectrometer. The absolute integral reflectance of the HOPG was measured using an X-ray diffractometer.
The HOPG has a ±16% spatial non-uniformity in its reflectance. It is not possible to identify the exact area of the HOPG that diffracts the detected Cu-Kα X-rays; therefore, the reflectance non-uniformity causes a ±16% uncertainty in the evaluation of the absolute deposited energy. It is important to note here that the HOPG alignment with respect to the targets was not changed during these measurements, and the detected Kα photons were always reflected from the same area of the HOPG. This means that the true couplings do not scatter randomly within the error bars but are located at certain points, preserving the ratios between points within the error bars. The above explanation has been added to the revised manuscript. 1-A) The green points are distinguishable from the red ones, as discussed above. 1-B) The errors correspond to ±16% of the evaluated coupling due to the HOPG non-uniformity; therefore, a smaller coupling has a smaller error value. 1-C) The comment is completely correct. The solid and dashed lines were fitted, as an eye guide, to the couplings neglecting the injection timing difference. We have modified the figure by using solid and open marks to make the injection timing difference easily distinguishable. This plot is a comparison of shots with and without magnetic fields; however, the shots the authors have chosen to compare have very different heating laser energies, so it isn't at all clear whether their claims can be justified. I would like to see these plots re-made with shots of similar energy (e.g. 40541 vs 40543) and also with the K alpha emission normalized to the incident laser energy. Response from the authors: We appreciate this suggestion. We have redrawn the panels with shots 40541 and 40543 using the absolute Kα emissivity normalized by the heating energy, in units of ×10⁹ photons/sr/cm³/J. The differences between these two shots can be seen more clearly than in the previous plots. We have added the sentence: "Note that the strong emission spot that appeared in Fig. 7 (c, g) can also be seen in the other shot (ID 40556) performed with a close injection timing (t = 0.61 ± 0.02 ns)." The numbers on the contour lines in (c, d, g) represent the heating-laser-energy-normalized Cu-Kα emissivity (×10⁹ photons/sr/cm³/J) and the mass density (g/cm³), respectively. Cu-Kα emission profiles are compared between those obtained with (e, g) and without (f, h) application of the external magnetic field at two different injection timings. These images were obtained after applying an inverse Abel transformation to the line-integrated emission profile, assuming rotational symmetry of the core along the cone axis. The authors appreciate the reviewer's time and effort in reviewing our manuscript. Your comments, suggestions, and criticisms helped us greatly to improve the quality of this manuscript. The previous manuscript was written in the Letter style of another Nature journal. We have thoroughly reformatted the manuscript to be suitable for Nature Communications, also taking your input into consideration. We hope that the revised manuscript satisfies the standard to warrant publication of our research in Nature Communications. Comment 1: From the geometry of the "coil", it seems roughly equivalent, including the lead wires, to a 2 mm × 0.5 mm loop, that is, having a cross-section of A ∼ 1 mm². Driving a current of 250 kA would create a field of B ∼ 600 T inside the coil.
Assuming a field rise time of τ ∼ 0.5 ns, that would require a voltage of about A·B/τ ≈ 1.2 MV applied to the coil, which is about two orders of magnitude higher than the measured fast electron temperature! Response from the authors: We recognize the problem that the reviewer pointed out. We have not yet been able to fully explain the mechanism by which the non-thermal electron stream flows against such a strong electric field between the capacitor plates. Besides, V. T. Tikhonchuk et al. [Phys. Rev. E 96, 023202 (2017)] explained that the intense coil currents stem from the ion expansion from the laser-irradiated zone: ions fill the volume between the target's disks in about 100 ps, neutralizing the space charge and flattening the potential. The target then works like a laser-driven diode. Comment 2: The authors speculate that using three lasers will magnify the magnetic field by a factor of 3^(1/2). However, this implies that the magnetic field energy scales proportionally to the laser energy, a scaling that has not been established in the manuscript. Response from the authors: We have removed the scaling from the revised manuscript. We revised the paragraph as "A current of 250 kA was generated with a capacitor-coil target driven by one GEKKO-XII beam as measured using proton radiography [12]. The current of 250 kA was used in the following analysis, although three GEKKO-XII beams were used in this integrated experiment." Comment 3: Details of the field penetration analysis into the cone and the cone tip foil are very sketchy. In particular, a time scale of μ0σL/2 ≈ 2.5 ns is mentioned. However, the field penetration time scale really depends on the specifics of the geometry. For example, for a cylinder (which is topologically equivalent to the cone) with a radius a and a wall thickness δ, the penetration time scale of an axial magnetic field is μ0σaδ/2 ≈ 18 ns for a = 100 μm and δ = 7 μm, which is a factor of 7 longer than 2.5 ns. Penetration into the cone tip, correctly analyzed, is also longer by almost the same factor. Given that the field lasers are peaked only at about 2 ns before the compression lasers, the field would not have any time to penetrate through the cone wall and the tip. Response from the authors: We thank the reviewer for introducing this formula. We have rewritten the introduction of the magnetic field diffusion based on the cylindrical geometry that the reviewer pointed out. In addition, we have developed an electro-magneto dynamics simulation code that accounts for inductive heating and the temperature dependence of the conductivity to calculate the spatial and temporal distribution of the magnetic field in the cone-ball target. The magnetic field strength at the tip is estimated by this calculation to be 335 T. This discussion has been written in the revised manuscript. Details of the computation are written in H. Morita et al., arXiv:1804.10410 (2018). Comment 4: Fast electron guiding and confinement have not been properly analyzed. The authors just briefly mention that "These strengths are high enough to guide the REB". I think this point is absolutely essential to the analysis of the results and it definitely deserves more than just a single sentence. Just an example related to item #2: if the field is low at the cone tip due to the penetration effect and it is high away from the tip, then the electrons emerging at the tip would have to climb up a strong magnetic hill, and it is not clear what fraction of them would reach the target.
Response from the authors: We have added the discussion in the revised manuscript as "The magnetic field strength at the ball center is B0 = 225 T in Fig. 3 (b). The ball radius at the maximum compression timing was about rmax = 50 μm, as shown in Fig. 4 (b). The magnetic field strength at the maximum compression timing (Bmax) can be estimated as Bmax = B0(r0/rmax)^(2(1−1/Rem)), where r0 = 125 μm and Rem ∼ 2 are the initial radius of the ball and the magnetic Reynolds number, respectively, giving Bmax = 560 T. Therefore, the mirror ratio along the REB path, the ratio of the magnetic field strengths at the peak and at the REB generation zone, is Rm = 560/335 = 1.7. This is small enough to guide the REB efficiently to the core in this system without significant losses caused by the mirror effect, as discussed in Ref. [29]". Comment 5: Finally, it is not clear from the analysis whether the observed enhancement of the K_alpha radiation is due to the fast electrons or just to an improved confinement of thermal electrons in the compressed plasma. An experiment that could verify this would be to make shots with an applied magnetic field but without firing the short-pulse REB lasers. Response from the authors: As shown in Fig. 2, the Cu-Kα X-ray yield produced during the compression process (green dotted line) was negligibly weak compared to that produced with the heating lasers (red solid and black dashed lines), even with application of the external magnetic field. This result means that the Cu-Kα enhancement is not due to an improved confinement of thermal electrons in the compressed plasma but to the guiding of the fast electrons. This sentence has been added to the revised manuscript. I thank the authors for clarifying the role of the journal Nature Communications. I almost never read any Nature journals unless someone points me to an article, and it's almost always Nature Physics. I was unaware of the massive number of journals now published under the Nature "brand." I read Physics of Plasmas and Physical Review Letters regularly; everything else I mostly ignore. Since I never read Nature Comm., I don't have a good sense of the caliber of paper that gets accepted. I would say the work is of the level that Physical Review Letters publishes. So, I leave it to the editors to decide if the point of their journal is to publish at (or below) PRL level, or above it. I appreciate the revisions the authors have made. I now think the paper should be published, provided it meets the editors' standards vs. PRL. A few minor comments: * I'd replace 'area density' with 'areal density', which is more standard. * In Table 1, "Davis" should be "Davies". Reviewer #2 (Remarks to the Author): After the significant changes made to this paper, I am now happy to recommend it for publication. Reviewer #3 (Remarks to the Author): Review of the authors' rebuttal: Response from the authors: We recognize the problem that the reviewer pointed out. We have not yet been able to fully explain the mechanism by which the non-thermal electron stream flows against such a strong electric field between the capacitor plates. Besides, V. T. Tikhonchuk et al. [Phys. Rev. E 96, 023202 (2017)] explained that the intense coil currents stem from the ion expansion from the laser-irradiated zone: ions fill the volume between the target's disks in about 100 ps, neutralizing the space charge and flattening the potential. The target then works like a laser-driven diode.
Reviewer: So, basically the authors are saying that they do not know what current is generated and they do not exactly understand its mechanism. There are many models of this phenomenon, and without direct and firm measurements the magnitude of the coil current still remains unknown. Response from the authors: We have removed the scaling from the revised manuscript. We revised the paragraph as "A current of 250 kA was generated with a capacitor-coil target driven by one GEKKO-XII beam as measured using proton radiography [12]. The current of 250 kA was used in the following analysis, although three GEKKO-XII beams were used in this integrated experiment." Reviewer: Two things I would like to mention here. First, as I said before, the figure of the coil current of 250 kA remains questionable. Second, the statement that this number is used in the following analyses is not what is presented in the paper. For instance, the revised B-field map in Fig. 3 looks identical to that in the original paper. In addition, it is unclear whether all three GEKKO beams (as in the original paper) are needed for the effect or just a single beam (as in the revised version) is enough. Response from the authors: We thank the reviewer for introducing this formula. We have rewritten the introduction of the magnetic field diffusion based on the cylindrical geometry that the reviewer pointed out. In addition, we have developed an electro-magneto dynamics simulation code that accounts for inductive heating and the temperature dependence of the conductivity to calculate the spatial and temporal distribution of the magnetic field in the cone-ball target. The magnetic field strength at the tip is estimated by this calculation to be 335 T. This discussion has been written in the revised manuscript. Details of the computation are written in H. Morita et al., arXiv:1804.10410 (2018). Reviewer: Again, the revised Fig. 3 is identical to the original one, so it is hard to judge what changes have been introduced and to what effect. The authors reference an unreviewed paper on arXiv, so it is impossible to ascertain its value. Response from the authors: We have added the discussion in the revised manuscript as "The magnetic field strength at the ball center is B0 = 225 T in Fig. 3 (b). The ball radius at the maximum compression timing was about rmax = 50 μm, as shown in Fig. 4 (b). The magnetic field strength at the maximum compression timing (Bmax) can be estimated as Bmax = B0(r0/rmax)^(2(1−1/Rem)), where r0 = 125 μm and Rem ∼ 2 are the initial radius of the ball and the magnetic Reynolds number, respectively, giving Bmax = 560 T. Therefore, the mirror ratio along the REB path, the ratio of the magnetic field strengths at the peak and at the REB generation zone, is Rm = 560/335 = 1.7. This is small enough to guide the REB efficiently to the core in this system without significant losses caused by the mirror effect, as discussed in Ref. [29]". Reviewer: Again, without a proper validation of the magnetic field measurement and the field-profile calculation, which is one of the central points of the paper, it is impossible to ascertain the accuracy of these statements. Comment 1: Without direct and firm measurements, the magnitude of the coil current still remains unknown. As I said before, the figure of the coil current of 250 kA remains questionable. Without a proper validation of the magnetic field measurement and the field-profile calculation, which is one of the central points of the paper, it is impossible to ascertain the accuracy of these statements.
Response from the authors: Here we will explain two facts to strengthen our conclusion. One is a summary of the magnetic fields produced by the laser-driven scheme, and the other is the integrated REB transport simulation with a varying applied magnetic field strength. These are written in the Supplementary Information. We stress that, based on the previous experimental results, 250 kA is not questionable. Table R1 summarizes previous experimental results obtained with kilo-Joule-class laser facilities. The currents flowing in the coils were evaluated from the measured magnetic field strengths, with resistances and inductances calculated for the initial coil geometries. Currents at the 200-kA level were obtained in all cases except Ref. 27 [L. Gao et al., Phys. Plasmas 23, 043106 (2016)]. In Ref. 27, a plastic spacer was inserted between the capacitor plates, so a current may flow not only in the coil but also on the spacer surface. This target design is completely different from the other ones used in Refs. 24-26 and 28. In addition, the most important result of this manuscript is affected neither by the exact value of the current nor by the understanding of the current generation mechanism. The enhancement of the laser-to-core coupling is the consequence of REB guiding by the applied magnetic field. The magnetic field strength, which was measured directly in the experiments, is important for the REB guiding. A two-dimensional radiative MHD code calculated the density and magnetic field profiles of a compressed solid ball attached to a gold cone. The initial magnetic field profile is shown in Fig. 3 (b) of the revised manuscript. The profiles at the maximum compression timing are shown in Figure R1 (a) and (b), respectively. The total energy and the wavelength of the compression laser were 1.5 kJ and 0.53 µm, respectively. The pulse shape was a Gaussian with a 1.3-ns FWHM. The REB transport simulation was performed with the profiles shown in Figure R1 (a) and (b). The REB was injected at z = 65 µm (dashed white line). The half divergence angle of the REB was 45 degrees. The temporal shape and spatial profile of the injected REB were a Gaussian with a 1-ps duration (FWHM) and a super-Gaussian with a 30-µm radius (FWHM), respectively. The peak intensity of the REB was 7 × 10¹⁸ W/cm². The energy distribution of the injected REB was taken from [Arikawa et al., "Optimization of hot electron spectra by using plasma mirror for fast ignition", presented at IFSA2015, Seattle, USA, Sept. 2015]. This distribution was obtained by coupling a high-energy X-ray spectrometer and an electron energy spectrometer. The multiplication factor of the REB-to-core energy coupling was calculated by changing the initial magnetic field strength at the coil center, as shown in Figure R1 (c). The multiplication factor is the ratio between the couplings calculated with and without application of the external magnetic field. The blue hatching indicates the range of the experimentally obtained multiplication factor, including errors, that is, the ratio between the red point (ID 40543) and the blue square (ID 40541). The calculated factor for initial fields above 300 T at the coil center, which corresponds to 120 kA in the coil, is within the experimental range. This simulation result also supports the generation of a several-hundred-Tesla magnetic field. The magnetization of relativistic electrons and the magnetic mirror effect [30] account for the reduction of the factor shown in Fig. R1 (c) with increasing applied magnetic field strength. Figure R1:
Enhancement of the REB-to-core energy coupling was calculated with the integrated simulation, coupling a two-dimensional radiative MHD code and a two-dimensional Fokker-Planck code. Spatial profiles of (a) density and (b) magnetic field at the maximum compression timing, calculated with the two-dimensional radiative MHD simulation code. (c) Dependence of the multiplication factor of the REB-to-core energy coupling on the initial magnetic field strength at the coil center, calculated with the two-dimensional Fokker-Planck code. Comment 2: The statement that this number is used in the following analyses is not what is presented in the paper. For instance, the revised B-field map in Fig. 3 looks identical to that in the original paper. Again, the revised Fig. 3 is identical to the original one, so it is hard to judge what changes have been introduced and to what effect. The authors reference an unreviewed paper on arXiv, so it is impossible to ascertain its value. Response from the authors: We deeply apologize for our mistake made in the first manuscript. All the magnetic field profiles shown in the first, second, and third (this) manuscripts were calculated with 250 kA, not the 430 kA that was written in the first manuscript. Therefore, Fig. S4 [...]. The model used in the second and the present manuscripts is completely different from that in the first one, as written in those manuscripts. A temperature-dependent electrical conductivity model was used in Fig. 3 (b) of the second and the present manuscripts, while a constant electrical conductivity [σ = 2 × 10⁶ S/m] was used in the first manuscript. However, it is difficult to judge the difference between them, as the reviewer pointed out, because we showed the profiles only at the peak field timing. Two-dimensional profiles of the magnetic field at three different timings are shown in Figure R2. The electro-magnetic dynamics code used in this simulation is based on a Maxwell solver using a finite-difference time-domain (FDTD) method coupled with a conductivity that depends on the material temperature. Comment 3: In addition, it is unclear whether all three GEKKO beams (as in the original paper) are needed for the effect or just a single beam (as in the revised version) is enough. Response from the authors: Three GEKKO beams were used for the magnetic field generation in this experiment. We have never performed the integrated experiment with a single GEKKO beam for the magnetic field generation. For the effect on the magnetic field diffusion, a single GEKKO beam (600 T, 250 kA) may be enough, because 250 kA is used in our simulation shown in Fig. R2. According to the integrated transport simulation, a single GEKKO beam seems to be enough to enhance the coupling; however, experimental confirmation has not been performed yet. Figure R2: Two-dimensional profiles of the magnetic field at three different timings, calculated with the constant conductivity (upper) and with consideration of the temporal changes in the temperature and conductivity of gold due to inductive heating (lower).
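For readers who want to check the order-of-magnitude estimates traded in this exchange, the following is a minimal Python sketch; it is not part of the reviewed manuscript. The formulas are the ones quoted above (the induced voltage A·B/τ, the cylindrical penetration time μ0σaδ/2, and the compressed field Bmax = B0(r0/rmax)^(2(1−1/Rem))); the cold-gold conductivity and the effective loop radius used below are assumptions introduced only for illustration, not values stated in the correspondence.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

# --- Coil field and induced voltage (Reviewer 3, Comment 1) ---
I_coil = 250e3            # coil current [A]
r_loop = 0.25e-3          # assumed effective loop radius [m] (not stated in the exchange)
B_coil = MU0 * I_coil / (2 * r_loop)   # field at the center of a circular loop, ~600 T

A_loop = 1e-6             # loop cross-section quoted by the reviewer [m^2]
tau_rise = 0.5e-9         # quoted field rise time [s]
V_drive = A_loop * B_coil / tau_rise   # ~1.2-1.3 MV, the reviewer's A*B/tau estimate

# --- Field penetration into a conducting cylinder (Reviewer 3, Comment 3) ---
sigma_gold = 4.1e7        # assumed cold-gold conductivity [S/m]
a_cyl = 100e-6            # cylinder radius [m]
delta_wall = 7e-6         # wall thickness [m]
tau_pen = MU0 * sigma_gold * a_cyl * delta_wall / 2   # ~18 ns

# --- Compression of the seed field and mirror ratio (authors' response) ---
B0, r0, r_max, Re_m = 225.0, 125e-6, 50e-6, 2.0
B_max = B0 * (r0 / r_max) ** (2 * (1 - 1 / Re_m))     # ~560 T
R_mirror = B_max / 335.0                              # ~1.7, tip field taken as 335 T

print(f"B_coil ~ {B_coil:.0f} T, V_drive ~ {V_drive / 1e6:.1f} MV")
print(f"tau_pen ~ {tau_pen * 1e9:.0f} ns")
print(f"B_max ~ {B_max:.0f} T, mirror ratio ~ {R_mirror:.1f}")
```

Running it reproduces the ~600 T / ~1.2 MV, ~18 ns, and ~560 T / mirror-ratio-1.7 figures quoted above.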
MORPHIOUS: an unsupervised machine learning workflow to detect the activation of microglia and astrocytes Background In conditions of brain injury and degeneration, defining microglial and astrocytic activation using cellular markers alone remains a challenging task. We developed the MORPHIOUS software package, an unsupervised machine learning workflow which can learn the morphologies of non-activated astrocytes and microglia and, from this information, infer clusters of microglial and astrocytic activation in brain tissue. Methods MORPHIOUS combines a one-class support vector machine with the density-based spatial clustering of applications with noise (DBSCAN) algorithm to identify clusters of microglial and astrocytic activation. Here, activation was triggered by permeabilizing the blood–brain barrier (BBB) in the mouse hippocampus using focused ultrasound (FUS). At 7 days post-treatment, MORPHIOUS was applied to evaluate microglial and astrocytic activation in histological tissue. MORPHIOUS was further evaluated on hippocampal sections of TgCRND8 mice, a model of amyloidosis that is prone to microglial and astrocytic activation. Results MORPHIOUS defined two classes of microglia, termed focal and proximal, that are spatially adjacent to the activating stimulus. Focal and proximal microglia demonstrated activity-associated features, including increased levels of ionized calcium-binding adapter molecule 1 expression, enlarged soma size, and deramification. MORPHIOUS further identified clusters of astrocytes characterized by activity-related changes in glial fibrillary acidic protein expression and branching. To validate these classifications following FUS, co-localization with activation markers was assessed. Focal and proximal microglia co-localized with transforming growth factor beta 1, while proximal astrocytes co-localized with Nestin. In TgCRND8 mice, microglial and astrocytic activation clusters were found to correlate with amyloid-β plaque load. Thus, by only referencing control microglial and astrocytic morphologies, MORPHIOUS identified regions of interest corresponding to microglial and astrocytic activation. Conclusions Overall, our algorithm is a reliable and sensitive method for characterizing microglial and astrocytic activation following FUS-induced BBB permeability and in animal models of neurodegeneration. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-021-02376-9. Introduction Across neurodegenerative diseases, microglia and astrocytes represent important glial cell populations that are activated in response to pathology. Depending on the context, this activation can either ameliorate or exacerbate disease progression [1,2]. Microglial and astrocytic activation is accompanied by distinct morphological characteristics, and several machine learning approaches have been developed to classify and understand activated states based on cellular morphology. Commonly, these methods deploy unsupervised learning algorithms (e.g., K-means clustering, hierarchical clustering) [3][4][5][6][7]. In general, these approaches aim to classify activated and non-activated cellular morphologies into distinct groups based on the similarities of their features.
However, given that activated microglia and astrocytes exhibit a range of morphologies [4,[7][8][9][10], it remains difficult to define strict classification boundaries to accurately identify activated cells. Supervised learning algorithms have shown promise in classifying cell types; they learn rules based on patterns in labelled data to discriminate between multiple classes of features [11]. Among many applications, supervised learning algorithms have been used to identify activated microglia following traumatic brain injury [12], and to distinguish between macrophage activation states [13]. While powerful, supervised learning classifiers must be provided with labelled data, where the class label of each data point is known. For many clinical, preclinical, and basic biological applications, including for detecting activated microglia and astrocytes, standardized data sets with labelled data are not available and they are challenging to generate. Moreover, because supervised classifiers are trained using predefined classes, they suffer from an inability to discover new categories of classification, which is of interest to biologists [11]. We developed a method to identify regions of interest corresponding to activated astrocytes and microglia using a one-class support vector machine. Support vector machines in general have been widely used in biology and are capable of modelling significant complexity while also regularizing against overfitting [14,15]. Traditionally, support vector machines are supervised, and determine a decision boundary by evaluating the largest margin from which to separate classes of data. In contrast, one-class support vector machines require the input of a baseline class and the selection of a probability quantity (i.e., nu), which helps to define whether a datapoint should be considered consistent with the baseline or deemed an outlier [16]. In this way, data can be classified based on patterns learned solely from a baseline class. Using a one-class support vector machine, we developed a novel approach to identify classes of microglial and astrocytic activation; a workflow that we termed MORPHological Identification of Outlier clUSters (MORPHIOUS). MORPHIOUS learns the feature patterns of "normal" cells, here non-activated microglia or astrocytes, and uses this information to segment regions of cells that are classified as "abnormal" and, therefore, inferred to be activated. This definition of activation, i.e., spatial clusters of abnormal cellular morphologies, is flexible and thus enables the robust identification of a range of activation-associated morphologies. To facilitate its use, MORPHIOUS provides a set of ImageJ scripts to extract features from immunofluorescence images. MORPHIOUS is available to users as a stand-alone software package with a graphical user interface written in Python. To validate its utility, we used MORPHIOUS to quantify the activation of microglia and astrocytes in the hippocampus of C57BL/6 J mice treated with focused ultrasound (FUS) and intravenously injected microbubbles to induce a localized and reversible permeabilization of the blood-brain barrier, which is known to transiently activate microglia and astrocytes [17]. We further demonstrated the utility of MORPHIOUS by evaluating microglial and astrocytic activation in the TgCRND8 mouse model of amyloidosis [18].
Through our analysis, we show that MORPHIOUS can segment regions of activated microglia and astrocytes from surrounding non-activated tissue based on morphology alone. Animals For the focused ultrasound (FUS) data set, male C57BL/6 J mice (N = 4) at 3.5 months of age were treated with FUS unilaterally in the left hippocampus and sacrificed at 7 days (D) post-FUS. For the amyloidosis data set, 4 TgCRND8 mice [18] (2 males, 2 females) and 4 nonTg C3H/C57BL6 controls (2 males, 2 females) at 7 months of age were used. Mice were sacrificed under ketamine/xylazine anesthesia and perfused with 4% paraformaldehyde; brains were extracted and post-fixed in 4% paraformaldehyde overnight at 4 °C. Brains were switched to 30% sucrose for > 24 h and sectioned at 40 µm using a microtome. Free-floating sections were stored in cryoprotectant at -20 °C until use. All procedures were conducted in accordance with guidelines established by the Canadian Council on Animal Care and protocols approved by the Sunnybrook Research Institute Animal Care Committee. Magnetic resonance imaging guided focused ultrasound Prior to FUS treatment, mice were anesthetized with 5% isoflurane, and maintained at 2% isoflurane. Fur was removed from the head using depilatory cream. A 26-gauge angiocatheter was inserted into the tail vein. Animals were imaged using a 7.0-T MRI (Bruker), and T2-weighted axial scans were used to position four focal spots targeting the hippocampus. FUS was conducted using an in-house system with a spherically focused transducer (1.68-MHz frequency, 75 mm diameter, 60 mm radius of curvature), and the BBB was permeabilized using standard parameters (10 ms bursts, 1 Hz burst repetition frequency, 120-s duration) [19]. At the initiation of sonication, mice were injected via the tail vein separately with Definity microbubbles (0.02 ml/kg; Lantheus Medical Imaging) and Gadovist (0.2 ml/kg, Schering AG). Each injection was followed by a 150-µl flush with saline. Acoustic emissions were monitored using a polyvinylidene fluoride (PVDF) hydrophone. Acoustic pressure was increased after each pulse in a stepwise manner. Once subharmonic emissions were detected, the acoustic pressure was reduced to 25%, and maintained there for the remainder of the pulse schedule [20]. BBB permeability was confirmed based on the presence of Gadovist enhancement on T1-weighted MR images. Imaging All images were acquired using a Zeiss Z1 Observer/Yokogawa spinning disk (Carl Zeiss) microscope. Tiled images encompassing the entire hippocampus were acquired using 40-µm z-stacks with a 1-µm step size and a 20X objective. All analysis was conducted using images at 20X magnification. Intensity and branching feature generation All image analysis procedures were performed using Fiji/ImageJ [21]. Microglial soma, branching, and intensity measures were visualized using IBA1 immunofluorescence. Similar to previous work, astrocytes were double-labelled with S100β and GFAP [22]. S100β was used to demarcate soma, while branching and intensity measures were evaluated with GFAP. For all images, a region of interest (ROI) was drawn around the hippocampus. Regions outside this ROI were cleared and therein excluded from the analysis. Z-stacked images were converted to maximum intensity projections. Prior to analysis, images were background-subtracted and despeckled. Images of astrocytes were contrast-enhanced to ensure the full arborization could be detected.
To collect features, for each image, a 100 µm × 100 µm sliding window was applied and iteratively translated across the image in the X and Y directions with a 50% overlap. A local threshold was first applied to the image (Method: Phansalkar, radius: 60, parameter 1: 0, parameter 2: 0). For each iteration, immunofluorescence features (Mean, IntDen, Area) were quantified using the "Measure" command, and the fractal dimension (D) was measured using the "Fractal Dimension" command. Images were further binarized (i.e., "Binarize" command) based on the local threshold, skeletonized (i.e., "Skeletonize (2D/3D)" command), and branch features were collected ("Analyze 2D/3D Features"). Cell soma features Microglial and astrocytic cell bodies were segmented using custom ImageJ scripts. For each 100 × 100 µm window, the mean soma area, soma circularity, and nearest neighbour distance (NND) were evaluated. Soma circularity was calculated using the formula: circularity = 4π × area/perimeter². For each cell soma, the nearest neighbour distance was defined as the Euclidean distance between the geometric center of the cell and the nearest neighbouring geometric cell center. Segmenting microglia cell bodies To count microglia and astrocytes, we developed a custom macro to segment and count microglia and astrocyte cell bodies. IBA1 images were first background-subtracted by 50 pixels and despeckled. Subsequently, using the MorphoLibJ library [23], we applied erosion (element: octagon, radius: 1), directional filtering (type: Max, operation: Mean, line: 6, direction: 32), morphological filter opening (element: Octagon, radius: 2), and top-hat gray scale attribute filtering (attribute: Box Diagonal, minimum: 150, connectivity: 4). The image was subsequently binarized using an "IJ_IsoData" global intensity threshold. Cell body ROIs were identified using the ImageJ particle analyzer command with a size filter of 25 pixels (scale: 1.5 pixels/µm). Input features Features used for identifying proximal microglia included area, mean intensity, the fractal dimension (D), number of cells, average NND, average soma size, average soma circularity, number of branches, branch length, number of branch junctions, number of triple branch points, number of branch ends, and the cellular perimeter. Features used for identifying proximal astrocytes included area, mean intensity, number of branch junctions, number of branch ends, number of slab branch pixels, number of triple points, and the cellular perimeter. Each feature was z-score normalized: z = (xᵢ − µ)/s, where xᵢ is the individual sample value, µ is the feature mean, and s is the feature standard deviation. Both training and test-set samples were normalized based on the mean and standard deviation of the training data set. Subsequently, features were transformed using principal component analysis, and enough principal components (PCs) were selected to retain 99% of the variance. This corresponded to 9 PCs for the microglia feature set and 5 PCs for the astrocyte feature set. Z-score normalization and principal component analysis were conducted using the scikit-learn module in Python [24]. Identifying proximal clusters of microglia and astrocytes To identify outliers in hippocampal sections of FUS-treated and TgCRND8 mice, separate one-class support vector machines were trained using features from contralateral sections and control animals appropriate for each experimental group.
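As an illustration of the normalization, PCA, and one-class training steps just described, the following is a minimal Python sketch using scikit-learn (the library named in the Methods). The array contents, variable names, and the nu and gamma values are placeholders, not the tuned MORPHIOUS parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# control_features: windows x features matrix from control (e.g., contralateral) sections
# test_features:    windows x features matrix from test (e.g., FUS-treated) sections
control_features = np.random.rand(500, 13)   # placeholder data
test_features = np.random.rand(400, 13)      # placeholder data

# z-score both sets using the training (control) mean and standard deviation
mu, sd = control_features.mean(axis=0), control_features.std(axis=0)
control_z = (control_features - mu) / sd
test_z = (test_features - mu) / sd

# keep just enough principal components to retain 99% of the training variance
pca = PCA(n_components=0.99).fit(control_z)
control_pc = pca.transform(control_z)
test_pc = pca.transform(test_z)

# one-class SVM trained only on control windows; nu and gamma are illustrative
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.1).fit(control_pc)
is_outlier = ocsvm.predict(test_pc) == -1     # -1 marks windows unlike control morphology
print(f"{is_outlier.sum()} candidate outlier windows")
```

Here, PCA(n_components=0.99) keeps only as many components as are needed to retain 99% of the training variance, mirroring the selection of 9 PCs for microglia and 5 PCs for astrocytes described above.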
Since outliers can represent regions with either hyperintense or hypointense features, the initial set of putative outliers was filtered to ensure that all identified outliers had a mean intensity larger than a z-score of -1. These candidate outliers were subsequently spatially clustered using the density-based spatial clustering of applications with noise (DBSCAN) algorithm [25]. Spatially clustered outliers were deemed proximal clusters. Implementations of the one-class support vector machine and DBSCAN were accessed from scikit-learn [24]. MORPHIOUS requires user input for four parameters: nu, gamma, minimum cluster size, and minimum neighbour distance. The nu and gamma parameters are hyperparameters for the one-class support vector machine and the radial-basis-function kernel, respectively. Nu reflects the percentage of normal observations which lie outside the classification decision boundary and is a regularization parameter. Gamma is a parameter of the radial basis kernel function. The minimum cluster size and minimum neighbour distance are hyperparameters for DBSCAN, which together define a cluster as a region in which at least the minimum number of points lie within the minimum neighbour distance of one another. By default, MORPHIOUS sets this radius equal to the diagonal length of the window size, rounded up (142 µm). Values for nu, gamma, and minimum cluster size for each stain were optimized via a grid search (Additional file 1: Figures S2-S4). Using the contralateral data sets, tenfold cross-validation was performed to identify the set of nu, gamma, and minimum cluster size parameters which resulted in no clustering across any control hippocampal sections. A second grid search was performed that trained on the control data set and tested on the test data set, to identify the set of hyperparameters which maximized cluster size within the test tissue (i.e., ipsilateral FUS, TgCRND8). The optimal parameters were evaluated as the set of values which maximized the clusters in the test tissue (i.e., FUS-treated, TgCRND8) while yielding no clustering in the respective control data set. Optimal parameters were identified via a separate grid search for each IBA1 (Additional file 1: Figure S2) and GFAP antibody (Additional file 1: Figure S3) in the FUS and TgCRND8 (Additional file 1: Figure S4) mouse experiments. Identifying focal clusters of microglia We further classified a second subset of microglia, termed focal microglia, which represent the most activated microglia. To identify focal microglia, a threshold value was first determined to identify windows of highly activated cells. Thus, for each test-set section, the IBA1 integrated density was sorted in ascending order (Fig. 1H). The elbow point of this curve corresponds to the threshold value. Proximal grid points with a mean IBA1 intensity greater than this threshold value were subsequently spatially clustered using DBSCAN, with a minimum cluster size of 5 and a distance of 142 µm. To evaluate this elbow point, a vector (A₁) was drawn to connect the first and last points of the integrated density curve. Subsequently, a perpendicular vector Bₓ from every datapoint in the curve was connected to A₁. The datapoint corresponding to the largest perpendicular vector (i.e., max(|A₁Bₓ|)) was labelled as the elbow point. To ensure stability of the elbow point, this procedure was iterated, and on each iteration the first point in the curve was removed. From this procedure, the modal elbow point was used as the focal threshold value. Finally, to ensure that the IBA1 integrated-density curve was sufficiently steep and reflected an exponential relationship, focal clusters were only evaluated if the magnitude of the elbow point vector (i.e., max(|A₁Bₓ|)) was greater than a threshold of 0.5, a value which worked well in our experience. Fig. 1 MORPHIOUS workflow trains a one-class support vector machine to identify activated glial cells. A sliding window is applied to control (i.e., contralateral FUS, nonTg) hippocampal sections to extract morphological features (A). Extracted features are used to generate a spatial feature map (B). Selected morphological features from control hippocampal sections are used to train a one-class support vector machine, which generates a decision boundary for defining non-activated microglia and astrocytes (C). A sliding window is further used to extract morphological features from test-sample (i.e., ipsilateral FUS, TgCRND8) hippocampal sections (D, E). The trained model is applied to test-sample hippocampal sections to identify outlier windows (F). Outliers are spatially clustered using the density-based spatial clustering of applications with noise algorithm (DBSCAN) to identify proximal clusters (G). To identify focal clusters, the integrated density of proximal cluster windows is sorted in ascending order, and the elbow point of this curve (red line) is used as a defined threshold value (H). DBSCAN is applied to windows with an integrated density above the defined threshold value to establish focal clusters (I). Contra., contralateral; FUS, focused ultrasound; Ipsi., ipsilateral; Hipp., hippocampus; Tg, TgCRND8 mice. Colocalization analysis Pearson correlation analysis was used to assess the colocalization between IBA1 and TGFβ1, IBA1 and CD68, and GFAP and Nestin. Colocalization analysis was conducted using the coloc2 plugin in ImageJ and expressed as the Pearson correlation coefficient (R). Statistical analysis In the FUS data set, differences in cellular features were analyzed using a mixed linear model. Pairwise between-group differences in cellular features were assessed with a Sidak post-hoc test. In the TgCRND8 data set, cellular differences were analyzed using a one-way ANOVA with Tukey's post-hoc test. An independent Student's t-test was used to evaluate differences between two groups. A value of P < 0.05 was considered statistically significant. Linear regression was used to evaluate correlations between microglia and astrocyte cluster sizes, and between manual and automatic cell counts. All statistical analyses were conducted using SPSS (version 22, IBM). Source code The MORPHIOUS source code, as well as ImageJ macros, tutorials, and additional documentation, are available at https://github.com/jsilburt/Morphious.
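To make the proximal-clustering step described above concrete, here is a minimal sketch, not the MORPHIOUS source, of how outlier window coordinates can be grouped with scikit-learn's DBSCAN using the 142-µm neighbour distance mentioned in the Methods; the coordinates and the min_samples value are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (x, y) centres, in µm, of the sliding windows flagged as outliers by the
# one-class SVM; illustrative values only.
outlier_centers = np.array([[100, 100], [150, 100], [150, 150], [200, 150],
                            [900, 900]])  # the last point is an isolated outlier

# eps = 142 µm (diagonal of a 100 µm window, rounded up); min_samples plays the
# role of the minimum cluster size (an illustrative value is used here).
db = DBSCAN(eps=142, min_samples=4).fit(outlier_centers)

labels = db.labels_              # -1 = noise, 0..k = proximal cluster membership
proximal = labels >= 0
print(f"{proximal.sum()} of {len(labels)} outlier windows fall in proximal clusters")
```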
Feature collection The activation of microglia and astrocytes was induced in mice using a unilateral treatment of focused ultrasound (FUS) in the presence of microbubbles, applied to the left hippocampus of 14-week-old C57BL/6 J mice. Mice were sacrificed at 7D post-FUS, a timepoint when the activation of both microglia and astrocytes has previously been detected [26], and processed for immunohistochemical analysis. Microglia were stained with ionized calcium-binding adapter molecule 1 (IBA1), which labels microglial processes and is upregulated with activation [27]. Astrocytes were double-stained with S100 calcium-binding protein beta (S100β) and glial fibrillary acidic protein (GFAP). GFAP, which is upregulated with astrocytic activation [2], was used to evaluate branching and intensity metrics, while S100β was used to count cells and quantify soma characteristics. Using custom ImageJ scripts, features were extracted from hippocampal slices by applying a sliding window (Fig. 1A, B). We collected features related to fluorescence intensity, cellular surface area, branching complexity, cell location, and cell soma shape. Averaged features for each sliding window were extracted, normalized, and principal component analysis (PCA) transformed. Automated counting of microglia and astrocytes To aid in feature collection, we developed two protocols using the FIJI MorphoLibJ package [23] to automatically count IBA1+ and S100β+ cell bodies, representing microglia and astrocytes, respectively, and to measure cell soma-related features. These protocols strongly correlated with manual counts (R²: 0.964 for microglia, R²: 0.959 for astrocytes) (Additional file 1: Figure S1). Building an unsupervised one-class classifier We trained a one-class support vector machine using control hippocampal sections (Fig. 1A-C). Once trained, the classifier was applied to the test-set (Fig. 1D, E). For FUS experiments, contralateral hippocampal sections were used as controls, and the FUS-treated ipsilateral sections were used in the test-set. For TgCRND8 experiments, hippocampal sections from nonTg mice were used as controls, and sections from TgCRND8 mice were evaluated in the test-set. Test-set windows falling outside the learned decision boundary were labelled as outliers (Fig. 1F). The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was then used on the spatial coordinates of identified outliers to generate a region of interest (ROI) corresponding to clustered activated/outlier cells (Fig. 1G). Microglia and astrocytes within these ROIs are termed proximal microglia or astrocytes, to indicate that they are located proximally to the activating stimulus. Microglia and astrocytes within the test-set tissues outside the proximal cluster regions are referred to as distal microglia and astrocytes, indicating that they are "further" from the activating stimulus than the proximal cells. This is evident from the unremarkable changes in their cellular morphologies. Among other examples, this proximal-distal terminology has been used previously to describe the spatial nature of microglial activation adjacent to an ischemic stroke [10], and to plaque pathology [17,29]. Within proximal microglial activation clusters, we observed subareas where microglia exhibited prominent features of activation, which we termed focal microglia. To delineate the boundary of focal clusters, DBSCAN was applied to proximal cluster outliers with an IBA1 integrated density above a threshold value (Fig. 1I). To calculate this threshold, the IBA1 integrated density for each proximal outlier window was sorted in ascending order and the elbow point of the ensuing curve was used as the threshold value (Fig. 1H, red line). A glossary of terms describing our spatial nomenclature can be found in Table 1. Moreover, representative visualizations of focal, proximal, and distal regions of interest (ROIs) for FUS-treated and TgCRND8 mouse hippocampi are provided in Fig. 2.
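The focal-cluster step just recapped (thresholding proximal windows at the elbow of the sorted IBA1 integrated-density curve, then re-clustering with DBSCAN) can be sketched as follows. This is an illustrative reimplementation, not the MORPHIOUS source: the stability iteration over successively truncated curves is omitted for brevity, and the densities and window centres are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def elbow_threshold(values):
    """Return the elbow of an ascending curve: the value at the point of maximum
    perpendicular distance from the chord joining the first and last points."""
    y = np.sort(np.asarray(values, dtype=float))
    x = np.arange(len(y), dtype=float)
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    chord = chord / np.linalg.norm(chord)          # unit vector along the chord
    rel = np.column_stack([x - x[0], y - y[0]])
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])  # perpendicular distances
    return y[np.argmax(dist)]

# integrated IBA1 densities of proximal windows (illustrative values)
int_density = np.random.exponential(scale=1.0, size=200)
thr = elbow_threshold(int_density)

# windows above the threshold are re-clustered with DBSCAN to give focal clusters
coords = np.random.rand(200, 2) * 1000             # illustrative window centres (µm)
hot = coords[int_density > thr]
focal_labels = DBSCAN(eps=142, min_samples=5).fit_predict(hot)
print(f"focal threshold = {thr:.2f}; {np.sum(focal_labels >= 0)} windows in focal clusters")
```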
Parameter tuning To ensure that identified clusters represent morphologically activated cells, our learning objective was to predict no false-positive microglial or astrocytic activation clusters. Thus, we applied tenfold cross-validation across all control hippocampal sections (i.e., contralateral FUS, or nonTg) to identify hyperparameters for which no activation clusters were observed within any control hippocampal sections (Additional file 1: Figures S2, S3). Within the set of parameters which ensured no clustering among control hippocampal sections, we chose parameters which maximized the amount of activated microglia and astrocytes in the test-set. When conducting immunohistochemical analyses, it is important that a representative ROI is selected [30][31][32]. Typically, this ROI should be a clearly definable area, such as a brain region (e.g., hippocampus), that is analyzed in its entirety or through appropriate sampling [30][31][32]. In practice, both whole-region and sampling approaches are common [10,26,[33][34][35][36]. To illustrate some of the advantages of MORPHIOUS, we asked whether a typical quantitative approach could detect activated microglia in our sections. Thus, we defined the analytical ROI as the entire hippocampal area and compared microglial morphologies within FUS-treated hippocampal sections to their contralateral side. Notably, ipsilateral microglia showed only small reductions in branch length (P < 0.01, Fig. 4D), number of branches (P < 0.05, Fig. 4E), and nearest neighbour distance (P < 0.05, Fig. 4F), but showed no changes for other features (P > 0.05). The results of this whole-region analysis contrast with those obtained when MORPHIOUS was used to identify ROIs, where a rich set of distinct morphologies was detected (Fig. 4). Thus, by defining discrete clusters of activation, MORPHIOUS improves the sensitivity for detecting pockets of activated microglia in heterogeneous tissues when compared to a traditional analytical approach. Classifying astrocytic activation following FUS Next, we trained MORPHIOUS on contralateral hippocampal sections of astrocytes (Fig. 3G, I) and tested it on FUS-treated hippocampal sections of astrocytes (Fig. 3H, J, K). In response to FUS, we used MORPHIOUS to classify a single class of activated astrocytes, which we termed proximal astrocytes (Fig. 3H, K). Compared to contralateral astrocytes, proximal astrocytes exhibited a 1.3-fold increased GFAP intensity (P < 0.001, Fig. 6A), a 1.4-fold increased branch length (P < 0.0001, Fig. 6B), a 1.5-fold increased area coverage (P < 0.001, Fig. 6C), and a 1.3-fold increased number of branches (P < 0.05, Fig. 6C). As well, proximal astrocytes did not show changes in NND (P > 0.05, Additional file 1: Figure S6), which is consistent with in vivo findings that astrocytes do not migrate [28]. Similar to our microglial analysis, we evaluated the performance of a conventional analysis in detecting the presence of astrocytic activation following FUS in our tissue. In defining the analytical ROI as the entire hippocampal region, none of the activation-associated features of astrocytes were found to be significantly different in the ipsilateral FUS-treated side compared to the contralateral side (Fig. 6, Additional file 1: Figure S6).
To validate the MORPHIOUS-predicted clusters of astrocytic activation, we observed that proximal astrocytes colocalized with Nestin (vs. contralateral, P < 0.05), an intermediate filament protein which becomes upregulated during astrogliosis (Fig. 7). Moreover, there was a spatial overlap between activated astrocytes and microglia (Fig. 8A, B). In total, 15.8% and 10.3% of treated hippocampal sections were covered by activated microglia and astrocyte clusters, respectively (Fig. 8C). Of this area, 75% of activated astrocytes overlapped with activated microglia, while 49% of activated microglial clusters overlapped with activated astrocytic clusters. Moreover, proximal cluster sizes for activated astrocytes correlated with total (i.e., proximal + focal) cluster sizes for activated microglia (R² = 0.753, P < 0.0001, Fig. 8D). Collectively, this suggests that both cell types are responding to the common FUS stimulus, and provides additional evidence that both cell types are indeed activated. Classifying microglial and astrocytic activation in a mouse model of amyloidosis To assess the generalizability of MORPHIOUS to applications related to stimuli other than FUS, we evaluated microglial and astrocytic activation in 7-month-old TgCRND8 mice, a mouse model of amyloidosis. After being trained on a set of hippocampi from nonTg littermate control mice, MORPHIOUS was applied to a test-set of hippocampi from TgCRND8 mice, therein subdividing the hippocampal area into focal, proximal, and distal subregions (Fig. 9A). Focal and proximal microglia, predicted to be activated, were visually found to overlap with plaque pathology (Fig. 9B). Similar to what we observed following FUS-induced microglial activation, within TgCRND8 mice, focal microglia showed elevated IBA1 expression (Fig. 9C, P < 0.0001) and percent area (Fig. 9D, P < 0.0001 to 0.01) when compared with nonTg, distal, and proximal microglia. IBA1 expression was also greater in proximal microglia compared to distal and nonTg microglia (Fig. 9C, P < 0.0001 to 0.01). Next, to further validate our classification predictions, we asked whether microglial activation was related to plaque pathology. Both proximal (Fig. 9E, R²: 0.50, P < 0.01) and focal microglial (Fig. 9F, R²: 0.51, P < 0.01) cluster sizes correlated with amyloid plaque load. Moreover, plaque coverage was greater in both focal (Fig. 9G, P < 0.0001) and proximal microglial regions (Fig. 9G, P < 0.01) compared to distal regions. Focal microglial regions also exhibited greater plaque coverage compared to proximal microglial regions (Fig. 9G, P < 0.01). Finally, compared to both distal and proximal regions, the mean plaque size was significantly larger in focal microglial cluster regions (Fig. 9H, P < 0.001 to 0.0001), indicating that focal microglial clusters are associated with larger plaques. We subsequently used MORPHIOUS on hippocampal sections stained with GFAP to detect distal and proximal regions (Fig. 10A). Notably, proximal astrocytes were associated with plaque pathology (Fig. 10B). Compared with distal and nonTg astrocytes, proximal astrocyte clusters showed elevated levels of GFAP immunofluorescence (Fig. 10C, P < 0.0001 to 0.001) and percent area coverage (Fig. 10D, P < 0.001 to 0.01). As with activated microglia, the level of astrocytic activation correlated with plaque load (Fig. 10E, R²: 0.66, P < 0.0001). Compared to the distal region, the proximal astrocytic region showed greater levels of plaque coverage (P < 0.05) and a larger mean plaque size (P < 0.05). These data suggest that the levels of microglial and astrocytic activation detected by MORPHIOUS are sensitive to plaque pathology.
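As a simple illustration of how coverage, overlap, and correlation statistics of the kind reported above can be computed from cluster masks, the sketch below uses synthetic boolean window grids and synthetic per-section values; the numbers are placeholders, not the study's data.

```python
import numpy as np

# Boolean grids marking which analysis windows fall inside activated clusters
# (illustrative arrays; in practice these come from the DBSCAN cluster labels).
microglia_mask = np.zeros((40, 40), dtype=bool)
astrocyte_mask = np.zeros((40, 40), dtype=bool)
microglia_mask[5:20, 5:20] = True
astrocyte_mask[10:22, 8:18] = True

total = microglia_mask.size
print(f"microglial cluster coverage: {100 * microglia_mask.sum() / total:.1f}% of the ROI")
print(f"astrocytic cluster coverage: {100 * astrocyte_mask.sum() / total:.1f}% of the ROI")

overlap = microglia_mask & astrocyte_mask
print(f"{100 * overlap.sum() / astrocyte_mask.sum():.1f}% of astrocytic clusters overlap microglial clusters")
print(f"{100 * overlap.sum() / microglia_mask.sum():.1f}% of microglial clusters overlap astrocytic clusters")

# Correlation between per-section quantities (e.g., cluster size vs. plaque load)
x = np.array([3.1, 5.4, 8.2, 12.0, 15.8])   # illustrative per-section values
y = np.array([2.0, 4.1, 6.5, 9.8, 10.3])
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"R^2 = {r2:.2f}")
```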
Discussion In this work, we developed MORPHIOUS, an unsupervised workflow that learns a signature of "normal" microglia or astrocyte morphologies, and uses this information to generate ROIs corresponding to "abnormal" microglia or astrocytes, here referring to activated cells. The capacity to consistently identify and segment ROIs corresponding to activated microglia and astrocytes, without the need for labelled examples of activation, can improve the study of activated microglia and astrocytes in response to disease progression and following treatment. Here we demonstrated that MORPHIOUS was able to detect clusters of microglial and astrocytic activation in response to FUS-BBB modulation, and in the TgCRND8 mouse model of amyloidosis. Activated microglia exhibit a range of morphological changes [8,10]. Using MORPHIOUS we segmented two distinct populations: focal and proximal microglia. Consistent with activation-associated morphological changes, focal microglia, and to a lesser degree proximal microglia, exhibited elevated IBA1 expression, increased soma size, decreased nearest neighbour distance, and decreased branching [10]. Following FUS, focal microglia colocalized with the activation markers CD68 and TGFβ1 [42,44], whereas proximal microglia colocalized only with TGFβ1, and at levels lower than those of focal microglia. These data support the claim that focal and proximal microglia represent two spatial subsets of microglial activation with distinct morphological and molecular identities [10,45,46]. Following FUS, in the same regions as activated microglia, MORPHIOUS independently identified clusters of astrocytes which were characterized by increased GFAP intensity, area coverage, and branching, hallmark features of astrogliosis [2]. Moreover, proximal astrocytes colocalized with Nestin, an intermediate filament which is co-expressed with GFAP when astrocytes are activated [47]. Thus, in addition to microglia, MORPHIOUS identified regions of morphologically distinct astrocytes which exhibit features consistent with activation. MORPHIOUS works by learning a definition of "normal" cellular morphologies from control tissues, which is subsequently used to identify spatial clusters of cells deemed to be sufficiently distinct from "normal" cells. As such, MORPHIOUS does not rigidly define the morphology of an activated microglia or astrocyte; instead, it infers a broad definition of "abnormal", activation-associated morphologies. As a result, we suggest that MORPHIOUS may be well suited to identifying activated microglia and astrocytes in a broad range of pathologies. To support this claim, we show that MORPHIOUS could also detect focal and proximal activation clusters of microglia, and proximal activation clusters of astrocytes, in a mouse model of amyloidosis. Both microglial and astrocytic activation are known to correlate with plaque pathology [48,49]. Similarly, we found that proximal and focal microglial and proximal astrocytic cluster sizes were responsive to amyloid burden. Collectively, this suggests that MORPHIOUS can be used to detect pathological changes that are associated with microglial and astrocytic activation. Importantly, genomic studies have clarified that the activation of microglia and astrocytes is complex and context-specific [46,[50][51][52]. This suggests that traditional markers of activation may not be suitable for characterizing the full magnitude of microglial and astrocytic activation. MORPHIOUS provides the advantage of identifying microglial and astrocytic activation-associated morphological changes, which reduces the need for secondary activation markers. Indeed, MORPHIOUS was able to identify activated microglia in a mouse model of amyloidosis, where the molecular landscape is complex and differs between microglia adjacent to plaques, phagocytosing microglia, and those that are further away [29,53,54].
When quantifying cellular morphologies in immunohistochemical analyses, it is critical to choose an appropriate ROI (i.e., the denominator) by which immunological features can be normalized [30][31][32]. To avoid bias, it is conventional to define an ROI as a brain region or tissue type, which is either analyzed in its entirety or sampled across multiple fields of view [30][31][32]. However, quantification in this manner can be difficult in tissues with significant heterogeneity, as the presence of relatively few activated cells can be masked by the abundance of surrounding non-activated cells. This quantification problem is exemplified in the detection of small tissue perturbations, as previously reported following the application of FUS. Specifically, after applying FUS to the cortex, Sinharay et al. found that despite the visual appearance of activated microglial clusters, the levels of IBA1 detected in FUS-treated and contralateral cortices were not statistically different [35]. Similarly, in our study, most features of microglial and astrocytic activation were statistically indistinguishable when comparing the entire ipsilateral FUS-treated and contralateral hippocampi (Figs. 4, 6). To increase the sensitivity of detecting morphological changes in relatively small groups of cells within a heterogeneous tissue, reasonable regions of interest must be defined to focus the analysis (e.g., by a trained histologist) [30][31][32]. Using a data-driven approach, MORPHIOUS aims to automate this step and can generate discrete regions of interest containing activated microglia and astrocytes. This in turn facilitates the detection and quantification of microglial and astrocytic activation not apparent through conventional analytical means. It is recognized that the identification of pathology-associated regions of interest by a trained histologist represents a gold standard. As such, we do not claim that MORPHIOUS outperforms expert manual labelling. However, manual labelling can be labor intensive and time consuming [32]. In automatically defining ROIs, MORPHIOUS may directly aid histologists in their workflows and generate initial ROIs that can be fine-tuned as needed. MORPHIOUS also offers advantages over previous unsupervised approaches that identify activated microglia through clustering in feature space alone, such as K-means or hierarchical clustering [3][4][5][6][7]. While such methods can evaluate the putative activation of individual cells, the heterogeneity of microglial morphology poses a risk of false positives that are difficult to interpret given the nature of unlabeled data. For example, Davis et al. (2017) reported that following orbital optic nerve crush, activated microglia were found distributed among resting microglia in both the treated and untreated olfactory bulbs, a finding that merits further investigation [3]. By contrast, MORPHIOUS avoids the inclusion of individual false-positive cells by clustering through a spatial approach. While this approach prevents the identification of sparsely activated or individual cells, it distinguishes MORPHIOUS from previous work by allowing it to segment whole regions of cell activation. This discrete ROI both indicates the spatial extent of pathology and delineates a region for more fine-grained analyses. In addition, MORPHIOUS may be adaptable to applications beyond identifying microglial and astrocytic activation.
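The two-stage logic described above - novelty detection on morphology features learned from control tissue, followed by spatial clustering of the flagged cells to form discrete ROIs - can be illustrated with a minimal sketch. This is not the published MORPHIOUS implementation: the use of scikit-learn's OneClassSVM and DBSCAN, the feature scaling step, and all parameter values (nu, gamma, eps, min_samples) are illustrative assumptions, and the real workflow derives its morphology features from ImageJ.

```python
# Minimal sketch (not the published MORPHIOUS code): flag morphological
# outliers with a one-class SVM trained on control cells, then keep only
# outliers that form spatial clusters. Library choices and parameter values
# are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM


def activation_rois(control_features, test_features, test_coords,
                    nu=0.1, gamma="scale", eps=100.0, min_samples=5):
    """Return one cluster label per test cell; -1 means "not in an activation ROI".

    control_features : (n_control, n_features) morphology features from control tissue
    test_features    : (n_test, n_features) morphology features from treated tissue
    test_coords      : (n_test, 2) cell centroid coordinates in the treated section
    """
    control_features = np.asarray(control_features, dtype=float)
    test_features = np.asarray(test_features, dtype=float)
    test_coords = np.asarray(test_coords, dtype=float)

    # 1) Learn the envelope of "normal" morphologies from control cells only.
    scaler = StandardScaler().fit(control_features)
    ocsvm = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
    ocsvm.fit(scaler.transform(control_features))

    # 2) Treated-section cells falling outside that envelope are outliers.
    is_outlier = ocsvm.predict(scaler.transform(test_features)) == -1

    # 3) Spatially cluster the outliers; isolated outliers keep the label -1,
    #    which is what suppresses single false-positive cells.
    labels = np.full(len(test_features), -1, dtype=int)
    if is_outlier.any():
        labels[is_outlier] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
            test_coords[is_outlier])
    return labels


# Toy usage with synthetic data, only to show the call pattern.
rng = np.random.default_rng(0)
control = rng.normal(size=(200, 6))
treated = np.vstack([rng.normal(size=(180, 6)), rng.normal(loc=4.0, size=(20, 6))])
coords = rng.uniform(0, 1000, size=(200, 2))
print("cells assigned to activation ROIs:",
      int((activation_rois(control, treated, coords) >= 0).sum()))
```

In this sketch, the spatial clustering step is what suppresses isolated false positives: an outlier cell contributes to an ROI only if enough similarly abnormal cells lie within eps of it, which loosely parallels the cluster definitions used in the workflow rather than reproducing the separate focal and proximal criteria.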
Indeed, a similar one-class support vector machine approach has been used to segment the borders of tumors in MRI data [55,56]. The lack of a ground truth that could be obtained from a histologist, and against which predictions could be compared, precludes reporting the accuracy of microglial and astrocytic classification; this limitation is common to unsupervised quantification approaches [3,4], including MORPHIOUS. Given that we do not have a ground truth for activated microglia and astrocytes, we cannot rule out that we are over- or under-classifying their activation. Tuning hyperparameters for one-class support vector machines is a critical and often difficult task, for which a consensus on optimal methodology has yet to be reached [57]. A common technique is to maximize accuracy while minimizing the number of false positives, based on labeled data (i.e., data whose class is known) [55,56]. However, examples of positive-class cases (i.e., outlier data) can be challenging to acquire. Advanced methods deploy a variety of strategies that focus on identifying patterns in the one class itself to maximize the capacity to distinguish normal cases from outliers [57]. In our case, we leveraged two plausible biological assumptions for optimizing hyperparameters: (1) activated microglia and astrocytes will coalesce in spatial clusters in response to a stimulus, as has been well documented in cases of ruptured blood vessels [10,28] and amyloid-beta plaques [17]; and (2) healthy control hippocampal tissue will not exhibit large clusters of outlier cells. Thus, in tuning our one-class support vector machine, we deployed a simple learning objective: find the set of hyperparameters that maximizes cluster size in test-set hippocampi while ensuring that no clusters of activation are observed in control hippocampal slices. Importantly, searching for clusters of outliers may not be suitable for images that are highly heterogeneous, or for identifying single cells or small numbers of morphologically distinct cells. As with all machine learning approaches, the effectiveness of the learning model is limited by the range of features selected. To develop a simple and accessible approach, MORPHIOUS collects features through the widely used software ImageJ. It is possible that greater sophistication will be required to develop features that distinguish levels of activation in microglia or astrocytes of higher complexity, such as in primates, and/or following certain pathological and experimental conditions. In such cases, users can input their own set of features into MORPHIOUS, as recently described for other methods [4,33], and thereby expand its usability to other applications. Moreover, MORPHIOUS could be further improved with state-of-the-art convolutional neural networks that can effectively learn features from raw images.

Conclusions

In conclusion, we demonstrate in two animal models that MORPHIOUS can, in an unsupervised manner, identify clusters of activated microglia and astrocytes based on morphology alone. These clusters were found to coincide with the expression of common activation markers and indicators of pathology. Quantification methods such as MORPHIOUS show promise for improving the detection of microglial and astrocytic activation in diverse contexts.
Subjunctive and Interpassive “Knowing” in the Surveillance Society The Snowden affair marked not a switch from ignorance to informed enlightenment, but a problematisation of knowing as a condition. What does it mean to know of a surveillance apparatus that recedes from your sensory experience at every turn? How do we mobilise that knowledge for opinion and action when its benefits and harms are only articulable in terms of future-forwarded “as if”s? If the extent, legality and efficacy of surveillance is allegedly proven in secrecy, what kind of knowledge can we be said to “possess”? This essay characterises such knowing as “world-building”. We cobble together facts, claims, hypotheticals into a set of often speculative and deferred foundations for thought, opinion, feeling, action. Surveillance technology’s recession from everyday life accentuates this process. Based on close analysis of the public mediated discourse on the Snowden affair, I offer two common patterns of such world-building or knowing. They are (1) subjunctivity, the conceit of “I cannot know, but I must act as if it is true”; (2) interpassivity, which says “I don’t believe it/I am not affected, but someone else is (in my stead)”. We Knew Already At least, that was what some people said after Edward Snowden's leaks on NSA surveillance.Did he tell us anything we didn't know?, asked journalists (Milner, 2013)."They didn't feel much like revelations", said a director (Laskow, 2013).But what was meant by this curious phrase, we knew already?"Knew"-yes, some of the information really was public knowledge.But even the entirely new aspects of it were, apparently, not very surprising.After all, the discourse goes, we already "knew" of older NSA programs like Trailblazer and ECHELON-so we surely expected something like PRISM.But who is this "we"?The discourse designates a depersonalised hivemind: the knowledge of NSA surveillance was stored in our collective archive, though the proof is in nonhuman documents rather than what individuals can "remember".Sometimes, the "we" instead designates the journalist, the director, the activist: the "we" in the know who pens these commentaries, the "we" that is less gullible than the average Joe, the "we" of the "we told you so".And what about the "already"?Despite itself, the discourse is less about defining past concerns and more about characterising the present.It is a way to designate a historicity for the revelationswhether to dampen the outrage or stoke it.So: this we sure isn't everyone, and sometimes excludes me at least; and the knowing it did certainly wasn't a very comprehensive one.Satire, as it so often does, brings these ambiguities into the open: "We already knew the NSA spies on us.We already know everything.Everything is boring" ("We already knew," 2015).What has knowing ever done for us, anyway? 
These questions provoke and organise the present essay. I argue that what happened with Snowden was not a simple flip of the switch from collective ignorance to enlightenment. Rather, it is a question of what knowing involves. How do we develop belief about a surveillance system so vast it cannot be experienced by any single individual-and moreover, a surveillance system which consistently seeks to recede from lived experience? How is a "we"-that-knows interpellated, and how is this "knowledge" leveraged to authorise actions and opinions? It is often said that surveillance inherently violates our fundamental rights, and the public need only be informed in order to rise up against it. Others explain contemporary surveillance in terms of disempowerment, paranoia and anxiety (Andrejevic, 2013a; Bauman & Lyon, 2013; Browne, 2010). Yet for all their merits, such criticisms sit uneasily with the fact that most people have learned to live with their awareness of Orwellian surveillance. Whether one seeks to defend state surveillance or denounce it, the basic operation that underlies both is a "world-building". The facts and arguments are cobbled together to present a new intuition, a new common sense, about how this enormous technological apparatus runs our world. Hence the question: how do we develop a sense of contemporary surveillance as a world "out there"? In what follows, I first describe the recession of surveillance practices from the subject's lived experience. The gap created by this recession accentuates the role of speculation and belief. I then offer a conceptualisation of world-building vis-à-vis surveillance, drawing especially from phenomenology, affect theory and ritual theory. Finally, I discuss two common patterns in the Snowden affair discourse to indicate particular techniques of world-building. They are (1) subjunctivity, the conceit of I cannot "know" but I must act "as if"; and (2) interpassivity, which says I don't believe it/I am not affected, but someone else is (in my stead). These latter sections are based on ongoing research into the public discourse on the Snowden affair for a larger project. This essay draws on U.S. media coverage from June 6, 2013 (the date of the first leak) until March 14, 2014, focusing on prominent publications such as The New York Times and The Washington Post.1 It also draws on high-profile public statements, such as Edward Snowden's public appearances and statements by President Obama or NSA personnel. The essay's arguments arise from identifying the recession of surveillance, and techniques for coping with that recession, in this body of discourse.
1 All relevant coverage from the following publications was examined: New Yorker, The Atlantic, The Intercept, The New York Times, The Washington Post, Wired (all online). The Guardian was also included as an especially relevant publication that was read directly by many U.S. readers (which was not necessarily true of Der Spiegel, another key player in the affair). Some snowballing was also conducted on the data for this essay.
Recession

People should be able to pick up the phone and call their family, should be able to send a text message to their loved ones, buy a book online, without worrying how this could look to a government possibly years in the future. (Edward Snowden in Rowan, 2014)

The irony is that many of us-including those outraged by NSA surveillance-do call our family, buy books online, and sleep very well at night. A few months after Snowden's appearance, a Pew survey (Rainie, Kiesler, Kang, & Madden, 2013) suggested that the majority of Americans believe their privacy is not well protected by current laws. Yet in most cases, their response amounted to deleting cookies. If the gesture was hopelessly inadequate, it at least had the virtue of being convenient. This apparent contradiction arises from the recession of surveillance. In contrast to the flood of media reports, actual surveillance technologies systematically withdraw from our lived experience and "personal" knowability. The mantra for this situation: "I know they might be watching, Edward Snowden told me so-but I don't 'experience' it." We can first of all characterise this recession as technological. In a basic sense, all technology involves a withdrawal from sensory experience. Heidegger's hammer externalises human action and intention, and embeds it in a crafted object (Scarry, 1985). Computational technology often amplifies this recessive character. The smooth surface of the smartphone, even compared to the gears and chains on a bicycle, encourages us to forget the connections, dependencies and processes that maintain our environment-and should we remember, denies us easy access to that knowledge (Berry, 2011, Chapter 5). This is precisely the case with contemporary online surveillance. It is designed to operate behind the front-end user interface, sweeping up personal data out of human awareness. It interacts with the world-and us-in ways that our senses cannot access.2 Even the physical databanks are literally isolated in a giant data centre in the Utah countryside. This is in distinct contrast to, say, American police surveillance. In that case, the post-1970s period has seen techniques like house raids, court summonses, patrols, pat-downs and urine tests used to impose state power viscerally upon the (especially poor black) population (Goffman, 2014). If in police raids or airport screenings (Adey, 2009; Parks, 2007; Schouten, 2014) surveillance intrudes rudely upon one's space, habit, affect and body, programs like PRISM do the opposite. They evacuate every sign of their existence from lived experience. Consider the web beacon, commonly used in corporate/commercial surveillance. Also called a tracking pixel, it is a tiny (1 × 1 pixel), transparent object embedded into web pages to track user access. It is literally invisible to the naked eye, and the user may only discover it by bringing up the source code. Of course, even if I am informed of the existence of beacons and how they work, I quickly realise that it is impractical to comb through the source code of every page I visit. Momentarily armed with the power of knowledge, I surrender it again in favour of a deferred and simulated feeling-knowing: "I would be able to tell if a beacon is tracking me if I took the time to look."
The beacon illustrates the recession's epistemological properties. The subject is distanced from knowledge of surveillance at multiple levels. There is what we might call, in Rumsfeldian terms (Hannah, 2010), a "known unknown": I know that I will never know if an NSA agent has gone through my metadata. Then, there is the "unknown unknown": Snowden has revealed programs like PRISM and XKeyscore, but given the apparently enormous quantities of documents in Snowden's hands, and given that Snowden himself won't know everything, I now know that I am unlikely to ever know what I don't know about my vulnerability to surveillance. In Kafka's The Trial, what strikes Josef K. is not the fact that he is charged with serious crimes; it is that, despite every desperate attempt, the inscrutable bureaucracy yields no knowledge of what he is charged with and why. Certainly, Snowden's revelations have provided new information about state surveillance; "we" can say we "know" more than we did before. But we can see that this knowing can actually contribute to the recession of surveillance. One ironic aspect of this recession is that most of us experience discourse about surveillance more than surveillance itself-a situation we also find with respect to globalisation (Cheah, 2008) and the nation-state (Anderson, 1991). Surveillance becomes available for talk and thought precisely as an estranged and phantasmal object. Through public, mediated discourse surrounding the Snowden affair, we make this surveillance into something knowable and sensible-even if the kinds of beliefs produced here are not strictly reducible to objective fact. This is what I mean by world-building activity. It is the interpellation of the surveillance society as a world "out there". Recession and world-building are intertwined. The former emphasises what we do not and cannot "know" for ourselves. The latter is how, despite this gap, we try to make some sense of the world we find ourselves in. Surveillance hides from us, but we cannot help but talk about it endlessly.

The "Out There"

Our ability to render surveillance society comprehensible is predicated not (only) on objective proof and available facts, but on conventionalised ways of putting what we know together with what we don't know; ways of forming a coherent, though often inconsistent, picture. As noted above, many Americans-through media like Pew surveys-claim they are concerned about surveillance and often feel unsafe. At the same time, this same public has exhibited a clear willingness to live in and with this surveillance society, in many (not all) cases declining to take revolutionary or directly political action in response to the Snowden leaks. It is not sufficient to presume false consciousness, an illusory daze maintained by a clever concoction of ideology, misinformation and obfuscation. Studies into risk perception have shown that becoming better informed does not necessarily correlate with a stronger perception of dangers-or with concrete actions taken to mitigate them (Douglas, 1992; Wildavsky & Dake, 1990, pp.
31-32).A similar sentiment is now being expressed by surveillance and privacy scholars.Subjects can know very well their rights are being violated and live with that violation (Andrejevic, 2013b;Mansell, 2012;Turow, 2013).The key is not to seek to unravel this "contradiction" into a consistent explanation, one which would supply us with a "worldview" with a singular internal logic.Subjects, Lauren Berlant tells us, are surprisingly good at managing their affective incoherence and disorganisation, and defending it in their own terms (Berlant & Edelman, 2014, p. 6;Berlant & Greenwald, 2012).When my firm belief in control over my life is challenged by news of state surveillance, or when my habituated attachment to new media bristles against my political views, I do not always respond with bold and sweeping changes to smooth out the differences.Rational consistency is often not our highest priority.Instead, what emerges is a set of platitudes, "common sense" wisdoms, habits, turns of phrase, speculative beliefs, recited facts, which support precisely the contradictions I have already come to embody.Now, this line of thought must be distinguished from older modernist denigrations of "primitive" beliefs.Those were presumed to be an amalgamation of non-scientific mistakes taken as eternal truths-thus explaining their resistance to rationalist "demystification".Here, it is a question of making knowledge of the world work for what subjects can't help but know, face, and deal with in their present lives.In short, to study world-building activity vis-à-vis surveillance is to understand how we cope (Berlant, 2011) with our own persistent living while under exposure to a relentless program of observation. This isn't to say that nothing can knock us off from our serene perch.Crises happen-sometimes erupting in political and psychological drama, sometimes undoing social cohesion or individual well-being quietly in the backstage.Surveillance, too, can sometimes confront subjects violently and threateningly.The world-building perspective is to explain how things "work"not perfectly, but sufficiently-in those times when crises don't happen, or when (possible) crises become dampened into compromises and apologies.The Snowden leaks certainly did challenge our previously built worlds.For some, it really was a crisis, driving them to explicit changes in behaviour.But many subjects also found ways to restore normalcy precisely by responding to new narratives and events, and rebuilding their positionality vis-à-vis the world out there. That is what we have always done, after all-even back when "we knew already".A great deal of knowledge about U.S. state surveillance had been available to the public before 2013.But this "we" had stumbled on ways to keep that knowledge sequestered in a dusty corner, a largely negligible and rather conspiratorial fact about "politics these days". 
What these world-building responses suggest is that we have multiple ways of "knowing" and "believing".Indeed, those very terms do not do justice to that multiplicity.What does it mean to "know" when a teenager says "I know what I learned in school today", but can't articulate it to the expectant parent?What does it mean to "believe" in God but nevertheless demand scientific proof of his existence-or, inversely, accomplish my "belief" by submitting to Pascal's wager?As Žižek might quip, we know many "truths", but truths we are willing to die for, which we believe in absolutely in any circumstance, are all too rare.This is easier for us to grasp when we consider a nonmodern case.The Dorzé people in Ethiopia believe leopards are also Coptic Christian and observe fasting days prescribed by the religion…and on those fasting days, they will take care to protect their livestock from hungry leopards, as they've always done (Veyne, 1998, pp. xixii).They see nothing strange in this.Similar cases abound in anthropological writings.The Nuer believe twins are birds, which is distinct from saying birds are twins or that this twin is a bird (Douglas, 2001, p. 148). The key is to take on such contradictions not as mistakes or ignorance but as genuine world-building techniques.Or again: Merleau-Ponty (2012) argues that mythology or madness is not a case where our objective connectivity with the material world is underdeveloped or broken.Rather, a mythological explanation or a schizophrenic's hallucinations, for those subjects, involve a way of perceiving and understanding the world that is just as intuitive and genuine as our relationship to science, visual phenomena or speech.A schizophrenic woman believes two people with similar looking faces must know each other (Merleau-Ponty, 2012, pp. 298-299).This is an abnormal wiring of world-building capacity, but one which makes life possible and sensible for this woman in the same way something like physiognomy did normatively for 19 th century urban dwellers (Pearl, 2010).The normal is full of arbitrary connections, too; one example is confabulation, or the pre-reflective and non-deliberate fabrication of personal memory that appears to occur in spontaneous ways to achieve self-understanding of what just happened (Orulv & Hyden, 2006). In short, not only are our worldviews often complex and contradictory, we are also able to hold a plurality of relationships to the world out there through these flexible ways of knowing and believing.Why do we do this?Because contradiction and even incoherence can often be of great use in our ordinary living.Sometimes it's a matter of convenience, or of persuading others (and myself), of saving face.Sometimes we persist in some kind of belief because to jettison it would change our own image of ourselves unacceptably.The "effect" of a truth or belief is thus entangled with its "cause".To accuse such activity of inauthenticity is to miss the point.Such multiplicity is often critical to our ability to cope with our lived reality.It is what gives the subject the power to stay cohesive across the battery of situations and challenges it faces each day and hour-to maintain a feeling that despite everything, the world continues to make some minimal sense. 
The next two sections will discuss concrete ways in which such world-building is taking place in the wake of the Snowden affair.They analyse how public debate is producing various narratives of the new surveillance world, and importantly, what specific ways of knowing and believing are involved in such production.The mass media plays what we might call a ritualistic role in this process.Media has been classified as ritualistic in the sense that media activity itself is often calendrical and collectively coordinated for effects of "liveness" and participation (Dayan & Katz, 1992).This effect is not reducible to the symbolic content of media coverage.Even if not everyone watches the same television program, even if interpretations of specific messages differ, even if some may not take media reports of the dangers of surveillance seriously, mass media have a phatic effect.The rhythmic pattern with which they take a place in our everyday life produces in itself a sense of connectivity to a wider world (Frosh, 2011).This phenomenological relationship enjoins the public not to swallow whatever they are told by the television anchor, but to continually adjust their positionsceptical, believing, critical, supportive-relative to media representations (Carey, 1975).It is on this basis that media performs itself as a "centre" of society, one which provides "transcendent patterns within which the details of social life make sense" (Couldry, 2003, p. 3).In other words, the media is less an indisputable source of factual statements about the world, than it is a repository of themes, topics and interests against which we form our beliefs about how the world works.One might decry surveillance coverage in the media as conspiratorial nonsense (of the Left, of the Right, of the American government, or the Russian one…) and disbelieve it; but that very move often entails trusting that coverage as some reflection of what "people out there" believe. This leads back to the "we" of "we knew already".Insofar as my sense of the surveillance world is framed in relation to what I believe is the public understanding and experience of surveillance, the public "out there" becomes an essential part of this mediated worldbuilding.Indeed, the modern public, from its very inception in the age of the printing press, has always had a virtual and imagined quality.After all, what I can see and hear on my own is always only a small part of that human multitude, one which extends into the "out there" as an indefinite set of strangers (Eisenstein, 1980;Tarde, 1969;Warner, 2002).We learn to authorise ourselves to speak on the public's behalf, or at least, presume what the public thinks and knows, in order to produce our own positions (Bourdieu, 1979;Hong, 2014).Media discourse, insofar as it is a ritualised promise of a "centre" of society, instructs its audience not only on what the public allegedly is, but how to relate to the public as an object of knowledge and belief (Fraser, 2006, pp. 155-156).Media discourse is thus the site where multiple ways of knowing and believing are expressed and legitimated, and it is on this basis that we are able to build a sense of surveillance as the world "out there".We now move to two specific patterns: subjunctivity and interpassivity. 
Subjunctivity

Your rights matter because you never know when you're going to need them. People should be able to pick up the phone and call their family, should be able to send a text message to their loved one, buy a book online, without worrying how this could look to a government possibly years in the future. (Snowden in Rowan, 2014)

I buy fire insurance ever since I retired, the wife and I bought a house out here and we buy fire insurance every year. Never had a fire. But I am not gonna quit buying my fire insurance, same kind of thing. (James Clapper in Lake, 2014)

"You never know" is the ominous mantra that grounds both the claims of Edward Snowden, whistleblower, and of James Clapper, the U.S. Director of National Intelligence. "You never know" invokes a looming: a threat that is nothing yet, but is very much real in its existence as potential (Massumi, 2005, p. 35). The Orwellian future where you might be punished for your ordinary actions today; the apocalyptic scenario when terrorism happens to you and your family. That which by definition cannot ever be made certain is invoked as presumptively real in order to legitimise action-whether for or against state surveillance. This is the as-if, the subjunctive. Grammatically, the subjunctive mood is the flotation of a non-true statement: "if I were…" This very construction produces an ambiguity, a split construction of "belief". Such constructions sustain a state of affairs which is neither mere illusion nor fully believed to be true. In the Snowden discourse, we find the paradigmatic formulation of subjunctivity to be the as if: we must act, think, feel, believe, as if I am personally under watch, as if terrorism is about to happen to me, as if surveillance does help us prevent terrorism. In other words, the subjunctive involves a two-pronged handling of knowledge and belief, and this very ambiguity is what lets us leverage the unknown: "Yes, we don't know if it's true or not, but we have to pretend it is true". It is telling that one of the few scholarly fields where subjunctivity is commonly discussed is science fiction studies, centred on the work of Samuel R. Delany (1971). Although deployments of subjunctivity do not materially count as "events", these characteristics mark them as highly ritualistic. Rituals have been called "time out of time" (Rappaport, 1999, pp. 216-222). They are moments when we collectively say, wait: let us step out of our rules and rhythms of life for a moment, so that they may be renewed and reaffirmed, or even adjusted with localised change (such as the change in status of an individual member in a rite of passage). Similarly, the as-if is a way to step into a liminal (Turner, 1982) zone in one's thinking and believing, but one which is then sutured back into one's assessment of "reality". It is a way for us to deal with our ignorance, our uncertainty, and other ways in which our present, and ourselves in that present, disappoint us. It is a way to cope with the imperfections and vulnerabilities of our exposure to power and danger.
This subjunctive turn in surveillance has been subject to much commentary. In the risk literature, it is described in terms of "precautionary" or "catastrophic" risk-enormous uncertainties of climate change and terrorism which outstrip the industrial risks of factory disasters and chemical contaminations (Aradau & van Munster, 2007; Ewald, 1993). Surveillance studies frequently references Brian Massumi's (2007) pre-emptive logic: a radicalisation of traditional causality and proof in a world of pure potentiality. My account does not necessarily supplant or contradict these theorisations. Rather, it emphasises the world-making aspects of subjunctive logic; a world-making which is capable of supporting both pro- and anti-surveillance attitudes. The first type of as-if that permeates our present relationship to surveillance is the uncertainty about whether I am being watched at all. This effect is created by the juxtaposition of an apparently enormous and pervasive surveillance system and, given its recession, the fact that the surveilled subject will rarely know if they have ever been "watched" by a human agent. Surveillance becomes a Deleuzian virtual. For Snowden and other opponents of NSA surveillance, it is critical to overcome this felt recession if the public is to "build" a world where surveillance is a keen danger. Ironically, this task is undertaken by combating another kind of as-if. Anti-NSA discourse consistently interpellates an imagined public, one which presumably thinks it is safe as long as it has not done anything "wrong". A New York Times op-doc, "Why Care About the N.S.A.?", opens thus:

Narrator: I want to get your response to a few things people typically say who aren't concerned about recent surveillance revelations.
David Sirota: Nobody is looking at my stuff anyway, so I don't care? My argument for that is if you don't speak up for everybody's rights, you better be ready for your own rights to be trampled when you least expect it. First and foremost, there are so many laws on the books, there are so many statutes out there, that you actually probably are doing something wrong…. So when you start saying I'm not doing anything wrong…you better be really sure of that. (Knappenberger, 2013)

Sirota's warning is accompanied by a dizzying array of legalese in flight (Figure 1). By shifting the subject's gaze onto the bureaucratic and technological depths which almost entirely lie beyond everyday experience, the subject is divested of the ability to confirm or deny his/her own safety. This is distinct from the simple claim that we are not safe. It is (also) the claim that we do not have the ability or resources to tell in the first place. The projected "common-sensical" subject is appealed to through an indeterminate "what if" situation, and implicitly, the argument is made that since the "what if" is particularly unsavoury, it should be treated as an "as if". Thus the reality of surveillance is impressed upon the subject not by recovering concrete surveillance practices from their recession, but by expanding their virtual dimension into an enormous, totalitarian as-if. Snowden and his sympathisers argue they are informing the public. True. But what they are also doing, above and beyond that, is modulating an imagination which is necessarily in excess of the information strictly available.
This same technique is applied to the objective of surveillance itself: the threat of crime, and especially of terrorism.James Clapper quips that PRISM is no different from fire insurance.But insurance developed its appeal by quantifying fearful indeterminacy into percentages and premiums.The strategic use of disaster statistics and risk percentages could claim to provide a stable and objectively factual knowledge of danger and vulnerability.This is decidedly not the case with post-9/11 surveillance (Beck, 2009;Ewald, 1993).Terrorist attacks are sometimes analysed statistically, but their relative scarcity makes it difficult to draw convincing conclusions.The danger of being surveilled or falling victim to a terrorist attack is generally not parsed in terms of estimable "risks" (at least, not in public debate).As has been extensively analysed (and criticised), U.S. surveillance and anti-terror policy following September 11 has been predicated on the idea that even one attack is too much, and even one percentage a chance is too great (Aradau & van Munster, 2007;Cooper, 2006;Hannah, 2010).The proponents of state surveillance thus rely on the same "excessive" designation of the as-if.One key metaphor for NSA surveillance programs has been the dragnet, traditionally used to describe police activities like location-wide stop and frisks.The dragnet indiscriminately collects data on the innocent as well as the suspicious, highly relevant data as well as irrelevant ones-because the innocent can always turn out to be the criminal, and the most irrelevant piece of data may help triangulate his/her identity.Within this rationality, surveillance is not, strictly speaking, proven to be necessary by past terror attacks or present identification of concrete dangers.Proof is always deferred: we must act as if the efficacy of this program has been proven by a danger which, if we are right, we will prevent from ever actualising. 
Subjunctivity is one name for how public figures present the world of surveillance.Importantly, this presentation is also a part of public subjects' wider, lived relationship to that complicated and distant world.And crucially, our relationship to the media discourse on surveillance itself becomes subjunctive as we try to navigate this tangle of complex and often contradictory claims.How can we produce a picture that makes sufficient sense to us, and how can we say to ourselves that we "know enough" to act, to not act, or at least to have an opinion about the whole affair?For instance, the subject's ability to assess the legality of surveillance becomes challenged by his/her experience of this discourse.Snowden's revelations were, at least, generally accepted in the media as solid, reliable information about the technical process of NSA surveillance.However, the precise legality of each given practice, and indeed, the question of who actually knows about and guarantees each practice, is explicitly designated as uncertain.As one headline put it: "You'll Never Know if the NSA Is Breaking the Law" (Bump, 2013).On one level, as David Sirota did above, it is suggested that there are so many different programs, legal decisions, secret courts and procedures involved, the public as a whole will "always" be left uncertain as to if the letter of the law is really being broken.On another level, we cannot presume that the reading public is a homogeneous mind with full access to every piece of information made available to them.The "we" of "we knew already" does not exist in such a form.Most subjects are likely to experience a partial picture, based on their limited reading and recall, of conflicting arguments and claims made in public.One may not keep up with every Snowden leak, tell apart XKeyscore from PRISM, or even understand exactly what counts as metadata and what doesn't.But it is more than possible to take away a general picture: the idea that the legality of surveillance is uncertain, and that any opinion or action we take will have to happen in abeyance of that knowledge. 
What these situations suggest is that information often begets uncertainty, and in turn, provokes subjunctive responsivities.It is indisputable that Snowden's leaks have increased the total amount of knowledge we collectively hold about NSA surveillance.But the more Snowden reveals, the more cause we have for paranoia and uncertainty-an ironic reversal of Shannon's law of information.When we learn that the NSA monitors video game chatter for terrorist activity (Ball, 2013), it does not provide reassurance that we now know everything there is to know about that sordid affair.Rather, it gives us license to believe that if such a thing is true, surely many more things might be as well.Table 1 lists only the major additions to "our" knowledge of NSA surveillance between June 2013 and March 2014.It is quantitatively beyond what most subjects can afford to give full attention to.Indeed, the sheer number of documents Snowden has been said to possess-1.7 million by one count (Kelley, 2013)-makes the Snowden files themselves an inexhaustible and virtual repository of new revelations, just like the NSA's portfolio of surveillance technologies or the manifold dangers of the post-9/11 world.As with the question of legality, many subjects proceed with a general awareness that there is a plethora of leaks, without a firm grip on each leak or what they concretely add up to.Mary Douglas once asked: why do experts insist on educating the public about issues like climate change?Don't they realise that the more information becomes available, the more possible interpretations arise, and the more intractable a sensitive topic becomes?(2001, p. 146) To this, we might add: don't they know that information can feed speculation, rather than extinguish it?The Snowden leaks have provided additional ingredients for feeling uncertain and vulnerable.Whatever political position (including apathy and a "wait-and-see" prudence) one chooses, whatever imagination of surveillance one subscribes to, it must be predicated on an uncertain and receded reality that one chooses to overcome through the "as-if". Finally, the subjunctive experience even extends to cases where subjects do try and take concrete steps to protect themselves from surveillance.While Edward Snowden espouses the benefits of programs like TOR, he admits: You will still be vulnerable to targeted surveillance.If there is a warrant against you if the NSA is after you they are still going to get you.But mass surveillance that is untargeted and collect-it-all approach you will be much safer [with these basic steps]. 
("Edward Snowden SXSW," 2014)

Nearly every privacy solution recommended today comes with such caveats. As the concerned public flocked to existing privacy solutions, one VPN (Virtual Private Network) developer-a common alternative to TOR-commented:

If you're concerned about surveillance agencies such as the NSA, their capabilities are shrouded in secrecy and claiming to be able to protect you is offering you nothing but speculation. (Renkema, 2014)

Table 1. Major revelations on NSA surveillance, June 2013-March 2014.
14.3.12 Leak: NSA "Expert System" for malware implants allegedly planned
14.2.10 Leak: NSA metadata & geolocation helps drone attack
14.1.27 Leak: NSA uses "leaky" mobile apps
14.1.16 Leak: NSA collects millions of texts
13.12.13 Leak: NSA cracks cell phone encryption for A5/1 (2G standard)
13.12.10 Leak: NSA uses cookies to spy
13.12.9 Leak: NSA uses video games to spy
13.12.4 Leak: NSA collects 5 billion phone records per day
13.11.26 Leak: NSA spies on porn habits
13.11.23 Leak: NSA "Computer Network Exploitation" infects 50k networks
13.11.14 Leak: CIA collects bulk international money transfers
13.10.31 Leak: NSA hid spy equipment at embassies & consulates
13.10.30 Leak: NSA attacks Google & Yahoo data centres
13.10.24 Leak: NSA tapped 35 world leader calls
13.10.21 Leak: NSA spied on Mexico's Calderon, emails
13.10.14 Leak: NSA collects US address books, buddy lists
13.10.4 Leak: NSA can hack Tor
13.10.2 Leak: NSA stores cell phone locations up to 2 years
13.9.30 Leak: NSA stores metadata up to a year
13.9.28 Leak: NSA maps Americans' social contacts
13.9.16 Leak: NSA "Follow the Money" division tracks credit card transactions
13.9.7 Leak: NSA can tap into smartphone data
13.9.5 Leak: NSA attacks encryption standards and hacks
13.8.29 Leak: US intelligence "black budget"
13.8.23 Leak: NSA employees spy on ex-lovers
13.8.15 Leak: NSA internal audit shows thousands of violations
13.7.11 Leak: XKEYSCORE program
13.7.10 Leak: NSA "Upstream" fibreoptic spying capacities
13.6.30 Additional PRISM leaks
13.6.19 Leak: NSA "Project Chess" for Skype
13.6.17 Apple, Microsoft, Facebook release details
13.6.16 Leak: NSA spied on Medvedev at G20, 2009
13.6.11 Leak: BOUNDLESS INFORMANT for surveillance records globally
13.6.10 Snowden named
13.6.9 Leak: NSA record/analysis tool
13.6.7 Leak: "Presidential Policy Directive 20" for cyberattacks to foreign targets
13.6.6 Leak: PRISM revealed

In other words, the subject's feeling safe enough is predicated on his/her ability to live on as if whatever tools they have chosen (including none) have provided sufficient protection against this unknown and silent risk. After all, one will never know if one's privacy was in fact compromised. The lived experience of interacting with privacy tools also contributes to this subjunctive situation. Consider AVG PrivacyFix (Figure 2), one of many simpler tools which promise to protect against (in this case, corporate) surveillance. It is all too easy: a few clicks, yellow and white symbols flashing into a reassuring green, and one is allegedly safer. Certainly, some of this software at least does provide some real mitigation against major surveillance techniques. But for any subject who is not particularly well informed or technologically savvy, the experience of using these programs is often a simulation of safety: a simulation against the inscrutable backdrop of a receded world. And so, even the subject who does "everything possible" to guard against surveillance must subjunctively reassure him/herself that "everything possible has probably been done".
The as-if is a technique for leveraging the receded, virtual enormity of "surveillance" to produce a presumptive basis for knowing and believing. Such knowledge or belief is ambiguous and complex. One acknowledges the probabilistic or speculative nature of one's own belief, but simultaneously applies a practical-and sometimes even moral-injunction that hardens this belief and qualifies it for speech and action.

Interpassivity

Interpassivity originally arose from art and media theory as a response to the dominion of "interactivity" (Pfaller, 2003; Scholzel, 2014; Van Oenen, 2002). It is now applicable as a more general conceit: "not me, but another for me". Someone else believes, so that even if I do not, it remains a kind of "truth" (Žižek, n.d.). I Xerox a book or VCR a television show, and become satisfied that I have nearly consumed it; in a way, the machine has "watched it for me" (Pfaller, 2003). This deferral, this "outsourcing" (Van Oenen, 2002), has numerous practical uses. Interpassivity allows us to maintain beliefs which may not be supported by our own behaviour, identity and environment. I don't believe Obama is Muslim, but there are people who do. I don't find this content morally offensive, but other people might. In such cases, the interpassive articulation excuses the subject from being bound to the belief in question, even as that belief is hypostatised into reality, thereby forming a reliable basis for opinions and actions. Indeed, in some cases, "delegating one's beliefs makes them stronger than before" (Pfaller, 2001, p. 37): my beliefs now appear as objective fact, something I cannot dismiss as a mere flight of fancy. We are familiar with this mechanism, of course, in the work of rumour. The conceit "I have heard it said elsewhere" holds the truthfulness of the rumour in constant suspense, adding to its resilience. I cannot vanquish a speaker who is there in absentia. It is critical to understand what kind of "belief" is at stake in an interpassive movement. When I "act as if the Xerox machine were reading the text [for me]" (Pfaller, 2003), clearly, I do not "literally" believe that I have read the book. But I may well derive satisfaction from the act; a satisfaction that says "it is almost as if I have read the book, since I can now read it at any time I choose." When canned laughter laughs "for me" in a television sitcom, I do not look back and say "I now need not laugh." But, as Žižek (n.d.) points out, the experience can often leave me feeling "relieved" and rested afterwards. Such satisfaction is not necessarily reducible to false consciousness or pathological misrecognition. Interpassive techniques are ways for subjects to navigate a world which is so often alien to them, a world which they must nevertheless and constantly articulate as sensible and reliable. We employ interpassivity on a daily basis because it is a way to cobble together some understanding of politics, technology and public opinion, in the face of the harsh fact that so much of it exceeds our own experience and environment. The very ability to believe in surveillance as a part of our world is predicated on some noncongruence, some difference, between my "here" and the "out there".
Interpassivity was commonly leveraged in the Snowden affair to mitigate precisely the recession of surveillance practices and knowledge from public debate.Indeed, certain "knowns" were quite explicitly evacuated out of the public domain and designated as "known elsewhere": Here's the rub: the instances where [NSA surveillance] has produced good-has disrupted plots, prevented terrorist attacks, is all classified, that's what's so hard about this.(Dianne Feinstein in Knowlton, 2013) Feinstein and others insisted that the fruits of surveillance could not be proven publicly, lest that too endanger national security.Although one or two concrete cases have been mentioned (such as Najibullah Zazi's 2009 plot), the general trend was to claim that proof, too, was classified for the sake of security.Notably, these claims do not simply place the public in ignorance of "all the facts"; they demand that public deliberation take place in full awareness of that ignorance. It becomes impossible to simply say "the benefits of surveillance have not been proven", since proof has been publicly designated as existing elsewhere.Feinstein's apology asks the reading public to actively hold their judgment in abeyance, or to be precise, make their judgment by simulating what someone else knows in their stead.All this is compounded by admissions that even the special court tasked to know in our stead-a court that is itself secret-also judges in ignorance.Reggie Walton, the presiding judge of that very court at the time, explains: The FISC [Foreign Intelligence Surveillance Court] is forced to rely upon the accuracy of the information that is provided to the Court…the FISC does not have the capacity to investigate issues of noncompliance, and in that respect the FISC is in the same position as any other court when it comes to enforcing [government] compliance with its orders.(Leonnig, 2013) The public is thus deprived of even the comforting thought that the law or the government "knows" in its stead.Rather, it is an indistinct other, dispersed and elusive, which promises to guarantee that surveillance indeed has been proven.This makes the interpassive movement fragile and speculative.When expert knowledge is stably instituted, the public can feel that it may reliably defer the work of knowing to those experts, and build a sensible world out of what the public itself does not know (Beck, 1992;Giddens, 1990).When expertise itself is threatened, as in climate change or the Snowden affair, the subject must make sense of what is happening in a more speculative and, indeed, subjunctive manner: "I don't know what the proof is, but if we presume for a minute that the proof is…" Even a cynical stance, which assumes that Fein-stein and others are lying and there is no proof at all, requires some presumptive position to be taken against the knowledge that another has "for" me. Certainly, the subject is not always forced to pretend to some knowledge of surveillance, interpassive or not.Nina Eliasoph's ethnography of Americans' everyday discussion of politics describes communities which consistently shy away from talking politics.When Eliasoph herself brought such topics up, it was seen as "an inert, distant, impersonal realm" too hard to get a handle on.It was a shame that political problems happened, and the "public" should do something about it-but that "public", the people who ostensibly knew enough to debate the problem, were not them (Eliasoph, 1998, pp. 
131-135).Even the refusal to have an opinion was qualified by the interpellation of an other who participates in publics in my stead.The recourse to interpassivity is not reducible to voluntary "choice" by an autonomous agent.It is a responsivity demanded by a situation-a situation which comprises of the recession of surveillance, including the logic of secrecy and security folded into the debate. Not only can the other know for me, but they can also do and experience for me.Since surveillance's pervasiveness far outstrips the highly infrequent occasions on which it intrudes tangibly into individual lives, interpassivity becomes a key technique by which a given political and affective orientation becomes fleshed out into our reality: My older, conservative neighbour quickly insisted that collecting this metadata thing she had heard about on Fox was necessary to protect her from all the terrorists out here in suburbia.She then vehemently disagreed that it was okay for President Obama to know whom she called and when, from where to where and for how long, or for him to know who those people called and when, and so forth.(Van Buren, 2013) One might read this as typical liberal snarkiness about the cognitive dissonance of a stubborn conservative.But the general sentiment that there are people out there, "bad things" happening out there, that need to be watched and stopped is far from an abnormal one.Hence my own feeling of safety, my own ability to imagine a safer world, arises from a situation where someone else is surveilling someone else-myself, not being "that kind of person", one degree removed from the whole unpleasant affair.Indeed, interpassivity does not stop at projecting "probable factual events"; it also leverages downright fictional others.The non-news, fictional media thus participate in the ritualistic function: Great Britain's George Orwell warned us of the danger of this kind of information.The types of collection in the book-microphones and video cam-eras, TVs that watch us-are nothing compared to what we have available today.We have sensors in our pockets that track us everywhere we go.(Edward Snowden in "Whistleblower Edward Snowden gives,"2013) Snowden's comparison might have been a little redundant.Sales of Orwell's 1984 had already rocketed by some 6,000% after his initial leaks in June (Hendrix, 2013).Of course, one cannot claim that the public flocked to Orwell, Dick and Huxley in order to take them literally as prophecy.But such fictional work clearly served as resources for making sense of the confused present and the uncertain future.Some of this imaginative media also intersected the contemporary surveillance debate with an older tradition of representing crime and police work.Jonathan Nolan's Person of Interest debuted in U.S. television in 2011, two years before the Snowden leaks.The series was nevertheless conceived through extensive consultation of U.S. state surveillance practices as was known and estimated at the time (Gan, 2013).The popular series presented the public with an NSA-style dragnet which "spies on you every hour of every day", which the protagonist would use each episode to track down individuals before they became perpetrators or victims of violent crime.On one hand, Machine", Person of Interest's mass surveillance program, is clearly based on and evocative of U.S. 
state surveillance, providing the public with a simulation of hypotheticals.On the other hand, its show structure necessarily produces a world where urban crime of every kind proliferates and may strike any individual without notice.George Gerbner's famous cultivation theory suggested that media can have long-term, sedimented effects-that it can train people into presuming phenomena that lie beyond their own lives in order to, say, develop a heightened fear of criminal victimisation.This is not to say that Person of Interest is alarmist.The point is that insofar as terror and crime are not everyday realities for many (not all) of the population, we turn to fictional as well as strictly journalistic representations to develop an idea of what we can only assume is happening "out there".Nobody believes a television show is objectively true.But we often do leverage it for our world-building-just as we leverage the presumed opinions and actions of "others", and just as we leverage facts and statements we do not fully believe and cannot quite confirm. Feeling-Knowing Contemporary online surveillance is one which recedes in multiple ways from lived experience.This recession accentuates surveillance society's quality as a world out there: a vast, virtual entity which constantly eludes our knowing and living.Yet it is something which we invest a great deal of belief and passion into, cobbling what we know and suspect into a picture of a sensible, working world.The mediated public discourse on the Snowden affair exhibits two major techniques of such world-building.First, it leverages the virtuality and unknowability of surveillance as if it were in some way true and certain, producing hypothetical, provisionary bases for real, enduring actions and beliefs.Second, it encourages the notion that if not me, then another will know, experience, do in my stead.Even if the world of surveillance and terror is not real in my back yard, these interpellated others will make it real enough for me.The idea of the "public" or "society" provides a vast landscape of deferrals and potentials, a protective ambiguity for my political beliefs. 
We began with a rhetorical question: "we knew already", didn't we?Well, what has knowing ever done for us, anyway?What matters at least as much as what we know or not, is what kind of knowing and believing has allowed us to engage that information.It is about what, affectively and epistemologically, it means to say 'I know'.Much has been made of the secrecy that surrounds state surveillance-the arcana imperii-and even corporate data-mining operations.The debate over Snowden as hero or traitor also revolves around this opposition of secrecy and transparency.Scholarly commentary often laments the ambiguous, uncertain and impoverished kinds of information the public is offered about surveillance.All of this is undoubtedly significant.But what this essay suggests is that we must also understand what techniques, what habits, of knowing and believing proliferate and become legitimated in this political environment.What wirings of narrative arcs, tropes, stereotypes, emotive associations, come into play in the discourse, images and practices of the surveillance society?It cannot simply be unrestrained paranoia or dangerousness.We use these symbolic ingredients not only to become afraid or suspicious, but also to cope with our subjection to surveillance, to make our daily routines and affects still make sense in this new world order.This line of questioning asks not what we know, but how we come to feel we know.And ultimately, it asks whether, given different circumstances, we could have a different relationship to knowing and believing surveillance.
11,235
2015-09-30T00:00:00.000
[ "Philosophy" ]
Study on the Intention of Private Parking Space Owners of Different Levels of Cities to Participate in Shared Parking in China The implementation of a shared parking program can effectively increase the utilization rate of existing parking space resources. At present, shared parking has not been widely practiced in China, and a prerequisite for its implementation is whether the private parking space owner group can quickly and widely accept shared parking. In this study, considering the differences in economic development, urban planning, and parking pressure in cities of different levels, the theory of planned behavior and the benefit-risk perception model (C-TPB-BRA) are combined as the theoretical framework to explore the intention to share parking space from the perspective of the owners of private parking spaces in cities of different levels. Based on China's empirical data, structural equation models are built to verify the hypotheses proposed. Our results show that (a) the intention of private parking space owners in different levels of cities to participate in shared parking and the mechanism of action of the psychological factors are different, and not all psychological factors have a direct impact on the intention to share. In first-tier, second-tier, and third-tier cities, Subjective Norm (SN) and Perceived Behavioral Control (PBC) indirectly affect Behavior Intention (BI) through Attitude (ATT), Perceived Benefit (PB), and Perceived Risk (PR). In the fourth-tier cities, SN and PBC directly affect BI. Except for BI, other psychological factors influence each other significantly; (b) the psychological factors affecting the intention to supply shared parking spaces in first-tier, second-tier, third-tier, and fourth-tier cities, respectively, are PB > ATT > PR, PB > PR > ATT, PB > PR > ATT, and PB > SN > PBC > ATT > PR. Our research results could help determine the internal factors that affect the intention of parking space suppliers to participate in shared parking and their mechanisms of action, and on that basis, our findings could also help governments and platform operators to promote shared parking development plans. Introduction In recent years, the economy has developed rapidly and the number of motor vehicles has increased greatly, but infrastructure construction and management have not been correspondingly improved, so the contradiction between the supply of and demand for parking has become increasingly prominent. In addition, according to relevant statistics [1], in the context of sudden public health incidents such as COVID-19, citizens are more concerned about the hygiene of public transportation and shared bicycles, resulting in a further reduction in the proportion of public transport trips and an increase in the proportion of private car trips, which will further intensify parking needs. In addition, Arnott [2] pointed out that, in Boston and some major European cities, more than 50% of cars need to find parking spaces during peak hours. The research of Shoup [3] pointed out that if each parking activity requires three minutes to find a parking spot, the cruising mileage of each vehicle increases by about 1825 kilometers per year. Besides, the lack of parking spaces can also lead to illegal parking activity, increased time costs caused by queuing and waiting, etc., thereby exacerbating carbon dioxide emissions [4]. This situation is more serious in many cities in China. Zhao et al.
[5] constructed a quantitative model to evaluate the emission reduction effect of the implementation of the shared parking policy. e results show that 120 shared parking spaces in Beijing can reduce about 400 tons of carbon dioxide emissions a year; if 20% of the existing parking spaces in Beijing are shared, every year carbon dioxide emissions can be reduced by up to 7.3 million tons. Ayala [6] found that more than 3.1 million gallons of gasoline was wasted and more than 48,000 tons of carbon dioxide was emitted due to the search for parking spaces in Chicago. erefore, if the problem of parking difficulty can be solved, the parking pressure can be effectively alleviated, the driver's time to find parking is greatly reduced, and the environmental pollution caused by vehicle emissions can be alleviated. In addition to the contradiction between supply and demand of parking, another prominent manifestation of the current urban parking problem is the inefficient utilization of parking resources, which is mainly reflected in the imbalance in the space-time utilization of parking resources. For example, parking spaces in office areas are usually vacant at night and on weekends, while parking spaces in residential areas are often vacant during workdays during the day, which also provides an opportunity to meet parking demand without the need to build more parking lots [7]. According to relevant statistics, 485,000 parking spaces in Hong Kong are designated for private use, accounting for nearly 70% of the total number of parking spaces; Beijing's residential parking resources account for 58.1% of all parking resources, during working hours nearly 800,000 private parking spaces have been left unused [8], and because most urban residents work inconsistently with their homes, parking spaces in residential areas have been unused during the day on weekdays. If the spare time of these private parking spaces can be used effectively, the parking problem can be greatly alleviated. In recent years, the concept of shared parking has been proposed, the basic idea of which is that the parking space owner sells parking permits for the idle period of their parking spaces to public users on the electronic parking platform [9], and travelers with parking needs can purchase a parking permit through the parking platform. e relationship between supply and demand is shown in Figure 1. Some cities have already experimented with shared parking, but private parking space sharing in residential areas is still in its infancy. Most urban residents do not know much about shared parking in residential areas, and the number of users of each shared parking platform is small, so participation in shared parking is far from enough. erefore, it is very important for urban planning and parking management to understand the decision-making mechanism for people to accept shared parking. In the context of the sharing economy, previous behavioral research mainly focused on the behavioral intentions of demanders. However, it must be noted that, in the sharing practice, the supplier plays an equally important role [10]. Shared parking spaces are essentially private goods, and their supply comes from individuals whose purpose is to obtain certain benefits. eir participation in decisionmaking plays a decisive role in the development of shared parking. erefore, it is very important to explore the mechanism of influencing factors of private parking space owners' intention to provide shared parking spaces. 
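To make the supply-demand idea described above more concrete, the small Python sketch below checks whether a traveler's requested interval fits inside the idle window a private owner has listed. It is purely illustrative: the class names, fields, time windows, and the simple containment rule are assumptions for this sketch, not the design of any actual shared parking platform discussed in the paper.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class SharedSpaceOffer:
    """A private parking space listed for its idle period (hypothetical schema)."""
    space_id: str
    idle_from: datetime
    idle_to: datetime


@dataclass
class ParkingRequest:
    """A traveler's request to park over a given interval (hypothetical schema)."""
    arrive: datetime
    depart: datetime


def can_serve(offer: SharedSpaceOffer, request: ParkingRequest) -> bool:
    # Feasible only if the request lies entirely inside the idle window,
    # mirroring the "sell the idle period as a parking permit" idea.
    return offer.idle_from <= request.arrive and request.depart <= offer.idle_to


if __name__ == "__main__":
    offer = SharedSpaceOffer("A-101", datetime(2021, 5, 20, 8, 0), datetime(2021, 5, 20, 18, 0))
    request = ParkingRequest(datetime(2021, 5, 20, 9, 0), datetime(2021, 5, 20, 17, 30))
    print(can_serve(offer, request))  # True: a workday request fits the daytime idle window
```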
erefore, use the theory of planned behavior and the benefit-risk perception model (C-TPB-BRA) as the theoretical framework. is paper explores the differences of parking space sharing intention and the mechanism of psychological factors from the perspective of the suppliers of shared parking spaces in cities of different levels. Based on China's empirical data, the structural equation models are built to verify the hypothesis proposed. Our research aims to determine intrinsic factors that affect the intention to share parking spaces and their mechanisms of action. e rest of the paper is structured as shown in Figure 2: Section 2 reviews and summarizes the literature on shared parking. Section 3 introduces the theoretical model framework of this research and puts forward the research hypothesis. Section 4 introduces the design of the questionnaire, data analysis, model evaluation, and results analysis. Section 5 puts forward corresponding policy recommendations based on the analysis results. Finally, the main content of this study is summarized and the future research directions are introduced in Section 6. Literature Review e relevant research on this topic involves the following three aspects: parking selection behavior and matching and pricing of shared parking spaces. In terms of parking selection behavior, some scholars conducted SP (Stated Preference) and RP (Revealed Preference) survey considering time factors, economic factors, external information, traffic safety, and other information and constructed the model of parking choice behavior by using prospect theory [11]. Some scholars used the RP/SP survey to collect data related to the traveler's parking space choice behavior and then constructed multiple Logit, nested Logit, and mixed Logit to calibrate the model of parking space search time [12][13][14]. Besides, aiming at the driver's competitive choice behavior under the condition of limited parking space resources, Guo et al. [15] further collected relevant behavioral data through experiments and performed a dynamic model to study the traveler's parking space choice behavior. In the related research on the matching of shared parking spaces, in view of the low utilization rate of private parking spaces in the community during working days, Shao et al. [16] established a parking space matching model between residential parking spaces and parking space users. Apart from that, some scholars constructed the matching model based on GIS [17], cloud technology [18], and the double auction mechanism [19]. In addition, Zhao et al. [20] proposed a shared parking resource management framework from considering the uncertainty of the arrival time and departure time of P-users and O-users and then developed an intelligent parking management system (IPMS) to simulate the operation of shared parking. Wang [21] developed a dynamic optimal supply strategy for parking permits and constructed a stochastic optimal control model to minimize the expected value of the total time loss of the system. 2 Discrete Dynamics in Nature and Society e charging of shared parking is the most important factor that affects the intention of shared parking supply and demand parties to participate. At present, there are many studies on parking pricing, such as parking pricing strategies based on the marginal cost principle and suboptimal pricing theory [22,23] and the demand-side competitive auction mechanism based on distribution rules and transaction payment rules [24]. 
In 2020, Xiao [25] designed two auctionbased shared parking pricing strategies in a dual environment that includes parking suppliers and demanders. Based on the Internet of vehicles technology, some scholars used cloud computing to match the supply and demand of shared parking spaces and then determined the shared parking rate [18]. In addition, there is related study on the allocationpricing-revenue mechanism of shared parking spaces [26]. From the existing literature on shared parking, there are few studies on the sharing intention of private parking space owners and due to the large differences in economic levels, urban planning, parking pressure, and information popularization among cities of different levels, there may also be differences in the intention of private parking space owners in different levels of cities to supply shared parking spaces. In response to this problem, this paper uses the C-TPB-BRA as the theoretical framework and constructs the structural equation models of cities of different levels to verify the proposed hypothesis based on China's empirical data. is study aims to predict the intention of private parking space owners in different levels of cities to provide shared parking spaces and determines the mechanism of action of each influencing factor on the sharing intention under different Discrete Dynamics in Nature and Society urban development forms. Finally, we propose policy recommendations based on the calculation results. e findings could provide decision-making basis for different regions to develop shared parking plans tailored to local conditions. Basic eory and eoretical Framework 3.1.1. TPB. Social psychology is the theoretical system that focuses on the psychology of people in a social environment, which studies the laws of the psychological and behavioral occurrence and change of individuals and groups in social interaction. Its representative theory is the TPB. According to TPB, human behavior is totally or partially affected by BI, ATT, SN, PBC, and other psychological factors [27]. ATT refers to an individual's positive or negative feelings about the behavior; SN refers to the social pressure that an individual feels about whether to take a particular behavior, that is, when predicting the behavior of others, the influence of individuals or groups that have influence on the individual's behavioral decision-making on whether an individual takes a particular behavior; PBC refers to the obstacles that reflect the individual's experience and expectations. e more resources and opportunities at your disposal and the fewer expected obstacles, the stronger your perceived behavioral control over behavior. BI refers to the judgment of an individual's subjective probability of taking a particular behavior, which reflects an individual's intention to take a particular behavior. TPB provides a theoretical basis for explaining the influence of psychological factors on behavioral choices [28,29]. However, because shared parking is not widely used in China, actual behavior data of which is not easy to obtain, and BI can explain behavior well. erefore, in this study, BI is used to predict actual behavior. Although TPB has strong universality in practice, it cannot fully explain the actual behavior under any circumstances, and there are some missing factors [30][31][32]; that is, in addition to the above factors, the BI may still be affected by some other undiscovered hidden factors. BRA Model. 
While sharing parking brings rental benefits to parking space suppliers, it also brings risks such as hidden safety hazards to residential areas. Therefore, the psychological game between risks and benefits will also affect the intention to supply parking spaces. The benefit-risk analysis (BRA) model is often used in behavior research. It points out that Perceived Benefits (PB) and Perceived Risks (PR) are important variables that affect users' behavioral intention. Users make behavioral choices after weighing benefits and risks. At the same time, PB and PR will affect individual behavior attitude and behavior intention. The BRA model holds that PB is more likely to be influenced and driven by emotions than PR. Zeithaml [33] defined PB in 1988 as the overall evaluation of the utility of a product or service after weighing the value that the user perceives when receiving the product or service against the cost paid by the user. The core of the evaluation is to weigh the PB and PR. Jacoby et al. [34] divided the PR of customers into economic risk, functional risk, physical risk, psychological risk, social risk, and time risk. Sweeney [35] proposed that perceived value includes social value, quality value, price value, and emotional value. Although different scholars have different views on the division of perceived value in different fields, it is overall mainly composed of two parts: PB and PR. Structural Equation Model. The structural equation model (SEM) is a statistical method that expresses the relationships between observed variables and latent variables, as well as the relationships among latent variables, using a system of linear equations. Its core idea is to set observed variables for the latent variables and then, through the relationships between the observed variables, indirectly reflect the relationships between the latent variables [36]. ATT, SN, PBC, PB, PR, and BI involved in this paper are all latent variables that cannot be directly observed, and the corresponding observed variables are listed in Table 1. The matrix equations are expressed as follows:
X = A_X ξ + δ, (1)
Y = A_Y η + ε, (2)
η = Bη + Γξ + ζ. (3)
Equations (1) and (2) are the measurement equations, and (3) is the structural equation, where X is the exogenous observed variable vector, such as ATT1 and the other measured variables in Table 1; ξ indicates the exogenous latent variable vector, such as ATT and PBC; A_X represents the relationship between the exogenous observed variables and the exogenous latent variables, that is, the factor loading matrix of the exogenous observed variables on the exogenous latent variables; δ is the residual term vector of the exogenous observed variables; Y represents the endogenous observed variable vector; η is the endogenous latent variable vector, that is, BI; A_Y indicates the relationship between the endogenous observed variables and the endogenous latent variables; ε is the residual term vector of the endogenous observed variables; B indicates the relationships among the endogenous latent variables; Γ represents the influence of the exogenous latent variables on the endogenous latent variables; ζ is the error term of the structural equation. Combined Theoretical Framework. Therefore, based on the TPB and BRA model, this paper constructs a combined theoretical framework (C-TPB-BRA) explaining the relationship between psychological factors and behavioral intentions and establishes structural equation models for the intention of parking space suppliers in different levels of cities.
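As a concrete sketch of how such a model can be specified and estimated, the fragment below writes the C-TPB-BRA measurement and structural parts in lavaan-style syntax and fits them with the open-source semopy package. This is only an illustrative re-implementation under assumptions: the paper itself used AMOS, the data frame `survey_df` and its item columns (ATT1 ... BI4) are hypothetical stand-ins for the questionnaire data, and the listed paths simply follow hypotheses H1-H11 as described in the text.

```python
import pandas as pd
import semopy  # open-source SEM library; the paper itself fit the models in AMOS

# Measurement part (=~) maps each latent construct to its questionnaire items,
# the regression (~) encodes the direct-effect hypotheses on BI (H1-H3, H7, H8),
# and the covariances (~~) encode the mutual relationships in H4-H6 and H9-H11.
MODEL_DESC = """
ATT =~ ATT1 + ATT2 + ATT3
SN  =~ SN1 + SN2 + SN3 + SN4 + SN5
PBC =~ PBC1 + PBC2 + PBC3 + PBC4 + PBC5 + PBC6 + PBC7 + PBC8
PR  =~ PR1 + PR2 + PR3 + PR4 + PR5
PB  =~ PB1 + PB2 + PB3 + PB4 + PB5
BI  =~ BI1 + BI2 + BI3 + BI4
BI ~ ATT + SN + PBC + PR + PB
ATT ~~ SN
ATT ~~ PBC
ATT ~~ PR
ATT ~~ PB
SN ~~ PBC
SN ~~ PR
SN ~~ PB
PBC ~~ PR
PBC ~~ PB
PR ~~ PB
"""


def fit_ctpb_bra(survey_df: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesized model to one city tier's item responses; return path estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey_df)    # maximum-likelihood estimation on the raw item scores
    return model.inspect()  # loadings, path coefficients, standard errors, p-values
```

In practice one such model would be fit separately to the first-, second-, third-, and fourth-tier subsamples, which is how tier-specific path structures like those reported later in the paper could be compared.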
As shown in Figure 3, the constructed framework includes six potential variables, PR, PB, ATT, SN, PBC, and BI; the arrows represent the influence relationship; and the mutual influence relationship is represented by double arrows. 4 Discrete Dynamics in Nature and Society Research Hypothesis. According to the proposed combined theoretical framework and related references, the following hypotheses are proposed. According to Huijts's [42] points, ATT and SN all have a positive effect on the intention to act. Chen's [43] work has shown that, for motorcycle and car users, SN and PBC have a key influence on intention. erefore, we propose the following hypotheses: H1: ATT has a positive effect on BI. H2: SN has a positive effect on BI. H3: PBC has a positive effect on BI. Wu and Lin [44] found that SN have a direct impact on ATT. When respondents receive positive support or encouragement from relatives, friends, or other social organizations for their behavior, their ATT will also become more positive. As private parking spaces are shared products I think it's a good thing to supply shared parking spaces. SN1 If there is an opportunity, I think most people around me would choose to supply shared parking. Chen [37] SN2 If there is an opportunity, I think my family will suggest and support me to supply a shared parking space. SN3 If there is an opportunity, I think my friends/classmates/colleagues will suggest and support me to supply a shared parking space. SN4 e support and appeal of the government and media will make me more willing to supply shared parking spaces. SN5 e greater the number of owners who supply shared parking spaces in a small area, the more I am willing to supply shared parking spaces. PBC PBC1 It is easy for me to supply shared parking spaces through the shared parking system. Davis [38] PBC2 I believe I can supply shared parking spaces through a shared parking system. PBC3 I have a lot of knowledge to supply shared parking spaces through the shared parking system. PBC4 I think I have complete control over the use of the shared parking system. PBC5 If the operation of the shared parking system is easy to understand, I am more willing to supply shared parking spaces. PBC6 If a shared parking credit mechanism is established, I would prefer to supply shared parking spaces. PBC7 When a parker uses my parking space over time, I am more willing to supply a shared parking space if the shared parking platform provides me with a spare parking space. PBC8 When a parker uses my parking space over time, I am more willing to supply a shared parking space if the sharing parking platform provides me with subsidies. PR PR1 I think it is very likely that the parker will park over time, which will cause more inconvenience to me personally. Im [39] PR2 I feel that supplying a shared parking space may expose my personal privacy (such as travel records, home address, etc.). PR3 I think if the parked car has an accident in the neighborhood, it will probably get me into trouble. PR4 I think supplying shared parking will increase the cost of new equipment and redevelopment of the parking lot. PR5 I think supplying shared parking space will increase the pressure of property management in the community. PB PB1 I think I can get a lot of money by supplying a shared parking space. Lee [40] PB2 I think supplying a shared parking space for others will give me a sense of achievement and satisfaction. PB3 I think supplying shared parking spaces can help solve other people's parking problems. 
PB4 I think supplying shared parking spaces will improve the utilization rate of idle parking spaces. PB5 I think the provision of shared parking spaces contributes to the sustainable development of the city. BI1 If there is an opportunity, I would like to try to supply shared parking spaces through the shared parking system in the future. Tan [41] BI2 If there is an opportunity, I would like to give priority to supplying shared parking spaces through the shared parking system in the future. BI3 If there is an opportunity, I will often supply shared parking spaces through the shared parking system in the future. BI4 In the future, I will strongly recommend to my friends and family to participate in the shared parking program. Discrete Dynamics in Nature and Society of the family, the views of family members will also have an impact on the intention of decision maker. erefore, we propose the following hypotheses: H4: there exists a significant relationship between ATT and SN. When Yu et al. [45] constructed a causal model to study the tourism behavior of Taiwanese tourists in Kinmen in 2005. ey found that ATT played a role as an intermediary variable in the influence of PBC on BI, indicating PBC has a positive effect on ATT. Similarly, Tsai [46] proved through canonical correlation analysis that PBC influences ATT. erefore, we propose the following hypotheses: H5: there exists a significant relationship between ATT and perceptual behavior control. H6: there exists a significant relationship between PBC and SN. Hae-Kyung Sohn's [47] research results show that PR can lead to people's negative perception of festivals. Liao [48] explored the proenvironmental behavioral intention based on environmental ATT and the relationship between PB and the purchase choices of energy-saving appliances in Chinese households. e research results show that behavioral intention has a significant positive impact on the purchase choices of energy-saving appliances. ATT and psychological benefits have a significant positive impact on the respondents' intention to purchase energy-saving equipment. Lee's [40] research results show that the intention to use online banking is mainly negatively affected by security/privacy risks and financial risks and is mainly positively affected by PB and ATT. erefore, we propose the following hypotheses: H7: PR has a negative effect on BI. H8: PB has a positive effect on behavioral intentions. H9: there exists a significant relationship between PB and PR. Mary [49] examined the causal influence of PR and PB as well as SN on users' intention to adopt cloud technology trust and found that PR and PB have a significant impact on SN. erefore, we propose the following hypotheses: H10: PR is negatively associated with ATT, PBC, and SN. H11: PB is positively associated with ATT, PBC, and SN. Experimental Design. In order to ensure the data quality of the questionnaire, a presurvey was conducted before the formal survey. We asked people with varying degrees of understanding of shared parking to fill out the questionnaire, and after deleting inappropriate items, we got the formal questionnaire. We used online questionnaire for the formal survey. According to the statistical data of the National Bureau of Statistics of China and related literature [50,51], in the questionnaire, cities with a population of over 20 million are defined as first-tier cities, including Beijing and Shanghai. 
Cities with a population of 10 to 20 million are defined as second-tier cities, including Tianjin, Chongqing, Jinan, and Hangzhou. e cities with a population of 2 to 10 million are defined as third-tier cities, such as Haikou and Luoyang. Cities with a population of less than 2 million are defined as fourth-tier cities, such as Changde and Xiaogan. In the questionnaire, participants are asked to read a brief introduction about shared parking and then answer the relevant questions, which aim to ensure that the respondent really understands the operation process involved in shared parking. e survey is mainly aimed at people who have private parking spaces in urban areas. According to the geographical location information of the surveyed people, it is found that the surveyed people come from 31 provinces in China (a total of 33 provinces) and are evenly distributed in first-tier cities, second-tier, third-tier, and fourth-tier cities and 673 questionnaires are collected. After conditional screening, the number of valid questionnaires is 625 (the effective rate is 92.3%). When using structural equations for multivariate research, the sample size should be at least 5 times the variable; that is, the sample size for a formal survey is appropriate more than 150 copies, so the sample size of the formal investigation can meet the research needs. e questionnaire consists of three parts: the first part includes the social and economic attributes of the respondent; the second part covers the built environment of the respondent's residential area and the use of parking spaces, such as public transportation accessibility, surrounding land types, and parking spaces idle time; the third part covers the measurement of variables in C-TPB-BRA, including ATT, SN, PBC, BI, PR, and PB. e basic information and socioeconomic attributes of the surveyed participants include gender, age, monthly income, education level, the number of private cars, and the number of private parking spaces. Among them, there are slightly more male respondents (58.58%) than females (41.42%) in the overall sample; respondents are mainly aged from 26 to 60 years of age (70.16%), which is more in line with the age characteristics of private parking spaces; 68.63% of the population are with college/bachelor degree and above; the income of the respondents is evenly distributed between 3000 and 11000 yuan/month (75%), indicating that the surveyed group has a certain degree of universality. Table 2 summarizes the main demographic characteristics of the valid samples of cities at different levels. Urban Difference Analysis of Intention to Supply. rough the questionnaire, we investigated "Do you agree to open the community to share parking spaces?" and "Do you want the residential area near your company to be opened to share parking spaces?" Figure 4 shows the statistics of the survey results. We found interesting conclusions: as for "Do you agree to open the community to share parking spaces?", the agreement and disagreement ratios of different levels of urban groups are all over 40% and close to 1 : 1, indicating that there is a lot of potential development space for shared parking in residential areas. Regarding "Do you want to share parking spaces in residential areas near your unit?", among different levels of urban groups, the agreeing group account for about 60%, and the neutral group account for about 20%, indicating that most residents recognize the value of shared parking. Questionnaire Measurement. 
In this study, all psychological variables (C-TPB-BRA variables) were measured using the Likert five-level scale (1 = completely disagree; 5 = completely agree). The higher the score, the higher the degree of agreement. Based on the existing related research on the TPB and BRA model, we made corresponding modifications to the measurement variables according to the research questions. The six latent variables of the research question include a total of 30 measurement items, of which ATT includes three items (ATT1-ATT3), SN includes five items (SN1-SN5), PBC includes eight items (PBC1-PBC8), PR contains five items (PR1-PR5), PB contains five items (PB1-PB5), and BI contains four items (BI1-BI4). Table 1 lists the specific measurement items in detail. Factor Analysis and Reliability and Validity Test. First, we use SPSS to perform exploratory factor analysis, and we carry out the KMO and Bartlett's sphericity tests on the scale. The calculation results are shown in Table 3. The results show that the KMO values are all above 0.880 (>0.700), and Bartlett's sphericity test is significant (Sig. < 0.001), indicating that the questionnaire data meet the prerequisite requirements for factor analysis. Reliability and validity are used to measure the accuracy and stability of questionnaire test results [52]. Reliability analysis is used to measure whether the results of the questionnaire are reliable, and Cronbach's coefficient (Cronbach's α) is generally used for evaluation. If Cronbach's coefficient is higher than 0.8, the reliability is high; if it is between 0.7 and 0.8, the reliability is acceptable; if it is between 0.6 and 0.7, it is basically acceptable; if it is less than 0.6, the reliability is not good and it is necessary to consider revising the survey scale [53]. Table 3 shows the reliability and validity test results of each latent variable. From the results, Cronbach's coefficient of each latent variable is higher than 0.8, indicating that the reliability of the questionnaire is very high. Validity analysis is used to measure the validity and accuracy of the design of the problem items. The higher the validity, the more accurate the measurement results. Convergent validity is usually verified using the Average Variance Extracted (AVE); when the value is greater than 0.5, the latent variable has good convergent validity. In addition, when the standardized factor loading corresponding to each observed variable is greater than 0.6 and P < 0.05, the convergent validity is also up to standard. From the results in Table 4, the AVE of each latent variable is above 0.63, and the standardized factor loading of each observed variable is above 0.66, indicating that the data have strong reliability and internal consistency. Based on the combined theoretical framework, the structural equation models were established using AMOS for path verification. The preliminary verification found that the theoretical model and the empirical data could not be completely fitted. Therefore, the SEMs were revised without affecting the integrity of the theoretical model. The paths that have no significant impact at all are deleted, and the final model fitness test index calculation results are shown in Table 5. The optimized model fitness of the different levels of cities is significantly better than that of the initial theoretical model. Indicators of the first-tier, second-tier, and third-tier cities meet the requirements except that the AGFI is slightly lower than the standard value.
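As a companion to the reliability and validity criteria described above, the short sketch below computes Cronbach's α for a block of Likert items and the AVE from standardized factor loadings. It is a self-contained illustration, not the authors' SPSS workflow: the example arrays are made up for demonstration and do not reproduce the survey data.

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars / total_var)


def average_variance_extracted(std_loadings) -> float:
    """AVE = mean of squared standardized loadings; > 0.5 suggests convergent validity."""
    lam = np.asarray(std_loadings, dtype=float)
    return float(np.mean(lam ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fabricated 1-5 responses for a 4-item construct (e.g., BI1-BI4); because the
    # items are random and uncorrelated, alpha will be near zero here by construction.
    fake_bi_items = rng.integers(1, 6, size=(200, 4))
    print(round(cronbach_alpha(fake_bi_items), 3))
    print(round(average_variance_extracted([0.81, 0.77, 0.84, 0.79]), 3))
```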
Although RMSEA and AGFI are slightly lower than the standard value, other indicators of the fourthtier cities meet the requirements, indicating that the model fit meets the standard requirements. Table 6. Model Evaluation and Modification. Based on the combined theoretical framework in (1) Table 7 lists the influence degree of latent variables that have a direct impact on BI. We can see that the most influential psychological factor affecting the intention to supply parking spaces in cities of all different levels is PB. e factors that directly affect the intention to share parking spaces in first-tier cities are PB > ATT > PR. SN and PBC mainly indirectly affect BI by affecting "ATT > PB"; that is, ATT and PB play an intermediary role. e factors that directly affect the intention to share parking spaces in second-tier cities are as follows: PB > PR > ATT; SN and PBC indirectly affect BI by affecting "PB > ATT > PR"; PB, ATT, and PR play an intermediary role. e factors that directly affect the intention to share parking spaces in third-tier cities are PB > PR > ATT, while SN and PBC mainly affect BI and PR by affecting "PB > ATT". e intermediary effect of PR is not strong; while the factors that affect the intention to share parking spaces in fourth-tier cities are as follows: PB > SN > PBC > ATT > PR. PR has the least direct impact, indicating that shared parking space suppliers in fourth-tier cities are not sensitive to the hidden risks brought by parking space sharing. (2) In the structural equation models of cities of different levels, ATT has a significant impact on BI, especially in first-tier cities and third-tier cities, which validates H1. In first-tier cities, second-tier cities, and third-tier cities, neither SN nor PBC has significant influence on BI, but indirectly influences BI through ATT, possibly because (3) In the SEMs of the four levels of cities, SN, PBC, PR, and PB all have a significant impact on ATT, indicating that the attitude of shared parking space suppliers is influenced by the recognition of the society and the surrounding people, as well as the anticipated benefits and risks of participation; thus H4 and H5 are supported and H10 and H11 are supported partially. Meanwhile, SN is positively associated with PBC, which also confirms H6. (4) In the four SEMs, PR has a significant negative impact on BI, so H7 is supported, which shows that the greater the risk perceived by the shared parking space supplier, the weaker the intention to supply shared parking spaces, such as the overtime use of parking spaces by the renters, and the security risks brought by the entry of other vehicles into the community will have a negative impact on the intention to provide shared parking spaces. Especially in second-tier cities and third-tier Discrete Dynamics in Nature and Society cities, this negative impact is more significant, but the impact is relatively weak in first-tier cities and fourthtier cities, which may be related to the overall management level of residential areas in cities of different levels. Similarly, PR has a significant impact on PB, indicating the compensation effect of PB on PR, so the greater the PB of the parking space supplier, the greater PR tolerance strong; thus H9 is supported. Both PBC and SN have a significant impact on PR, so H10 is supported. 
(5) PB have a significant positive impact on BI, and the impact level is greater than the effect of PR on BI, so H8 is supported, indicating that the potential income generated by unused parking spaces may be a strong motivation for parking space owners, but this does not necessarily conform to the views of other stakeholders (such as family members). e PB has little direct impact on BI in first-tier and fourth-tier cities, which may be due to the relatively high level of economic development in first-tier cities and the overall insensitivity of suppliers to shared parking revenues, while in fourth-tier cities, parking pressure is relatively low; thus the supplier perceives less benefit. Besides, PBC, SN, and PB all have significant effects on each other, thus partially verifying H10 and H11. Policy Application Based on the above results, the following policy recommendations are put forward. Enhance the Public's Understanding of Shared Parking Projects. According to the analysis result (2), in the SEMs of different levels of cities, ATT has a significant positive impact on BI. In first-tier and third-tier cities, the influence of ATT on BI is greater. erefore, if the shared parking operation platform wants to develop the shared parking projects in the first-tier and third-tier cities, it can cooperate with the government and take measures to enhance the public's understanding of shared parking, to promote the public's intention to participate in shared parking projects. Strengthen the Publicity of the Shared Parking Projects and Improve the Ease of Use of System Particularly in the Fourth-Tier Cities. Only in the fourth-tier cities, SN and PBC have a significant positive impact on BI, indicating that the decision-making of this group is more susceptible to the influence of relatives, friends, and government. ey also pay more attention to related operating systems. erefore, in order to promote shared parking plans in fourth-tier cities, in addition to enhancing the public's awareness of shared parking, attention should also be paid to reducing the complexity of the use of shared parking operating systems and strengthening relevant instructions. ese recommendations are reflected in measurement items SN1-SN8 and PBC5 in Table 1. Establish the Safety Supervision Mechanism and Revenue Feedback Mechanism. In response to the analysis result (4), PR has a negative effect on BI. Especially in third-tier cities, PR has the greatest negative effect on BI, so in third-tier cities stakeholders should pay more attention to establish the safety supervision mechanism for shared parking to reduce safety risks and cost risks, these policies are reflected in measurement items PR1-PR5 in Table 1. Among them, the supervision mechanism should be led by the operator and cooperate with other stakeholders. For example, if the safety and privacy of residents are violated by externally parked vehicles, punitive measures such as parking restrictions and violation of personal credit records should be implemented. Secondly, the community awareness can be enhanced by supplying a brief introduction to the operation of the shared parking system, discussing safety issues with community members, and developing ways to give back to the community. Community residents and landowners with private roads can benefit from this and reach a community consensus; these recommendations are reflected in measurement items PBC6 and SN5 in Table 1. 
Enhance the Attractiveness of Participation Benefits and Carry out Shared Parking Demonstration Projects. According to the analysis result (5), PB is the latent variable that has the greatest direct impact on BI, especially in second-tier and third-tier cities, shared parking space suppliers' BI are more affected by PB, so in second-and third-tier cities, it is more important to enhance the attractiveness of parking space sharing. However, there are many interested parties involved in sharing parking spaces, including operating platforms, properties, and parking space suppliers. It is difficult for shared revenue to meet the expectations of parking space suppliers. Although the government is not a profit-making organization in China, shared parking can generate more social benefits, such as reducing vacant parking spaces and reducing parking and traffic congestion. erefore, the government should support financial subsidies to reduce the cost pressure on suppliers and managers. In addition, shared parking demonstration projects also require government investment. Demonstration of shared parking projects can improve the public's perception of potential personal and public benefits. Shared parking demonstration projects should be implemented by the government, suppliers, and managers, and the implementation effects and social values of shared parking should be broadcast to the public. rough demonstration projects, the benefits and risks will be clearer in practice, thereby eliminating the anxiety of suppliers. In addition, the promotion plan can be gradually expanded according to land use types, employment distribution, and development status. In addition, the operating platform also needs to develop strategies to promote shared parking in the community for different participant groups. e government and community committees can discuss the most suitable business model for the community. Conclusions is study aims to explore the intention of private parking space owners of different levels of cities to participate in shared parking in China and the action mechanism of psychological factors on sharing intention. erefore, we constructed combined theoretical framework (C-TPB-BRA) and used the structural equation model to verify the relationship between the psychological factors. Our results show that the combined theoretical framework has a high explanatory ability, which confirms that the combination of TPB and BRA model can explain the differences in the sharing intentions of private car space owners in cities of different levels. is research has made many contributions both in theory and in practice. To our knowledge, this is the first study to apply C-TPB-BRA to explore suppliers' intention to offer shared parking spaces. is research could also help for decision makers to formulate smart city construction strategies. Intelligent parking systems will eventually become an integral part of all communities, and shared parking will undoubtedly play an important role [56]. According to the research results, PB has the greatest direct impact on BI, not all psychological variables will directly affect the parking sharing intention, but some psychological variables indirectly affect the sharing intention through other psychological variables, which may be due to the structure of these psychological factors and emerging technologies caused by contact problems. 
One reason is that the psychological variables are mostly imported from foreign research, and their applicability in China needs to be studied and verified by many domestic empirical studies. The second reason is that shared parking has not been widely used in real life, and the public's perception still needs to be improved. In future research, researchers can seek to confirm the selection results of suppliers under different cultural backgrounds and discuss the sensitivity of suppliers to policy variables such as fees. In addition, the intention to share parking space is also affected by other family members, so the perspective of the family as a unit is worth further study. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare that they have no conflicts of interest.
9,307.2
2021-05-20T00:00:00.000
[ "Business", "Economics" ]
A Modified Strip-Yield-Saturation-Induction Model Solution for Cracked Piezoelectromagnetic Plate A strip-yield-saturation-induction model is proposed for an impermeable crack embedded in piezoelectromagnetic plate. The developed slide-yield, saturation, and induction zones are arrested by distributing, respectively,mechanical, electrical, andmagnetic loads over their rims. Two cases are considered: when saturation zone exceeds induction zone and vice-versa. It is assumed that developed slide-yield zone is the smallest because of the brittle nature of piezoelectromagnetic material. Fourier integral transform technique is employed to obtain the solution. Closed form analytic expressions are derived for developed zones lengths, crack sliding displacement, crack opening potential drop, crack opening induction drop, and energy release rate. Case study presented for BaTiO 3 –CoFe 2 O 4 shows that crack arrest is possible under small-scale mechanical, electrical, and magnetic yielding. Introduction The work on magnetoelectroelastic (MEE) fracture problem was started late back in the last century.The field is a natural extension of piezoelectric media since electricity and magnetism go in hand.Due to coupling effect of magneto-, electro-, and elastic fields, MEE materials become more popular than piezoelectric materials and serve as the excellent sensor, actuator, and transducer. Wang and Shen [1] obtained energy release rate for a mode-III magnetoelectroelastic media based on the concept of energy-momentum tensor.Based on the extended Stroh formalism combined with complex variable technique, Green's function is obtained for an infinite two-dimensional anisotropic MEE media containing an elliptic cavity which degenerates into a slit crack, by Jinxi et al. [2].Sih and Song [3] proposed a model which showed that crack growth in a magnetoelectroelastic material could be suppressed by increasing the magnitude of piezomagnetic constants in relation to these for piezoelectricity.They [4] further derived energy density function for cracked MEE medium and studied the additional magnetic strictive effect which could influence crack initiation as applied field direction is altered.Wang and Mai [5] addressed the problem of a crack in a MEE medium possessing coupled piezoelectric, piezomagnetic, and magnetoelastic effects.Wang and Mai [6] further extended above problem to calculate a conservative integral based on governing equations for MEE media.Gao et al. [7] investigated the fracture mechanics for an elliptic cavity in a MEE solid under remotely applied uniform in-plane electromagnetic and/or antiplane mechanical loadings.Reducing cavity into a crack they considered two extreme cases for impermeable crack and permeable crack cases.Hu and Li [8] obtained singular stress, electric and magnetic fields in MEE strip containing a Griffith crack under longitudinal shear for a crack situated symmetrically and oriented in a direction parallel to the edges of the strip.Tian and Rajapakse [9] obtained the solution for single, multiple, and slowly growing impermeable cracks in a MEE solid using generalized edge dislocation theory.The solution for an elliptic cavity in an infinite two-dimensional MEE medium subjected to remotely applied uniform combined mechanical, electric, and magnetic loadings under permeable crack face boundary condition along the cavity of the surface had been obtained by Zhao et al. 
[10].Wang and Mai [11] discussed different electromagnetic boundary conditions on permeable and impermeable crack faces in 2 International Journal of Engineering Mathematics a MEE material.Ma et al. [12] addressed an antiplane problem of a functionally graded MEE strip containing an internal or edge permeable/impermeable crack lying transversely to the edges of the strip.A mixed boundary value problem was solved by Zhong and Li [13] for a crack in MEE solid.Kirilyuk [14] gave a method which enabled to find stress intensity factor (SIF) for a cracked MEE body directly from the analogous problem of elasticity.Zhao and Fan [15] proposed a strip electric-magnetic breakdown model for an electrically and magnetically impermeable crack in MEE media using extended Stroh formalism and the extended dislocation modelling of a crack.They have also proposed [16] a strip electric-magnetic polarization saturation model for an infinite and finite MEE strip.Using finite element method Krahulec et al. [17] discussed a crack problem for MEE solid under various electromagnetic boundary conditions on the crack rims.Recently, we, Bhargava and Verma [18], proposed a mechanical, electrical, and induction yield model (based on Dugdale [19] type model) for a cracked MEE 2D media. In present paper, we have generalized the classical Dugdale model [19] and propose strip-yield-saturation-induction yield model for an unbounded cracked piezoelectromagnetic plate with electric and magnetic polarization in direction.Due to mechanical brittleness, it is assumed that developed mechanical yielding zone is the smallest zone.In the problem, two cases are considered: Case I is when developed saturation zone is bigger than induction zone and Case II is when developed saturation zone is smaller than induction zone.Fourier transform technique is employed to obtain the solution and derived closed form expressions for developed zone sizes, energy release rate, crack opening potential drop, and crack opening induction drop.To test the proposed model, a case study is presented for BaTiO 3 -CoFe 2 O 4 piezoelectromagnetic ceramic.Numerical results are presented graphically and these confirm that the proposed model is capable of crack arrest under small-scale electrical and magnetic yielding. Fundamental Formulation and Solution Methodology As are well known, the out-of-plane displacement and inplane electric and magnetic fields problems may be expressed as in -orthogonal coordinate system.For transversely isotropic magnetoelectroelastic medium which is poled along -direction, the constitutive equations for stress, , electric-displacement, , and magnetic induction, ( = , ), components may be expressed as where 44 , 15 , ℎ 15 , 11 , 11 , and 11 denote elastic constant, piezoelectric coefficient, piezomagnetic coefficient, magnetic permeability, and dielectric coefficient, respectively.A comma in subscript denotes the partial differentiation with respect to argument following it. The equilibrium equations in the absence of body force, electric charge, and magnetic flux, respectively, are expressed as The gradient equations for strain components, , magnetic field, , and electric field, , may be written as where , = , . And the governing equations for electrically and magnetically poled piezoelectromagnetic ceramic under mode-III deformation may be written as where ) is two-dimensional Laplacian operator and = [, , ] .The superscript denotes the transpose of matrix. 
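The symbols in the constitutive and governing relations above were lost during text extraction. For orientation, the standard anti-plane (mode-III) magnetoelectroelastic form that this description appears to follow is written out below in LaTeX. The notation is a reconstruction under assumptions (w for the anti-plane displacement, φ and ψ for the electric and magnetic potentials, d11 for the electromagnetic constant), not a verbatim restoration of the paper's own equations.

```latex
% Standard mode-III constitutive relations for a transversely isotropic
% magnetoelectroelastic medium poled along the z-axis (j = x, y):
\begin{aligned}
\sigma_{zj} &= c_{44}\, w_{,j} + e_{15}\, \phi_{,j} + h_{15}\, \psi_{,j},\\
D_{j}       &= e_{15}\, w_{,j} - \varepsilon_{11}\, \phi_{,j} - d_{11}\, \psi_{,j},\\
B_{j}       &= h_{15}\, w_{,j} - d_{11}\, \phi_{,j} - \mu_{11}\, \psi_{,j}.
\end{aligned}
% Equilibrium with no body force, free charge, or magnetic flux source:
% \sigma_{zj,j} = 0, \qquad D_{j,j} = 0, \qquad B_{j,j} = 0,
% which for a homogeneous medium reduces to \nabla^{2}\mathbf{u} = \mathbf{0}
% with \mathbf{u} = [\,w,\ \phi,\ \psi\,]^{T}, matching the statement in the text.
```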
To obtain the desired potentials , , and , equation ( 5) is solved by taking Fourier integral transform.The solution of corresponding ordinary differential equation is given by where ∼ over the functions denotes its Fourier transform and superscripts + and − denote the value of function on the rims of crack as approached from > 0 and < 0 planes, respectively. and are 3 × 1 column vectors and is Fourier transform variable.For convenience of calculations, the boundary conditions of the problem are also written in terms of their Fourier transform.Fourier transform of constitutive equations (2) may be written as where ) . Solution of equation ( 8) may be given as The continuity condition for traction on = 0 and || < yields Equations ( 10) and ( 11) together yield where The jump in displacement, electric-displacement, and magnetic induction are defined by And the dislocation function, 1 (), dipole density function, 2 (), and induction density function, 3 (), are defined as Taking Fourier transform of ( 14) one obtains Equations ( 12) and ( 15) may also be expressed as Inverse Fourier transform of the above equation and ( 11) and (16) yields a system of integral equations to determine f(x) as The Problem A piezoelectromagnetic ceramic plate occupies entire plane and is thick enough in -direction to allow antiplane deformations.The plate is poled along positive -direction both magnetically and electrically.The plate is cut along a hairline, quasi-stationary straight crack.The crack occupies the interval [−, ] on -axis. Remote boundary of the plate is prescribed: (i) an antiplane shear stress: Case I.When developed saturation zone exceeds developed induction zone, ( < ). To arrest the crack from further opening, the rims of saturation zone are subjected to normal cohesive in-plane saturation limit electrical displacement; = .And the rims of developed induction zone are subjected to in-plane normal cohesive magnetic load; = (/) , where denotes saturation limit of magnetic induction and is any point on induction zone. Case II.When developed saturation zone is smaller than developed induction zone, ( > ). For this case, developed induction zone rims are prescribed in-plane, normal cohesive saturation limit of magnetic induction, = , and developed saturation zone is subjected to in-plane normal cohesively linearly varying saturation limit electric displacement, = (/) , where is saturation limit electrical displacement and is any point on saturation zone. For both considered cases, developed slide-yield zone is taken to be the smallest ( < &) and is prescribed cohesive yield-point shear stress, = . Case I: Solution and Applications Schematically, the configuration of the problem for Case I is depicted in Figure 1. Consequently, the solution for 1 () may be written using (17) as provided that which on evaluation leads to a transcendental equation to determine : Hence the slide-yield zone is determined using | − |.Using (21), the solution of ( 19) may be written as where Analogously, the solution for 3 () is written using equation (16) ) . The solution of equation ( 25) is obtained under the condition Equation ( 27) enables to determine .The saturation zone length is then determined using | − |. Applications for Case I 4.1.1. Crack Sliding Displacement (CSD), Crack Opening Induction Drop (COI), and Crack Opening Potential Drop (COP). 
Quantities of interest, namely, CSD, COI, and COP, are obtained in closed form analytic expressions in this section.The crack sliding displacement, Δ I (), is obtained using the following formula: Substituting 1 () from ( 22) and evaluating, one obtains Crack opening induction drop (COI), Δ I (), across the developed induction zone rims is calculated using the definition (32) The loads fixed for this study are As may be noted from Figure 2, CSD shows a linear increase as slide-yield zone length is increased for fixed crack length.But as volume fraction is increased the crack opening decreases in magnitude although the increasing trend is maintained. Crack opening induction drop (COI) versus induction zone to half-crack length ratio is plotted in Figure 3. COI shows a nonlinear parabolic increase as induction zone length is increased.COI stabilizes for bigger induction zone.However as the volume fraction is increased, the COI further drops although the increasing trend continues to stabilize. Figure 4 depicts the variation of crack opening potential drop (COP) with respect to saturation zone length for a fixed crack length.COP shows a negative variation as the saturation zone length is increased.It may be noted that for higher volume fraction, there is substantial drop in COP.As the saturation zone length is increased, COP shows a continuous negative increase.Figure 5 depicts the variation of energy release rate (ERR) with respect to slide-yield zone for different volume fractions.For higher volume fraction, energy release rate is positive and increases as slide-yield zone length is increased.And for > 0.7, it is observed that there is little change in ERR variation; in fact, it slightly decreases for bigger values of slide-yield zone length keeping crack length fixed.It may be noted that, for smaller value of volume fraction, the ERR is negative but increases continuously as slide-yield zone size is increased. Figure 6 shows the variation of ERR with respect to increase in saturation zone size for different volume fractions.The ERR shows a uniform constant behaviour even when saturation zone is increased.For volume fraction value equal to 0.1 the ERR is negative; it makes no contribution to crack propagation.And for volume fraction value higher than 0.3, the ERR becomes positive.For volume fraction values equal and higher than 0.5 the ERR remains constant; that is, there is no effect of volume fraction increase. Variation of ERR against induction zone for different values of volume fraction is plotted in Figure 7.For volume fraction equal to 0.1 the ERR continuously decreases parabolically and becomes negative as the induction zone length is increased.And for the values 0.5-0.9 of volume fraction the ERR decreases even when induction zone size is increased, indicating that the crack gets arrested for higher volume fraction values. Case II: Solution and Applications Schematically, the configuration of the problem is depicted in Figure 8.The traction boundary conditions II ( = 1, 2, 3) for Case II may be mathematically expressed as and for this case it is assumed that < < < . Equations to determine the desired potentials () ( = 1, 2, 3) may be written using (16) as The dislocation function, under the constrain The above equation enables to determine , hence giving induction zone length using | − |. Figure 1 : Figure 1: The schematic representation of the problem for Case I. 
Figure 2: Normalized crack sliding displacement versus slide-yield zone to half-crack length ratio for Case I.
Figure 5: Normalized energy release rate versus slide-yield zone to half-crack length ratio for Case I.
Figure 7: Normalized energy release rate versus saturation zone to half-crack length ratio for Case I.
Figure 8: The schematic representation of the problem for Case II.
Table 1: Material constants of BaTiO3-CoFe2O4 for different volume fractions of BaTiO3 in the composite material.
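The zone lengths in both cases follow from transcendental consistency conditions whose exact form is not recoverable from this extraction. Purely as an illustration of how such a condition can be solved numerically, the Python sketch below uses a classical Dugdale-type strip-yield relation as a stand-in for the paper's Eqs. (21) and (27); the relation, the half-crack length, and the load values are all assumptions made here, not the paper's data.

import numpy as np
from scipy.optimize import brentq

a = 1.0                        # half-crack length (assumed)
tau_inf, tau_Y = 40e6, 100e6   # remote and yield-point shear stresses (assumed)

def condition(c):
    # Stand-in transcendental condition: cos(pi*a/(2c)) = tau_inf/tau_Y
    return np.cos(np.pi * a / (2.0 * c)) - tau_inf / tau_Y

c = brentq(condition, a * (1 + 1e-6), 50.0 * a)  # zone tip lies beyond the crack tip
print("slide-yield zone length:", c - a)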
3,050.6
2014-08-14T00:00:00.000
[ "Physics" ]
Spatial-temporal Super-resolution Land Cover Mapping with a Local Spatial-temporal Dependence Model

Abstract—The mixed pixel problem is common in remote sensing. A soft classification can generate land cover class fraction images that illustrate the areal proportions of the various land cover classes within pixels. The spatial distribution of land cover classes within each mixed pixel is, however, not represented. Super-resolution land cover mapping (SRM) is a technique to predict the spatial distribution of land cover classes within the mixed pixel using fraction images as input. Spatial-temporal SRM (STSRM) extends the basic SRM to include a temporal dimension by using a finer-spatial resolution land cover map that pre- or post-dates the image acquisition time as ancillary data. Traditional STSRM methods often use one land cover map as the constraint, but neglect the majority of available land cover maps acquired at different dates over the same scene when applying STSRM to time series data to reconstruct a full trajectory of land cover changes. In addition, these STSRM methods define the temporal dependence globally, and neglect the spatial variation of land cover temporal dependence intensity within images. A novel local STSRM (LSTSRM) is proposed in this paper. LSTSRM incorporates more than one available land cover map to constrain the solution, and develops a local temporal dependence model in which the temporal dependence intensity may vary spatially. The results show that LSTSRM can eliminate speckle-like artifacts, reconstruct the spatial patterns of land cover patches in the resulting maps, and increase the overall accuracy compared with other STSRM methods.

I. INTRODUCTION

Land cover and its dynamics play a major role in global change. Understanding the distribution and dynamics of land cover is essential to better understand the Earth's processes. Remotely sensed images are the main data source for mapping land cover and monitoring land cover changes at different spatial resolutions. However, land parcels managed at a local scale are often smaller than the resolution of satellite data, in which one pixel often represents composite spectral responses from multiple land cover types. Hard classification methods cannot accurately map the mixed pixels, because they assign a mixed pixel to a single land cover class. Soft classification can generate land cover class fraction images that represent the areal proportions of different land cover classes within a pixel. The output of a soft classification is a number of fraction images equal to the number of land cover classes. However, the spatial distribution of land cover classes within the mixed pixel is still unknown. By dividing the pixel into numerous sub-pixels and assuming the sub-pixels are pure, super-resolution mapping (SRM) can assign the class fractions spatially to sub-pixels [1]. SRM can be viewed as a post-processing of soft classification that predicts the spatial distribution of land cover classes at the sub-pixel scale. The fraction images output from a soft classification are inputted to an SRM to produce a land cover map with a finer spatial resolution than the original remotely sensed image. In general, SRM is an ill-posed problem and the result unavoidably contains uncertainty. In order to decrease the uncertainty, various ancillary data such as panchromatic band images [2], vector boundaries [3] and LIDAR [4] have been used in SRM models to provide more information.
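As a rough illustration of the relationship between class fractions and sub-pixels (not code from the paper), the following Python sketch converts one coarse pixel's fraction vector into per-class sub-pixel counts for a scale factor s; the example fractions, class names, and the rounding rule are assumptions made here for illustration only.

import numpy as np

def subpixel_counts(fractions, s):
    # fractions: per-class areal proportions of one coarse pixel (sums to ~1)
    # s: scale factor, so the coarse pixel contains s*s sub-pixels
    raw = fractions * s * s
    counts = np.floor(raw).astype(int)
    # hand the remaining sub-pixels to the classes with the largest remainders
    leftover = int(s * s - counts.sum())
    order = np.argsort(raw - counts)[::-1]
    counts[order[:leftover]] += 1
    return counts

# e.g. a coarse pixel that is 63% forest, 29% water and 8% developed at s = 5
print(subpixel_counts(np.array([0.63, 0.29, 0.08]), s=5))  # [16 7 2], summing to 25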
However, these SRM methods require that the acquisition dates of the remotely sensed image and the ancillary data be the same or close, which restricts the use of these data in SRM. Besides the aforementioned ancillary data, land cover maps with a finer spatial resolution than the remotely sensed images, obtained at different dates, are an alternative source of ancillary data for SRM. Such land cover maps can be generated from remotely sensed images obtained from various platforms. Given that land cover maps and remotely sensed images can be acquired at different times, and considering that a great number of historical land cover maps covering almost the entire Earth may be available, SRM using these land cover maps is very promising. Spatial-temporal super-resolution mapping (STSRM), first proposed by Ling et al. [5], is an approach that incorporates a finer-spatial resolution land cover map that pre- or post-dates the remotely sensed image acquisition time as ancillary data. Hereafter, in STSRM the input remotely sensed image is said to have a coarse spatial resolution (coarse resolution), and the input and output land cover maps are said to have a fine spatial resolution (fine resolution). Note that the terms 'coarse' and 'fine' do not refer to the absolute spatial resolution of the data, but to the relative spatial resolution of the image and the land cover map. STSRM is suited to monitoring land cover change [6], and has been applied in many fields such as forest mapping [7], change detection [8], and land cover map updating [9]. Various STSRM models have been proposed and can be generally categorized into two groups. The first group of STSRM models is based on change detection analysis (CDA_STSRM). In CDA_STSRM models, the remotely sensed image pixels are unmixed into coarse resolution land cover fraction images by soft classification, and the fine resolution land cover map is spatially degraded to coarse resolution land cover fraction images. By comparing these two kinds of fraction images, if the land cover fraction of a class is unchanged in a coarse resolution pixel, the fine resolution pixels that belong to that class in the coarse resolution pixel are assumed to have an unchanged class label. Otherwise, the fine resolution pixels that belong to that class in the coarse resolution pixel are assumed to have a changed class label, and the label can be determined using various algorithms such as the pixel-swapping algorithm [5], the Hopfield neural network [10], the maximum a posteriori method [9,11], the learning based model [12], the interpolation based model [13], the swarm intelligence theory [14], the adaptive cellular automata [15], and the artificial neural network [16]. Since error in fraction images is usually unavoidable, a threshold used to distinguish unchanged and changed fractions in each coarse resolution pixel must be incorporated in CDA_STSRM, and the results depend greatly on the fraction change detection threshold value used. However, the fraction error is usually spatially variable across pixels and classes, so accurate estimation of the threshold in CDA_STSRM is very difficult [10,11]. The second group of STSRM models is based on spatial-temporal dependence (STD_STSRM). STD_STSRM assumes that fine resolution pixels are spatially dependent on their spatially closest fine resolution pixels, and are temporally dependent on the corresponding fine resolution pixels in images that pre- or post-date the image under analysis.
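The CDA_STSRM comparison step described above can be pictured with a short Python sketch (an illustration written for this text, not the paper's implementation): the fine map is block-averaged down to coarse fractions and compared against the unmixed fractions using a change threshold, whose value of 0.1 is an arbitrary placeholder.

import numpy as np

def degrade_fine_map(fine_map, n_classes, s):
    # block-average a fine-resolution label map into coarse class fraction images
    I, J = fine_map.shape[0] // s, fine_map.shape[1] // s
    fractions = np.zeros((I, J, n_classes))
    for c in range(n_classes):
        mask = (fine_map == c).astype(float)
        fractions[:, :, c] = mask.reshape(I, s, J, s).mean(axis=(1, 3))
    return fractions

def changed_fraction_mask(unmixed, degraded, threshold=0.1):
    # flag coarse pixels/classes whose fraction change exceeds the threshold
    return np.abs(unmixed - degraded) > threshold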
Xu and Huang [17] proposed the spatial-temporal pixel swapping algorithm (STPSA) model that is applied to bi-temporal data, and Wang et al. [18] extended STPSA to multi-temporal data. Li et al. [7] proposed a Markov Random Field based model to map forest cover from Moderate Resolution Imaging Spectroradiometer (MODIS), and Zhang et al. [19] extended this model by using both the 240 m and 480 m spatial resolution MODIS bands. Generally, STD_STSRM predicts all fine pixel labels based on the spatial-temporal dependence model, and removes the need for a change detection threshold. Considering this advantage, this paper is focused on STD_STSRM. All STD_STSRM methods predict the fine resolution land cover map based on the a-priori spatial distribution and temporal transition information about land covers. The a-priori information in STD_STSRM includes the a-priori spatial information that is used to predict the land cover spatial patterns at the fine resolution pixel scale and a-priori temporal information that is used to model the temporal transitions between the class labels in the predicted and the input pre-or post-dated land cover maps. The a-priori spatial models have been studied in SRM researches including the spatial dependent model [20][21][22][23], the direct mapping model [24], the geostatistical model [25,26], the multi-point simulation based model [27], the learning based model [28,29], the adaptive model [30], and the linear spatial distribution model [31,32]. In these models, [20][21][22][23][24] are suitable for predicting spatial patterns of patches that are larger than the coarse resolution pixel, [25][26][27] are suitable for predicting spatial patterns of patches that are smaller than the coarse resolution pixel, [31,32] are suitable for linear patch, and [28][29][30] are suitable for patches with different spatial patterns, respectively. These a-priori spatial models used in SRM can be directly applied in STD_STSRM. In contrast, the study on the a-priori temporal information in STD_STSRM is very rare. Challenges in STD_STSRM remain, especially if seeking to use a time series of images. First, in many cases, more than one land cover map of the same scene is available in the area of interest. Since these land cover maps may record the land covers at different dates, incorporating as much fine resolution land cover maps as possible is very useful in accurately mapping land cover trajectories with STSRM. There are three scenarios for the acquisition time of the fine resolution maps and coarse resolution image in STSRM. The first case is that fine resolution maps which pre-and post-date the coarse resolution image are available, the second case is that only a fine resolution map which pre-dates the coarse resolution image is available, and the third case is that only a fine resolution map which post-dates the coarse resolution image is available. For the first case, two available fine resolution maps can be used as the a-priori temporal information to constrain STSRM. By contrast, for the second and third cases, only one fine resolution map is available as the a-priori temporal information. Existing STSRM methods, including both STD_STSRM and CDA_STSRM models, are focused on the second and third cases in which only one fine resolution map is available [5,[9][10][11][12][13][14][15][16][17][18][33][34][35], but fail to explore the first case. 
A thorough study on the three cases should be developed in order that the entire land cover change trajectory from remotely sensed image series can be extracted. Second, the nature of the temporal dependence is often spatially variable. In STD_STSRM models, the temporal dependence is used to link the fine resolution pixels in different date. All existing STD_STSRM models consider the temporal dependence globally, and the intensity of the temporal dependence is, therefore fixed across the entire image. The global temporal dependence model is used for its simplicity but may not be sufficient to model the spatial and temporal variation that may exist [7]. Generally, the land cover temporal dependence is related to the land cover transition probability. For instance, in the case of forest change, the transition probability from forest to nonforest is usually spatially variable depending on many physical and economic factors such as accessibility and land value [36]. Assuming the transition probability from forest to nonforest is invariant in the area covered by the entire image using the global temporal dependence model is not plausible. It is desirable to change the global temporal dependence to the local scale and to accommodate the spatial variation of its intensity. Third, land cover fraction errors caused by the soft classification procedure have to be handled in STD_STSRM. In general, the land cover fraction images that are spectrally unmixed from remotely sensed images are inputted in STD_STSRM. As soft classification continues to be an open problem, fraction images errors are often unavoidable in practice. The errors might lead a decrease in the accuracy for existing STSRM methods which constrain the result land cover map in a way that the class fractions within each coarse resolution pixel should be unchanged between the input coarse resolution fraction images and the output fine resolution land cover maps [10]. STD_STSRM should be developed not to strictly preserve the class fractions from the input coarse resolution class fraction images into the result fine resolution land cover map and act to eliminate fraction errors caused by soft classification. In this paper, a novel Local STD_STSRM (LSTSRM) model is proposed to generate fine resolution land cover maps from a series of coarse resolution class fraction images and a few fine resolution land cover maps. Unlike traditional STSRM models that consider only one fine resolution land cover map, LSTSRM can use both fine resolution maps that pre-and post-date the coarse resolution image to fully explore information in all available datasets and constrain the STSRM problem. In the proposed model, the temporal dependence intensity may vary from pixel to pixel at the fine resolution scale. In addition, the proposed model is developed not to strictly preserve the class fractions from the input class fraction images into the result land cover map, in order to eliminate fraction errors caused by the soft classification procedure. The remainder of this paper is organized as follows. Section II introduces the LSTSRM method. Section III examines the performance of LSTSRM using synthetic data experiment, real Sentinel-2 images experiment and real MODIS images experiment. Section IV discusses the influencing factors of the proposed method. Section V concludes this paper. A. 
A. The LSTSRM Framework

LSTSRM takes as input coarse resolution class fraction images F t at the observation time t (t=1,2,…,T), as well as the fine resolution land cover maps X t-1 and/or X t+1 that cover the same geographical region but are obtained at times that pre- and/or post-date F t , and outputs the fine resolution land cover map X t at time t. The coarse resolution fraction images F t can be produced from a remotely sensed image using various soft classification procedures. The fine resolution maps X t-1 and X t+1 can often be produced from fine resolution remotely sensed images by, for example, classification or manual digitization. Although a series of fine resolution land cover maps may be available and can be incorporated in LSTSRM, only those acquired temporally closest to, and pre- or post-dating, F t are selected as X t-1 and/or X t+1 , because land cover datasets are temporally more dependent if they are obtained at temporally closer times [6,18]. F t contains I × J × C pixels (I × J is the number of coarse resolution pixels and C is the number of land cover classes). X t , X t-1 and X t+1 each contain I × s × J × s pixels, where s is the scale factor and each coarse resolution pixel contains s × s fine resolution pixels. Each pixel in X t , X t-1 and X t+1 has a land cover class label in C. In LSTSRM, three scenarios are considered (Fig. 1): both X t-1 and X t+1 are available (case 1), only X t-1 is available (case 2), and only X t+1 is available (case 3). Based on Bayesian theory, the optimal X t can be expressed as:

X t = argmax P(X t |F t ,X t-1 ,X t+1 ) (1)

Fig. 1. The spatial and temporal neighborhoods for a fine resolution pixel. Case 1: both X t-1 and X t+1 are available. Case 2: only X t-1 is available. Case 3: only X t+1 is available. The fine resolution pixel highlighted in black in X t is the target pixel. The fine resolution pixels highlighted in blue and the coarse resolution pixels highlighted in yellow in X t are the spatial neighborhood pixels. The fine resolution pixels highlighted in red in X t-1 and X t+1 are the temporal neighborhood pixels.

where P(X t |F t ,X t-1 ,X t+1 ) is the posterior probability of X t , given F t , X t-1 and/or X t+1 . The Markov random field can model contextual information by characterizing the local statistical dependence among pixels in terms of a conditional prior distribution [37]. The Markov random field simplifies the global model in (1) to a model of local image properties, and largely reduces the model complexity to make the maximum a posteriori (MAP) model solvable. The optimal X t , given F t , X t-1 and/or X t+1 , can be formulated by applying the MAP rule, i.e., by solving the maximization problem:

X t = argmax (1/Z) exp(-U(X t |F t ,X t-1 ,X t+1 )) (2)

where U(X t |F t ,X t-1 ,X t+1 ) is the posterior energy function of X t and Z is a normalizing constant. Based on the Markov random field approach, the search for the optimal X t is equivalent to minimizing the posterior energy function, which is specified to model the spatial and temporal dependencies of a pixel on its spatial and temporal neighborhoods:

U(X t |F t ,X t-1 ,X t+1 ) = US(X t ) + UT(X t |X t-1 ,X t+1 ) + UF(F t |X t ) (3)

where US(X t ) and UT(X t |X t-1 ,X t+1 ) are the spatial and temporal constraint functions, and UF(F t |X t ) is the class fraction constraint function that represents the inconsistency between F t and X t .

B. The Spatial and Temporal Constraint Functions

The LSTSRM spatial and temporal constraint functions are used to incorporate the a-priori land cover spatial and temporal dependence models according to the spatial-temporal neighborhood system (Fig. 1).
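To make the energy decomposition in (3) concrete, a minimal Python sketch of the per-pixel energy is given below; it is an illustration written for this text, with the weights alpha1, alpha2, beta and gamma (the last one for the fraction term) chosen arbitrarily rather than taken from the paper.

def pixel_energy(label, intra, inter, temporal, frac_term,
                 alpha1=1.0, alpha2=1.0, beta=1.0, gamma=1.0):
    # Local posterior energy of assigning `label` to one fine pixel.
    # intra, inter, temporal: per-class dependence values for this pixel;
    # frac_term: per-class fraction-inconsistency term of its coarse pixel.
    # Dependences enter with a negative sign so that stronger spatial and
    # temporal agreement lowers the energy; lower energy is better.
    return (-alpha1 * intra[label]
            - alpha2 * inter[label]
            - beta * temporal[label]
            + gamma * frac_term[label])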
Each fine resolution pixel in X t highlighted in black is spatially dependent on its spatial neighborhood fine resolution pixels marked in blue and the spatial neighborhood coarse resolution pixels highlighted in yellow in X t , and is temporally dependent on its temporal neighborhood fine resolution pixels highlighted in red in X t-1 and X t+1 .

1) Spatial Constraint Function

The LSTSRM spatial constraint function is based on the spatial dependence principle, which is the tendency of spatially proximate observations of a given property to be more similar than distant observations. It assumes that a fine resolution pixel and its neighboring fine resolution pixels have high probabilities of being labeled with the same class. At present, there are two methods to describe the spatial dependence of pixels, namely intra-pixel spatial dependence and inter-pixel spatial dependence, which represent the spatial dependence within and between image pixels, respectively [20,38-40]. D S intra (c(p t ijk )=c) is the intra-pixel spatial dependence of fine resolution pixel p t ijk when it has a label of c (c=1,…,C), and D S inter (c(p t ijk )=c) is the inter-pixel spatial dependence of fine resolution pixel p t ijk when it has a label of c. The spatial energy function US(X t ) can be written as:

US(X t ) = -Σ ijk [α1 D S intra (c(p t ijk )) + α2 D S inter (c(p t ijk ))] (4)

where α1 and α2 define the weights of the intra- and inter-pixel spatial dependence values. In Eq. (4), -1 is multiplied because LSTSRM seeks the minimum value as the optimal solution. The determination of the intra-pixel and inter-pixel spatial dependence is explained as follows.

1.1) Intra-pixel Spatial Dependence: The intra-pixel spatial dependence is computed at the sub-pixel/sub-pixel (fine resolution pixel/fine resolution pixel) scale, meaning the spatial dependence of a sub-pixel (highlighted in black in Fig. 1) is determined by its neighboring same-class sub-pixels (highlighted in blue in Fig. 1). The intra-pixel spatial dependence D S intra (c(p t ijk )=c) is determined by the labels of the neighboring eight fine resolution pixels in the neighborhood system:

D S intra (c(p t ijk )=c) = Σ p t l ∈ N(p t ijk ) δ(c(p t l ),c) (5)

where N(p t ijk ) is the fine resolution spatial neighborhood, and p t l is a neighborhood fine resolution pixel in N(p t ijk ). c(p t ijk ) and c(p t l ) are the land cover class labels of fine resolution pixels p t ijk and p t l . δ(c(p t l ),c) equals 1 if c(p t l ) and c are the same and 0 otherwise.

1.2) Inter-pixel Spatial Dependence: The inter-pixel spatial dependence is calculated at the sub-pixel/pixel scale, meaning that the inter-pixel spatial dependence of a sub-pixel (highlighted in black in Fig. 1) is determined by the same-class fine resolution pixels in the neighboring coarse resolution pixels (highlighted in yellow in Fig. 1) [39].

Notation: F t denotes the input coarse resolution class fraction images at time t; X t the predicted fine resolution land cover map at time t; X t-1 and X t+1 the input fine resolution land cover maps at times t-1 and t+1; p t ijk the k th fine resolution pixel in coarse resolution pixel (i,j); D S intra (c(p t ijk )=c) and D S inter (c(p t ijk )=c) the intra- and inter-pixel spatial dependence of p t ijk when it has a label of c (c=1,…,C, where C is the total number of classes in the image); D T G (c(p t ijk )=c) the global temporal dependence intensity of p t ijk if it belongs to the c th class; and D T L (c(p t ijk )=c) the corresponding local adjust factor.
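A small Python sketch of the intra-pixel term in Eq. (5) follows; it simply counts the same-class labels among the eight fine-resolution neighbours and is an illustration written for this text (the array layout and boundary handling are assumptions, not taken from the paper).

import numpy as np

def intra_pixel_dependence(labels, i, j, c):
    # Count how many of the eight fine-resolution neighbours of (i, j)
    # carry class label c, as in the delta-sum of Eq. (5).
    H, W = labels.shape
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] == c:
                count += 1
    return count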
Spatial interpolation algorithms such as inverse distance weighted function or Kriging can be used to represent the relationship between sub-pixel and pixels [39,41]. By spatially interpolating the neighborhood coarse pixel class fractions of each class to the sub-pixel scale, the inter-pixel spatial dependence D S inter (c(p t ijk )=c) is defined as: where fc(p t ijk ) is the spatially interpolated fractions of the c th class at sub-pixel p t ijk . The value of fc(p t ijk ) is related to the c th class fractions in the neighborhood coarse pixels, the distance between p t ijk and the neighborhood coarse pixels, and the spatially interpolation method being used [42]. 2) The Spatial and Temporal Constraint Functions The LSTSRM fine pixel temporal dependence intensity is defined according is the global temporal dependence intensity of fine resolution pixel p t ijk if p t ijk belongs to the c th class, and D T L (c(p t ijk )=c) is the local adjust factor of fine resolution pixel p t ijk if p t ijk belongs to the c th class: where β is the temporal constraint function weight parameter. The global temporal dependence intensity D T G (c(p t ijk )=c) is assigned to 1 if the k th fine resolution pixel in coarse resolution pixel (i,j) belongs to the c th class in X t-1 or X t+1 , and is assigned to 0 otherwise. Therefore, the fine resolution pixels in X t and X t-1 (or X t+1 ) are temporally dependent if they have the same class label, and are temporally independent if they have different class labels. The local adjust factor D T L (c(p t ijk )=c) depends not only on the fine resolution class labels in X t-1 and/or X t+1 , but also on the coarse resolution class fractions at times of t-1, t and t+1. The calculation of the local adjust factor D T L (c(p t ijk )=c) according to different input data are explained as follows. 2.1) Both X t-1 and X t+1 are Available: Before the calculation of the local adjust factor D T L (c(p t ijk )=c), the fine resolution pixels in coarse pixel (i,j) in X t are grouped into different sets according to the spatial distribution of fine resolutions of the c th class in X t-1 and X t+1 , assuming different set of fine pixels may have different temporal dependence and different local adjust factors. An example on grouping different fine resolution pixel sets is shown in Fig. 2. Fig. 2 (a) and (c) shows fine pixels that belong or not belong to c th class in the coarse pixel (i,j) in X t-1 and X t+1 . By comparing Fig. 2 (a) and (c), four sets of fine resolution pixels in coarse resolution pixel (i,j), which are p t-1&t+1 ij,c , p t-1 ij,c , p t+1 ij,c and p non ij,c , are defined in Fig. 2(b). The detailed definitions are given in Fig. 2. Let f(p t ij,c ) be the fractions of the c th class in coarse resolution pixel (i,j) in F t that is unmixed using soft classification. Let ij,c ) and f(p t+1 ij,c ) be the fractions of the c th class in coarse resolution pixel (i,j), which are calculated by dividing the number of pixels in p t-1&t+1 ij,c and p t+1 ij,c in coarse resolution pixel (i,j) by s 2 (s is the scale factor). The local adjust factor is quantified by comparing . In LSTSRM, a simple rule is used in which a fine pixel that belongs to the c th class is more probably to be in the set p t-1&t+1 ij,c than in the sets p t-1 ij,c and p t+1 ij,c . In addition, a fine resolution pixel is not likely to belong to the c th class if this pixel does not belong to the c th class in X t-1 and X t+1 , and the local adjust factor for fine resolution pixels in p non ij,c is set to 0. 
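The inverse-distance-weighted interpolation mentioned above for the inter-pixel term can be sketched in a few lines of Python; this is an illustration written for this text, with the power parameter and the set of neighbouring coarse pixels chosen by the caller rather than specified by the paper.

import numpy as np

def idw_inter_pixel_dependence(coarse_fracs, centers, subpixel_xy, c, power=2.0):
    # coarse_fracs: (N, C) class fractions of the N neighbouring coarse pixels
    # centers:      (N, 2) coordinates of those coarse-pixel centres
    # subpixel_xy:  (2,)   coordinate of the target fine-resolution pixel
    # Returns the class-c fraction interpolated to the sub-pixel location.
    d = np.linalg.norm(centers - subpixel_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum()
    return float(np.dot(w, coarse_fracs[:, c]))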
Based on this rule, the local adjust factor according to the following cases is calculated. If should all belong to the c th class in X t , and the corresponding local adjust factor equals to 1, which is the maximal local adjust factor value. If f(p t ij,c ) is lower than f(p t-1&t+1 ij,c )+f(p t-1 ij,c )+f(p t+1 ij,c ) but higher than f(p t-1&t+1 ij,c ), the fine resolution pixels in p t-1&t+1 ij,c should belong to the c th class and the corresponding local adjust factor remains to be 1, whereas the fine resolution pixels in p t- 1 ij,c and p t+1 ij,c are not definitely to belong to the c th class and the corresponding local adjust factor is lower than 1. More specifically, the probability of fine resolution pixels in p t-1 ij,c and p t+1 ij,c belonging to the c th class is proportional to the difference between f(p t ij,c ) and f(p t-1&t+1 ij,c ) in Eqs (8)(9). In addition, the probability of fine resolution pixels in p t-1 ij,c and p t+1 ij,c belonging to the c th class decreases with the time interval between X t and X t-1 or between X t and X t+1 , assuming the temporal dependence decreases with the time interval between images. Let Δt(X t-1 , X t ) and Δt(X t , X t+1 ) be the time interval from the acquisition time from X t-1 to X t and from the acquisition time from X t to X t+1 , respectively. The local adjust factor for the sets p t-1 ij,c and p t+1 ij,c are calculated as ij,c ), the fine resolution pixels in the set p t-1&t+1 ij,c are temporally dependent with the corresponding fine pixels in X t-1 and X t+1 , and the probability that fine resolution pixels in p t-1&t+1 ij,c belongs to the c th class is proportional to the value of f(p t ij,c ): In contrast, the fine resolution pixels in the sets p t-1 ij,c and p t+1 ij,c are temporally independent, and the corresponding local adjust factor equals to 0. 2.2) Only X t-1 is Available: Only the number of fine resolution pixels of p t-1 ij,c in X t-1 is considered. If f(p t ij,c )> f(p t-1 ij,c ), the local adjust factor equals to 1 if p t ijk belongs to p t-1 ij,c and 0 otherwise. If f(p t ij,c )≤ f(p t-1 ij,c ), the local adjust factor is calculated as: if p t ijk belongs to p t-1 ij,c and 0 otherwise. 2.3) Only X t+1 is Available: Only the number of fine resolution pixels of p t+1 ij,c in X t+1 is considered. If f(p t ij,c )> f(p t+1 ij,c ), the local adjust factor equals to 1 if p t ijk belongs to p t+1 ij,c and 0 otherwise. If f(p t ij,c )≤ f(p t+1 ij,c ), the local adjust factor is calculated as: if p t ijk belongs to p t+1 ij,c and 0 otherwise. Although only X t-1 or X t+1 is considered in cases (2) and (3), the LSTSRM temporal dependence model is different from those in [7,11,17,18,43], because the local information is considered in LSTSRM but not in the previous studies. C. Fraction Constraint Function The land cover fraction constraint function represents the difference between the class fractions in the input fraction images F t and the final fine resolution map X t : ff (13) where f t ij,F t is a C × 1 vector of different class fraction values in the coarse resolution pixel (i,j) in F t . f t ij,X t is a C × 1 vector of different class fraction values in the coarse resolution pixel (i,j) in X t calculated by dividing the number of fine resolution pixels of different classes in coarse resolution pixel (i,j) by s 2 in X t . 2  indicates the L2 norm. D. Fine Resolution Map Initialization and Updating The flowchart of LSTSRM is shown in Fig. 3. 
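A minimal Python version of the per-coarse-pixel fraction term in Eq. (13) is sketched below; it is an illustration written for this text, in which the realised fractions are recomputed from the current fine labels and compared with the input fractions by a squared L2 norm.

import numpy as np

def fraction_constraint(frac_input, labels_block, n_classes):
    # frac_input:   (C,) class fractions of one coarse pixel in F_t
    # labels_block: (s, s) fine-resolution labels currently assigned inside it
    s2 = labels_block.size
    realised = np.array([(labels_block == c).sum() / s2 for c in range(n_classes)])
    # squared L2 norm between input and realised fractions, as in Eq. (13)
    return float(np.sum((frac_input - realised) ** 2))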
An initial fine resolution land cover map is used as input to LSTSRM at the outset. The initial map is produced according to the land cover class fraction images. The fine resolution pixels are allocated class labels randomly in a manner that maintains the class proportional information conveyed by fraction values [44]. The class labels in the initial fine resolution land cover map are then updated iteratively. The iterative conditional mode, a simple gradient-based optimization algorithm, was applied for updating the fine resolution pixel class labels. III. EXPERIMENT AND RESULTS The proposed LSTSRM model was assessed in three experiments. The first used the National Land Cover Database (NLCD) of U.S.A. [45], the second used Sentinel-2 and Google Earth images, and the third used MODIS and Landsat images. In each experiment, in order to explore the influence of input fine resolution map on the proposed method, LSTSRM using two fine resolution maps (i.e., LSTSRM t-1&t+1 , the superscripts ' t-1 ' and ' t+1 ' indicate the fine resolution land cover maps that pre-and post-dates the prediction map) was compared with LSTSRM using only one fine resolution map (i.e., LSTSRM t-1 and LSTSRM t+1 ). In addition, the proposed method using two fine resolution maps but using global land cover temporal dependence model (i.e., GSTSRM t-1&t+1 ) was also compared. In GSTSRM t-1&t+1 , the local adjust factor, which may vary for different fine pixels and for different classes in LSTSRM, is set to 1 for all fine resolution pixels and for different classes in the entire image in Eq. (7). Several popular SRM algorithms were used for comparison including the pixel swapping algorithm based SRM (PSA) [20], the Kriging interpolation based SRM (KI) [41,46,47], the Hopfield neural network based SRM (HNN) [25], the spatial-temporal pixel swapping algorithm (STPSA) [17], the subpixel land cover change mapping algorithm (SLCCM) [5], and the SRM based on spatial-temporal dependence from a former map (SRM_STD) [34]. Among different methods, PSA, KI and HNN are applied to mono-temporal coarse resolution land cover fraction images, which are referred as mono-temporal SRM (MTSRM) methods, and STPSA, SLCCM and SRM_STD are applied to coarse resolution land cover fraction images and a fine resolution land cover map, which are referred as spatial-temporal SRM (STSRM) methods. The LSTSRM weight parameters in all experiments were set through trial and error. A. Simulated NLCD Experiment 1) Data Preparation The 30 m resolution NLCD maps were adopted in this experiment. NLCD is a land cover classification scheme of Albers Equal Area projection, which has been applied consistently at a spatial resolution of 30 m across the conterminous USA primarily on the basis of Landsat satellite data. The study area is located in Charlotte (33º7'00"N and 81º3'00"W), U.S.A. The NLCD maps acquired in 2001, 2006 and 2011, each contains 800 × 800 pixels in size, were used as the fine resolution land cover maps ( Fig. 4(a-c)). The original NLCD maps contain sixteen classes according to the NLCD classification system modified from the Anderson Land Cover Classification System [45]. The original sixteen classes were reclassified to eight classes, namely water, developed, barren, forest, shrubland, herbaceous, planted/cultivated, and wetlands in this experiment. This This approach can produce error-free fraction images compared with those produced by soft classification [20,24]. 
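The iterative conditional modes update used above for label optimisation can be summarised by the following Python skeleton; it is a generic sketch written for this text (the energy_of callback, sweep order, and stopping rule are assumptions, not the paper's exact implementation).

def icm_update(labels, energy_of, n_classes, max_iter=10):
    # labels: 2-D numpy array of current fine-resolution class labels
    # energy_of(i, j, c): local posterior energy of giving class c to pixel (i, j)
    H, W = labels.shape
    for _ in range(max_iter):
        changed = 0
        for i in range(H):
            for j in range(W):
                energies = [energy_of(i, j, c) for c in range(n_classes)]
                best = min(range(n_classes), key=lambda c: energies[c])
                if best != labels[i, j]:
                    labels[i, j] = best
                    changed += 1
        if changed == 0:  # converged: a full sweep changed no label
            break
    return labels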
2) Results The resulting maps in the zoomed area from different methods are shown in Fig. 5. The KI map contained unsmoothed boundaries because it discarded the intra-pixel spatial dependence which can help to generate locally smoothed boundaries in the result (Fig. 5(b)). The PSA map failed to reconstruct the holistic land cover spatial patterns because it discarded the land cover inter-pixel spatial dependence (Fig. 5(c)). The PSA map contained many land cover patches with small size represented as speckle-like artifacts. In contrast, the HNN map eliminated the speckle-like artifacts (Fig. 5(d)). This difference lays in the fact that KI and PSA must preserve class fractions in the resulting map, and the land covers with small area proportion or fraction in a coarse resolution pixel would be aggregated to small land cover patches represented as speckle-like artifacts. In contrast, HNN does not strictly preserve the class fractions from the input class fraction images into the result land cover map, and would eliminate the speckle-like artifacts due to spatial smoothing effect [20,25]. However, since KI, PSA and HNN only considers the land cover spatial information but neglect land cover temporal information in the land cover map that preand/or post-dates the prediction date, the spatial details were not represented in the result maps. The STSRM methods, including STPSA, SLCCM, SRM_STD and LSTSRM, preserved the spatial details of land cover classes in Fig. 5 (Fig. 5(e-n)). All these maps were very similar to the reference NLCD 2006 map. The linear developed class was connected in these maps, and the shapes of objects such as herbaceous, planted/cultivated, and water were reconstructed. 6 shows the error maps from different methods. The MTSRM algorithms of KI, PSA and HNN generated more error pixels (highlighted in red and blue in Fig. 6(b-d)) compared with those STSRM methods in Fig. 6(e-n). In addition, the MTSRM generated more wrongly-labeled unchanged pixels highlighted in blue than wrongly-labeled changed pixels highlighted in red in Fig. 6(b-d)), whereas STSRM eliminated most wrongly-labeled unchanged pixels highlighted in blue in Fig. 6(e-n), showing that incorporating temporal dependence in SRM can reduce the commission error especially for unchanged pixels. Among all the result maps, the LSTSRM t-1&t+1 contained the least wrongly-labeled fine pixels in Fig. 6(n), and the wrongly-labeled fine pixels highlighted in the circles in other STSRM maps (Fig. 6(e-l)) were eliminated in the LSTSRM t-1&t+1 map. This result not only shows that the proposed LSTSRM t-1&t+1 increased the accuracy than the existing STSRM algorithms of STPSA, SLCCM and SRM_STD, but also shows that the proposed STSRM using two fine maps is superior to that using only one fine map, and the proposed STSRM using local temporal dependence model is superior to that using global temporal dependence model. The overall accuracies of different methods are shown in Table II. The overall accuracies of MTSRM methods were lower than 81%, whereas those of STSRM methods were higher than 89%. The accuracies of STPSA t-1 , SLCCM t-1 and SRM_STD t-1 were similar, and the accuracies of STPSA t+1 , SLCCM t+1 and SRM_STD t+1 were similar, showing that the input fine resolution map plays a key role for STPSA, SLCCM and SRM_STD. 
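The overall, producer's and user's accuracies reported in these comparisons can be computed from a confusion matrix; a short Python helper is given below as an illustration written for this text (the reference and predicted maps are assumed to be integer-labelled arrays of equal shape).

import numpy as np

def accuracy_summary(reference, predicted, n_classes):
    # Build the confusion matrix: rows = reference labels, columns = predictions.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (reference.ravel(), predicted.ravel()), 1)
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # 1 - omission error
    users = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)      # 1 - commission error
    return overall, producers, users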
In addition, the overall accuracy for LSTSRM t-1 was higher than those of STPSA t-1 , SLCCM t-1 and SRM_STD t-1 , and the overall accuracy for LSTSRM t+1 was higher than those of STPSA t+1 , SLCCM t+1 and SRM_STD t+1 . It shows that LSTSRM improves the accuracy compared with STPSA, SLCCM and SRM_STD when only one fine map is used in STSRM. LSTSRM t-1&t+1 generated the highest overall accuracy among all methods, showing the advantage of the proposed method. 1) Data Preparation The LSTSRM was tested using real Sentinel-2 remotely sensed images in this experiment. Sentinel-2 was launched by the European Space Agency in 2015, and can provide global acquisitions of fine resolution multi-spectral images with a fine revisit frequency. The Sentinel-2 image is useful in land cover mapping due to its appealing properties (10 days at the equator with one satellite, and 5 days with 2 satellites which result in 2-3 days at mid-latitudes) and the free access. In this experiment, Sentinel-2 image was utilized to map land covers in an urban area located in Wuhan (30°27′30″N and 114°32′30″E), Hubei province, China. The Sentinel-2 image acquired on September 7 2016 with four 10 m spatial resolution Sentinel-2 bands (blue, green, red and infrared bands) was used to generate land cover map in the study area (Fig. 7(d)). A Google Earth image acquired on September 26 2016 was digitized to a 2 m spatial resolution land cover map for accuracy assessment (Fig. 7(b)). Two fine resolution Google Earth images that acquired on February 20 2016 and December 20 2017, respectively, were digitized to 2 m spatial resolution land cover maps as the STSRM input ( Fig. 7(a),(c)). The study area covers 400 × 400 Sentinel-2 pixels, which correspond to 2000 × 2000 fine resolution pixels in the input and reference maps, with a scale factor s = 5. There are three land cover types, which are water, vegetation and impervious/bareland, contained in the input and reference land cover maps. 2) Results Fig . 8 shows the result maps and zoomed images from different methods. Different to the NLCD experiment which used error-free coarse spatial resolution land cover fraction images, this experiment used fraction images that were unmixed from the Sentinel-2 image which inevitably contained errors in Fig. 8. The zoomed image for KI (b) and PSA (c) contained many speckle-like artifacts which were resulted from soft classification error. For instance, if a coarse pixel does not contain pixels of water class and the unmixed water fraction is 12% in this coarse pixel, then a total number of 5 2 ×12% =3 (5 is the scale factor) fine pixels are labeled as water class within this coarse pixel, which may be represented as speckle-like artifacts since these methods must preserve class fractions in the resulting map. KI and PSA preserved class fractions in the resulting map, resulting in speckle-like artifacts due to soft classification errors in the class fraction images. HNN eliminated the speckle-like artifacts because it had the spatial smoothing effect and did not strictly preserve the class fractions from the input class fraction images into the result land cover map. However, the impervious&bareland patch highlighted by the ellipse in the zoomed area in Fig. 8(d) was wrongly labeled as water, and the detailed spatial pattern of the linear impervious&bareland patch highlighted by the circle was not reconstructed. Among the STSRM results in Fig. 8, the STPSA, SLCCM and SRM_STD (Fig. 
8(e-g),(i-k)) contained a large number of speckle-like artifacts due to soft classification error, whereas LSTSRM ( Fig. 8(h),(l),(n)) and GSTSRM (Fig. 8(m)) eliminated these errors due to spatial smoothing effect. Similar to HNN, LSTSRM and GSTSRM eliminated the speckle-like artifacts because they have the spatial smoothing effect and do not to strictly preserve the class fractions from the input class fraction images in the result land cover map. The LSTSRM and GSTSRM maps in Fig. 8 (h) and (l-n) were much similar to the reference map than the HNN map in Fig. 8(d). This is because LSTSRM and GSTSRM incorporated land cover temporal information from the input land cover map whereas HNN did not. As soft classification error is usually unavoidable in real applications, LSTSRM and GSTSRM would be more suitable for land cover mapping of image series compared with STPSA, SLCCM and SRM_STD in practice. In the zoomed images, LSTSRM t-1 , LSTSRM t+1 and GSTSRM t-1&t+1 failed to reconstruct the linear impervious&bareland patch highlighted by the circles in Fig. 8(h),(l) and (m). LSTSRM t+1 erroneously labeled a part of impervious&bareland as water highlighted by the ellipse in the zoomed image for Fig. 8(l). LSTSRM t-1 and GSTSRM t-1&t+1 erroneously labeled a part of impervious&bareland as vegetation highlighted by the circle in the zoomed image for Fig. 8 (h), (m). LSTSRM t-1&t+1 correctly labeled the impervious&bareland patch highlighted by the ellipse and reconstructed most parts of the linear impervious&bareland patch highlighted by the circles in the zoomed image for (n), and was similar to the reference image. The overall accuracy, producer's and user's accuracies of different methods are presented in table III. The overall accuracies for KI, PSA, STPSA, SLCCM and SRM_STD, which strictly preserve the class fractions from the input class fraction images in the result land cover map, were lower than 80%, showing that the soft classification error strongly affects these MTSRM and STSRM methods. HNN had an overall accuracy of about 85%, and LSTSRM and GSTSRM increased the overall accuracy to higher than 93%. Among LSTSRM and GSTSRM, for the water class, LSTSRM t+1 has the highest producer's accuracy but the lowest user's accuracy served as a high commission error of water. This is shown in the zoomed image of (l) in which some impervious&bareland pixels were wrongly labeled as water highlighted by the ellipse in LSTSRM t+1 . In addition, among LSTSRM and GSTSRM and for the water class, GSTSRM t-1&t+1 has the highest user's accuracy, but the lowest producer's accuracy served as a high omission error of water. For the vegetation to impervious&bareland class which have a large degree of land cover change in Fig. 7(e-f), LSTSRM t-1&t+1 has the highest producer's and user's accuracies served as the lowest omission and commission errors for these two classes. LSTSRM t-1&t+1 has the highest overall accuracy, showing the advantages of the proposed method. C. MODIS Experiment 1) Data Preparation The LSTSRM was tested using real MODIS image in this experiment. The study area is located near Sorriso (12º33'00"S and 55º42'00"W) in Mato Grosso State, Brazil. This area was mostly covered by tropical forests but has suffered from deforestation in recent years [8]. Three Landsat TM images (path 226, row 069) acquired on July 12 2002, July 23 2003 and June 23 2004 were downloaded from the USGS website. 
Data in six bands (the 120 m thermal infrared band was excluded) at the 30 m spatial resolution with the Universal Transverse Mercator projection were used. The three Landsat images were classified at a 30 m spatial resolution ( Fig. 9 (a-c)). Two land cover classes, forest and nonforest, were considered in this experiment. The endmembers of each class were manually selected from each Landsat image, and the maximum likelihood classifier was applied to generate the fine resolution forest/nonforest maps each year. A 8-day surface reflectance MODIS product (MOD09A1) datasets comprising seven spectral bands (620 nm -2055 nm) with a spatial resolution of 463 m acquired in July 2003 was used ( Fig. 9(d)). The MODIS image was re-projected to the UTM coordinate system and resampled to a spatial resolution of 450 m using the nearest neighbor interpolation which may not over-smooth the resized image. The study area covers 300 × 300 MODIS pixels, which correspond to 4500 × 4500 Landsat pixels, with a scale factor s=15. The same MTSRM and STSRM methods that used in the NLCD and Sentinel-2 experiments were used in this experiment. The multiple endmember spectral mixture analysis was applied to the MODIS image to generate coarse resolution land cover fraction images. STSRM incorporated the 2002 and/or 2004 land cover maps in Fig. 9(a), (c) as ancillary data. The accuracies of different methods were assessed using the 30 m resolution 2003 land cover maps ( Fig. 9(b)). 2) Results The zoomed areas of the result maps from different methods were shown in Fig. 10. Similar to the Sentinel-2 experiments, the KI, PSA STPSA, SLCCM and SRM_STD maps contained speckle-like artifacts due to soft classification errors in the class fraction images in Fig. 10. The HNN maps eliminated speckle-like artifacts due to spatial smoothing effect. However, HNN generated disconnected forest patches highlighted by the circle in Fig. 10(d), and the shape of the forest patch was smoothed and dissimilar to that in the reference map. By contrast, the LSTSRM and GSTSRM maps were similar to the reference map than those generated from other methods. The shape of the forest patch was mostly reconstructed by LSTSRM and GSTSRM. Both LSTSRM t-1 and LSTSRM t+1 generated disconnected forest patch that were highlighted by the circle in Fig. 10 (h) and (l), whereas GSTSRM t-1&t+1 and LSTSRM t-1&t+1 generated more connected forest patches in Fig. 10 (m-n), showing incorporating two fine maps that pre-and post-dates the predicting time usually increase the accuracy than those using only one fine image. In addition, the LSTSRM t-1&t+1 was more similar to the reference image than GSTSRM t-1&t+1 such as those highlighted by the ellipse and circle. The accuracies of different methods are shown in table IV. The overall accuracies of KI, PSA, STPSA, SLCCM and SRM_STD were lower than 90%, whereas that of HNN, LSTSRM and GSTSRM were higher than 90%. It shows that the soft classification error has a strong effect on these MTSRM and STSRM methods. The overall accuracies of LSTSRM and GSTSRM were all higher than 95%. Among LSTSRM and GSTSRM, for the forest class, GSTSRM t-1&t+1 has the highest producer's accuracy and the second lowest user's accuracy served as commission errors of forest. Among LSTSRM and GSTSRM and for the nonforest class, LSTSRM t+1 has the highest producer's accuracy but the lowest user's accuracy served as the highest commission error of nonforest. 
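The coarse-resolution fraction images used in this experiment come from multiple endmember spectral mixture analysis; as a much simpler stand-in that only illustrates the unmixing idea (it is not the method used in the paper), the following Python sketch estimates per-class fractions for one pixel by non-negative least squares against fixed endmember spectra.

import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    # spectrum:   (bands,) reflectance of one coarse pixel
    # endmembers: (bands, classes) matrix of assumed pure-class spectra
    fractions, _ = nnls(endmembers, spectrum)   # non-negative least squares
    total = fractions.sum()
    return fractions / total if total > 0 else fractions  # normalise to sum to 1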
For instance, the forest patch was erroneously labeled as nonforest highlighted by the ellipse in Fig. 10(l), resulting in a higher commission error of nonforest for LSTSRM t+1 . In contrast, the small forest patch which was erroneously labeled as nonforest highlighted by the ellipse in LSTSRM t+1 was correctly predicted in LSTSRM t-1&t+1 in Fig. 10(n). LSTSRM t-1&t+1 has relatively high producer's and user's accuracies for both forest and nonforest, and the highest overall accuracy among all methods. IV. DISCUSSION In this section, the similarity and difference between STSRM and two popular image fusion methods, which are spatial-temporal image fusion and hyper-spectral image super-resolution, are discussed. Then the influencing factor of the changed and unchanged pixels to the proposed LSTSRM is discussed. A. Comparison of STSRM, Spatial-temporal Image Fusion (STIF), and Hyper-spectral Image (HSI) Super-resolution With the development of remote sensing society, a huge number of remotely sensors have been launched recently. The optical remotely sensed images usually have a tradeoff between the spatial, temporal, and spectral resolutions, due to technical limitations factors and the orbit of the platforms. Various methods are proposed to fuse images of the same scene, using complementary information provided. LSTSRM was compared theoretically with STIF and HSI super-resolution. 1) Comparison of STSRM and STIF STIF is an approach that generates a fine resolution image for the date represented by a coarse resolution image by integrating the spatial and temporal information in a pair of fine and coarse resolution images of the same region acquired at other dates [49][50][51]. Both STSRM and STIF aim to overcome the limitation caused by the tradeoff between spatial and temporal resolutions of optical remotely sensed images. The main difference lays in that STIF predicts fine spatial-temporal resolution reflectance images or indices such as Normalized Difference Vegetation Index (NDVI) time-series [52] which can be used in applications such as the monitoring of vegetation seasonal change [53] and in the assessment of vegetation status [54]. In contrast, STSRM predicts fine spatial-temporal resolution land cover maps which can be used in applications such as land cover change analysis. STIF is more appropriate in the analysis based on image reflectance whereas STSRM is more appropriate in the analysis based on land cover types. 2) Comparison of STSRM and HSI Super-resolution HSI super-resolution is an approach that fuses coarse spatial resolution HSI with fine spatial resolution multispectral images or panchromatic images in order to obtain super-resolution (spatial and spectral) hyperspectral images [55][56][57]. HSI super-resolution aims to overcome the limitations caused by the tradeoff between spatial and spectral resolutions of optical remotely sensed images, using the complementary characteristics in the inference of images with fine spatial-spectral resolutions. There are two main differences between HSI super-resolution and STSRM. First, HSI super-resolution usually requires the input coarse spatial resolution HSI and fine spatial resolution multi-spectral images to be acquired at the same or close date so that land cover does not change between the acquisition dates of these images. In contrast, for STSRM, the input coarse spatial resolution class fraction images and the fine spatial resolution land cover map are derived from remotely sensed images that are acquired at different dates. 
Second, HSI super-resolution is used to predict fine spatial-temporal resolution images whereas STSRM directly outputs fine spatial-temporal resolution land cover maps. If the aim is to extract land cover information, a procedure of land cover classification is still need to be applied to image outputted from HSI super-resolution. B. Influence of Changed and Unchanged Pixels on LSTSRM The influence of the percentage of changed and unchanged pixels in the land cover maps on LSTSRM is explored. Take the NLCD experiment for example, table V showed the percentage of changed pixels for each class as well as the producer's and and LSTSRM t-1&t+1 , for classes with a low (<=5%) percentage of changed pixels such as developed and planted/cultivated during 2001-2011, the producer's and user's accuracies were higher than 97%, and for classes with a high (>50%) percentage of changed pixels such as barren and herbaceous classes, the producer's and user's accuracies were usually lower than 80%. This shows that the proposed method is more competent in predicting pixels with unchanged labels. Other land cover temporal models could be developed to deal with the complicated land cover change scenarios. V. CONCLUSION In this paper, a novel local adaptive dependence based spatial-temporal super-resolution mapping model was proposed. Unlike traditional STSRM models using only one fine resolution land cover map as ancillary data, the proposed LSTSRM model considers the fine resolution maps pre-and/or post-dates the coarse resolution cases, and develops the local temporal dependence model, in which the dependence intensity may vary from fine resolution pixel to fine resolution pixel. LSTSRM does not to strictly preserve the class fractions from the input class fraction images into the result land cover map, and can eliminate fraction errors caused by the soft classification procedure to some extent. The LSTSRM performance was validated using NLCD data, real Sentinel-2 imagery and real MODIS imagery by comparing with several popular SRM algorithms. Results showed that LSTSRM resulting maps eliminated most speckle-like artifacts. Moreover, the LSTSRM resulting maps maintained the connectivity for the linear shaped patches, and were closer to the reference maps than other methods. The proposed LSTSRM generated the highest overall accuracies in all the experiments. In addition, the proposed method using two fine resolution maps and using local temporal dependence model improved the accuracy by comparing with that using two fine resolution maps but using global temporal dependence model, and by comparing with that using only one fine resolution map and using local temporal dependence model. The producer's and user's accuracies were higher for unchanged classes than for changed classes for different methods in the NLCD experiment. Research focusing on using more fine resolution land cover maps and improving the local land cover transitions for changed land cover classes in STSRM should be studied in the future.
11,863.6
2019-03-29T00:00:00.000
[ "Environmental Science", "Mathematics" ]
PAK4 Functions in Tumor Necrosis Factor (TNF) α-induced Survival Pathways by Facilitating TRADD Binding to the TNF Receptor* PAK4 is a member of the group B family of p21-activated kinases. Its expression is elevated in many cancer cell lines, and activated PAK4 is highly transforming, suggesting that it plays an important role in tumorigenesis. Although most previous work was carried out with overexpressed PAK4, here we used RNA interference to knock down endogenous PAK4 in cancer cells. By studying PAK4 knockdown HeLa cells, we demonstrated that endogenous PAK4 is required for anchorage-independent growth. Because cell survival is a key part of tumorigenesis and anchorage-independent growth, we studied whether PAK4 has a role in protecting cells from cell death. To address this, we studied the role for PAK4 downstream to the tumor necrosis factor (TNF) α receptor. Although overexpressed PAK4 was previously shown to abrogate proapoptotic pathways, here we demonstrate that endogenous PAK4 is required for the full activation of prosurvival pathways induced by TNFα. Our results indicate that PAK4 is required for optimal binding of the scaffold protein TRADD to the activated TNFα receptor through both kinase-dependent and kinase-independent mechanisms. Consequently, activation of several prosurvival pathways, including the NFκB and ERK pathways, is reduced in the absence of PAK4. Interestingly, constitutive activation of the NFκB and ERK pathways could compensate for the lack of PAK4, indicating that these pathways function downstream to PAK4. The role for PAK4 in regulating prosurvival pathways is a completely new function for this protein, and the connection between PAK4 and cell survival under stress helps explain its role in tumorigenesis and development. Tumor necrosis factor (TNF) 2 ␣ was originally discovered as an anticancer cytokine that can induce apoptosis in certain tumor cells (1). TNF␣ was also shown to play important roles in regulating cell proliferation and differentiation, inflammatory responses, and immune functions (1)(2)(3). The mechanisms by which different signaling transduction pathways activated by TNF␣ interact with each other are only recently being defined. It is well established that binding of TNF␣ to the TNFR1 (TNF␣ receptor 1) on the cell membrane leads to activation of prosurvival pathways followed by proapoptosis pathways. According to recent models (4), the prosurvival pathways are activated by a rapid recruitment of a protein complex, known as complex I, to the cytosolic portion of the activated TNFR1. Formation of complex I, including TNFR1, TRADD, RIP, and TRAF2 proteins, leads to activation of the NFB pathway, as well as mitogen-activated protein kinase pathways such as the ERK, JNK, and p38 pathways (3). The NFB pathway is considered to be the major prosurvival pathway (5). The proapoptosis pathways are activated by a second complex, known as complex II or DISC (death-inducing signaling complex), which includes TRADD, RIP, and FADD proteins (4). The molecular mechanism by which complex I transitions to complex II is still not clear, and it is not certain whether TNFR1 is even included in complex II (4,6). FADD is the essential component of this complex. FADD can recruit and activate the apoptosis initiators caspase 8 and 10, leading to the activation of two different apoptosis pathways (7). The extrinsic mitochondria-independent apoptosis pathway is activated through directly cleavage and activation of executor caspases 3 and 7 by caspases 8 and 10. 
Activated caspases 3 and 7 then regulate the activities of target proteins that play important roles in various aspects of apoptosis (8). The intrinsic mitochondria-dependent apoptosis pathway is mediated by cleavage and activation of the Bcl-2 family protein BID by activated caspase 8 (9). The resulting cleaved BID translocates to mitochondria, where it interacts with other Bcl-2 family members to promote cytochrome c release (10). Released cytochrome c leads to activation of caspase 9, followed by cleavage and activation of caspase 3 and apoptosis (9,11). Upon activation of the TNF␣ receptor, the cell responds either by undergoing apoptosis or by activating survival pathways. For the cell to survive, full activation of the survival pathways triggered by complex I is critical. Activation of the NFB pathway leads to increased expression of several antiapoptotic proteins such as FLIP and c-IAP, which can bind to complex II. If the NFB pathway is fully activated and sufficient amounts of FLIP and C-IAP are presented in complex II, the activation of caspase 8 will be blocked, and the cell will survive (4). Because the NFB-mediated survival depends on the production of new proteins, it is disrupted by drugs such as the translation inhibitor cycloheximide (CHX). The combination of TNF␣ and CHX therefore favors activation of the apoptosis pathway, leading to cell death (4). The p21-activated kinase family of serine/threonine kinases were originally identified as targets for the Rho GTPases Cdc42 and Rac (12). Although originally identified as proteins that regulate cell morphology (13), the six mammalian PAKs have also been shown to play important roles in regulating cell survival and apoptosis. For example, apoptotic stimuli can induce rapid activation of full-length PAK2 to promote cell survival (14). At later time points, PAK2 is also cleaved, probably by caspase 3, and the resulting activated fragment is associated with DNA fragmentation and membrane changes that occur during apoptosis (15). PAK1, PAK4, and PAK5 were also reported to protect cells from apoptosis by directly phosphorylating the proapoptotic protein Bad (16,18,19), which prevents cytochrome c release in the mitochondrial pathway (20). Along with PAK5 and PAK6, PAK4 is a member of the group B family of PAKs. PAK4 has important roles in embryonic development, and deletion of PAK4 in mice leads to embryonic lethality, along with defects in both cardiac and neural development (21). In cultured cells, PAK4 regulates cell adhesion, cytoskeletal organization, and transformation (22,23). Overexpression of wild type or activated PAK4 was shown to protect cells from apoptosis induced by UV irradiation and serum withdrawal. This is associated with direct phosphorylation and inactivation of BAD by PAK4 (19). Overexpressed PAK4 also protects cells from apoptosis induced by a fusion of TNFR1 and the Fas receptor. In this case the protective response appears to be kinase-independent and is associated with an abrogation of the recruitment and activation of caspase 8 to DISC (24). Although PAK4 is expressed at low levels in most normal tissues, PAK4 mRNA levels are greatly up-regulated in many cancer cell lines from various origins (25). This led us to study whether PAK4 is important for oncogenesis and cell survival in cancer cells. 
Whereas our previous work was carried out in cell lines that overexpress exogenous PAK4, here we took advantage of RNAi technology to knock down endogenous PAK4 expression in a cancer cell line that expresses high levels of PAK4. We found that endogenous PAK4 is required for anchorage-independent growth and for the full activation of the survival pathways induced by TNFα in HeLa cells. Specifically, we found that PAK4 is required for optimal binding of TRADD to the activated TNFα receptor. The kinase activity is not absolutely required but can further enhance the interaction between the TNFα receptor and TRADD. Consequently, activation of the NFκB pathway by TNFα was reduced in the absence of PAK4, as well as activation of the ERK and JNK pathways. Interestingly, constitutive activation of the NFκB and ERK pathways could override the loss of PAK4 and restore cell survival, indicating that these pathways operate downstream to PAK4. Thus, PAK4 acts not only as a direct inhibitor of the proapoptotic pathway, but it is also required for the activation of the survival pathways. Because increased cell survival is an important part of oncogenesis, our results help explain why PAK4 is highly transforming and why its expression is associated with oncogenic transformation. EXPERIMENTAL PROCEDURES Cell Culture and Transfections-HeLa cells were cultured in Dulbecco's modified Eagle's medium (Invitrogen) containing 10% fetal bovine serum. All of the media were supplemented with 50 units/ml penicillin, 50 μg/ml streptomycin, and 4 mM glutamine. 1.5 μg/ml puromycin was added for stable cell lines. Transient transfection assays were carried out in HeLa cells using the calcium phosphate precipitation or Lipofectamine method. Construction of PAK4 Knockdown Stable Cell Lines-Two small interfering RNA (siRNA) oligonucleotides were synthesized to target two different regions in the PAK4 cDNA: PAK4-RNAi-1, targeting a linker region between the regulatory domain and the kinase domain (AACTTCATCAAGATTGGCGAG), and PAK4-RNAi-2, targeting a sequence within the kinase domain (AACGAGGTGGTAATCATGAGG). Transient transfection of both siRNAs could disrupt PAK4 expression in HeLa cells, whereas a scrambled double-stranded RNA did not affect PAK4 expression. The pSuper vector was kindly provided by the lab of Dr. Ron Prywes, and 64-mer DNA oligonucleotides targeting the same region as PAK4-RNAi-1 were designed according to the specifications recommended (26) and synthesized by Invitrogen. pSuper-PAK4 was constructed by ligation of the annealed oligonucleotides into the BglII and HindIII sites of the pSuper vector. Stable cell lines were made by co-transfection of pSuper or pSuper-PAK4 together with the pLPC vector (containing a puromycin-resistance gene) into HeLa cells using the calcium phosphate precipitation method. The cells were selected with puromycin (1.5 μg/ml), and the colonies were picked ~2 weeks after selection. 20 clones from both transfections were picked, and expression of PAK4 was determined by Western blot. The expression of PAK4 was successfully knocked down in 4 of the 20 clones from the pSuper-PAK4 transfection. Growth Curve-To estimate the growth rate of the stably transfected cell lines, equal numbers of each stable cell line were seeded in growth medium in six-well plates. Each day after seeding, one set of cells was collected using trypsin and counted. Each point on the curve is the average of three replicates. 
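The two siRNA target sites given above follow the widely cited AA(N19) pattern for siRNA selection. The short Python sketch below is illustrative only and is not the authors' design procedure; it simply checks the two published sequences against commonly used rules of thumb (21-nt length, 5' AA, moderate GC content), with the GC window chosen here as an assumption.

```python
# Minimal sketch (not the authors' procedure): sanity-check candidate siRNA
# target sites of the AA(N19) form against commonly cited selection guidelines.
# The sequences are taken from the text above; the GC window is an assumption.
TARGETS = {
    "PAK4-RNAi-1": "AACTTCATCAAGATTGGCGAG",
    "PAK4-RNAi-2": "AACGAGGTGGTAATCATGAGG",
}

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

for name, seq in TARGETS.items():
    checks = {
        "21 nt long": len(seq) == 21,
        "starts with AA": seq.startswith("AA"),
        "GC within 30-60%": 0.30 <= gc_content(seq) <= 0.60,
    }
    print(f"{name}: GC = {gc_content(seq):.0%}, checks = {checks}")
```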
Flow Cytometry-After UV or TNFα+CHX treatment, both floating and attached cells were collected by low speed centrifugation, washed in PBS, and fixed in ice-cold methanol overnight. The cells were then stained with propidium iodide (50 mg/ml) in the presence of 50 mg/ml RNase A for 30 min at room temperature. The DNA content indicated by propidium iodide staining was analyzed using a FACSCalibur flow cytometer (Becton Dickinson). Apoptosis Assay-For the apoptosis assay, equal numbers of cells were seeded in 6-cm plates. For TNFα+CHX treatment, the cells were washed once with PBS, and then medium containing 10 ng/ml TNFα and 10 μg/ml CHX, either alone or together, was added. For UV irradiation, the cells were washed twice with PBS and then exposed to 50 J/m² UV light in a UV cross-linker (Fisher), followed by the addition of fresh medium. After stimulation the cells were collected at the indicated time points (both attached and floating dead cells unless otherwise indicated). Two different methods were used to quantify the apoptosis level in the treated cells: quantitation of the sub-G1 population using FACS analysis and detection of the 85-kDa proteolytic product of PARP by Western blot. The percentage of PARP cleavage was quantified using NIH Image J. Soft Agar Assay-2 ml of 0.6% Bacto agar in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum, antibiotics, and glutamine was plated into six-well plates. Stable cell lines suspended in 2 ml of 0.3% Bacto agar in the same medium were seeded into these six-well plates at 5,000 or 10,000 cells/well. Each cell line was tested in duplicate. After 2-3 weeks, the colonies were visualized under an inverted light microscope. Digital pictures were also taken for each well, and the number and size of the colonies were measured using NIH Image J. TdT-mediated dUTP Nick End Labeling Assays-Wild type and PAK4 knockout embryos were obtained as described previously (21). TdT-mediated dUTP nick end labeling staining was performed on paraffin-embedded sections using an ApopTag peroxidase in situ apoptosis detection kit (Integrin), according to the manufacturer's manual. Immunoprecipitation-The immunoprecipitation protocol was adapted from those described previously (24,27). In brief, the cells were washed twice with cold PBS before being lysed in lysis buffer (25 mM Tris-HCl, pH 7.6, 150 mM NaCl, 1% Nonidet P-40, 1 mM EDTA) supplemented with proteinase and phosphatase inhibitors (2 mM dithiothreitol, 1 mM phenylmethylsulfonyl fluoride, 10 μg/ml leupeptin, 10 μg/ml aprotinin, 20 mM β-glycerophosphate, 1 mM Na3VO4). The cell lysates were collected, rotated at 4°C for an hour, and cleared by centrifugation to obtain whole cell extracts. For immunoprecipitation, 50 μl of protein G-agarose slurry (Santa Cruz) preloaded with antibodies or goat serum was added to equal amounts of cell extracts and rotated overnight at 4°C. The immune complexes were precipitated by centrifugation, washed twice with lysis buffer, then once with lysis buffer plus 250 mM NaCl, and then twice more with lysis buffer. The precipitated proteins were denatured in SDS loading buffer, separated by SDS-PAGE, transferred to polyvinylidene difluoride membrane, and analyzed by Western blotting. The results of the Western blots were quantified using NIH Image J. For the PAK4 rescue experiments, control and PAK4 knockdown (RNAi) cells were left untreated (0 min) or treated with TNFα+CHX for 5 min. 
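The two apoptosis read-outs described above (sub-G1 fraction from the DNA-content histogram and the percentage of cleaved PARP from band densitometry) reduce to simple ratios. The sketch below is a minimal illustration with made-up numbers, not the authors' analysis scripts; the sub-G1 cutoff at 85% of the G1 peak position is an assumption introduced only for the example.

```python
# Minimal sketch of the two quantifications described in the text, using
# hypothetical values: (i) percentage of sub-G1 (apoptotic) events from a
# propidium iodide DNA-content histogram, and (ii) percentage of PARP cleavage
# from densitometry of the full-length and cleaved (85-kDa) bands.
import numpy as np

def percent_sub_g1(dna_content: np.ndarray, g1_peak: float) -> float:
    """Fraction of events with DNA content below the G1 peak (sub-G1)."""
    threshold = 0.85 * g1_peak  # assumed cutoff just below the G1 peak
    return 100.0 * float(np.mean(dna_content < threshold))

def percent_parp_cleavage(full_length: float, cleaved: float) -> float:
    """Cleaved-PARP band intensity as a percentage of total PARP signal."""
    return 100.0 * cleaved / (full_length + cleaved)

# Hypothetical example data (arbitrary fluorescence units).
rng = np.random.default_rng(0)
events = np.concatenate([
    rng.normal(200, 15, 9000),   # cells in G1/S/G2 around the G1 peak and above
    rng.uniform(20, 150, 1000),  # hypodiploid (sub-G1) debris
])
print(f"sub-G1 fraction: {percent_sub_g1(events, g1_peak=200):.1f}%")
print(f"PARP cleavage:   {percent_parp_cleavage(full_length=1800, cleaved=600):.1f}%")
```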
Equal amounts of cell lysate were combined with an equal amount of lysate from HeLa cells stably overexpressing hemagglutinin-tagged wild type PAK4 (PAK4WT) (19), Myc-tagged kinase-dead PAK4 (PAK4KM) (24), or Myc-tagged constitutively active PAK4 (PAK4NE) (19). Equal amounts of the cell lysates were then incubated with protein G-agarose beads loaded with anti-TNFR1 antibody. TRADD protein bound to the TNFR1 was analyzed by Western blot and was also quantified using NIH Image J. The expression levels of TRADD protein in lysates from the HeLa cells stably overexpressing wild type PAK4 (PAK4WT), kinase-dead PAK4 (PAK4KM), or constitutively active PAK4 (PAK4NE) and from the control and PAK4-RNAi cells were assessed by Western blot. The blots were also probed with PAK4 antibody to show the expression of endogenous PAK4, as well as hemagglutinin-tagged PAK4WT, Myc-tagged PAK4NE, and Myc-tagged PAK4KM. Western Blot Analysis-Western blots were carried out as described in Ref. 28. The results of the Western blots were quantified using NIH Image J. RESULTS PAK4 Is Required for Anchorage-independent Growth in HeLa Cells-Overexpression of activated PAK4 has been shown to induce anchorage-independent growth on soft agar (23,25). To determine whether endogenous PAK4 is also required for anchorage-independent growth in cancer cell lines, we used siRNA to knock down PAK4 expression in HeLa cells. HeLa cells were chosen because they are an example of a cancer cell line (derived from a cervical carcinoma) expressing high levels of PAK4 (25). Two siRNAs were synthesized to target two different regions in the PAK4 cDNA: PAK4-RNAi-1 (AACTTCATCAAGATTGGCGAG), targeting a linker region between the regulatory domain and kinase domain, and PAK4-RNAi-2 (AACGAGGTGGTAATCATGAGG), targeting a sequence within the kinase domain. Transient transfection of both siRNAs could disrupt PAK4 protein expression in HeLa cells, whereas a scrambled double-stranded RNA did not affect PAK4 expression, as shown by Western blot in Fig. 1A. An expression vector (pSuper-PAK4) targeting the same region as PAK4-RNAi-1 was constructed and used to generate stable cell lines. Expression of PAK4 in control (empty vector alone) or PAK4 knockdown (pSuper-PAK4) HeLa cell lines was assessed by Western blot. As shown in Fig. 1B, the expression of PAK4 was successfully knocked down in cells containing pSuper-PAK4 (subsequently referred to as PAK4 knockdown cells), whereas the control pSuper vector alone did not affect PAK4 expression. PAK1 and PAK2 expression levels were not affected by the PAK4 RNAi (Fig. 1B). Wild type HeLa cells, control stable cell lines, and PAK4 knockdown cells were plated on soft agar. After 2 weeks, PAK4 knockdown stable cells formed significantly fewer and smaller clones on soft agar compared with control cells, as shown in Fig. 1C and as quantified in TABLE ONE. These data indicate that PAK4 plays an essential role in the anchorage-independent growth of HeLa cells. To determine whether the decreased growth in soft agar was due to an overall decrease in growth rate, we analyzed the growth rate of PAK4 knockdown versus wild type cells under normal growth conditions. As shown in Fig. 1D, PAK4 knockdown cells grew at the same rate as the control cells. Likewise, the PAK4 knockdown cells had the same profile of G1, G2, and S phases, as indicated by FACS data (Fig. 1E), which indicates that the PAK4 knockdown cells had no defects in cell growth or progression through the cell cycle under normal conditions. 
PAK4 Knockdown HeLa Cells Are More Sensitive to Both UV- and TNFα-induced Apoptosis-PAK4 knockdown cell lines formed fewer and smaller colonies in soft agar, whereas their growth rates were normal; one possible explanation for this is that PAK4 knockdown cells are more sensitive to apoptosis. Overexpression of PAK4 was previously shown to protect cells from apoptosis (24). Furthermore, deletion of PAK4 in mice leads to a pronounced increase in apoptosis in certain parts of the PAK4 knockout embryos (21). This suggests that PAK4 is required for protecting cells from apoptosis in some tissues during development. To determine whether PAK4 is also required for protecting cancer cells from apoptosis, knockdown and control cells were treated with either TNFα+CHX or UV irradiation. Results from FACS analysis indicate that there were more sub-G1 (apoptotic) cells in the PAK4 knockdown cells compared with the control cells after 4 h of TNFα+CHX treatment (Fig. 2A). Likewise, PAK4 knockdown cells were also more sensitive to UV-induced apoptosis (Fig. 2A). We also analyzed another apoptosis indicator, the cleavage of PARP protein by the executor caspase 3. Western blot analysis indicates that PARP cleavage occurred earlier in the PAK4 knockdown cells, as indicated in Fig. 2B. The cleavage states of the apoptosis initiator caspase 8 and its downstream effector BID were also analyzed by Western blot. As shown in Fig. 2C, both caspase 8 and BID were cleaved and activated earlier in PAK4 knockdown cells compared with the control cells. Our results indicate that the absence of PAK4 leads to an earlier onset of apoptosis, indicating that PAK4 plays an essential and early role in protecting cells from cell death. Activation of the NFκB, ERK, and JNK Pathways by TNFα Is Reduced by the Absence of PAK4-In HeLa cells, TNFα treatment can activate survival pathways so that apoptosis is blocked. Activation of survival signals depends on the transcription and translation of new proteins. Therefore, if the survival pathways are disrupted by the translation inhibitor CHX, the apoptosis pathway predominates, and the cell dies (4). Interestingly, we found that when treated with TNFα alone for 24 h, the PAK4 knockdown cells underwent apoptosis even in the absence of CHX, whereas control cells, as expected, did not undergo apoptosis in the absence of CHX (Fig. 3A). These results suggest that one way PAK4 may protect cells from apoptosis is by activating the survival pathway rather than merely blocking the proapoptotic pathway. Activation of the survival pathway is triggered by recruitment of complex I to the TNFR1 (4). In response to formation of this complex, the NFκB pathway is activated, along with other pathways, including the ERK, JNK, and p38 signaling pathways (3). Interestingly, we found that activation of three of these pathways, ERK, JNK, and NFκB, was reduced in PAK4 knockdown cells (Fig. 3B-D), whereas the activation of the p38 pathway was not significantly affected by the absence of PAK4 (data not shown). Constitutive Activation of the NFκB and ERK Pathways Can Rescue the Sensitivity to Apoptosis That Is Induced by the Absence of PAK4-Among the three pathways affected by the absence of PAK4, the NFκB pathway is the major survival pathway to antagonize apoptosis in cells treated with TNFα (5). The ERK pathway, however, also has a well established role in cell proliferation and survival (29), and although JNK has different roles depending on the cell type, it was shown to be antiapoptotic in tumor cells (30,31). 
To determine whether any of these pathways have an essential role in PAK4-mediated survival, they were restored in the knockdown cells by expression of constitutively active mutants. A constitutively activated IKKα (EM) mutant (32) and a constitutively active Raf (Raf CAAX) mutant (33) were introduced into the PAK4 knockdown cells and control cells to activate the NFκB or ERK signaling pathways, respectively, followed by treatment with TNFα+CHX (IKKα (EM) carries both the EE mutations (S177E, S181E) and the M10 mutations, which are 10 alanine mutations in the C-terminal serine cluster of the protein). As shown in Fig. 4, constitutive activation of either NFκB or ERK signaling rescued the sensitivity to TNFα-induced apoptosis in the PAK4 knockdown cells. Activation of the JNK pathway, however, via expression of activated JNKK-JNK, did not rescue the apoptosis-sensitive phenotype (data not shown). These results suggest that the ERK and NFκB pathways function downstream to PAK4 in this survival response. The Binding of TRADD to the TNFR1 Was Attenuated in PAK4 Knockdown Cells-Our results indicate that the absence of PAK4 affects the activation of several distinct signaling pathways, including the NFκB, ERK, and JNK pathways. This suggests that PAK4 functions upstream from all of these pathways. The pathways indicated above are all activated rapidly in response to recruitment of the plasma membrane-bound complex I (which includes TRADD, RIP, and TRAF2) to the activated TNF receptor. Our results suggest that PAK4 may function early in the TNFα pathway, possibly by affecting the recruitment of components of complex I to the TNFα receptor. To test this, the TNFα receptor was immunoprecipitated from both PAK4 knockdown stable cell lines and control cell lines. TRADD binding to the activated TNFα receptor was then analyzed by Western blot. As shown in Fig. 5, TRADD binding to the TNFR was reduced in the PAK4 knockdown cells. These results indicate that PAK4 functions in TNFα-induced survival pathways by facilitating the binding of TRADD to the activated TNFR1. To determine whether the PAK4 kinase activity is required for this function, lysates from HeLa cells in which either wild type PAK4 (PAK4WT), kinase-dead PAK4 (PAK4KM), or constitutively activated PAK4 (PAK4NE) was overexpressed by stable transfection were mixed with cell lysate from either control or PAK4-RNAi cells before pulling down the TNFR. As shown in Fig. 6, PAK4WT and PAK4NE both increased the binding of TRADD to the TNFR in PAK4-RNAi cells. PAK4NE increased binding even more than wild type PAK4, even though it was expressed at a much lower level. Interestingly, however, even PAK4KM led to a slight increase in TRADD binding. Our results suggest that PAK4 can facilitate TRADD binding to TNFR1 via both kinase-independent and kinase-dependent mechanisms, although binding is enhanced when a kinase-active PAK4 is used. DISCUSSION PAK4 was first identified for its role in regulating the organization of the actin cytoskeleton (22). Recent work indicates that it also has important roles in oncogenesis. PAK4 is overexpressed in cancer cells (25), and we previously showed that its overexpression promotes cell survival (19) as well as anchorage-independent growth (23), important hallmarks of oncogenic transformation. Although previous work relied largely on overexpression studies, here we used siRNA technology to block endogenous PAK4. We found that PAK4 is actually required for anchorage-independent growth in a tumor cell line. Furthermore, we found that blocking PAK4 made the cells hypersensitive to apoptosis. 
However, rather than affecting only proapoptotic pathways, as suggested by overexpression studies (19), our results indicate that endogenous PAK4 is required for activation of TNFα-induced prosurvival pathways. Specifically, PAK4 is required for normal binding of TRADD to the TNFR1. TRADD is a key component of complex I, which leads to activation of survival pathways including the NFκB pathway. Activation of the ERK and NFκB pathways is also reduced in PAK4 knockdown cells. It is interesting that although TRADD is required for both the apoptosis pathway and the survival pathway, only the survival pathway is abrogated in the PAK4 knockdown cells. It should be noted that although TRADD binding to the TNFR1 was greatly reduced in the PAK4 knockdown cells, a small amount of residual TRADD binding could be seen. Our results suggest that this small amount of TRADD binding is sufficient to promote apoptosis but not sufficient to lead to full activation of the survival pathways. This makes sense in light of findings that the apoptosis pathway can be regulated by a positive feedback mechanism in which caspase 8 leads to cleavage of RIP, which in turn promotes the proapoptosis pathway and inhibits the survival pathway (34). Because of this positive feedback mechanism, a small amount of TRADD binding to the TNFR1 may be sufficient for activation of apoptosis yet not sufficient to fully activate the survival pathways. It is still not clear how PAK4 functions in the formation of complex I. One possibility is that PAK4 itself is a part of the complex. Although PAK4 did not co-immunoprecipitate with either TNFR or TRADD antibody (data not shown), it remains possible that PAK4 can bind to other unknown proteins in the complex to facilitate TRADD binding. FIGURE 3. A, control and PAK4 knockdown (RNAi) cells were treated with TNFα in the absence of CHX for 24 h. PARP cleavage was then analyzed by Western blot as an indicator of apoptosis. In the absence of CHX, no apoptosis was seen in the control cells, as expected. In contrast, TNFα induced apoptosis in PAK4 knockdown cells, even in the absence of CHX. B, activation of the ERK pathway is reduced in PAK4 knockdown cells: Control and PAK4 knockdown (RNAi) cells were left untreated (0 h) or treated with TNFα+CHX for 10 and 40 min and 2, 3, 4, and 6 h. Western blots of whole cell lysates were then probed with anti-phospho-ERK (Thr202/Tyr204) antibody to assess ERK activation. The level of phosphorylated ERK was quantified using NIH Image J, and the results of three experiments were averaged and plotted. ERK activation was reduced and more transient in the PAK4 knockdown cells. Total ERK levels, however, were not affected. C, activation of the NFκB pathway is reduced in PAK4 knockdown cells: Control and PAK4 knockdown (RNAi) cells were left untreated (0 h) or treated with TNFα+CHX for 2, 5, and 10 min. Western blots of whole cell lysates were probed with anti-phospho-IκB (pS32/pS36) antibody as an indicator of NFκB pathway activation. The level of phosphorylated IκB was quantified using NIH Image J, and the results of three experiments were averaged and plotted. IκB phosphorylation was reduced in the knockdown cells, although total IκB levels were not affected. The same blot was also probed with anti-actin antibody as a loading control. D, activation of the JNK pathway is reduced in PAK4 knockdown cells: Control and PAK4 RNAi stable cells were left untreated (0 h) or treated with TNFα+CHX for 10, 20, and 35 min. Western blots of whole cell lysates were then probed with anti-phospho-JNK (Thr183/Tyr185) antibody as an indicator of JNK activation. The level of phosphorylated JNK was quantified using NIH Image J, and the results of three experiments were averaged and plotted. JNK phosphorylation was reduced in PAK4 knockdown cells, although total JNK levels were not changed. The same blot was also probed with anti-actin antibody as a loading control. 
Another possibility is that one or more of the proteins in the complex, or proteins that may facilitate formation of the complex, need to be phosphorylated by PAK4. Our results suggest that PAK4 can operate by both mechanisms. The fact that kinase-dead PAK4 can promote a slight increase in the binding of TRADD to TNFR indicates that this process is not completely dependent on the kinase activity of PAK4. However, the fact that wild type and constitutively activated PAK4 further increased TRADD binding to TNFR implies that an additional mechanism requiring PAK4 kinase activity can also further strengthen TRADD binding to TNFR. These results are consistent with previous results in which we have found that PAK4 can have both kinase-dependent and kinase-independent functions in cell survival (19,24). Furthermore, members of the group A PAKs have also been shown to have both kinase-dependent and kinase-independent functions (35). Following complex I formation at the TNFR1, the NFκB pathway, which is the major survival pathway, is activated. NFκB promotes survival by promoting the expression of genes encoding many anti-apoptotic proteins such as FLIP and IAP to protect cells from apoptosis (5). In addition to the NFκB pathway, other pathways, including the ERK, JNK, and p38 pathways, are also activated downstream to complex I (3). Interestingly, the NFκB, ERK, and JNK pathways were all abrogated in TNFα-treated PAK4 null cells. This suggests that PAK4 functions at a level upstream to all of these pathways during cell survival. This is consistent with our finding that PAK4 is required for binding of TRADD to the TNFR1 and hence the formation of complex I, which lies upstream to all three pathways. Both the NFκB and the ERK pathways play important roles in cell survival and growth (5,29,36), and it is interesting that adding back activators of these pathways rescued the apoptosis-sensitive phenotype in PAK4 knockdown cells. These results suggest that the ERK and NFκB pathways lie downstream to PAK4 in TNFα signaling. It is interesting that although both pathways were abrogated in the PAK4 knockdown cells, activation of either pathway on its own was sufficient to rescue the apoptosis-sensitive phenotype. Because these studies were carried out by overexpression of constitutively active mutants, one possible explanation is that high levels of activation of one pathway may compensate for the loss of the other pathway. More work will be needed to determine whether only one or both of these pathways are actually required for PAK4-mediated survival. Like the ERK and NFκB pathways, the JNK pathway was also abrogated in TNFα-stimulated PAK4 knockdown cells. Interestingly, however, activation of the JNK pathway, via expression of activated JNKK-JNK, not only did not rescue the apoptosis-sensitive phenotype, it actually made the cells more sensitive to TNFα-induced apoptosis (data not shown). 
The JNK pathway has been shown to play different roles in the apoptosis pathway depending on the cell type, the stimulus, and the duration of its activation (37)(38)(39)(40). Although transient activation of JNK suppresses apoptosis via phosphorylation of the proapoptotic Bcl-2 family protein BAD (41), prolonged JNK activation has been shown to promote apoptosis via a caspase 8-independent cleavage of Bid (42). Activation of the NFκB pathway has been shown to prevent the prolonged activation of the JNK pathway (43), and thus activation of the JNK pathway by TNFα treatment favors cell survival. In tumor cells, where there may be abnormal activation of the NFκB pathway, JNK has been shown to be anti-apoptotic (30,31), whereas in primary mouse embryonic fibroblast cells, JNK has been shown to be proapoptotic (44). The different effects of prolonged versus transient activation of JNK may explain why prolonged JNK activation induced by expressing constitutively active JNKK-JNK promotes apoptosis rather than cell survival in our system. Unlike the ERK, JNK, and NFκB pathways, activation of the p38 pathway by TNFα was not affected by the knockdown of PAK4. This suggests that other pathways in addition to, or instead of, the complex I-mediated pathway can lead to p38 activation in response to TNFα. The role for p38 in the TNFα pathway is still not clearly defined (45,46), and it has been shown to be antiapoptotic or proapoptotic depending on the cell type or the stimulus (45,47). Our results suggest that p38 does not play an essential role in TNFα-induced cell survival pathways in HeLa cells. FIGURE 4. Constitutive activation of the ERK or NFκB pathways can rescue the sensitivity to apoptosis induced by the absence of PAK4. A, constitutive activation of the ERK pathway rescues the increased sensitivity to apoptosis seen in the absence of PAK4. Control and PAK4 knockdown (RNAi) cells were transfected with a constitutively active Raf (Raf CAAX) mutant to activate the ERK pathway. 48 h after transfection, the cells were treated with TNFα+CHX to induce apoptosis. The progress of apoptosis is indicated by Western blot analysis of PARP cleavage in whole cell lysates. The cleavage of PARP was also quantified using NIH Image J, and the percentages of cleaved PARP in three experiments were averaged and plotted. After TNFα+CHX treatment, the PAK4 knockdown cells typically show an increased level of apoptosis compared with control cells. However, when Raf CAAX was added to the cells, the level of apoptosis in PAK4 knockdown cells went back down. The same blot was also probed with phospho-ERK antibody to assess ERK activation by Raf and with actin as a loading control. B, constitutive activation of the NFκB pathway rescues the sensitivity to apoptosis induced by the absence of PAK4. Control and PAK4 knockdown (RNAi) stable cell lines were transfected with a constitutively active IKKα (EM) mutant to activate the NFκB pathway. 48 h after transfection, the cells were treated with TNFα+CHX to induce apoptosis. The progress of apoptosis is indicated by PARP cleavage. The cleavage of PARP was also quantified using NIH Image J, and the percentages of cleaved PARP in three experiments were averaged and plotted. The increased apoptosis level seen in the PAK4 knockdown cells was rescued in cells transfected with activated IKKα. The same blot was also probed with anti-IKKα antibody to visualize the overexpressed IKKα and with actin as a loading control. 
The fact that PAK4 is required for the full activation of survival pathways and for anchorage-independent growth sheds new light on the potential role for this protein in oncogenesis. Anchorage-independent growth requires the ability of cells to survive under conditions where they would normally stop growing or undergo apoptosis. We found that PAK4 is required for full activation of survival pathways in response to TNFα. If PAK4 also promotes survival in response to other stimuli, this could help explain why it is required for anchorage-independent growth and why its overproduction is associated with transformation. In addition to its potential role in the oncogenic process, PAK4 was also shown to be required for normal embryonic development (21). The present study gives us some insight into the role of PAK4 during normal development. Deletion of PAK4 in mice results in embryonic lethality, and we have found that there are regional increases in apoptosis in certain parts of the PAK4 null embryos. The fact that the increase in apoptosis is regional is very interesting and at first glance difficult to explain, because PAK4 expression is ubiquitous. Our new data provide an intriguing possible explanation. We found that PAK4 is not required for normal cell growth but is required for the full activation of the survival pathways under stressful conditions such as exposure to TNFα. Our results raise the possibility that in PAK4 null embryos, apoptosis is increased only in regions that are being exposed to specific cellular stresses that require activation of PAK4-mediated survival pathways to develop normally. FIGURE 5. The same amount of cell lysate was incubated with protein G-agarose beads loaded with anti-TNFR1 antibody or control (goat serum). Proteins that bound to the TNFR1 or control immunoprecipitates were analyzed by Western blot. TRADD binding to TNFR was also quantified using NIH Image J, and the results of three experiments were averaged and plotted. TRADD binding to the TNFR1 was greatly reduced (but not completely abolished) in the PAK4 knockdown cells. No PAK4 was seen in the complex. The total amount of TNFR1 and PAK4 in the lysates is shown in the bottom panel. IP, Western blot of immunoprecipitates; WB, direct Western blot of whole cell lysates; WCL, whole cell lysates. FIGURE 6. Restoring PAK4 increased TRADD binding to TNFR. A, control and PAK4 knockdown (RNAi) cells were left untreated (0 min) or treated with TNFα+CHX for 5 min. Equal amounts of cell lysate were combined with an equal amount of lysate from HeLa cells stably overexpressing hemagglutinin-tagged wild type PAK4 (PAK4WT), Myc-tagged kinase-dead PAK4 (PAK4KM), or Myc-tagged constitutively active PAK4 (PAK4NE). The mixture of cell lysates was incubated with protein G-agarose beads loaded with anti-TNFR1 antibody. TRADD protein bound to the TNFR1 was analyzed by Western blot and was also quantified using NIH Image J. All three PAK4 proteins increased TRADD binding to TNFR in PAK4 knockdown cells, although PAK4KM showed the least stimulation and the constitutively active PAK4 mutant increased the binding the most, even though it was expressed at low levels. Both kinase-dead and constitutively activated PAK4 also led to a slight increase in TRADD binding to TNFR in control cells. 
B, Western blot analysis showing the expression of TRADD protein and PAK4 in whole cell lysate from the HeLa cells stably overexpressing wild type PAK4 (PAK4WT), kinase-dead PAK4 (PAK4KM), or constitutively active PAK4 (PAK4NE) and from the control and PAK4-RNAi cells. IP, immunoprecipitation. In summary, in this paper we propose a completely new role for PAK4 in the promotion of survival in response to cell stress. Using HeLa cells as a model system, we found that PAK4 is required for anchorage-independent growth in cancer cells. We also found that PAK4 is required for the full activation of survival pathways in response to TNFα. PAK4 functions early in the TNFα signaling pathways by facilitating the binding of TRADD to the TNFR1, which is necessary for the rapid formation of complex I followed by activation of survival pathways, especially the NFκB pathway. Our data give insight into the role of PAK4 in tumorigenesis as well as mouse embryonic development.
8,600.2
2005-10-14T00:00:00.000
[ "Biology", "Medicine" ]
The Status and Future of Consciousness Research As papers about consciousness are so often introduced, consciousness was until a few decades ago considered a philosophical problem only, and the current interest in empirical consciousness research was unforeseen. This development was of course influenced by the technological advancements in neuroscience during those decades, but more important and fundamental was a new openness to interdisciplinary integration of research questions, methods and arguments. Cognitive scientists and neuroscientists agreed that the philosophical problems of why and how there is consciousness are also their problems. Philosophers agreed that empirical evidence may resolve or at least influence this debate. Scientists across disciplines generally agree that consciousness is subjective, characterized by a kind of privileged first-person access. Consciousness research has proven to be an actual and functioning discipline able to provide meaningful and reproducible results. Nevertheless, it has yet only scratched the surface in the attempt to solve some of its bigger challenges, e.g., its many underlying questions of metaphysics (i.e., why does consciousness exist?) and questions of mechanisms (how does consciousness exist?). One major obstacle for consciousness research is the lack of consensus on how to optimally measure consciousness empirically. Another major challenge is how to identify neural correlates of consciousness. This challenge clearly relates to the first, as one needs to apply a measure of consciousness in order to identify its correlates. Current consciousness research is already occupied with these questions, which may even be said to dominate the scientific debate. As it will be argued below, consciousness research may face problems in the future that are currently less debated but which are logical extensions of the challenges above. It is a natural ambition when developing a measure of consciousness to be able to determine whether nonreporting subjects or even machines are conscious and of what. And it is a natural ambition when finding neural correlates of consciousness to understand how these correlates relate to a deeper metaphysical understanding of the relation between subjective experience and the physical substrate of the brain. 
HOW DO WE MEASURE CONSCIOUSNESS? Historically, the attempt to "measure" consciousness has unfolded as a debate between direct and indirect approaches. Direct approaches, at least intuitively, are the most informative, as participating experimental subjects here simply report about their own experiences. As subjective reports, however, have demonstrable limits (e.g., lack of insight into personal bias, memory problems, etc.), many scientists have refrained from their use and insisted on the use of objective measures only (e.g., Nisbett and Wilson, 1977; Johansson et al., 2006). Experiments on consciousness that are based on objective measures, the "indirect" approach, typically involve asking subjects to choose between alternatives, e.g., in forced-choice tasks. Although such methods may steer clear of classical limitations of subjective methods, they are confronted with other problems, which, according to some scientists, are greater. For one thing, objective measures must assume that the "threshold" of giving a correct response is the same as the "threshold" of having a subjective experience of the same content (Fu et al., 2008; Timmermans and Cleeremans, 2015). Furthermore, in order to arrive at any one particular objective method, one must have "calibrated" it to something else in order to know that this particular behavior can be considered a measure of consciousness, and not something else. This would typically involve associating a subjective report with a particular behavior, a process by which one would "import" all the weaknesses related to subjective reports that one tried to avoid in the first place (Overgaard, 2010). Proponents of the "direct" approach have attempted to develop precise and sensitive scales to capture minor variations in subjective experience, e.g., the Perceptual Awareness Scale and gradual confidence ratings (Ramsøy and Overgaard, 2004; Sandberg and Overgaard, 2015). Although different approaches to this idea disagree about what constitutes the optimal measure (Dienes and Seth, 2010; Timmermans et al., 2010; Szczepanowski et al., 2013), they share the view that a detailed subjective report may be imprecise yet better than an indirect measure. In recent years, the arsenal of indirect measures has been supplemented with what are named "no-report paradigms." Essentially, all paradigms using objective measures only are without report, so in a certain sense, paradigms labeled "no-report paradigms" have not introduced anything new. 
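The assumption criticized above, that the threshold for a correct response coincides with the threshold for a subjective experience, can be made concrete with a small numerical illustration. The sketch below is not from the paper; the trial counts and report rates are hypothetical, and the signal-detection measure (type-1 d') is used only as one common example of an "objective" index.

```python
# Illustrative sketch only: hypothetical data in which objective sensitivity
# in a detection task is clearly above chance while most of the same trials
# are rated as "no experience" on a subjective scale, so the "objective"
# threshold and the threshold for subjective experience need not coincide.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Type-1 sensitivity from a yes/no detection table, with a small
    correction to avoid infinite z-scores at rates of 0 or 1."""
    def rate(k, n):
        return min(max(k / n, 0.5 / n), 1 - 0.5 / n)
    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: performance is above chance...
print("d' =", round(d_prime(hits=70, misses=30, false_alarms=40, correct_rejections=60), 2))
# ...yet the observer might still give the lowest awareness rating on most
# stimulus-present trials (a made-up proportion for illustration).
print("proportion of stimulus trials rated 'no experience':", 0.80)
```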
Nevertheless, experiments of this kind attempt first to associate a particular objective measure (e.g., a behavior or a brain activation) with conscious experience, and then to apply this measure as a measure of consciousness so that no direct report is needed (e.g., Frässle et al., 2014; Pitts et al., 2014). Such methods intuitively seem to circumvent some of the criticism mentioned above. However, and as mentioned above, the only way one may associate a phenomenon such as nystagmus with conscious experience is by the direct use of introspection (to establish the "correlation") (Overgaard and Fazekas, 2016). It has been proposed that the best and most practical way forward is to combine methods and learn what we can from the results we get (Tsuchiya et al., 2016). Whereas this is most likely what is necessary, it is important to notice that different methods seem to generate different results, so that some methods are associated with the finding that the neural correlates of visual consciousness involve prefrontal activity, whereas other methods are associated with the finding that visual consciousness mainly involves occipital/parietal activity but not prefrontal activity. NEURAL CORRELATES OF CONSCIOUSNESS Most neuroscientific research on consciousness has had the explicit aim to identify the neural correlates of consciousness. Although it is rarely debated what we mean by a "neural correlate of consciousness," most experiments aim to identify the minimal neural activations that are sufficient for a specific content of consciousness (Chalmers, 2000). Contrary to this, other scientists are preoccupied with finding neural correlates of consciousness "as such", i.e., neural correlates that mark the difference between being dead, asleep, awake, etc., and which are not content-specific in the sense above. With regard to the attempt to isolate neural correlates of conscious content, one central debate in recent years has been whether neural correlates of consciousness should primarily be associated with prefrontal cortex ("late" activations) or whether (visual) consciousness should be associated with occipital/parietal activations ("early" activations). According to most recent reviews and articles, evidence is lending support toward the latter view (Andersen et al., 2016; Koch et al., 2016; Hurme et al., 2017). According to this view, "late" activations are not actual correlates of consciousness, but are confounds associated with metacognition and report (e.g., Aru et al., 2012). Nevertheless, proponents of the opposite view, that consciousness is associated with prefrontal cortex activity, argue that "early" activations in fact represent preconscious states, i.e., information that is not yet conscious (e.g., Lau and Rosenthal, 2011). According to other perspectives, this debate is partially misunderstood. Block (2005) argues that there may be two neural correlates of consciousness: one relating to phenomenal consciousness (the "early" activations), and one relating to access consciousness (the "late" activations). Others suggest that there is an identity between subjective experience and certain causal properties of physical systems rather than an identity between experience and particular brain parts. According to the REF-CON model of consciousness, subjective experience is intrinsically related to a particular kind of "strategy" that makes information available for action (Overgaard and Mogensen, 2014; Mogensen and Overgaard, 2017). 
From this perspective, there need not be any "universal" correlate of consciousness at all. But even in theoretical models according to which finding neural correlates of consciousness is very different from explaining consciousness, neural correlates of consciousness are essential as evidence to show how and whether such models work in practice. There has been relatively more research into the neural correlates of the contents of consciousness than into "consciousness as such." Research attempting to identify particular "levels" of consciousness obviously also faces many methodological challenges, not least relating to contrastive analysis. Some studies have attempted to contrast healthy subjects with patients in vegetative state or minimally conscious state (Boly et al., 2013), although there are a number of problems: some experiments indicate that not all such patients are unconscious (Owen et al., 2006), and, at the same time, most brain-injured patients have many different lesions and, consequently, massive reorganization, which makes comparisons very difficult. THE FUTURE CHALLENGES The "upsurge" of interest in a science of consciousness did not begin but certainly took off with the publications of Chalmers (1995, 1996) and the Tucson-based conference series "Toward a Science of Consciousness", soon to be further strengthened by the annual conferences organized by the ASSC (Association for the Scientific Study of Consciousness). Since then, much has happened in the attempt to discover neural and cognitive correlates of consciousness. It is however as uncertain today as it was then how exactly to apply these findings. In principle, there are many potential applications of consciousness research, but whereas some are extensions of the more fundamental questions (e.g., ethics and law), others are close to the heart of what consciousness research is (arguably) about, i.e., the mind-brain or mind-body problem. One such fundamental problem relates to the fact that consciousness is subjective and in this way accessible from the first person only. Whereas we still have no universally accepted measures of consciousness, much progress has been made with regard to how one may grasp the content of an experience in the context of an experiment. One major future challenge will be how to measure consciousness "from the outside." This problem is currently being faced in coma and vegetative state patients who either do not respond or respond in a minimal or strange fashion. It will very likely be an even greater challenge for a future science of consciousness to consider how to evaluate whether artificial systems (e.g., computers or robots) can be conscious or whether experience is a privilege for biological creatures. Essentially, these questions force us to try to make scientifically based decisions about how to measure conscious experience in highly different situations: in coma/vegetative state patients, there are few or no responses, yet a neural (however altered) system; in artificial systems, there may be high responsiveness (even, in principle, explicit expressions of being conscious) but no neural (biological) system. One possibly even greater challenge will be to reintegrate the philosophical metaphysical debate into the scientific work. It will be a challenge to the future science of consciousness to demonstrate that empirical work on consciousness directly aids an understanding of the fundamental questions about consciousness. 
This challenge may seem unavoidable, as the current preoccupation with cognitive functions and neural activations associated with subjective experience in most cases seems so directly linked to and motivated by the mind-brain problem. Existing data, however, seem to fit easily into every theoretical understanding of this problem. In and of itself, it seems not to be the case that evidence that perceptual experience is associated with, say, activity in primary visual cortex also provides evidence to determine whether consciousness should be seen as, say, identical to or metaphysically different from brain activity. Accordingly, it will require something "extra" to answer this challenge. Either, if possible, experimental investigations must be designed in order to "test" theoretical positions that currently are stated within the framework of philosophy of mind. Alternatively, experimental consciousness research must work even more closely with theoretical consciousness research in order to make empirical data available as arguments. FUTURE DIRECTIONS The challenges highlighted above obviously represent only a few of the many scientific and theoretical issues that scientists in this area face. Consciousness remains one of the biggest scientific challenges among all disciplines, as the most fundamental questions are not simply unanswered; it is still highly unclear how one should even begin to answer them. Currently, consciousness research is often considered a "topic", or even a "niche", under the umbrella of cognitive neuroscience. Nevertheless, consciousness researchers often point out that subjective experience is the underlying and fundamental reason for many questions in neuroscience. Scientists interested in the brain are often seeking answers to questions such as why we become addicted, how we remember, perceive, or solve problems. Such questions arguably presume conscious experience and make little sense without it. Terms such as "memory" or "perception" do not solely refer to behavior, but also to particular kinds of conscious content which we know about from introspection. For this reason, one future ambition for consciousness research could be to become a more integral part of the overall ambition to understand the brain, and as such become part of the basic curriculum for any neuroscientist. AUTHOR CONTRIBUTIONS The author confirms being the sole contributor of this work and approved it for publication.
3,343.2
2017-10-10T00:00:00.000
[ "Philosophy" ]
Dual approaches for defects condensation We review two methods used to approach the condensation of defects phenomenon. Analyzing in detail their structure, we show that in the limit where the defects proliferate until they occupy the whole space these two methods are dual equivalent prescriptions to obtain an effective theory for the phase where the defects (like monopoles or vortices) are completely condensed, starting from the fundamental theory defined in the normal phase where the defects are diluted. Introduction The quantum field theory description of a physical system relies on a proper identification of its degrees of freedom, which are then interpreted as excited states of the fields defining the theory. However, it is sometimes the case that the theory may contain important structures which are not described in this way and cannot be expressed in a simple manner in terms of the fields appearing in the Lagrangian, having a non-local expression in terms of them. These structures appear under certain conditions as defects: prescribed singularities of the fields defining the theory. A general conjecture [2] claims that defects are described by a dual formulation in which they appear as excitations of the dual field, but this can be proved only in some particular instances. Nevertheless, much can be gained just with the information that these structures appear as singularities of the fundamental fields, even without knowing their precise dynamics. A pressing question is whether it is possible to address, with this limited information, the situation in which the collective behavior of defects becomes the dominant feature of a theory. It is one of the purposes of this work to discuss an extreme case of sorts. We want to present a general proposal of how to describe a situation in which the singularities of the fields proliferate, defining a new vacuum for the system. In this picture, the condensate of defects itself defines the new vacuum of the theory. This view is supported by the fact that if we are interested only in the low lying excitations it is perfectly reasonable to take the condensate as given, not worrying about how it was set up, and construct an effective field theory describing the excitations. It is well known for instance that the pions, which can be recognized as excitations of the chiral symmetry breaking condensate composed of quark-antiquark pairs, can be described by an effective field theory without knowing about QCD. Even though we need not know the details of how the condensate is formed, it is important to stress that the condensate defines the vacuum and carries vital information about the symmetry content used in the construction of the effective theory. It is in this way also bound to have an effect on all the other fields comprising the system. The example of a superconducting medium also comes to mind, where the condensate vacuum endows the electromagnetic excitations with a mass. This same idea is employed in the electroweak theory, where a condensate is the only consistent way to give mass to the force carriers, the W and Z, and in fact to account for all the masses of the standard model. This is an example where the properties of the condensate itself are not completely established and still a matter of debate. The currently accepted view is that its low lying excitations are the Higgs particles, still to be detected, described by a scalar field. 
More akin to our take on the condensate concept, as a collective behavior of defects, is the dual superconductor model of confinement, which is based on the superconductor phenomenology [1]. It is expected that the QCD vacuum at low energies is a chromomagnetic condensate leading to the confinement of color charges immersed in this medium. In dual superconductor models of color confinement, magnetic monopoles appear as topological defects at points of space where the abelian projection becomes singular [9]. There are in fact many other examples in which the condensation of defects is responsible for drastic changes in the system by defining the new vacuum of the theory. We may mention vortices in superfluids and line-like defects in solids, which are responsible for a great variety of phase transitions [6]. All these instances point to the importance of getting a better understanding of the condensation phenomenon. In all these examples there are some general features of the condensates which can tell a lot about what to expect of the system when condensation sets in, without the precise knowledge of how this happened. These general features are what we intend to explore in this Letter. The main inspiration for this work comes from the study of two particular approaches to this problem: one is the Abelian Lattice Based Approach (ALBA) discussed by Banks, Myerson and Kogut in [3] within the context of relativistic lattice field theories and later also by Kleinert in [5] in the condensed matter context. The other one is the Julia-Toulouse Approach (JTA) introduced by Julia and Toulouse in [4] within the context of ordered solid-state media and later reformulated by Quevedo and Trugenberger in the relativistic field theory context [7]. The ALBA was used, for example, by Banks, Myerson and Kogut to study phase transitions in abelian lattice gauge theories [3]. A few years later Kleinert obtained a disorder field theory for the superconductor from which he established the existence of a tricritical point separating the first-order from the second-order superconducting phase transitions [5]. In this Letter we shall be using the notations in the recent book by Kleinert [6]. Building on the work of Julia and Toulouse, Quevedo and Trugenberger studied the different phases of field theories of compact antisymmetric tensors of rank h − 1 in arbitrary space-time dimensions D = d + 1. Starting in a coulombic phase, topological defects of dimension d − h − 1 ((d − h − 1)-branes) may condense, leading to a confining phase. In that work one of the applications of the JTA was the explanation of the axion mass. It was known that the QCD instantons generate a potential which gives mass to the axion. However, the origin of this mass in a dual description was a puzzle. When the JTA is applied it is clear that the condensation of instantons is responsible for the axion mass. Recently, some of us and collaborators have made a proposal that the JTA would be able to explain the dual phenomenon to radiative corrections [10] and used this idea to compute the fermionic determinant in the QED3 case. This result was immediately extended to consider the use of the JTA to study QED3 with magnetic-like defects. 
By a careful treatment of the symmetries of the system we suggested a geometrical interpretation for some debatable issues in the Maxwell-Chern-Simons-monopole system, such as the induction of the non-conserved electric current together with the Chern-Simons term, the deconfinement transition and the computation of the fermionic determinant in the presence of Dirac string singularities [11]. It is important to point out that the main signature of the JTA is the rank-jumping of the field tensor due to the defect condensation. However, this discontinuous change of the theory still puzzles a few. It is another goal of this investigation to shed some light on this matter. In the present work we hope to help clarify the above-mentioned issues, focusing on the analysis of the structure of these two methods, i.e., the JTA and the ALBA, by working out an explicit example. Introducing a new Generalized Poisson's Identity (GPI) for p-branes in arbitrary space-time dimensions and the novel concept of Poisson-dual branes, we show that in the specific limit where the defects proliferate until they occupy the whole space these two approaches are dual equivalent prescriptions to obtain an effective theory for the phase where the defects are completely condensed, starting from the fundamental theory defined in the normal phase where the defects are diluted. Setting the problem The example we will work out here is the Maxwell theory in the presence of monopoles that eventually condense, which serves as an abelian toy model that simulates quark confinement. The Maxwell field A_μ, minimally coupled to electric charges e and non-minimally coupled to magnetic monopoles g, is described by the action (1), in which F^M_{μν} is the magnetic Dirac brane, with δ_{μν}(x; S) a δ-distribution that localizes the world surface S of the Dirac string coupled to the monopole [8] and has the monopole current on its border. The field A_μ experiences a jump of discontinuity as it crosses S, hence F_{μν} has a δ-singularity over S [13]; the combination F^{obs}_{μν} := F_{μν} − F^M_{μν} is the regular combination which expresses the observable fields E and B. As we shall see, the quantum field theory associated to this action has two different kinds of local symmetries: the first one is the usual electromagnetic gauge symmetry; the second one corresponds to the freedom of moving the unphysical surface S over space, eq. (2), with δ_{μνρ}(x; V) a δ-distribution that localizes the volume V spanned by the deformation S → S′ (the boundary ∂S of S is physical and is kept fixed in the transformation, such that ∂S′ = ∂S). We name here this second kind of local symmetry brane symmetry. Taking into account the current conservation, we see that the action (1) is invariant under gauge transformations. But (1) changes under brane transformations, so (2) is not a symmetry of (1). But the Dirac string being unphysical, we should not be able to detect it experimentally. So we need to impose some consistency condition to make the Dirac string physically undetectable within the present formalism. We can do it only by means of a quantum argument: the phase factor appearing in the partition function associated to (1) changes under brane symmetry as e^{iS} → e^{iS′} = e^{i(S+ΔS)} = e^{iS} e^{−iegn}, n ∈ Z. It should be clear now that to keep the physics unchanged under brane transformations the consistency condition we need to impose is e^{−iegn} ≡ 1, n ∈ Z ⇒ eg = 2πN, N ∈ Z, which is the famous Dirac quantization condition [8], a possible explanation for charge quantization. 
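The display equations referred to above as (1) and (2) did not survive extraction. As a hedged sketch only, and assuming the common convention in which the magnetic Dirac brane enters as F^M_{μν} = g δ_{μν}(x; S), the structure the text describes (action, brane deformation, and the resulting quantization condition) can be written schematically as follows; the signs, normalizations, and the exact coupling terms are assumptions, not a reconstruction of the paper's own equations.

```latex
% Schematic sketch only; coefficients and couplings below are assumptions.
\begin{align*}
  S &\sim \int d^{4}x\,\Big[-\tfrac{1}{4}\big(F_{\mu\nu}-F^{M}_{\mu\nu}\big)^{2}
       - e\,A_{\mu}j^{\mu}\Big],
  \qquad F^{M}_{\mu\nu} = g\,\delta_{\mu\nu}(x;S), \\
  \delta_{\mu\nu}(x;S) &\;\to\; \delta_{\mu\nu}(x;S')
       = \delta_{\mu\nu}(x;S) + \partial^{\rho}\delta_{\rho\mu\nu}(x;V)
       \qquad (\partial S' = \partial S), \\
  e^{iS} &\;\to\; e^{iS}e^{-iegn} \;\stackrel{!}{=}\; e^{iS}
       \;\;\Longrightarrow\;\; eg = 2\pi N, \qquad n,N\in\mathbb{Z}.
\end{align*}
```

Read this way, the brane shift changes the action only through the unobservable surface S, and demanding that the phase factor be blind to that shift yields the Dirac condition quoted in the text.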
Now in order to consider the monopole condensation (which will induce the electric charge confinement) it is best to go to the dual picture. To obtain the dual action to (1) we introduce an auxiliary field f μν and define the master action by lowering the order of the derivatives appearing in (1) via Legendre transformation: ExtremizingS with respect to f μν we get f μν = F obs μν and substituting that in (3) we reobtain the original action (1) while ex-tremizingS with respect to A μ we get the condition ∂ μ f μν = j ν , which can be solved by f μν ≡ 1 2 μναβF αβ obs := 1 2 μναβ (F αβ −F αβ E ). We introduced the dual vector potentialà μ inF μν := ∂ μÃν − ∂ νÃμ and the electric Dirac braneF E μν that localizes the world surface of the electric Dirac string coupled to the electric charge. Substituting this result in (3) and discarding an electric branemagnetic brane contact term that does not contribute to the partition function due to the Dirac quantization condition, we obtain the dual action: where the couplings are inverted relatively to the ones in the original action (1): here the dual vector potentialà μ couples minimally with the monopole and non-minimally with the electric charge. Abelian lattice based approach We are now in position to consider monopole condensation by applying the ALBA to the dual Maxwell action (4). The main goal of this approach is to obtain an effective action for the condensed phase in the dual picture. The ALBA is based on the observation that upon condensation, the magnetic defects initially described by δ-distributions are elevated to the field category describing the long-wavelength fluctuations of the magnetic condensate. The condition triggering the complete condensation of the defects is given by the disappearance of the Poisson-dual brane (defined below) coming from a Generalized Poisson's Identity (see the discussion in Appendix A). We suppose that for the electric charges there are only a few fixed (external) worldlines L while for the monopoles we suppose that there is a fluctuating ensemble of closed worldlines L that can eventually proliferate (the details of how such a proliferation takes place is a dynamical issue not addressed neither by the ALBA nor by the JTA). The magnetic current is written in terms of the magnetic Dirac brane as˜j σ = 1 2 σρμν ∂ ρ F M μν . In order to allow the monopoles to proliferate we must give dynamics to their magnetic Dirac branes since the proliferation of them is directly related to the proliferation of the monopoles and their worldlines. Thus we supplement the dual action (4) with a kinetic term for the magnetic Dirac branes of the form − c 2j 2 μ , which preserves the local gauge and brane symmetries of the system. This is an activation term for the magnetic loops. Hence, the complete partition function associated to the extended dual action reads: where the Lorentz gauge has been adopted for the dual gauge field A μ and the partition function for the brane sector Z c [à μ ] is given by, where the functional δ-distribution enforces the closeness of the monopole worldlines. Next, use is made of the Generalized Poisson's Identity (GPI) where L is a 1-brane andṼ is the 3-brane of complementary dimension. The GPI works as an analogue of the Fourier transform: when the lines L in the left-hand side of (7) proliferate, the vol-umesṼ in the right-hand side become diluted and vice versa (see the discussion in Appendix A). 
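The structure behind the GPI (7) is that of the ordinary Poisson summation formula, whose one-dimensional prototype reads (shown here only as an illustration of the mechanism; (7) generalizes it to δ-distributions supported on the branes L and Ṽ):

$$\sum_{n\in\mathbb{Z}}\delta(x-n)\;=\;\sum_{m\in\mathbb{Z}}e^{2\pi i m x}.$$

In the brane version, summing over densely proliferating configurations on one side of the identity forces the Poisson-dual current, supported on the branes of complementary dimension, toward zero on the other side, and vice versa. This is precisely the proliferation/dilution trade-off invoked in the text.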
We shall say that the branes L and V (or the associated currents δ μ (x; L) orδ μ (x;Ṽ )) are Poisson-dual to each other. Using (7) we can rewrite (6) as: In the first line we introduced the auxiliary field η μ which will replace the δ-distribution current in the condensed phase as discussed above. In the second line we exponentiated the current conservation condition through use of theθ field and also made use of the GPI to bring into the game the Poisson-dual current θ V μ = 2πδ μ (x;Ṽ ). We also made an integration by parts and discarded a constant multiplicative factor since it drops out in the calculation of correlation functions. Integrating the auxiliary field η μ in the partial partition function (8) and substituting the result back in the complete partition function (5) we obtain, as the effective total action for the condensed phase in the dual picture, the London limit of the Dual Abelian Higgs Model (DAHM): where we defined m 2 This effective action is the main result of this approach. In the next section we shall dualize this result and one could be concerned with the fact that (9) constitutes a nonrenormalizable theory, thus requiring a cutoff in order to be well defined as an effective quantum theory. However, one can always think of its UV completion, in this case the complete DAHM, which is renormalizable, and then take its dual, taking the London limit afterwards [9]. At least in the case considered here, the result is exactly the same one obtains by directly dualizing the London limit (9) of the DAHM, thus justifying the procedure we shall adopt in the next section. Considering now that a complete condensation of monopoles takes place we let their worldlines L proliferate and occupy the whole space, implying thatθ V μ → 0 as seen from (7) and the discussion afterwards (notice thatθ V μ appears as a vortex-like defect for the scalar fieldθ describing the magnetic condensate, being a parameter that controls the monopole condensation). Integrating the Higgs fieldθ we get a transverse mass term forà μ (Higgs mechanism) such that the electric field has a finite penetration depth λ = 1 mà = √ c in the DSC: this is the dual Meissner effect. Integrating now the fieldà μ we obtain after some algebra the effective action: (10) The first term in (10) is responsible for the charge confinement: it spontaneously breaks the electric brane symmetry such that the electric Dirac stringF E μν acquires energy becoming physical and constitutes now the electric flux tube connecting two charges of opposite sign immersed in the DSC. The flux tube has a thickness equal to the penetration depth of the electric field in the DSC: λ = 1 mà = √ c . The shape of the Dirac string is no longer irrelevant: the stable configuration that minimizes the energy is that of a straight tube (minimal space). Substituting in the first term of (10) such a solution for the string term,F E μν = 1 2 μναβ 1 n·∂ (n α j β −n β j α ), where n μ := (0, R := R 1 − R 2 ) is a straight line connecting +e in R 1 and −e in R 2 , and taking the static limit we obtain a linear confining potential between the electric charges [9]. 
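The dual Meissner statement can be summarized schematically as follows (structure only; the precise value of the string tension requires the regularized computation of [9]). After the Higgs mechanism the dual gauge field propagates with mass $m_{\tilde A}=1/\sqrt{c}$, so static fields are screened as

$$\tilde A \;\sim\; \frac{e^{-m_{\tilde A}\,r}}{r},\qquad \lambda \;=\;\frac{1}{m_{\tilde A}}\;=\;\sqrt{c},$$

and the electric flux between a pair of charges $\pm e$ is squeezed into a tube of transverse size $\lambda$ whose energy grows with its length, giving in the static limit a linearly confining potential $V(R)=\sigma R$, with $\sigma$ fixed by $e$ and $m_{\tilde A}$.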
We also note that eliminating the magnetic condensate (i.e., taking the limit mà → 0) we recover the diluted phase with no confinement: the interaction between the electric currents in (10) becomes of the long-range (Coulomb) type and the confining term goes to zero (in terms of the flux tube we see that it acquires an infinite thickness such that the electric field is no longer confined and occupies the whole space). In summary, the supplementing of the dual action with a kinetic term for the magnetic Dirac branes which respects the local symmetries of the system, the subsequent use of the GPI (A.6) and the consideration of the limit where the Poisson-dual currentθ V μ goes to zero gives us a proper condition for the complete condensation of monopoles, leading to confinement, as viewed from the dual picture. Julia-Toulouse approach Now we want to analyze the monopole condensation within the direct picture, where the defects couple non-minimally with the gauge field A μ . Using the Dirac quantization condition we can rewrite (1) as: Julia and Toulouse made the crucial observation that if the monopoles completely condense we have a complete proliferation of the magnetic strings associated to them, hence the field A μ can not be defined anywhere in the space. This implies that F obs μν can no longer be written in terms of A μ . The JTA consists in the rankjump ansatz of taking the object F obs μν as being the fundamental field describing the condensed phase. Hence F obs μν acquires a new meaning and becomes the field describing the magnetic condensate. Defining F obs μν := −m Λ Λ μν and supplementing (11) with a kinetic term of the form 1 12 (∂ μ Λ αβ + ∂ β Λ μα + ∂ α Λ βμ ) 2 for the new field Λ μν , we obtain as the effective action for the condensed phase, in the direct picture, the massive Kalb-Ramond action: (12) whereΛ μν := 1 2 μναβ Λ αβ . Notice that in implementing the JTA the fundamental field of the theory experiences a rank-jump through the phase transition: we started with a 1-form in the normal phase and finished with a 2-form in the completely condensed phase. The rank-jump is a general feature of the JTA since in implementing this prescription we always use the ansatz of reinterpretating the kinetic term with non-minimal coupling for the field describing the diluted phase as being a mass term for the new field describing the condensate formed in the phase where the defects proliferate until occupy the whole space. Let us now apply the duality transformation in (9). For this we introduce an auxiliary field f μν such that the master action reads: Extremizing (13) with respect to f μν we get f μν =F obs μν and substituting this result back in the master action we recover (9). On the other hand, extremizing (13) with respect toà μ we obtain: Substituting (14) in (13), it follows that: where we integrated by parts and considered the antisymmetry of f μν in order to use ∂ μ ∂ ν f μν = 0. Defining now f μν := mÃΛ μν and making the identification mà ≡ m Λ , we get as the dual action to (9) the massive Kalb-Ramond action in the presence of vortices, a generalization of the result obtained by Quevedo and Trugenberger in [7]: More precisely, this extension consists in the construction of an action for the case with an incomplete condensate that is however already described by a rank-jumped tensor. If we now take the limitθ V μ → 0 in (16) we recover exactly the massive Kalb-Ramond action (12) obtained in [7] through the application of the JTA to (1). 
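Writing $H_{\mu\nu\alpha}:=\partial_\mu\Lambda_{\nu\alpha}+\partial_\nu\Lambda_{\alpha\mu}+\partial_\alpha\Lambda_{\mu\nu}$ for the field strength of the Kalb-Ramond field, the structure of the condensed-phase action obtained through the rank-jump ansatz is, schematically (metric conventions and the couplings to the electric sector and to the vortex current are suppressed in this sketch):

$$S_{\rm KR}\;\sim\;\int d^4x\left[\frac{1}{12}\,H_{\mu\nu\alpha}H^{\mu\nu\alpha}\;-\;\frac{m_\Lambda^{2}}{4}\,\Lambda_{\mu\nu}\Lambda^{\mu\nu}\;+\;\cdots\right],$$

where the mass term is nothing but the Maxwell term of (11) rewritten through the identification $F^{obs}_{\mu\nu}:=-m_\Lambda\Lambda_{\mu\nu}$. This is the sense in which the non-minimally coupled kinetic term of the diluted phase is reinterpreted as a mass term for the field describing the condensate.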
That establishes the duality between the JTA and the ALBA in the limit where the Poisson-dual current goes to zero, which physically corresponds to the limit of complete condensation of the defects. However, (16) withθ V μ = 0 displays a new and important result, which is a consequence of this formalism, showing that the rank-jump which is the signature of the JTA also occurs in the partial condensation process with the presence of vortex-like defects. Conclusion We established the equivalence through duality of two different approaches developed to handle defects, represented by magnetic monopoles in the example worked here, in the physically interesting context where the defects dominate the dynamics of the system. It was clearly shown that the two approaches are complementary, being different descriptions of the same phenomenon in the limit where the Poisson-dual current vanishes which characterizes the complete condensation of the defects. Indeed, within the formalism here called as ALBA the transition becomes smoother since the Poisson-dual currentθ V μ appears as a parameter that controls the proliferation of the magnetic defects. On the other hand, within the formalism referred to as JTA the phase transition is signalized by a rank-jump of the tensor field and seems to be a discontinuous phenomenon. However, the duality JTA-ALBA brings a new possibility. It is important to say that this dual equivalence was possible due to a suitable interpretation of the generalization of the Poisson identity developed here. We clearly showed that this identity is an essential tool to use in the context of defects condensation: the proliferation of the branes in one of the sides of the identity is accompanied by the dilution of the branes of complementary dimension in the other side of the identity. Due to this observation we were able to identify the signature of the complete condensation of defects in the dual picture (ALBA) with the vanishing of the Poisson-dual current. As the main result, we showed that in this specific limit, when the Poisson-dual current is set to zero, the ALBA and the JTA are two dual equivalent prescriptions for describing condensation of defects. As the final remark we point out the fact that when we consider nonzero configurations of the Poisson-dual currentθ V μ we allow the description of an intermediary region interpolating between the diluted and the completely condensed phases. As discussed, this corresponds to the presence of vortex-like defects in the condensate. It is possible to see that this new phase with the presence of vortices (θ V μ = 0), just like in the extreme case where the complete monopole condensation sets in, is also described within the direct picture by a rank-jumped action. The JTA as originally described by Quevedo and Trugenberger, therefore, will describe the physically interesting extreme case where all defects are condensed.
5,560.8
2009-08-04T00:00:00.000
[ "Physics" ]
Using scanning electron microscopy and molecular data to discover a new species from old herbarium collections: The case of Phlomoideshenryi (Lamiaceae, Lamioideae) Abstract Phlomoides is one of the largest genera of Lamiaceae with approximately 150–170 species distributed mainly in Eurasia. In this study, we describe and illustrate a new species, P.henryi, which was previously misidentified as P.bracteosa, from Yunnan Province, southwest China. Molecular phylogenetic analyses revealed that P.henryi is found within a clade in which most species lack basal leaves. In this clade, the new species is morphologically distinct from P.rotata in having an obvious stem and, from the rest, by having transparent to white trichomes inside the upper corolla lip. In addition, micro-features of trichomes on the calyx and leaf epidermis can differentiate the new species from other species grouped in the same clade and a key, based on trichome morphology for these species, is provided. The findings demonstrate that the use of scanning electron microscopy can reveal inconspicuous morphological affinities amongst morphologically similar species and play an important role in the taxonomic study of the genus Phlomoides. As currently defined, Phlomoides consists of approximately 150-170 species and ranks the second largest genus within subfamily Lamioideae (Salmaki et al. 2012a, b;F Zhao et al. 2021).Species of Phlomoides are mainly distributed from central Europe to the Russian Far East, but highly diversified in three regions: Central Asia (59 spp.; Czerepanov (1995)), the Iranian highlands (ca.41 spp.; Salmaki et al. (2012a)) and China (58 spp.; Xiang et al. (2014); Zhao et al. (2021aZhao et al. ( , 2024))).In China, most species are found in the southwest region and 29 species and 11 varieties are endemic and geographically restricted (Li and Hedge 1994).The existing infrageneric classification of Chinese Phlomoides (= Phlomis section Phlomoides Briq.) was established by Hsuan (1977), who divided Chinese species into two subsections and 17 series, based on external morphology (e.g. the absence/presence of the basal leaves, shape of stem leaves, length and density of trichomes on stems and leaves etc.).However, most infrageneric categories were not recovered as monophyletic (Zhao et al. 2024) and those external and quantitative characters used for traditional taxonomy are highly variable amongst different species or at different populations for the same species.In contrast, some micro-features probably have taxonomic significance within Phlomoides.For example, Seyedi and Salmaki (2015) and Khosroshahi and Salmaki (2019) found trichome morphology to be important for the delimitation of sections and species of Phlomoides.In addition, trichome characters have significant taxonomic values in other genera of Lamiaceae (Gairola et al. 2009;Xiang et al. 2010;Hu et al. 2012;Yao et al. 2013).However, micro-features of trichomes and other characters of Chinese Phlomoides species are poorly known. During the past ten years, phylogenetic and taxonomic studies have focused on Phlomoides from China (Xiang et al. 2014;Zhao et al. 2021aZhao et al. , b, 2023aZhao et al. 
, b, 2024) ) resolving some taxonomic puzzles.In the process of the continuing taxonomic study of the genus, two collections attracted our attention when investigating historical specimens.One collection with three sheets were collected by Augustine Henry in 1898 (A.bracts subulate, simple long trichomes on calyces, bracts and both sides of leaves) shown in these specimens are obviously different from those of P. bracteosa (upper floral leaves sessile, lower floral leaves with petioles up to 20 mm long, bracts lanceolate-linear, branched trichomes on calyces, bracts and both sides of leaves).Fortunately, we re-discovered the plant in the wild from the possible locality where specimens were collected by Henry, after more than 125 years since the first collection in 1887.Molecular phylogenetic analyses and macro-and micro-morphological studies demonstrate that the species is a new species, P. henryi and we describe and illustrate it in this study. Taxon sampling In total, we sampled 49 out of 58 (84.48%)Chinese species of Phlomoides for molecular phylogenetic analyses.Sampling is primarily based on previous molecular phylogenetic studies of Phlomoides (Zhao et al. 2024) and only samples of the potential new species and P. bracteosa were newly sequenced.Fresh leaves of the putative new species (P.henryi) were collected and dried with silica-gel in the field (Jianshui County, Yunnan Province) and herbarium materials of P. bracteosa were collected from the herbarium BM. In addition, six species from the subclade comprising the potential new species, as well as P. bracteosa, were sampled to investigate macro-micro-features of trichomes on flora bracts and leaves.The list of sampled species and their origins are given in Table 1 and voucher specimens were deposited in the Herbarium of the Kunming Institute of Botany (KUN) and Institute of Botany (PE), Chinese Academy of Sciences. DNA extraction, selection of markers and molecular phylogenetic analyses Total genomic DNA was extracted using the CTAB method (Doyle and Doyle 1987).Previous studies revealed that plastid DNA phylogeny can better resolve relationships of Phlomoides than the tree inferred from the nuclear ribosomal internal and external transcribed spacer regions (nrITS and nrETS) (Zhao et al. 2023a, b;2024).In order to test systematic placement of the new species, nine plastid DNA regions (atpB-rbcL, psbA-trnH, rpl16, rpl32-trnL, rps16, trnK, trnL-trnF, trnS-trnG, trnT-L) were selected for phylogenetic reconstruction.Primers, polymerase chain reaction (PCR), sequencing and alignment followed those described in Zhao et al. (2024).The sequences newly generated in this study together with their GenBank accession numbers are listed in Appendix 1. Morphological investigations Species concept, definitions of characters and depiction generally follow Li and Hedge (1994).Type specimens and protologues for all species of Phlomoides in China were collated.Morphological features were based on herbarium as well as field investigations.Specimens at B, BM, C, CDBI, E, FI, GH, HIB, IBSC, K, KUN, LE, M, MA, MAO, MICH, MO, MW, NAS, P, PE, S, SG, TI, W, WUK and XJBI (herbarium acronyms followed Thiers 2022) and our collections from the field were examined for characterisation and morphological comparison.Additional morphological information (including habit, habitat, root, leaf, calyx, flower etc.) was taken from field observations, as well as literature (Hsuan 1977;Wu et al. 1977;Li 1985;Li and Hedge 1994). 
Micro-features of leaf epidermis and floral bracts were investigated using Light Microscopy (LM) and Scanning Electron Microscopy (SEM).Photographs and morphological observations were taken using a Leica DM2500 light microscope (Leica Microsystems GmbH, Wetzlar, Germany).Mature leaves and floral bracts were taken from our collection (Table 1) for SEM investigation.Materials were mounted on to stubs and coated with gold, using a ZEISS EVO LS10 scanning electron microscope (Carl ZEISS NTS, Germany) with 10 kV voltage (Kunming Institute of Botany, Yunnan, China).Terminology of morphological characteristics of trichomes followed Khosroshahi and Salmaki (2019). Molecular phylogeny and systematic placement of Phlomoides henryi A total of 18 sequences were newly sequenced in the present study and they were submitted to GenBank under accession nos.OR674852-OR674869.The aligned length of the combined plastid dataset was 9259 bp (2380 bp for atpB-rbcL, 421 bp for psbA-trnH, 1361 bp for rpl16, 681 bp for rpl32-trnL, 967 bp for rps16, 958 bp for trnK, 868 bp for trnL-trnF, 831 bp for trnS-trnG and 792 bp for trnT-L), respectively.The topologies of the BI and ML trees were consistent with each other, only the Bayesian 50% majority-rule consensus tree being presented, with the posterior probabilities (PP) and Bootstrap support (BS) and values being superimposed near the nodes (Fig. 1).Monophyly of the genus Phlomoides was recovered (Fig. 1: PP =1.00/BS = 100%).The backbone topologies of Phlomoides recovered in present study are largely consistent with those of previous studies (Zhao et al. 2024), clade I is sister to Clade II with strong support values (Fig. 1: 1.00/100%), then sister to a large clade consisting of Clades III, IV, V and VI.Chinese Phlomoides species can subdivided into six clades (Fig. 1). As shown in Fig. 1, the new species, Phlomoides henryi is distantly related to P. bracteosa.Instead, P. henryi is sister to a subclade (Fig. 1 Figs 3, 4 and Table 2 show the morphology and distribution of trichomes on leaves and bracts of the investigated taxa.Sub-sessile/sessile glandular trichomes occur widely in every part of each species of Phlomoides (Table 2).Short stalked glandular trichomes were observed on the abaxial leaf surface in five species and on the bracts of only one species, i.e.P. breviflora.Branched glandular trichomes were only recorded on the abaxial leaf surface of P. breviflora. Simple short eglandular trichomes were observed in every species on leaf and bract surface, but were missing in the abaxial leaf of Phlomoides nyalamensis, since it was nearly glabrous (Fig. 4J).Adaxial leaf surfaces were often covered by simple eglandular trichomes, except for P. bracteosa (Fig. 4C), which has dense branched eglandular trichomes on the adaxial leaf surface.Simple long eglandular trichomes were most common on bracts (Fig. 3B, H, J, L, O).Abaxial leaf surfaces often had branched eglandular trichomes, but these are not present in the new species (Fig. 4B). Trichomes were transparent to white or brown to black in Phlomoides.Trichomes inside the upper corolla lip of the new species (P.henryi), P. bracteosa and P. rotata were transparent to white, while the other five species were brown to black.Bract trichomes of P. tibetica and P. milingensis were brown to black (Fig. 3K, M), the other six species were transparent to white (Fig. 3A, C, E, G, I, O). 
Discussion Herbaria house millions of specimens that embody the plant diversity on the Earth.Many new species have been lurking in herbaria for many years before being published.Bebber et al. (2010) estimated that 84% of new species' descriptions were from old specimens collected more than five years prior to publication and 25% from specimens more than 50 years old.During the taxonomic review of some groups of Lamiaceae, we have also found some new species from old herbarium specimens (Chen et al. 2014;Dong et al. 2015), indicating taxonomic work, based on herbaria, is still a very important resource for the discovery of new taxa. The resulting phylogenetic tree of Phlomoides in this study was similar to that in previous study (Zhao et al. 2024).The new species, P. henryi, was nested within Clade II and formed a separate branch (Fig. 1: 1.00/100%) that is sister to a subclade containing P. rotata and five species with brown to black trichomes on the upper corolla.Geographically, Phlomoides henryi is distributed in southern Yunnan, while the other six species in this subclade were mainly distributed in the Qinghai-Tibetan Plateau and Himalaya.The new species is morphologically distinct from the other six species in this subclade.For example, trichomes on the upper corolla lip of P. henryi and P. rotata are colourless and perceptually transparent to white, but brown to black in the other five species.Morphologically, P. rotata is distinct from all other species of Phlomoides by the very short stem producing a rosette of leaves with the plant often less than 10 cm high, while P. henryi is generally taller than 1 m.As we observed, all the species with trichomes brown to black were embedded within this subclade.The sister clade to that containing P. henryi contains 23 species that are mainly distributed in Hengduan Mountains.Phlomoides henryi is similar to other species in Clade I and Clade II in lacking basal leaves.Only four species have basal leaves in Clade II, i.e.P. rotata, P. tibetica, P. milingensis and P. atropurpurea, while all the species in Clades III-VI have basal leaves. As above mentioned, we believe that the differences merit recognition of the new species and we describe it below. Phlomoides is a morphologically diverse and taxonomically difficult group with many characters used for traditional taxonomy being highly variable.In this study, we investigated trichome micro-morphology on bracts and leaves of Phlomoides henryi and related species.We found that trichomes are a useful character to distinguish some morphologically similar species.Based on the colour of trichomes, we can separate two groups of those species.Phlomoides nyalamensis, P. macrophylla, P. tibetica, P. milingensis and P. breviflora have brown to black trichomes on the upper corolla lip, while the other species (P.rotata, P. bracteosa and the new species described here, P. henryi) have transparent to white trichomes on the upper corolla lip.Trichome density and bract trichome colour can separate P. tibetica from the similar P. milingensis.Both species are distributed in Xizang at an altitude from 3500-4500 m and Hsuan (1977) placed them within Series Tibeticae.Phlomoides tibetica has floral bracts with black simple trichomes and no branched trichomes, while P. milingensis has floral bracts with brown simple and branched trichomes.Similarly, the new species described here, P. 
henryi, can be distinguished from the six related species in the subclade by the absence of branched trichomes on the abaxial leaf surface (Fig. 5B).Phlomoides bracteosa can easily be separated from these six species by having branched trichomes on the adaxial leaves (Fig. 5C).Azizian and Cutler (1982) have found that adaxial and abaxial leaf surfaces have different trichome types, but in that work, Phlomoides was treated as a section of Phlomis and they only discussed the differences amongst Phlomis sect.Phlomis, Phlomis sect.Phlomoides and Eremostachys and not at the species level.Subsequent studies did not observe trichomes on different structures (Seyedi and Salmaki 2015;Khosroshahi and Salmaki 2019).However, here we found different structures were covered with significantly different trichomes and these differences can be used as evidence to separate morphologically similar species.Future studies should focus on micro-morphological investigation of trichomes and other characters (i.e.appendages, calyces, roots, mericarps) and those micro-features are probably helpful for taxonomy and species identification of Phlomoides species. In order to distinguish those species grouped with the new species in the phylogenetic tree (Fig. 2), as well as P. bracteosa, we provide a key, mainly based on macro-and micro-morphological trichomes.Diagnosis.Within the subclade, Phlomoides henryi is morphologically similar to P. rotata for having transparent to white trichomes inside the upper corolla lip rather than brown to black and is distinct from all other species by lacking branched hairs.P. bracteosa has similar transparent to white trichomes inside the upper corolla lip, but with branched trichomes on both sides of leaves and floral bracts.The differences between P. henryi, P. rotata and P. bracteosa are listed in Table 3. Figure 1 . Figure 1.Phylogeny of Phlomoides inferred by Bayesian Inference (BI), based on the combined plastid dataset cpDNA.Support values displayed on the branches follow the order BI-PP/ML-BS (" * " indicates PP = 1.00 or BS = 100%, "-" indicates incongruent relationship between BI and ML tree. Figure 3 . Figure 3. Photos of bracts, SEM of bracts of Phlomoides henryi and related species A, B P. henryi C, D P. bracteosa E, F P. breviflora G, H P. macrophylla I, J P. nyalamensis K, L P. tibetica M, N P. milingensis O, P P. rotata.A, C, E, G, I, K, M, O photos of bracts B, D, F, H, J, L, N, P SEM of bracts. Figure 4 . Figure 4. SEM of both sides of leaves of Phlomoides henryi and related species A, B P. henryi C, D P. bracteosa E, F P. breviflora G, H P. macrophylla I, J P. nyalamensis K, L P. tibetica M, N P. milingensis O, P P. rotata A, C, E, G, I, K, M, O SEM of adaxial leaves B, D, F, H, J, L, N, P SEM of abaxial leaves. Figure 5 . Figure 5. Phlomoides henryi Y.Zhao & C.L.Xiang A habitat B plant with linear-tuberous roots C inflorescence D verticillaster E flowers F dissected flower G appendages at base of posterior filaments H fruiting calyces I dissected calyces J bracts K floral leaves L stem leaves.Photographs by Yue Zhao, except C by Li Chen. Table 1 . List of sampled Phlomoides species to investigate macro/micro features of trichomes and their voucher information. Table 2 . Distribution of different types of trichome in the examined Phlomoides spp. Table 3 . Morphological comparisons amongst Phlomoides henryi, P. rotata and P. bracteosa.No branched trichomes With branched trichomes on abaxial leaves With branched trichomes on both sides of leaves and bracts
3,652.6
2024-02-20T00:00:00.000
[ "Biology", "Environmental Science" ]
My-Trac: System for Recommendation of Points of Interest on the Basis of Twitter Profiles : New mapping and location applications focus on offering improved usability and services based on multi-modal door to door passenger experiences. This helps citizens develop greater confidence in and adherence to multi-modal transport services. These applications adapt to the needs of the user during their journey through the data, statistics and trends extracted from their previous uses of the application. The My-Trac application is dedicated to the research and development of these user-centered services to improve the multi-modal experience using various techniques. Among these techniques are preference extraction systems, which extract user information from social networks, such as Twitter. In this article, we present a system that allows to develop a profile of the preferences of each user, on the basis of the tweets published on their Twitter account. The system extracts the tweets from the profile and analyzes them using the proposed algorithms and returns the result in a document containing the categories and the degree of affinity that the user has with each category. In this way, the My-Trac application includes a recommender system where the user receives preference-based suggestions about activities or services on the route to be taken. transport, services) for a Introduction Humans are social beings; we always seek to be in contact with other people and to have as much information as possible about the world around us. The philosopher Aristotle (384-322 B.C.) in his phrase "Man is a social being by nature" states that human beings are born with the social characteristic and develop it throughout their lives, as they need others in order to survive. Socialization is a learning process; the ability to socialize means we are capable of relating with other members of the society with autonomy, self-realization and self-regulation. For example, the incorporation of rules associated with behavior, language, and culture improves our communication skills and the ability to establish relationships within a community. In the search for improvement, communication, and relationships, human beings seek to get in contact with other people and to obtain as much information as possible about the environment in order to achieve the above objectives. The emergence of the Internet has made it possible to define new forms of communication between people. It has also made it possible to make a large amount of information on any subject available to the average user at any time. This is materialized in the development of social networks. The concept of social networking emerged in the 2000s as a place that allows for interconnection between people, and, very soon, the first social networking platforms appeared on the Internet that guides them via public transport. There are many different systems that incorporate both recommendation and navigation. However, there is no system that combines event recommendation and pedestrian navigation with (real-time) public transport. However, it does not employ multi-modal navigation between different public transport modes (bus, train, carpooling, plane, etc.) in different countries and that would use information from the user's social network profile. Instead, current systems utilize a set of information initially entered into the application which is not updated afterwards. Finally, Tables 1 and 2 present a review of similar works. 
Mobile phone application that suggests events and places to the user and guides them via public transport. The current systems utilize a set of information initially entered into the application which is not updated afterwards. There is no system that combines event recommendation and pedestrian navigation with (real-time) public transport. It does not employ multi-modal navigation between different public transport modes (bus, train, carpooling, plane, etc.) in different countries and that would use information from the user's social network profile. Systems for city-based tourism [2] A personalized travel route recommendation based on the road networks and users' travel preferences. The experimental results show that the proposed methods achieve better results for travel route recommendations compared with the shortest distance path method. It does not use information from public transport services in route recommendations. Tourism routes as a tool for the economic development of rural areas-vibrant hope or impossible dream? [3] This paper argues that the clustering of activities and attractions, and the development of rural tourism routes, stimulates co-operation and partnerships between local areas. The paper further discusses the development of rural tourism routes in South Africa and highlights the factors critical to its success. The article analyzes the realization of routes that include activities and attractions in a way that encourages and enhances rural development in Africa. Preliminary project that requires public cooperation (institutions, transport, services) for a comprehensive improvement of the proposal. This article improves on the previous system for the extraction of information regarding Twitter users [4]. The system is capable of obtaining information about a particular user and of elaborating a profile with the user's preferences in a series of preestablished categories. A review of existing reputation systems is presented in Section 2. Section 3 describes the proposal. Section 4 presents the assessment made with synthetic data. Section 5 shows how the system is integrated in My-Trac app. Finally, Section 6 presents the conclusions. Title / Publication Functionality Advantages Shortcomings Social Recommendations for Events [5] Outlife recommender assists in finding the ideal event by providing recommendations based on the user's personal preferences. In addition to the user's preferences, the recommender uses information from the user's group of friends to make event recommendations more satisfactory. Although it uses information from the user's groups of friends, no use is made of information from the user's social networks to complement the analysis and recommendation. Smart Discovery of Cultural and Natural Tourist Routes [6] This paper presents a system designed to utilize innovative spatial interconnection technologies for sites and events of environmental, cultural and tourist interests. The system discover and consolidate semantic information from multiple sources, providing the end-user the ability to organize and implement integrated and enhanced tours. The system adapts the services offered to meet the needs of specific individuals, or groups of users who share similar characteristics, such as visual, acoustic, or motor disabilities. Personalization is done in a dynamic way that takes place at the time and place of the service. 
The very comprehensive system that uses external services, scraping, crawling, geo-positioning but does not include information from social networks to complement the analysis and recommendation of events. Enhancing cultural recommendations through social and linked open data [7] Hybrid recommender system (RS) in the artistic and cultural heritage area, which takes into account the activities on social media performed by the target user and her friends The system integrates collaborative filtering and community-based algorithms with semantic technologies to exploit linked open data sources in the recommendation process. Furthermore, the proposed recommender provides the active user with personalized and context-aware itineraries among cultural points of interest. The main drawback is the absence of extensive control over the semantics that are not taken into account. It generates difficulties in justifying, explaining, and hence analyzing the resulting scores. Personalized Tourist Route Generation [8] Intelligent routing system able to generate and customize personalized tourist routes in real-time and taking into account public transportation. We have modeled the tourist planning problem, integrating public transportation, as the Time Dependent Team Orienteering Problem with Time Windows (TDTOPTW). We have designed a heuristic able to solve it in real time, precalculating the average travel times between each pair of POIs in a preprocessing step. Future works consists on extending the system to more cities with a different public transport network topology. The next one consists on integrating an advanced recommendation system in a wholly functional PET. The systema don't use social network capabilities, that allows to store, share and add travel experiences to better help tourists on the destination. Natural Language Processing Techniques Applied to Twitter Profiles In this section, we review the main techniques applied in the analysis that make it possible to get to know the users preferences through their tweets. This allows for recommendations to be made according to the user profile. Word Embedding Techniques NLP techniques allow computers to analyze human language, interpret it, and derive its meaning so that it can be used in practical ways. These techniques allow for tasks, such as automatic text summarization, language translation, relation extraction, sentiment analysis, speech recognition, and item classification, to be carried out. Currently, NLP is considered to be one of the great challenges of artificial intelligence as it is one of the fields with the highest development activity since it presents tasks of great complexity: how to really understand the meaning of a text, how to intuit neologisms, ironies, jokes, or poetry? It is a challenge to apply the techniques and algorithms that allow us to obtain the expected results. One of the most commonly used NLP techniques is Topic Modeling. This technique is a type of statistical modeling that is used to discover the abstract "topics" that appear in a series of input texts. Topic modeling is a very useful text mining tool for discovering hidden semantic structures in texts. Generally, the text of a document deals with a particular topic, and the words related to that topic are likely to appear more frequently in the document than those that are unrelated to the text. 
Topic Modeling collects the set of more frequent words in a mathematical framework, which allows one to examine a set of text documents and discover, on the basis of the statistics of the words in each one, what the topics may be and what the balance is between the topics in each document. The input of topic modeling is a document-term matrix. The order of words does not matter. In a document-term matrix, each row is a question (or document), each column is a term (or word), we label "0" if that document does not contain that term, "1" if that document contains that term once, "2" if that document contains that term twice, and so on. Algorithms, such as Bag-of-words or TF-IDF, among others, make it possible to represent the words used by the models and create the matrix defined above, representing a token in each column and counting the number of times that token appears in each sentence (represented in each row). • Bag-of-words. This model allows to extract the characteristics of texts (also images, audios, etc.). It is, therefore, a feature extraction model. The model consists of two parts: a representation of all the words in the text and a vector representing the number of occurrences of each word throughout the text. That is why it is called Bag-of-words. This model completely ignores the structure of the text, it simply counts the number of times words appear in it. It has been implemented through the Genism library [9]. • Term Frequency -Inverse Document Frequency (TF-IDF). This is the product of two measures that indicate, numerically, the degree of relevance that a word has in a document within a collection of documents [10]. It is broken down into two parts: -Term frequency: Measures the frequency with which certain terms appear in a document. There are several measurement options, the simplest being the gross frequency, i.e., the number of times a term t appears in a document d. However, in order to avoid a predisposition towards long documents, the normalized frequency is used: As shown in Equation (1), the frequency of the term is divided by the maximum frequency of the terms in the document. -Inverse document frequency: If a term appears very frequently in all of the analyzed documents, its weight is reduced. If it appears infrequently, it is increased. As shown in Equation (2), the total number of documents is divided by the number of documents containing the term. Term frequency-Inverse document frequency: The entire formula is as shown in Equation (3). Word embedding is a term used for the representation of words for text analysis, typically in the form of a real-valued vector that encodes the meaning of the word such that the words that are closer in the vector space are expected to be similar in meaning [11]. Word embeddings can be obtained using a set of language modeling and feature learning techniques where words or phrases from the vocabulary are mapped to vectors of real numbers [12]. • Word2vec. This technique uses huge amounts of text as input and is able to identify which words appear to be similar in various contexts [13][14][15]. Once trained on a sufficientñy big dataset, 300-dimensional vectors are generated for each word, forming a new vocabulary where "similar" words are placed close to each other. Pre-trained vectors are used, achieving a wealth of information from which to understand the semantic meaning of the texts. • Doc2vec. 
This technique is an extension of Word2Vec and is applied to a document as a whole instead of individual words, it uses an unsupervised learning approach to better understand documents as a whole [16]. Doc2Vec model, as opposed to Word2Vec model [17], is used to create a vectorized representation of a group of words taken collectively as a single unit. It does not only give the simple average of the words in the sentence. Topic Modeling As already presented in the previous section, topic modeling is a tool that takes an individual text (or corpus) as input and looks for patterns in word usage; it is an attempt to find semantic meaning in the vocabulary of that text (or corpus). This set of tools enables the extraction of topics from texts; a topic is a list of words that is presented in a way that is statistically significant. Topic modeling programs do not know anything about the meaning of the words in a text. Instead, they assume that each text fragment is composed (by an author) through the selection of words from possible word baskets, where each basket corresponds to a topic. If that is true, then it is possible to mathematically decompose a text into the baskets from which the words that compose it are most likely to come. The tool repeats the process over and over again until the most probable distribution of words within the baskets, the so-called topics, is established. The techniques executed by the proposed system are used to discover word usage patterns of each user on Twitter, and they make it possible to group users into different categories. To this end, a thorough review of the main tools for topic modeling has been carried out. Most of the algorithms are based on the paradigm of unsupervised learning. These algorithms return a set of topics, as many as indicated in the training. Each topic represents a cluster of terms that must be related to one of those categories. Precisely for this reason, a large number of tweets have been retrieved as training data. Keywords have been searched for for each category. As part of this research, a total of three algorithms have been evaluated: LDA, LSI, and NMF. In the NMF experiment, the best results were obtained, although the techniques applied in other works have been reviewed in order to contrast their results with this method. Apart from the comparison itself, there are numerous studies that have made similar comparisons between these techniques so that the decision is supported by similar studies. In the work of Tunazzina Islam, in 2019 a similar experiment was carried out to the one proposed in this paper [18]. In this paper, Apache Kafka is employed to handle the big streaming data from Twitter. Tweets on yoga and veganism are extracted and processed in parallel with data mining by integrating Apache Kafka and Spark Streaming. Topic modeling is then used to obtain the semantic structure of the unstructured data (i.e. Tweets). They then perform a comparison of the three different algorithms LSA, NMF, and LDA, with NMF being the best performing model. Another noteworthy work is that carried out by Chen et al. [19], in which an experiment is carried out to detect topics in small text fragments. This is similar to the proposal made in this paper, since tweets can be considered small texts. In this work a comparison is made between the LDA and NMF methods, the latter being the one that provided the best results. • Latent Dirichlet allocation (LDA). 
Is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar [20][21][22]. For example, if observations are collections of words in documents, each document is a mixture of a small number of topics and each word's presence is attributable to one of the document's topics. LDA is an example of a topic model and belongs to the machine learning toolbox and in wider sense to the artificial intelligence toolbox. • Nonnegative Matrix Factorization (NMF). Is an unsupervised learning algorithm belonging to the field of linear algebra. NMF reduces the dimensionality of an input matrix by factoring it in two and approximating it to another of a smaller range. The formula is V ≈ W H. Let us suppose, observing Equation (4), a vectorization of P documents with an associated dictionary of N terms (weight). That is, each document is represented as a vector of N dimensions. All documents, therefore, correspond to a V matrix where N is the number of rows in the matrix, and each of them represents a term, while P is the number of columns in the matrix and each of them represents a document. Equations (5) and (6) shows matrices W and H. The value r marks the number of topics to be extracted from the texts. Matrix W contains the characteristic vectors that make up these topics. The number of characteristics (dimensionality) of these vectors is identical to that of the data in the input matrix V. Since only a few topic vectors are used to represent many data vectors, it is ensured that these topic vectors discover latent structures in the text. The H-matrix indicates how to reconstruct an approximation of the V-matrix by means of a linear combination with the W-columns. where N is the number of rows in matrix W, and each of them represents a term (weight), and r is the number of columns in matrix W, where r is the number of characteristics to be extracted. where r is the number of rows in matrix H, r is the number of characteristics to be extracted, and P is the number of columns, with one column for each document. The result of the matrix product between W and H is, therefore, a matrix of dimensions NxP corresponding to a compressed version of V. The use of Machine Learning techniques for the analysis of information extracted from Twitter is a very common case study today. It is convenient to study what kind of research is being carried out on this subject. One of the main applications is the use of Twitter and Natural Language Processing techniques in order to extract a user's opinion about what is being tweeted at a given time. The article "A system for real-time Twitter sentiment analysis of 2012 U.S. presidential election cycle", written by Hao Wang et al. [23], presents a system for real-time polarity analysis of tweets related to candidates for the 2012 U.S. elections. The system collects tweets in real time, tokens and cleans them, identifies which user is being talked about in the tweet, and analyzes the polarity. For training, it applies Naïve Bayes, a statistical classifier. It uses hand-categorized tweets as input. Another study similar to this one is the one proposed by J.M.Cotelo et al. from the University of Seville: "Tweet Categorization by combining content and structural knowledge" [24]. It proposes a method to extract the users' opinion about the two main Spanish parties in the 2013 elections. 
It uses two processing pipelines, one based on the structural analysis of the tweets, and the other based on the analysis of their content. Another possible line of research is based on categorizing Twitter content. This is the case of the article "Twitter Trending Topic Classification" written by Kathy Lee et al. [25]. It studies the way to classify trending topics (hashtags highlighted) in 18 different categories. To this end, Topic Modeling techniques were used. The key point lies in providing a solution based on the analysis of the network underlying the hashtags and not only the text: "our main contribution lies in the use of the social network structure instead of using only textual information, which can often be noisy considering the social network context". As it can be seen, there are many studies currently oriented to the analysis of Twitter using Machine Learning tools. The challenge to be faced in this work is to find the optimal way of classifying users according to their tweets. The sections that follow describe the objectives of the project and detail the research and testing that led to the construction of a stable system fit for the purpose for which it has been designed. Proposal This section proposes a system for the extraction of information about Twitter users. The system is capable of obtaining information about a particular user and of elaborating a profile with the user's preferences in a series of pre-established categories. From an abstract point of view, the proposal could be seen as a processing pipeline, as shown in Figure 1. The different phases of this pipeline contribute to the achievement of the main objective: user classification. Category Definition Matching a given profile to a specific category or topic is one of the objectives of NLP algorithms. As a starting point, it is necessary to prepare the training dataset that is used when investigating the algorithmic model. The strategy followed is based on the model of the Interactive Advertising Bureau (IAB) association [26]. Today, IAB is a benchmark standard for the classification of digital content. In particular, the IAB Tech Lab has developed and released a content taxonomy on which the present categorization is based. This taxonomy proposes a total of 23 categories with their corresponding subcategories covering the main topics of interest. In this way, 8000 tweets from each of these categories have been ingested. As a result, 23 datasets with examples of tweets related to each category were obtained, these datasets have been used to train the system at a later stage. Specifically, the list of topics is shown in Table 3. Twitter Data Extraction The Twitter data extraction mechanism is a fundamental element of the system. The goal of this mechanism is to recover two types of data. On the one hand, the system extracts a set of anonymous tweets related to each of the defined preference categories; these tweets are used to train the data classification algorithms. On the other hand, the mechanism extracts information about the given user for the analysis of their preferences. Twitter's API enables developers to perform all kinds of operations on the social network. It is, therefore, necessary for our system to use this powerful API. This API could be used by elaborating a module that would make HTTP requests to the API so that the endpoints of interest are executed. However, this involves a remarkably high development cost. 
Another option would be to make use of one of the multiple Python libraries that encapsulate this logic and offer a simple interface to developers. The latter option has been chosen for the development of this system, more specifically, library Tweepy [27]. Preprocessing of Tweets Once the data has been extracted, it must be prepared for the classification algorithms. Cleaning and preprocessing techniques must be applied, so that the text is prepared for topic modeling algorithms. Libraries, such as NLTK and Spacy, have been used, as can be observed in Listing 1. The first step involves cleaning tweets, by removing content that does not provide information for language processing. More specifically, this task consists in eliminating URLs, hashtags, mentions, punctuation marks, etc. Another of the techniques applied to obtain more information from tweets is the transformation of the emojis contained in the text into a format from which it is possible to extract information. To do this, a dictionary of emojis is used as a starting point for the conversion of the data. This dictionary contains a series of values that interpret each of the existing emojis when applying the corresponding analysis. In this way, it has been possible to identify and give a certain value to each emoji for its treatment. The key activity performed during the preprocessing consist of eliminating stopwords and tokenization. Whether it is a paragraph, an entire document or a simple tweet, every text contains a set of empty words or stopwords. This set of words is characterized by its continuous repetition in the document and its low value within the analysis. These words are mainly articles, determiners, synonyms, conjunctions, and others. Table 4 shows the results obtained after the tweets have gone through the preprocessing and preparation process which had been carried out using the tools listed above. Vectorization Vectorization is the application of models that convert texts into numerical vectors so that the algorithms can work with the data. Two algorithms have been considered for the performance of this task, "Bag-Of-Words" and "Tf-Idf". Both are widely used in the field of NLP, but, in general, creation of tf-idf weights from text works properly and is not very expensive computationally. Moreover, NMF expects as input a Term-Document matrix, typically a "Tf-Idf" normalized. The vectorizer have been tuned manually with some parameters according to the dataset, as can be observed in Listing 2. Min_d f was set to 100 to ignore words that appear in less than 100 tweets. In the same way, max_d f was set to 0.85 to ignore words that appear in more than 85% of the tweets. Thanks to that feature, it is possible to remove words that introduce noise in the model. Finally, the algorithm only takes into account single words, so, in order to include bigrams, the parameter ngram_range was set to (1, 2) Topic Modeling Topic Modeling is a typical NLP task that aims to discover abstract topics in texts. It is widely used to discover hidden semantic structures. In the present work, this technique has been used to discover the main topics of interest of the My-Trac application users based on their Twitter profiles, which should correspond to some of the previously defined categories. Regarding the features of the model, in Section 3.4, the training tweets were vectorized to create a Term-Document matrix which has been the input of the NMF model. 
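As an illustration of the cleaning and vectorization stage just described, a minimal sketch in Python is given below. It is not the authors' Listing 1 or 2; it only mirrors the steps and parameter values quoted in the text (URL, mention and hashtag removal, stopword filtering, and a TF-IDF vectorizer with min_df = 100, max_df = 0.85 and ngram_range = (1, 2)), while the emoji-dictionary conversion is omitted. Function and variable names are illustrative.

```python
# Sketch of the tweet cleaning and TF-IDF vectorization stage (illustrative,
# not the original Listings 1-2; regexes and names are assumptions).
import re

import nltk
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))


def clean_tweet(text: str) -> str:
    """Remove URLs, mentions, hashtags, punctuation and stopwords."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)      # URLs
    text = re.sub(r"[@#]\w+", " ", text)               # mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text.lower())      # punctuation and digits
    tokens = [t for t in text.split() if t not in STOPWORDS and len(t) > 2]
    return " ".join(tokens)


def build_tfidf(tweets):
    """Vectorize cleaned tweets with the thresholds reported in the text:
    ignore terms in fewer than 100 tweets or in more than 85% of them,
    and use unigrams plus bigrams."""
    cleaned = [clean_tweet(t) for t in tweets]
    vectorizer = TfidfVectorizer(min_df=100, max_df=0.85, ngram_range=(1, 2))
    V = vectorizer.fit_transform(cleaned)              # document-term matrix fed to NMF
    return vectorizer, V
```

The resulting sparse document-term matrix V is the input that the topic model described next factorizes.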
In addition, NMF needs one important parameter, the number of topics to be discovered, n_components. In this case, n_components was set manually to 23, which is the number of topics that were defined initially in the category taxonomy. Following this approach, the algorithm is trained with 184,000 tweets (8000 per category) with the aim of obtaining as many topics as categories were defined in the taxonomy. Once the model has been trained, it has been possible to determine which topics a user's profile fits on the basis of their tweets. The implementation of the topic modeling algorithm has been carried out on the basis of NMF using the SKLearn library, as detailed in Listing 3. Finally, it is worth mentioning some extra parameters that were set in the implementation of the model. The method used to initialize the procedure was set to "nndsvda", which works better with the tweet dataset since this kind of data is not sparse. alpha and l1_ratio are both parameters that define the regularization.

Evaluation and Results
In order to evaluate the results of the algorithm, the most relevant terms have been identified for each resulting topic. Then, by reviewing the main terms for each topic, it is possible to determine whether those words really represent the content of the topic. An example is shown in Figure 2, where the most relevant terms have been identified for four different topics (Topic 1: Travel; Topic 2: Arts & Entertainment; Topic 9: Personal Finance; Topic 10: Pets), showing how well the algorithm identifies the terms associated with each one. As can be seen, all of them are unambiguously related to their defined categories. The full list of topics and their top 10 related keywords identified by the algorithm can be seen in Table 5. It should be noted that some of the categories previously defined in Table 3 have been removed during the evaluation of this model. This is due to the lack of tweets fitting into those categories, as well as to the considerable overlap between some topics. The initially defined categories that have been removed during the training and evaluation process are: "Home & Garden", "Real Estate", "Society", and "News". In the same way, the algorithm has been able to discover new categories related to the original ones, such as "Movies", "Videogames", "Music", "Events", and "Medicine & Health", leaving a total of 23 categories in the system. Once the resulting model has been evaluated and verified, the next step is to check the effectiveness of the model with real Twitter profiles. The tests have been performed by extracting 1200 tweets from different users and predicting for each user the most related topics based on their tweets. The final test results are shown in Table 6, where it can be observed how each profile matches the topics expected for that profile. As an example, the main topics for the profile "Tesla" are "Automotive", "Technology and computing", and "Travel". Finally, in order to suggest the main topics of a specific user in the My-Trac app, the model returns, for each user, the associated categories along with the percentage of weight that each category has for the user. The lower the percentage, the less related the user is to the category. The results of the final classification using some known Twitter accounts are given in Table 7.
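Again purely as a sketch (not the authors' Listing 3), the NMF configuration and the per-user profiling described above might look as follows. The regularization values are placeholders, since the exact alpha and l1_ratio used in the paper are not given, and user_tweets is a hypothetical list of one user's cleaned tweets; note that recent scikit-learn versions expose the single alpha parameter of older releases as alpha_W/alpha_H.

```python
import numpy as np
from sklearn.decomposition import NMF

# Fit NMF on the TF-IDF matrix X produced by the vectorizer above.
nmf = NMF(n_components=23,            # one component per category in the taxonomy
          init="nndsvda",             # NNDSVD initialization with zeros filled by the data mean
          alpha_W=0.1, alpha_H=0.1,   # placeholder regularization strengths
          l1_ratio=0.5,               # placeholder L1/L2 mix
          max_iter=400, random_state=0)
W = nmf.fit_transform(X)              # tweet-by-topic weights
H = nmf.components_                   # topic-by-term weights

# Top 10 terms per topic, used to label and evaluate the discovered topics (cf. Table 5).
terms = np.array(vectorizer.get_feature_names_out())
for k, row in enumerate(H):
    print(k, terms[row.argsort()[::-1][:10]])

# Profile of a single user: average topic weights over that user's tweets,
# normalized so the weights can be read as percentages (cf. Tables 6 and 7).
user_topics = nmf.transform(vectorizer.transform(user_tweets)).mean(axis=0)
user_topics = user_topics / user_topics.sum()
top_categories = user_topics.argsort()[::-1][:3]
```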
It should be noted that only the three main categories are shown in Table 7 (together with their associated percentage), as they are the most accurate for categorizing the user.

Final System Integration in My-Trac Application
Having completed the entire research and evaluation process, a trained algorithm capable of classifying different Twitter accounts according to the defined and discovered categories has been obtained. In addition, a reliable data extraction method has been developed. Therefore, the next step consists of applying the algorithm to the My-Trac app to create a system that provides recommendations to My-Trac users based on their Twitter profiles, which is the objective of the present work. The final system for the My-Trac app consists of a mobile app where the user logs in, as can be seen in Figure 3, and is asked to grant access to their Twitter data. Once the user signs in to the application, My-Trac seeks the optimal means of transport to reach a specific destination given by the user and suggests the best conveyance for the trip, as Figures 4 and 5 show. Finally, when the user chooses the route and means of transport that best fit the trip, the My-Trac app, building on the present work, recommends activities and points of interest along the way based on the user's Twitter information, as can be observed in Figures 6 and 7. Moreover, it is possible to get detailed information for each recommended activity, as Figure 8 shows. In this way, thanks to the My-Trac app, users can improve their experience not only by receiving suggestions for the best conveyance for the trip but also by receiving customized activity recommendations and points of interest.

Conclusions and Future Work
This article presents a novel approach to extracting preferences from a Twitter profile by analyzing the tweets published by the user, for use in mapping applications. This approach has successfully defined a consistent and representative list of categories, and the mechanisms needed for information extraction have been developed, both for model training and for end-user analysis. It is a unique system, with which it has been possible to develop an important feature of the My-Trac app, whereby it is possible to recommend relevant points of interest to end users. Regarding future work on this system, many areas of improvement and development have been identified. Tweets are not the only source of information from which the interests of a profile can be discerned. It may be the case that a user only writes about football but follows many news-related and political accounts; the current system would only be able to extract the sports category. Therefore, one of the improvements would be the implementation of a model that analyzes the accounts a user follows. This has already been started by extracting these accounts and creating word clouds with the most relevant ones. Similarly, hashtags also provide additional information suitable for analysis. Another line of research is the training of a model that analyzes tweets individually. This would open the door to a polarity analysis revealing whether a user who writes about a certain category does so in a positive, negative, or neutral way. As for the limitations of the system, it is possible that, in some regions, there may be restrictive regulations on the use of information published on social networks for this type of analysis.
Therefore, a study of data protection and of the legal framework would have to be carried out for each region where the service is to be provided. Furthermore, in terms of performance, it is possible that context-dependent systems that train an algorithm for each individual user may perform slightly better than the proposed solution.
7,876.4
2021-05-25T00:00:00.000
[ "Computer Science" ]
Review of Handbook of Alkali-Activated Cements, Mortars and Concretes
Meenakshi Sharma*

The "Handbook of alkali-activated cements, mortars and concretes" is an excellent first-hand read that provides readers with a comprehensive overview of alkali-activated cements, covering the complete range from their composition, manufacture, mechanical and durability properties, and practical applications to the life-cycle assessment of the binders. The book has been written and edited by world-renowned researchers who have been actively working in the area of alkali-activated cements and who present to the readers the state of the art in this area.

The book is divided into five key parts. The first part presents the chemistry, mixture design and manufacture of alkali-activated cementitious binders (AACB). The second part of the book covers the engineering properties, from workability and mechanical properties to the pore size distribution of AACBs. The third part covers some of the most important durability aspects such as resistance to corrosion, carbonation, chemical attack, efflorescence and more. The fourth part of the book focuses on the utilization of waste materials to make AACBs more sustainable. The book also discusses the possible applications of AACBs, such as soil stabilization, protective coating of OPC concrete, repair and strengthening of OPC concrete, and toxic waste immobilization. The last part of the book very interestingly presents to the readers the life cycle assessment (LCA) and innovative applications of alkali-activated cement and concrete. Before sharing the review of each of these sections, it is worth sharing some specific qualities of the book. First, the book covers a vast variety of subjects that are important for construction materials. All chapters give a detailed comparison of the properties of AACBs with OPC and provide the readers with an objective means to understand the potential of AACBs. Second, each of the chapters is followed by a detailed list of references, demonstrating the breadth of the literature covered. This list of references is also useful for researchers to understand the evolution of research into the subject. The third important highlight of the book is the brief introduction to the basics provided at the start of each chapter. For example, the introduction to standard fire scenarios and to active and passive fire protection given at the start of the chapter on fire resistance of AACBs is useful for obtaining a brief preview of the chapter and for understanding its context in advance. Most chapters also suggest the future research work required on the subjects discussed, making the book much more useful for researchers. The first part of the book has three chapters focusing on the chemistry and composition of AACBs, with important insights for their mix design.
This part gives a brief introduction to the different raw materials required to produce different types of AACB, such as high-calcium, low-calcium and hybrid AACB, including various types of waste materials. It also presents the reaction mechanism and the micro- and nanostructure of the main reaction products of different AACBs, such as C-S-H gel for high-calcium AACB and N-A-S-H gel for low-calcium AACBs. Then, the two main components of AACBs, the cementitious component or solid precursor and the alkali activator, are discussed in detail along with their main required properties. The most used solid precursors, such as blast furnace slag, fly ash and metakaolin, are discussed. This part presents the influence of the cations and anions of the alkaline activators on the activation mechanism of different types of AACBs. It is shown that a high concentration of OH- ions is favourable for the dissolution and formation of hydrates of silica and alumina and unfavourable for calcium dissolution. It also provides a brief review of various combinations of cementitious components and alkali activators. This part also discusses the use of waste glass for the manufacture of AACB. Interestingly, it is shown that the chemical processing of waste glass in an alkaline solution at 80 °C produces a sodium silicate solution, which can potentially be used as an alkaline activator. The next part of the book, which deals with the engineering properties, focuses on the fresh properties, mechanical properties and pore structure of AACBs. The setting time of AACBs containing different prime materials, such as slag, metakaolin and fly ash, is discussed with brief details on the different factors influencing setting time. A detailed introduction to the forming techniques, basic concepts of the rheology of suspensions and various instruments used for measuring the rheological behaviour is provided. Based on these basics of rheology, the authors discuss the limited data available on the rheology of AACBs and suggest the future work required to understand the rheological behaviour of AACBs. This part presents a detailed discussion of the factors that influence the compressive strength and flexural strength of AACB-based concretes. The chemical and physical properties of the prime materials, synthesis conditions such as the type and concentration of activators, and curing conditions are mentioned as the main factors. It also discusses the elastic modulus of AACBs at different levels, starting from the nano-level of the hydration gel up to the concrete level, providing detailed information on the elastic modulus at all levels. One of the chapters in this part presents a neuro-fuzzy approach to model the compressive strength of geopolymers based on the curing time, Ca(OH)2 content, NaOH concentration, mold type, aluminosilicate source and H2O/Na2O molar ratio. The pore structure and permeability of AACBs are compared with OPC systems, and it is shown that the pore size distribution of AACBs is finer than that of OPC. Despite the limited research on the creep and shrinkage of AACBs, the book summarizes the existing studies and discusses the creep and shrinkage performance of AACBs. The third part of the book focuses on the durability properties of AACBs, such as frost resistance, carbonation and corrosion resistance, etc. It is shown that the frost resistance of AACBs depends on a number of factors like the solid precursor type and the activator type. However, the performance is mostly found to be poorer than that of OPC.
The authors also suggest that new air-entraining admixtures are required for AACBs. The book also suggests that a new methodology to assess the carbonation resistance of AACBs is required, since the existing methods change the microstructure significantly, which is not observed in natural carbonation. The book not only discusses corrosion in AACBs but also provides details on the performance of various methods of preventing concrete corrosion, such as the use of stainless steel and corrosion inhibitors in AACBs. The discussion of chemical attack on AACBs shows that AACBs have a higher resistance to chemical attack than Portland cements. The book also discusses the alkali-silica reaction (ASR) in AACBs. Readers may find it useful that the effects of ASR have been divided into three categories: slight expansion lower than or similar to that of ordinary Portland cement (OPC), expansion that cannot be neglected but remains lower than in OPC, and significant expansion that is higher than in OPC. A detailed theoretical analysis of various types of alkali-activated systems is provided using an existing thermodynamic model. It is shown that alkali-activated binders have better fire performance than OPC, and that a protective layer of these binders on concrete can provide excellent fire protection. One of the major durability concerns of AACBs, i.e. efflorescence, is discussed in detail. The authors have also incorporated results from the literature that describe various methods of controlling efflorescence. The fourth part of the book focuses on the reuse of waste materials for producing AACBs and on various applications of AACBs. The potential of using waste generated from various activities, such as electricity generation, mining, the ceramics/glass industries, and the chemical and petrochemical industries, is discussed. It is shown that recycled aggregates, recycled concrete and recycled bricks from construction and demolition waste can potentially be used in AACBs without harmful effects. The possible application of AACBs in soil stabilisation, protective coating of OPC concrete, repair and strengthening of OPC concrete, and toxic waste immobilization is discussed in detail. It is shown that, for soil stabilisation, by-product-based AACBs have the potential to be used as a sustainable replacement for traditional binders such as lime and cement. Based on a study conducted in Australia, it is shown that the cost of replacing cement binder with AACBs may be higher, especially if the cost of transporting the binder to the treatment site is included. Thus, the authors suggest focusing on minimizing the overall cost of using AACBs compared to traditional binders. The book also presents alkali-activated metakaolin coating as an alternative to traditionally used protective coatings on OPC concrete. The details of an on-site trial of alkali-activated metakaolin (AAM) as a coating on OPC concrete at the Shanghai Jinshan coast are provided. For each of the applications discussed, the book lists the most important requirements in detail. The application of AACBs for the repair and strengthening of concrete is also explored, and the potential of cost-effective repairs using AACBs is shown in comparison with commercially available repair materials. It is suggested that further investigation is required for the application of these materials to the repair and strengthening of concrete structures.
The utilization of geopolymers made from waste materials as masonry units, and their required properties such as density, water absorption, unconfined compressive strength and durability, are also discussed. The last part of the book covers the life cycle assessment and innovative applications of alkali-activated cement and concrete. This part of the book starts with a brief description of the various LCA methodologies used, their strengths and their limitations. A new approach, which has been argued to be more suitable for concrete, is presented, and existing data in the literature are analysed using this approach. While noting that there are conflicting reports about the environmental impact of AACBs, it is shown that, using the new approach, the global warming potential of AACBs can be lower than that reported in the literature. The book presents strategies for the environmentally efficient utilisation of AACBs. Additional benefits in the operational energy of buildings through the production of alkali-activated insulating materials are described. The detailed methodology to produce foamed alkali-activated concretes, the characteristics of their foam network and the thermal properties of these concretes will be useful both for designers and producers. The photocatalytic behaviour of AACBs for self-cleaning applications has also been discussed in detail. The book ends by describing some innovative applications of alkali-activated binders. These applications range from electronics and catalysts to biological applications and drug delivery, and even to the storage of hydrogen for energy. Although most of these applications are academic in nature and at the laboratory scale, these laboratory-scale results demonstrate the versatility of the material for many future applications. To summarise, this book is a complete and detailed resource on alkali-activated binders. The book will be useful for teachers, students, researchers, designers and even those who are looking to develop and apply AACB-based products. While most subjects are dealt with in detail, every chapter of the book lists resources for further reading. It is important to mention that, as this is a rapidly developing field, many new developments are under way, so readers should also refer to literature published after this book. In our view, this book is an excellent resource as a first read.
2,742.6
2015-01-01T00:00:00.000
[ "Materials Science" ]
Interpreting observations of ion cyclotron emission from large helical device plasmas with beam-injected ion populations

Ion cyclotron emission (ICE) is detected from all large toroidal magnetically confined fusion (MCF) plasmas. It is a form of spontaneous suprathermal radiation, whose spectral peak frequencies correspond to sequential cyclotron harmonics of energetic ion species, evaluated at the emission location. In ICE phenomenology, an important parameter is the value of the ratio of energetic ion velocity v_Energetic to the local Alfvén speed V_A. Here we focus on ICE measurements from heliotron-stellarator hydrogen plasmas, heated by energetic proton neutral beam injection (NBI) in the large helical device, for which v_Energetic/V_A takes values both larger (super-Alfvénic) and smaller (sub-Alfvénic) than unity. The collective relaxation of the NBI proton population, together with the thermal plasma, is studied using a particle-in-cell (PIC) code. This evolves the Maxwell-Lorentz system of equations for hundreds of millions of kinetic gyro-orbit-resolved ions and fluid electrons, self-consistently with the electric and magnetic fields. For LHD-relevant parameter sets, the spatiotemporal Fourier transforms of the fields yield, in the nonlinear saturated regime, good computational proxies for the observed ICE spectra in both the super-Alfvénic and sub-Alfvénic regimes for NBI protons. At early times in the PIC treatment, the computed growth rates correspond to analytical linear growth rates of the magnetoacoustic cyclotron instability (MCI), which was previously identified to underlie ICE from tokamak plasmas. The spatially localised PIC treatment does not include toroidal magnetic field geometry, nor background gradients in plasma parameters. Its success in simulating ICE spectra from both tokamak and, here, heliotron-stellarator plasmas suggests that the plasma parameters and energetic ion distribution at the emission location largely determine the ICE phenomenology. This is important for the future exploitation of ICE as a diagnostic for energetic ion populations in MCF plasmas. The capability to span the super-Alfvénic and sub-Alfvénic energetic ion regimes is a generic challenge in interpreting MCF plasma physics, and it is encouraging that this first principles computational treatment of ICE has now achieved this.
Introduction
Suprathermal ion cyclotron emission [1,2] (ICE) is detected from all large toroidal magnetic confinement fusion (MCF) plasmas including the tokamaks TFR [3], PDX [4], JET [5], TFTR [6], JT-60U [7], ASDEX-U [8], KSTAR [9], DIII-D [10] and the stellarators LHD [11,12] and W7-AS [13]. ICE is notable as the first collective radiative instability driven by confined fusion-born ions that was observed in deuterium-tritium (D-T) plasmas in JET and TFTR [14][15][16][17]. The frequency spectrum of ICE typically exhibits narrow peaks at values which can be identified with sequential local cyclotron harmonics of a distinct energetic ion population in a spatially localised emitting region. The numerical value of the inferred ion cyclotron frequency Ω_c = Z_i eB/m_i, where Z_i is the ion charge and m_i its mass, then determines the local value of the magnetic field strength in the emitting region, and hence its radial location. Typically, but not invariably, this is at the outer mid-plane edge of the toroidal plasma; ICE from the core plasma has been reported recently from DIII-D [18] and from ASDEX Upgrade [19], and earlier from JT-60U [7]. This development suggests great potential for the exploitation of ICE as a diagnostic for energetic particles in ITER [20]. Measurements of ion cyclotron emission (ICE) spectra have been obtained from heliotron-stellarator plasmas in the large helical device (LHD), both with an ICRH antenna during NBI heated plasmas [11] and by magnetic probes [12] during toroidal Alfvén eigenmodes (TAEs) [21][22][23][24]. Related numerical studies can be found in [24]. In combination with other advanced diagnostics, notably for MHD activity, these ICE measurements from LHD can yield fresh insights into the physics of energetic ions in magnetically confined fusion (MCF) plasmas. We attribute this ICE to a neutral beam injected (NBI) proton population at energies ≈40 keV in the outer midplane edge regions of hydrogen plasmas in LHD [11,12], where the local electron temperature T_e ≈ 20 eV to 150 eV, number density n_e ≈ 10^19 m^-3 and magnetic field strength B ≈ 0.5 T.
These spectra were measured with an ICRF heating antenna in receiver mode. Importantly, these spectra span plasma regimes where the ratio of the velocity of the energetic ions V NBI to the local Alfvén speed V A in the ICE-emitting region of the LHD edge plasma takes values that can be either smaller or larger than unity. The transition between super-Alfvénic and sub-Alfvénic energetic ion phenomenology is of fundamental interest in MCF plasma physics, see section 2 below. Here, in particular, we examine LHD plasmas 79126 and 79003 where V NBI /V A = 0.872 and 1.125, respectively, in the ICE-emitting region. Our interpretation rests on first-principles numerical solutions of the Maxwell-Lorentz system of equations. We follow the full velocity-space trajectories, including gyromotion, of tens to hundreds of millions of fully kinetic energetic and thermal ions, together with all three vector components of the evolving electric and magnetic fields, with a massless neutralising electron fluid, using a fully nonlinear 1D3V PIC-hybrid particle-in-cell code [25]. The kinetic ions, fluid electrons, and fields are coupled self-consistently through the Lorentz force and Maxwell's equations in Darwin's approximation [26]. In this hybrid scheme [25], the Debye length does not need to be resolved. It therefore requires less computational resource than the full PIC scheme implemented in EPOCH [27], which retains electron kinetics, and is also used for contemporary theor etical studies of ICE [28][29][30][31][32]. We follow these simulations through the linear phase of an instability that we show is the magnetoacoustic cyclotron instability (MCI) [25,29,[33][34][35][36], and then deeply into its nonlinear saturated phase. The Fourier transforms of the excited fields in these simulations yield frequency spectra that qualitatively match the observed ICE spectra from the LHD plasmas. These simulation results for heliotron-stellarator plasmas complement and confirm earlier interpretation of ICE driven by sub-Alfvénic NBI ions in TFTR tokamak plasmas [15,37]. In general, ICE phenomenology in MCF plasmas has been found to reflect the plasma parameters and magnetic field strength in the emitting region, together with the velocity distribution of the driving energetic ion population. Important aspects of these two key features come together in the dimensionless parameter v Energetic /V A , which is the ratio of energetic ion velocity v Energetic to the local Alfvén speed V A . The wider experimental context and motivation In this work we focus on the role of the parameter v Energetic /V A and, in particular, whether it exceeds, or is less than, unity. In general, one anticipates differences in how a particle interacts with the coherent oscillations that are supported by a continuous medium, depending on whether the particle is travelling faster or slower than the speed at which phase information can propagate in the medium. In a magnetised plasma, in the frequency range of interest to ICE, this speed is the Alfvén speed V A . Even if no complete experimental or theoretical information related to this frequency and velocity range in fusion plasmas is available, it would be necessary to investigate it when establishing the physics basis for the exploitation of ICE. 
However there exists, in addition, extensive evidence for the importance of the value of the ratio v Energetic /V A compared to unity: in relation to the ICE detected from fusionborn alpha-particles in the only DT MCF plasmas thus far, in JET and TFTR during the 1990s; and in relation to the fundamental theory of the MCI. The fusion-born alpha-particles responsible for the ICE observed throughout the duration of JET DT H-mode plasmas with high edge density were super-Alfvénic, see the sixteenth item in table 1 of [14]; whereas in TFTR supershot DT plasmas with strong central density peaking and low edge density, they were sub-Alfvénic, as shown in figure 6 of [15]. Apparently related to this, and shown in figure 7 of [15], alpha-particle ICE from TFTR DT plasmas arose only during the first 100 ms to 200 ms, in the early stage of the density ramp, notwithstanding the greater measured production rate for fusion alpha-particles at later times with higher density. Conversely, in TFTR plasmas with edge densities such that newly born alpha-particles were super-Alfvénic, the corresponding alphaparticle ICE spectrum persisted, see [15]. Evidently the value of v Energetic /V A is central to the ICE phenomenology in the DT plasmas which, thus far, provide the most relevant signposts for ICE if it arises in ITER. The earliest study [37] of ICE driven by NBI ions in TFTR DT plasmas reinforced the centrality of v Energetic /V A , bearing in mind also that NBI ICE was not detected from JET DT plasmas. For example, deuterium and tritium NBI ions were at v Energetic /V A = 0.1 in the plasma region where they excited ICE. The analytical MCI theory available at that time [34][35][36] was aligned to the JET super-Alfvénic alpha-particle regime, and could not immediately address the strongly sub-Alfvénic TFTR NBI regime; instead an interpretation was obtained based on a primarily electrostatic treatment [37]. The foregoing motivated the extension of the analytical theory of the MCI into the sub-Alfvénic regime, reported at length in [36], and exploited in [14] and in relation to both NBI ions and fusion-born ions in [16,38]. By this stage, the linear analytical theory of the MCI had successfully exhausted its potential at the leading edge of ICE interpretation. Only from 2010 onwards was it possible to carry out first principles kinetic calculations using PIC codes, which carry the MCI selfconsistently into its fully nonlinear regime [25,[30][31][32]39]. The saturated power spectra emerging from these computations then provide theoretical counterparts to measured ICE spectra. Hitherto, with one exception, these treatments have addressed only the super-Alfvénic regime for fusion-born ions. The exception is the set of multiple PIC simulations [31,32] of ICE driven by fusion-born protons under rapidly evolving edge plasma conditions in KSTAR. For local electron densities below n e ∼ 1.05 × 10 19 m −3 , corresponding to the lower panels of figure 4 of [31], the perpendicular velocity of the protons is sub-Alfvénic. It is therefore timely to address the standing issue of the impact on ICE of the value of the ratio v Energetic /V A compared to unity, by using a combination of contemporary NBI ICE measurements and PIC computations. Here we exploit the fact that ICE from the outer midplane edge of LHD heliotronstellarator plasmas is generated by NBI ions in both super-Alfvénic and sub-Alfvénic regimes. 
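As a rough order-of-magnitude illustration (ours, not taken from the paper), the representative edge values quoted in the introduction (hydrogen plasma, B ≈ 0.5 T, n_e ≈ 10^19 m^-3) together with a 40 keV proton give

\[
V_A = \frac{B}{\sqrt{\mu_0 n_i m_p}} \approx 3.4 \times 10^{6}\ \mathrm{m\,s^{-1}},
\qquad
v_{\mathrm{NBI}} = \sqrt{\frac{2E}{m_p}} \approx 2.8 \times 10^{6}\ \mathrm{m\,s^{-1}},
\]

so that v_NBI/V_A is indeed of order unity; the precise values 0.872 and 1.125 quoted for plasmas 79126 and 79003 follow from the specific local parameters of each plasma in its ICE-emitting region.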
We use the hybrid PIC code of [25], as previously exploited along with the EPOCH PIC code [27] in [25,[30][31][32]39], to simulate the collective relaxation of these NBI ion populations under LHD-relevant conditions. To progress, several novel steps are necessary, which are reported in this paper. We carry out the first extensive set of 1D3V kinetic PIC-hybrid simulations of ICE in the sub-Alfvénic regime, where the minority NBI ion population has kinetic energy more than an order of magnitude lower than in previous simulations [25,[30][31][32]39] relating to super-Alfvénic fusion-born ions in JET. These are also the first ICE simulations with wavevectors inclined more than one degree from perpendicular to the magnetic field direction. The analytical theory of the linear MCI which underlies ICE is well developed, and we are able to show that its predictions regarding initial growth rates in our fully nonlinear simulations are successful, for the first time in the sub-Alfvénic regime. Analytical theory of the magnetoacoustic cyclotron instability The magnetoacoustic cyclotron instability (MCI) [25,29,33,34,36,37,38] is the most likely emission mechanism to account for ICE generation. The theory of the MCI was first developed analytically [33][34][35][36], and then using large PIC numerical simulation from 2010 onwards [29,30,39]. At the plasma wave-wave resonant level of description, the MCI essentially involves the resonance of a fast Alfvén wave supported by the background plasma with negative-energy ion cyclotron harmonic waves (terminology described in the next paragraph) sustained by minority fast ions whose non-Maxwellian velocity distribution incorporates a population inversion. This is evident from the structure of the left-hand side of equation (28) of [34] for example. MCI theory was originally developed by [33] for purely perpendicular propagating waves satisfying ω Ω i , the background ion cyclotron frequency, including a ring beam distribution for fast ions. The theory was revisited and extended in [34] to lower frequencies for perpendicular propagation, and MCI growth rates were further obtained for energetic ion distributions in velocity space that have the form of both a spherical and an extended-spherical shell [35], in addition to monoenergetic ring beams [33,40]. The interest in the ring beam-type distribution arose from the subset of fast ions thought to be responsible for generating ICE from DT plasmas in JET and, subsequently, TFTR. These ions, born in a very narrow range of pitch angles, undergo large drift orbit excursions from the core whose trajectories intersect the outer midplane edge. This leads to a local population inversion in velocity space, in the form of a thin cone shape which is limited by: the maximum energy of the α particles; their narrow range of pitch angles; and, at the lower bound, the strong decrease of radial excursion with decreasing energy (figure 15 of [14]). The negative energy character of the ion cyclotron harmonic waves supported by the energetic ion population, alluded to in the preceding paragraph, is a key elementexplicit or otherwise-in all theories of the MCI. Less importantly, the ion cyclotron harmonic waves supported by the bulk plasma also enter the left-hand-side of equation (28) of [34] and, if resonant, can change the order (quadratic to cubic) of the analytical expression for the growth rate in the linear limit. 
Neglecting this complication, when the linear MCI occurs, there is spontaneous growth in amplitude of both the fast Alfvén wave supported primarily by the bulk plasma, and the cyclotron harmonic wave supported by the non-Maxwellian alpha-particle population. This does not violate energy conservation because the latter wave is negative-energy: its excitation lowers the overall energy of the system which supports it. Negative-energy waves can only arise where there is a distinctly non-Maxwellian population, for example beam-type, in velocity space; for an account of this, see [41]. The analytical theory of the MCI was extended to include wave-particle cyclotron resonance in [34]. In this form, it was applied to the interpretation of ICE measurements from DT plasmas in JET and TFTR, including sub-Alfvénic regimes in [36,37]. The MCI is also believed to play a central role in the excitation, by energetic ion populations in spherical tokamaks, of compressional Alfvén eigenmodes [42][43][44][45] (CAEs). CAEs can be viewed as a toroidal generalisation of the fast Alfvén wave (or of the magnetosonic wave) at frequencies comparable to, or lower than, Ω H , the energetic ion cyclotron frequency. Hence this phenomenon may in some respects be a lower-frequency continuation of ICE phenomenology. Among the most sophisticated numerical approaches in this context is the HYM code [46], which represents the energetic ion population using a full-orbit delta-f approach, and the background plasma by single-fluid resistive MHD equations. The delta-f method [47] is a particle-in-cell approach for the solution of the perturbed velocity distribution function. With this method, only the deviation of the initial velocity distribution is treated in a PIC-fashion. This has the advantage to significantly reduce the noise in the simulations since only a subset of the velocity space is represented by means of macroparticles. Finally, we recall that with the introduction of tritium into JET in 1991 [48], ICE was detected at successive cyclotron harmonics of α particles. The intensity of this ICE extended the linear correlation with the measured neutron flux [14] to over six decades of signal intensity across all classes of JET plasma. Also, a striking correlation was obtained between the maximum linear growth rates computed via the MCI linear analytical theory and the time evolution of the ICE amplitude averaged over six TFTR supershots, see figure 6 of [38]. This calculation was carried out using a drifting Maxwellian for the parallel distribution function of the fusion-born alphas and a ring-beam for the perpendicular velocity distribution function. In [49], additional conclusions are drawn. In particular, the aforementioned distribution function gives rise to a sublinear scaling of the linear analytical growth rates (∝ n α /n e ) with increasing alpha density, in a regime where linear theory is not applicable, see figure 7 of [49]. This was taken to indicate that the linear scaling of ICE intensity with fusion alpha density in TFTR suggested that the relative alpha density at the emission location was smaller than 10 −4 . The PIC simulations of ion cyclotron emission have mostly, so far, assumed ring beam distributions or drifting ring beams; this includes the present paper. 
Such ring beams were used as initial alpha particles velocity distribution of hybrid PIC simulations of the MCI relevant to JET ICE [25], which showed that the linear growth rates scaled sublinearly with the alpha particles density, in agreement with the relevant linear theory (equation (36) of [34]). These simulations further suggested that the spectral power of the nonlinearly saturated fields scaled linearly with the alpha density as observed experimentally in JET. First principles numerical simulations of ICE Direct numerical simulations of ICE scenarios were first reported in 2013 [29]. These used a particle-in-cell (PIC) [50][51][52] code [27] (see also section 5) to evolve the full orbit kinetics of millions of thermal ions and electrons, together with the self-consistent electric and magnetic fields, all governed by the Maxwell and Lorentz equations. The distribution of energetic ions in velocity space is initialised to reflect physics considerations relevant to the observations of ICE. PIC simulations motivated by ICE measurements from JET show [25,29] that energetic minority ions relax, under Maxwell-Lorentz dynamics, in ways that replicate the linear MCI at early times and, at later times, produce power spectra capturing measured ICE features. Multiple simulations with different concentrations of energetic ions predict a linear scaling of spectral peak intensity that matches the observed linear scaling of ICE intensity with fusion reactivity [39] in JET. An ICE-related scenario relevant to α-channelling [53] has been proposed on the basis of PIC simulations [30]. It rests on a process that can arise when a radially inward propagating fast Alfvén wave, unstable against the MCI in the outer edge plasma, thereby extracts energy from a fast ion population and transfers it to the bulk plasma. ICE measurements from NBI-heated LHD hydrogen plasmas Highly resolved ICE signals were measured in LHD hydrogen plasmas using an ICRF antenna in receiver mode [11] during perpendicular neutral beam injection (NBI) of hydrogen ions. The large antenna loop area (≈600 cm 2 ) enhances the quality of the data, which was recorded at a maximum sampling rate of 5GSas −1 and processed via fast Fourier transform, with a rectangular window of typical duration 100 µs. Examples of time profiles of heating with four proton NBI sources, together with ICE spectra, are shown in figures 1 and 2. These spectra obtained from LHD plasmas 79126 and 79003 are not discussed in 11 and 12, but share key similarities in that they all pertain to hydrogen plasmas heated by neutral beam injection: specifically, 40 keV perpendicular NBI giving rise to ICE, as presented in [11] and [12] and simulated in the present work. These ICE signals discussed in [11,12] are detected shortly after the turn-on of the perpendicular positive-ion based NB injector #4 by an antenna located close to it, at the outer midplane of LHD. The fundamental ICE frequency f 0 is defined by the measured interval between successive spectral peaks, and also typically corresponds to the frequency of the first measured spectral peak. A linear relation was obtained between f 0 and the magnitude of the magnetic field on axis (at a major radius of 3.6 m) across several LHD plasmas, confirming the cyclotronic character of the detected signal. 
It follows that the location at which this ICE signal is generated in LHD lies along a magnetic field line on which the proton cyclotron frequency f_cH corresponds to the measured fundamental ICE frequency f_0, see figures 3 and 4 of [11]; this is found to be at both the LHD plasma inner and outer edge in the LHD magnetic configuration (see figure 3 of [11]). The observation of ICE from high density (n_e > 5 × 10^20 m^-3) LHD plasmas, into which NBI cannot penetrate as far as the inner edge, further supports the interpretation that ICE originates from the outer region near the NBI #4 injection point. The ICE signal in the plasmas of [11] disappeared roughly 0.1 ms after the turnoff of the perpendicular NBI (see figure 5 of [11]). This synchronization suggests that ICE is driven by the fast injected protons. Particle orbit calculations [11] for the relevant LHD plasma and magnetic field show that NBI protons are lost in a few tens of microseconds, consistent with the observed decay time of ICE.

Direct numerical simulation of LHD ICE using a kinetic PIC-hybrid code
In this paper, we will interpret the ICE spectra shown in figures 1 and 2 in terms of the collective relaxation of the NBI proton population in the LHD plasma edge. To that purpose, we first introduce the PIC-hybrid modelling approach. We then present the physical and computational parameters involved in the calculations. In particular, this study requires, as input, a representation of the distribution in velocity-space of the NBI protons, which is based on the kinetic modelling of [54] that we briefly describe. We then move on to present our calculation results.

The PIC-hybrid approach
We use the one spatial dimension and three velocity space dimensions (1D3V) PIC-hybrid code approach [55,56] implemented in [25]. This follows full gyro-orbit kinetics for ions in collisionless plasmas, where the energetic and thermal populations are represented by hundreds of thousands to hundreds of millions of macro-particles. These interact with each other, and with the self-consistent electric and magnetic fields, through the Lorentz force and Maxwell's equations. The code incorporates all three vector components of the electric and magnetic fields, and of each particle's velocity, and represents the electrons as a massless neutralising fluid. It self-consistently solves and iterates the Lorentz force equation for each particle together with Maxwell's equations in the Darwin approximation [20,55], which neglects the displacement current and alleviates the need to resolve light waves; it is fully nonlinear. The code resolves ion gyromotion, which is necessary to simulate phenomena such as ICE where key physical length scales and time scales are of the order of the ion gyro-radius (and ion skin depth) and ion cyclotron frequency. In particular, the code captures the full spatial and gyrophase dynamics of resonant particle-field interactions close to the ion cyclotron frequency and its harmonics. We assume quasineutrality:

\sum_{l=1}^{N} Z_l n_l = n_e    (1)

with n_e the number density of electrons and n_l, Z_l the number density and electric charge of each ion species l. All quantities vary only with x, which denotes distance along the 1D slab geometry spatial domain of our code. We solve for B using Faraday's law, while Ampère's law in the Darwin approximation [26] combines with the massless electron momentum equation to give the generalized Ohm's law [57] for E in terms of B, the electron pressure, and the charge-weighted mean ion velocity V_i, which is constructed from the bulk ion velocities u_l of each species l.
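For concreteness, the field equations of such a hybrid scheme can be written in their standard textbook form (this is a generic statement of the model, and may differ in detail from the exact implementation of [25]): quasineutrality, the charge-weighted mean ion velocity, and the generalized Ohm's law obtained by combining the massless electron momentum equation with Ampère's law in the Darwin approximation,

\[
n_e = \sum_{l=1}^{N} Z_l n_l, \qquad
\mathbf{V}_i = \frac{1}{n_e} \sum_{l=1}^{N} Z_l n_l \mathbf{u}_l,
\]
\[
\mathbf{E} = -\mathbf{V}_i \times \mathbf{B}
 + \frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{\mu_0 e n_e}
 - \frac{\nabla p_e}{e n_e},
\qquad
\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E},
\]

with the electron pressure closure specified below.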
We assume an isothermal pressure law, p_e = n_e k_B T_e, with T_e the electron temperature. A quiet start [50,58] is used to launch the majority thermal ions in phase space, so as to reduce the noise in our simulations. The code makes use of periodic boundary conditions. This level of approximation is known to sustain fluid waves such as the fast Alfvén and whistler waves, as well as kinetic waves such as electrostatic Bernstein and ion cyclotron modes.

Physical and computational parameter sets
The NBI protons are sub (super)-Alfvénic in the emitting region of LHD hydrogen plasmas 79126 (79003), whose measured ICE spectra are shown in figure 2. The increased number of particles per cell in the sub-Alfvénic calculations reduces the noise in the simulations, and facilitates the calculation of the growth rates presented in Section 6. The simulations at 85° between k and B_0 use fewer particles per cell and a larger grid length, which enables us to run the simulations for longer, as shown in figure 4. The time step is 0.00025 (0.0001) × τ_H, where τ_H = 0.143 µs (τ_H = 0.273 µs) is the proton gyroperiod in each simulation. Cyclotron motion is thus highly resolved, in space and time, for the energetic ions whose cyclotron resonant collective relaxation underlies the observed ICE signals. The simulations run for 10-360 τ_H; this is determined by the time taken for the instability driven by the NBI ions to saturate, which for a given set of plasma parameters depends on ξ and on propagation angle. Physical and computational parameters for our simulations are summarized in tables 1 and 2 respectively. When necessary, subcycling for the electric and magnetic fields is used to satisfy the Courant-Friedrichs-Lewy (CFL) condition for the Alfvén wave [47], which states that, to ensure the simulations are stable, v_max × ∆t < ∆x, with v_max the maximum velocity in the calculation. This means, for example, that a macroparticle can cross at most one cell over a timestep ∆t. In our simulations, energy is conserved to within 0.2%.

Distribution in velocity-space of the NBI protons
Kinetic modelling [54] has previously been used to obtain the steady-state distribution function of NBI fast ions in stellarators, for both TJ-II and LHD. The LHD case focused on hydrogen plasmas heated by 40 keV perpendicular NBI, relevant to our study. The orbit code ISDEP (integrator of stochastic differential equations for plasmas) [54] is a Monte Carlo orbit-following code which solves the Fokker-Planck equation in 5D phase space, namely (x, y, z, v^2, λ). The three spatial coordinates (x, y, z) are the guiding centre position, v^2 is a normalised kinetic energy, and λ = v·B/(vB) is a pitch angle. ISDEP includes collisions of fast ions with background ions and electrons, and treats re-entering particles. The initial NBI ion distribution function is calculated with HFREYA [59], which simulates the evolution of fast neutral particles by modelling their propagation, charge exchange and ionization processes. The resulting distribution in the work of [54], which is relevant to perpendicular NBI in LHD, presents distinct features. Among them, the velocity distribution in pitch angle λ displays a peak near zero. The calculated velocity distribution function presents a ∼34 keV NB component, which is close to the 36.5 keV super-Alfvénic initial fast proton population which, as we shall show, drives the ICE.
The distribution is localized toroidally and the related density of fast neutrals increases with ρ, the dimensionless plasma radius defined in terms of toroidal flux surfaces. This NBI proton population is thus expected to arise close to NBI #4, and therefore close to the ICRF receiver antenna. We shall therefore incorporate this form of minority energetic proton population in our first principles modelling of ICE in LHD. The foregoing motivates the initialisation of the NBI protons in velocity-space by means of a ring-beam velocity distribution function, with the NBI protons initially uniformly and randomly distributed in gyro-angle. The orientation of the velocity of each NBI proton is thus perpendicular to the background magnetic field, in order to reflect the perpendicular orientation of the relevant NBI in LHD. The inclusion of this NBI ion population in our 1D3V simulations effectively defines the spatial location to which these simulations apply: that is, the location, near the NBI injection point at the outer midplane edge of LHD, where this NBI ion population is present. Figure 4 plots the time evolution, in our simulations, of the change in energy density of the electric and magnetic fields, and of the change in the kinetic energy density of the bulk proton and NBI proton populations. The four panels of figure 4 are for sub-Alfvénic (left) and super-Alfvénic (right) NBI protons, and for propagation angles θ of k with respect to B_0 of 89.5° (top) and 85° (bottom). It is evident that the time taken for the NBI fast protons, which are not replenished, to relax, and for the instability which we identify below with the MCI to unfold, saturates on time scales of between 10 τ_H and 360 τ_H. This corresponds to a few microseconds to a few hundred microseconds. The relative NBI proton concentration ξ = n_NBI/n_e is chosen small enough to observe the unfolding of the MCI, yet large enough for saturation to be reached within a tractable simulation time. All the simulated spectra in figure 6 show strong excitation at multiple successive proton cyclotron harmonics; and the ICE signal in the simulations is a hundred to a thousand times more intense than the thermal noise. Decreasing the angle between k and B_0 shows preferential excitation at lower harmonics, as well as longer timescales for mode excitation, see figures 4 and 5. Since the power spectra in the bottom panels of figure 6 are averaged over a longer time duration (in particular compared to the proton gyroperiod), high spectral resolution is achieved, which translates into the thick curves. The similar intensities in the top right and bottom right panels are due to comparable NBI proton densities. Conversely, the spike in figure 6 (top left) could be due to a numerical artifact during the post-processing. This spike could also result from three-wave interaction between oppositely propagating waves such that ω_1 + ω_2 = 0 and k_1 + k_2 = 0. In the PIC-hybrid computations, the simulated emission is most strongly driven for propagation angles that are close to perpendicular to the magnetic field, for which the MCI is most unstable and saturates quickly, as seen from the simulations at 89.5°.

Results of PIC-hybrid simulations
Bright spots at sequential proton cyclotron harmonics along the fast Alfvén branch result from the MCI, driven by NBI protons, for waves propagating in the x direction, almost perpendicular to the background magnetic field.
The cold plasma dispersion relation for the fast Alfvén wave, whose tangent at the origin satisfies ω = kV_A, is shown by the dark dashed lines. The extension of the Alfvén wave below the proton cyclotron frequency arises from the finite parallel wavevector component in the bottom panels. The line ω = kv_NBI is shown by the dark trace, and lies above (or below) ω = kV_A in the super-Alfvénic (or sub-Alfvénic) NBI regimes. The wedge in (ω, k) space defined by these two lines approximately delineates the region of (ω, k) space where waves can in principle resonate with NBI protons. These plots show that, in our computations, excitation occurs for modes on the fast Alfvén branch and preferentially close to ω = kv_NBI. Figure 7 shows good qualitative agreement between the spectra generated by relaxation of the NBI ion population in our first principles Maxwell-Lorentz computations with the 1D3V PIC-hybrid code, and the observed ICE spectra from LHD shown in figure 2, in both the sub-Alfvénic and super-Alfvénic NBI regimes in LHD. The ICE signals from LHD studied here were obtained in the same way as the early JET ICE measurements [5,14], that is, using an ICRH antenna in receiver mode. The antenna has broad directionality, and in consequence can receive radiation across a wide range of incident angles. This is the reason for considering the emission at an angle of 85°, whose corresponding simulated spectra are also shown at the bottom of figure 7. They show that ICE is still emitted away from pure perpendicular propagation under LHD edge plasma conditions and suggest the robustness of the MCI in driving ICE. In particular we infer that simulations at intermediate angles would generate ICE as well. Although the antenna cannot distinguish between incoming waves with propagation angles of 89.5° and 85° separately, these would be summed together over the considered range of angles in forming the observed ICE spectrum. Here we compare the measured ICE spectra with our simulated spectra at 85°, which share a similar qualitative rise and then fall in the distribution of power across successive cyclotron harmonic peaks. At 89.5°, there is a monotonic increase of the power with harmonic number for the range of frequencies considered. In figure 8, we plot the time evolution of the intensity of the spatial FFT of the magnetic field, δB_z^2/B_0^2. There is a one-to-one mapping between the excited k-values and the cyclotron harmonics ℓΩ_H, inferred from the dispersion relation of the fast Alfvén wave using information in figure 5. Thus, by plotting the time evolution of the distribution of energy across wavenumbers, as in figure 8, we can identify the time sequence in which specific cyclotron harmonics in the simulated ICE spectrum are excited. A notable feature of figure 8 is the late excitation of the spectral peak at the fundamental cyclotron frequency of the protons in the nonlinear phase. This is of interest because, across multiple simulations, the fundamental cyclotron frequency is often the most strongly linearly stable against the MCI. The observational counterpart of this was noted early on; for example, the early experimental comparison of ICE intensity spanning different levels of fusion reactivity in JET plasmas, figure 5 of [14], compares measured intensity at the second harmonic rather than the fundamental. In many simulations, ICE at the fundamental cyclotron frequency is driven by nonlinear beating between pairs of neighbouring higher harmonics that are linearly unstable.
Examples include figure 1 of [25] for the MCI linear and nonlinear stability aspect, and figures 3 and 4 of [32] for nonlinear beating between neighbouring cyclotron harmonics in experimental ICE data and PIC simulation output respectively.

Identification of the excitation process in the PIC-hybrid simulations for NBI proton driven ICE in LHD plasmas

The frequencies of the modes excited in our simulations typically range from ω ≈ 5Ω H to ≈ 45Ω H . We focus primarily on modes up to ω ≈ 15Ω H , whether marginally stable or unstable, to be in line with the upper frequency limit of the experimental measurements. These modes are electromagnetic and lie on the fast Alfvén branch, so that ω and k are related by ω ≈ V A k, where V A is the Alfvén speed. In all the calculations that follow, we used the numerical dispersion relation to map k to ω; this relation satisfies the cold plasma dispersion of [60] and is necessary when ω starts to deviate from kV A . In our simulations, we consider waves propagating nearly perpendicular to the local background magnetic field. Such waves can leave an MCF plasma, propagating radially, and be detected beyond it. The simulation outputs encapsulated in figure 8 enable us to infer the rate at which the energy in a given cyclotron harmonic spectral peak grows over time. It is particularly helpful to calculate this during the early phase of growth, because this enables quantitative comparison with counterpart linear growth rates obtained from analytical theory. We denote the early phase growth rate inferred from the simulations at the ℓth harmonic by γ ℓ . A primary objective is to quantify the scaling of γ ℓ with NBI proton number density. This we shall compare with the corresponding scaling of the analytical linear growth rate for the MCI, γ lin (ℓ), defined [14] by equations (7) and (8) below. Equation (9) shows that γ lin ∼ ξ 1/2 ; hence figures 9 and 10 compare early phase simulation outputs with linear theory by plotting γ ℓ versus ξ 1/2 for multiple simulations at a propagation angle of 89.5°, focusing on ℓ = 11 and ℓ = 12 for sub-Alfvénic NBI LHD plasma 79126 and super-Alfvénic NBI LHD plasma 79003 respectively. These harmonics are associated with the smallest error bars in our analysis. The agreement shown is good; this further confirms the role of the MCI in our simulations and, by extension, in the LHD experiments. Figures 9 and 10 are obtained as follows. The Alfvén dispersion relation provides a one-to-one mapping between the excited ω modes and the excited k modes. In addition, our simulations use an initially uniform density for both the NBI protons and the background plasma, and the domain has periodic boundary conditions. This means there is neither loss of information, nor need for windowing, when taking spatial Fourier transforms of the electric and magnetic fields. We may therefore compute the growth rates of k-modes by taking the spatial Fourier transform of B z (x, t), which we perform for a propagation angle of 89.5°, leading to B z (k, t) as shown in the top panels of figure 8. One selects an ω-mode at ω = ℓΩ H , 5 ≤ ℓ ≤ 15, to which a unique k-mode, k = k ℓΩH , is associated through the dispersion relation as in figure 5. The time evolution of B z (k ℓΩH , t) is then plotted and best fits are constructed to extract the empirical growth rate γ ℓ of this mode, as described below. This approach is convenient because it does not require Fourier transformation in the time domain.
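A minimal sketch of the mode-extraction step just described is given below: the spatial FFT of B z (x, t) over the periodic domain needs no windowing, and the harmonic of interest is mapped to a wavenumber here through the lowest-order fast Alfvén relation ω ≈ kV A , whereas the paper's actual analysis uses the full numerical cold-plasma dispersion relation. Variable names and example parameter values are illustrative only.

```python
import numpy as np

def k_of_harmonic(ell, omega_h, v_alfven):
    """Lowest-order mapping omega = ell*Omega_H = k*V_A on the fast Alfvén
    branch (the production analysis would use the numerical dispersion relation)."""
    return ell * omega_h / v_alfven

def mode_amplitude_history(bz_xt, dx, k_target):
    """Given B_z(x, t) on a periodic grid (shape [n_x, n_t], spacing dx),
    return the discrete wavenumber actually selected and |B_z(k, t)| for the
    spatial mode closest to k_target. No windowing is needed because the
    simulation domain is periodic."""
    n_x = bz_xt.shape[0]
    bz_kt = np.fft.rfft(bz_xt, axis=0)                 # spatial FFT at each time step
    k_axis = 2.0 * np.pi * np.fft.rfftfreq(n_x, d=dx)  # angular wavenumbers
    idx = int(np.argmin(np.abs(k_axis - k_target)))
    return k_axis[idx], np.abs(bz_kt[idx, :])

# Illustrative use for the ell = 12 harmonic (placeholder values throughout):
# k12 = k_of_harmonic(12, omega_h=2.6e8, v_alfven=1.9e7)
# k_sel, amp_t = mode_amplitude_history(bz_xt, dx=0.05, k_target=k12)
```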
Once B z (k ℓΩH , t) is calculated, we identify the interval [t 0 , t 1 ] over which the initial exponential growth phase takes place in our simulations. We find the duration of this initial exponential growth phase to be ≈ 1.0τ H for the fastest-growing modes, while the slowest-growing ones unfold over ≈ 20τ H . We perform multiple fits of B z (k ℓΩH , t) between [t 1/2 − n∆t, t 1/2 + n∆t], where: t 1/2 is the centre of [t 0 , t 1 ], which are the start and end times of the initial exponential growth; ∆t ≈ 0.001τ H ; and n varies between 1 and n max , such that [t 1/2 − n max ∆t, t 1/2 + n max ∆t] is the smallest such interval to contain [t 0 , t 1 ]. This yields a family of growth rates γ ℓ,n , 1 ≤ n ≤ n max , for a given mode at ω = ℓΩ H . We take the mean of the individual best fits as the growth rate value γ ℓ , and define the associated error ∆γ ℓ = σ(γ ℓ,n ), where σ represents the standard deviation. The average and variance are taken over n min ≤ n ≤ n max , where n min satisfies n min ∆t ≥ 0.5τ H /ℓ. That is, computation of the average starts from a window corresponding to half an oscillation of the unstable ℓth mode. This enables us to use the same value of ∆t for each cyclotron harmonic ℓ.

The procedure described above enables us to calculate growth rates, denoted γ ℓ for the ℓth harmonic, during the early phase of the simulations. These are next compared with the scaling of the analytical expression for the corresponding growth rate γ lin (ℓ) of the MCI, notably equation (36) of [34], reproduced here as equation (7). Here ω p,NBI and ω pi are the plasma frequencies of the NBI protons and of the bulk protons respectively, and v NBI is again the initial velocity of the NBI protons. In equation (8) we define the function χ 0 , where Π x,x , Π x,y and Π y,y are functions of z NBI = kv NBI /Ω H as given in the appendix of [34], and this expression simplifies near resonance. For a given mode ℓ, if all parameters are kept fixed except for the NBI proton density ξ = n NBI /n e , equation (7) yields the scaling γ lin (ℓ)/Ω H = α ℓ √ξ, labelled equation (9), as in figure 10, where the coefficient α ℓ depends on ℓ solely. As stated above, the function χ 0 depends on the dimensionless parameter v NBI /V A as well as on the mode number ℓ. We have evaluated χ 0 in the super-Alfvénic and sub-Alfvénic cases for cyclotron harmonics between 5 and 15, and obtained roughly constant results of 0.35 and 0.15 respectively. Taking their ratio across this range of mode numbers therefore yields a constant value of approximately 2.5, which translates into a ratio of α ℓ of approximately 2.05. Equation (36) of [34] hence predicts higher growth in the super-Alfvénic regime at constant beam density. The values of α ℓ are close to 0.30 and to 0.16 in the super- and sub-Alfvénic regimes respectively. This factor of approximately two between the two regimes, obtained from analytical linear theory, differs from what is found in the simulations. A possible explanation rests on the fact that analytical linear growth rates in the 1990s literature were calculated on the basis of a first order expansion about the cold plasma linear dispersion relation ω = kV A .
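Purely as an illustration of the multi-window fitting and averaging procedure described at the start of this passage, a sketch might look as follows. The variable names and interval bookkeeping are assumptions; tau_half should be half an oscillation period of the ℓth mode, 0.5τ H /ℓ, and the mode amplitude is assumed strictly positive so that its logarithm is defined.

```python
import numpy as np

def early_growth_rate(t, amp, t0, t1, dt, tau_half):
    """Estimate the early-phase growth rate of one k-mode from multiple
    exponential fits to log|B_z(k, t)| over nested windows centred on the
    midpoint of the initial growth interval [t0, t1]."""
    t_half = 0.5 * (t0 + t1)
    n_max = int(np.ceil((t1 - t_half) / dt))        # smallest window family covering [t0, t1]
    n_min = max(1, int(np.ceil(tau_half / dt)))     # start from half an oscillation of the mode
    rates = []
    for n in range(n_min, n_max + 1):
        mask = (t >= t_half - n * dt) & (t <= t_half + n * dt)
        if np.count_nonzero(mask) < 2:
            continue
        # Slope of ln|B_z| versus t gives the exponential growth rate for this window.
        slope, _ = np.polyfit(t[mask], np.log(amp[mask]), 1)
        rates.append(slope)
    rates = np.array(rates)
    return rates.mean(), rates.std()  # (gamma_l, associated error delta gamma_l)
```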
By contrast, in our present first principles computations, the real frequency of the excited modes is determined by the self-consistent Maxwell-Lorentz dynamics, and is found to deviate progressively from the linear expression at increasingly high frequencies. Hence the location in (ω, k)-space of the dominant MCI drive differs increasingly from the location assumed in linear theory at increasingly high cyclotron harmonics. This is visible in figure 4; the extent of any deviation of early-time growth rates from linear theory can only be established reliably, for a given scenario, by running the code. Our code outputs yield continuity between early-time growth rates as one transitions across the boundary between the sub-Alfvénic and super-Alfvénic regimes for the particular parameter sets considered, whereas linear analytical theory yields a difference of a factor of order two.

We test the hypothesis that γ ℓ ≈ γ lin (ℓ) by running multiple simulations for different values of the beam proton density ratio ξ, with all other parameters kept fixed. The computed growth rates at early times γ ℓ for a given mode ℓ, obtained by means of B z (k ℓΩH , t) as shown in figure 8, are then plotted against √ξ, in line with the analytical scaling of γ lin (ℓ) [5,34] in equation (9), as shown in figure 9. The values of the growth rates obtained in our PIC-hybrid calculations are similar between the sub- and super-Alfvénic regimes for comparable NBI proton densities, and the numerically computed values of α ℓ are about five for harmonic numbers between 5 and 15 in both regimes. More generally, figure 9 shows congruence between the early phase of collective relaxation of the NBI ion population in our first principles PIC-hybrid simulations and the analytical theory of the linear stage of the MCI. It tends to confirm that the resulting ICE spectra can be understood in terms of the MCI, which our simulations extend into the analytically inaccessible nonlinear regime, for both sub-Alfvénic and super-Alfvénic NBI proton populations.

The procedure is now applied across multiple harmonic modes, with the results shown in figure 10. As in figure 9, in figure 10 we plot the dependence of the growth rate γ ℓ of the fields, calculated at early times in multiple simulations, on the NBI concentration ξ in each simulation. The computations are performed for parameters corresponding to LHD plasmas 79126 and 79003. In both scenarios, the propagation angle between B 0 and k is 89.5°. The pair of panels in figure 10 instead reformulates equation (9) as (γ lin (ℓ)/Ω H )/√ξ = α ℓ . This implies that for a fixed mode number ℓ, the quantity (γ lin (ℓ)/Ω H )/√ξ is independent of ξ and equals a constant that depends on ℓ solely; we denote that constant α ℓ . In figure 10, γ ℓ is the growth rate inferred from the simulations for mode number ℓ and depends on the relative NBI density ξ at the ICE location; the compensated plots represent the quantity (γ ℓ /Ω H )/√ξ versus ξ, with γ ℓ inferred from the simulations. If the simulation outputs match the linear theory of the MCI, so that γ ℓ /Ω H = α ℓ √ξ as in equation (9), then (γ ℓ /Ω H )/√ξ = α ℓ does not depend on ξ but on the mode number ℓ only; this outcome is reflected by the sequence of horizontal lines in the compensated plots. The values of ξ span one order of magnitude, between 10 −4 and 10 −3 . Together, these graphs show that, within modest error bars, γ ℓ ∝ √ξ. Figure 10 strongly suggests that in general γ ℓ ∼ ξ 1/2 in our simulations, across an extended range of modes. This dependence is the same as for growth rates from the linear analysis of the MCI [34], and from previous simulations, e.g. figure 3 of [39]. Additional linear and cubic-root scaling tests have been performed, and F-test statistics [61] applied at the 99% significance level further confirm the square-root scaling.
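The compensated-plot check and the scaling test described above can be summarised in a few lines. This is a sketch under the assumption that one growth rate per simulation is available for the mode of interest; the F-test model comparison reported in the text is replaced here by a simple fitted exponent, which should come out close to 0.5 if the square-root scaling holds.

```python
import numpy as np

def compensated_alpha(xi, gamma, omega_h):
    """Compensated quantity (gamma_l / Omega_H) / sqrt(xi) for a set of
    simulations at different NBI concentrations xi; under the MCI square-root
    scaling this should be roughly constant (a horizontal line versus xi)."""
    xi = np.asarray(xi, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    return (gamma / omega_h) / np.sqrt(xi)

def fitted_exponent(xi, gamma):
    """Least-squares exponent p in gamma ~ xi**p from a log-log fit;
    a value near 0.5 supports the square-root scaling."""
    p, _ = np.polyfit(np.log(xi), np.log(gamma), 1)
    return p

# Illustrative use with placeholder per-simulation arrays:
# alpha_l = compensated_alpha(xi_values, gamma_values, omega_h)
# p = fitted_exponent(xi_values, gamma_values)
```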
We have also investigated, through PIC-hybrid simulations, the collective relaxation of NBI proton populations that have artificially enhanced sub-and super-Alfvénic characteristics under the LHD plasma conditions already considered. We have run multiple simulations of a sub-Alfvénic fast proton population of 25 keV, for which v NBI /V A = 0.7 under LHD plasma 79126 conditions, as well as simulations with a 56 keV super-Alfvénic fast proton beam, for which v NBI /V A = 1.4 under LHD 79003 plasma edge conditions. The same conclusions are obtained. Conclusions Several advances are reported in this paper. First, we have presented the first 1D3V PIC-hybrid simulations of ICE where the minority energetic ion population arises from NBI, hence with kinetic energy more than an order of magnitude lower than in previous simulations [25,[30][31][32]39] relating to super-Alfvénic fusion-born ions in JET. Second, first principles kinetic simulations of ICE and MCI physics in the sub-Alfvénic regime for energetic ions have not previously been carried out, with one exception. This exception is the set of multiple PIC simulations [31,32] of ICE driven by fusion-born protons under rapidly evolving edge plasma conditions in KSTAR where, for local electron densities below n e ∼ 1.05 × 10 19 m −3 , corresponding to the lower panels of figure 4 of [31], the perpendicular velocity of the protons is sub-Alfvénic. We have found that, in the LHD-relevant context, the transition between sub-Alfvénic and super-Alfvénic ICE phenomenology is extremely smooth, both in observations and simulations. A third novel aspect of this paper is that it is the first to report first principles kinetic ICE simulations with wavevectors inclined more than one degree from perpendicular to the magnetic field direction. While more challenging computationally, this leads to better congruence of simulation outputs with the observed ICE spectra, as in figure 6, and is helpful in a context where antenna sensitivity may not be known in detail as a function of k in the relevant range. Fourth, we have carried out the first study of early-time growth rates inferred from simulations that span sub-Alfvénic and super-Alfvénic energetic ion regimes. Of particular interest is how these growth rates depend on energetic ion concentration ξ = n NBI /n e ; figure 10 establishes a square root scaling with ξ, in line with prediction from the corresponding linear analytical MCI theory [34]. For simulations relevant to the super-Alfvénic regime, this scaling was established in [39]. In summary, the measured ion cyclotron emission (ICE) spectra (figure 2) from LHD hydrogen plasmas with both sub-Alfvénic and super-Alfvénic perpendicular proton NBI have been successfully simulated (figure 6) using a first principles approach. Direct numerical simulation of kinetic ions (bulk protons and minority energetic NBI protons) and fluid electrons using a 1D3V PIC-hybrid code captures the self-consistent Maxwell-Lorentz dynamics of the plasma and fields. It is found from the Fourier transforms and time evolution of the energy and field components in the PIC-hybrid code outputs that the dominant physical process in our first principles Maxwell-Lorentz computations is the magnetoacoustic cyclotron instability (MCI). The many correlations between our code outputs and the measured ICE spectra suggest that an emission mechanism, which corresponds essentially to the nonlinear MCI, is responsible for the main features of ICE in these LHD stellarator plasmas. 
In the context of the extensive prior research on the role of the MCI in ICE from tokamak plasmas, this outcome suggests a significant degree of commonality across tokamak and stellarator ICE physics. This appears to be a consequence of the strongly spatially localised character of ICE physics, and implies that overall magnetic geometry plays a relatively minor role. The spontaneously excited electric and magnetic fields in our simulations, which are carried out in local slab geometry, correspond to the fast Alfvén wave. This work helps establish a baseline for future energetic particle experiments in LHD, where magnetic geometry and toroidal eigenfunctions [62] may play a larger role. ICE links beam ion physics in LHD to fusion-born ion physics in tokamaks, and has significant diagnostic potential [20].
Remapping cybersecurity competences in a small nation state The impact of cybersecurity (CS) on public well-being is increasing due to the continued digitisation process of all industry sectors. The protection of information systems rests upon a sufficient number of CS specialists and their competences. A cyber-competence map describing the capacity and trends of the CS workforce is an essential element of the workforce development strategy. Large enterprises tend to have narrowly specialised employees with clearly identifiable roles. Still, most enterprises in small countries are SMEs. Therefore, the tasks and responsibilities of many CS-related specialists overlap the functions of several roles. This paper aims to develop a small-state cybersecurity competence map consistent with the standards of professional organisations. The work applies a combined qualitative and quantitative methodological approach to collect data using questionnaires and expert interviews in the CS field organisations. The study includes a representative public survey, a large-scale survey of company executives, an exploratory CS expert survey, and a comprehensive job posting analysis. Finally, a national CS competence map is presented and verified using two qualitative semi-structured interviews with field professionals. Even though the map reflects a status of a small nation state, it is activity-based and might be applicable in any country. As a future research direction, we will investigate the impact of early and late exposure to cybersecurity competences in education and framework applicability. Introduction The rapid advancement of technologies and transfer of services to cyberspace inevitably leads to an increased number of cyber incidents that threaten national economic and political stability [30]. Global supply chains are under threat as cybercrime impacts private individuals and causes major disruptions, significant financial losses, and reputation damage to many enterprises and organisations. Cybersecurity (CS) specialists tasked with the prevention of these incidents usually have Information Technology (IT) or Information Communication Technology (ICT) education background, but the problem should be addressed at all levels. Strategic cybersecurity management is essential in minimising data breach risk [1]. A constant drive for time efficiency forces many business sectors to experience digital transformation in services and workplaces by applying state-of-the-art ICT tools. Therefore, their resilience against incidents and disruptions depends on a combination of transformative capacity and cybersecurity [16]. Innovative solutions are required to process and analyse data, protecting users' privacy, for example, in the health sector [24]. The IT sector is actively involved in developing tools for the digital society, and manufacturers aiming to become innovative solution providers have to move the focus from the product logic to the service logic [19]. Stakeholders envision resilient systems and infrastructure as a common goal; therefore, challenges in security-relevant sectors are indicative of CS trends and provide directions for future research and innovation [13]. The higher education sector is reacting to the increased demand for CS specialists with a plethora of CS study programs worldwide. Moreover, sectorial communities are contributing by developing the generalised multipurpose frameworks of cybersecurity skills-one of the best known is the NIST NICE framework [31]. 
The frameworks provide the rationale of profiles, alternative job titles, tasks performed by the profile, key knowledge, and essential skills. Still, the competences of CS specialists need to be explicitly defined and mapped to tasks. An explicit definition of needed competences for different levels of roles could support the mapping exercise. It would justify educational routes to design qualification degrees or propose lifelong learning curricula in the CS sector. Demand for CS specialists grows in Lithuania the same way as in other countries [14], even if the country has a high rank according to the global cybersecurity index. However, it is challenging to meet the demand when the CS sector requires specific technological knowledge from several overlapping areas and general competences. Therefore, it is necessary to have a national-level recruitment strategy and coordinated education of the CS specialists to meet future workforce demands. The experience of other countries shows a need to attract non-technological specialists to the area. The work aims to develop a national cybersecurity competence map to support a security-oriented ecosystem and foster innovation development in the digital society. While creating a cybersecurity framework, it is important to analyse the alignment of the current frameworks with the status-quo of the cybersecurity workforce in a small nation state. We formulate our hypothesis in the context of the aim: Current cybersecurity skill and competence frameworks do not represent the workforce profiles in a small nation state's labour market. We designed a multi-phase research methodology workflow and included quantitative and qualitative components to triangulate results. Questionnaires, analysis of job postings, and interviews with multiple data-gathering points enabled the testing of the hypothesis. The collected data enabled us to propose an alternative competence framework. We contribute to the CS community with a hierarchical competence framework that balances the workforce proportions for educational and business purposes. The paper is structured as follows. Section 2 presents the background of the work with a literature overview. Section 3 defines the methodology for the research setup and analysis workflow, with results presented in Section 4. Discussion of the results leads to a proposed competence framework in Section 5. The work concludes in Section 6 with possible future research directions. Background Globally, cybersecurity is treated as a branch of computer science, even though principles originating from other research and study fields, such as management and law, constitute a mandatory part of some specific CS-related topics. Cybersecurity Curricula 2017, CS2017, by professional computer science communities [20], is one of the standards to follow when developing or updating a study program associated with CS. It defines CS discipline and divides the CS-related content and topics into several knowledge areas. For example, Risk management, Governance and policy, Laws, ethics, and compliance, and Strategy and planning are essentials of the knowledge unit of Organisational security to cover mostly non-IT topics. The knowledge unit of System security includes Holistic approach, Security policy, Authentication, Access control, Monitoring, Recovery, Testing, and Documentation. This unit combines the abilities to correlate policies and technical implementations of system security. CS2017 follows the professional NIST NICE competence framework [29,31]. 
NIST NICE describes tasks and associates them with skills and knowledge required to perform a work role. The framework contains several roles described in detail, falling into several categories: Securely Provision, Operate and Maintain, Oversee and Govern, Protect and Defend, Analyze, Collect and Operate, and Investigate. One category covers several dedicated roles with different work scopes. For example, the category Securely Provision includes Risk Management (e.g. authorising official), Software development (e.g. software developer), Systems Architecture (e.g. enterprise architect), and others. The NIST NICE model distinguishes narrowly scoped roles, and the framework's applicability might be limited due to the limited workforce available in the smaller enterprises of small nation states. The European Cybersecurity Skills Framework, ECSF [11], contains 12 cybersecurity roles providing a more general view of the CS workforce than NIST NICE: 1) chief information security officer (CISO), 2) incident responder, 3) legal, policy and compliance officer, 4) threat intelligence specialist, 5) architect, 6) auditor, 7) educator, 8) implementer, 9) researcher, 10) risk manager, 11) digital forensics investigator and 12) penetration tester. The implementer role is an umbrella for all cybersecurity implementation-related aspects, including infrastructure solutions and products (systems, software, services). European regulations and national legislation define several specific roles in CS. For example, the General Data Protection Regulation, GDPR [37], enforced the introduction of the data protection officer position required for the public sector and some businesses. Qualification frameworks, in turn, define standard levels based on the skills, knowledge, responsibility, and autonomy associated with a work position. The European Qualifications Framework [35] defines qualification levels to transfer between national qualification frameworks. For example, Level 6 requires demonstrating mastery and innovation to solve complex and unpredictable problems, and Level 7 includes reviewing the strategic performance of teams. Cybersecurity is not listed as a separate field in the study field classifier in Lithuania. National higher education institutions design and improve study programs following study field descriptions [5]. When searching for keywords related to cybersecurity (e.g. "cyber security," "cyber incidents," "cyberspace," and "electronic information security"), the results appear only in the study field group of computing [4]. The rest of the descriptions of the study fields did not contain any of the keywords. Therefore, competences related to cybersecurity overlap with computer science. In the description of the computer science field, it is stated that "The core of the group of study fields of computing consists of the following areas of knowledge: (...) security of information and information technologies, including the aspects of cyber security (...)". The European Skills, Competences, Qualifications and Occupations (ESCO) classification [36] defines qualifications for the European labour market and education. Compared to the above-listed frameworks, ESCO assumes that a security architect, security advisor, and security consultant are alternative names for an ICT security engineer. Still, they are separate roles in the ENISA ECSF model and, of course, in the detailed NIST NICE model.
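The difference in granularity between the frameworks discussed above could be captured in a simple data structure for a competence-mapping exercise. The sketch below uses the role names quoted in this section; the grouping of ESCO alternative titles onto ECSF roles is an assumption for illustration, not an official mapping.

```python
# ECSF roles as listed in the text.
ECSF_ROLES = [
    "chief information security officer (CISO)", "incident responder",
    "legal, policy and compliance officer", "threat intelligence specialist",
    "architect", "auditor", "educator", "implementer", "researcher",
    "risk manager", "digital forensics investigator", "penetration tester",
]

# ESCO folds several job titles into one occupation, whereas ECSF keeps
# them as distinct roles; the related_ecsf_roles entries are hypothetical.
ESCO_TO_ECSF = {
    "ICT security engineer": {
        "alternative_titles": ["security architect", "security advisor", "security consultant"],
        "related_ecsf_roles": ["architect", "implementer"],
    },
}

def granularity(mapping):
    """Number of distinct ECSF roles folded into each ESCO occupation."""
    return {occ: len(info["related_ecsf_roles"]) for occ, info in mapping.items()}
```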
The competence frameworks and qualification classification differ in their level of detail and might be ill-suited to describe existing CS roles in smaller countries. Therefore, applying these frameworks could negatively impact strategic workforce development. Consequently, there is a need to create a national CS competence map adapted to the reality of a small country to ensure that all stakeholders use the same CS vocabulary. Methodology We conducted a multi-level qualitative and quantitative study of cybersecurity roles in Lithuania. Fig. 1 presents the overall view of the methodology used to test our hypothesis. Research steps To understand the broader picture, we started with a quantitative representative public survey, followed by a survey of chief executives (CEO), including human resources managers (HRM) (see Fig. 1). Then, to see the current application of two major competence frameworks (NIST NICE and ENISA ECSF) in the country's labour market, we collected and analysed cybersecurity-related job postings and performed an exploratory survey of CS specialists. To extract further details about the requirements for CS specialists, we conducted a set of semi-structured interviews with experts in the area and organised two focus group discussions. The findings support our research hypothesis, and therefore we propose a CS competence map suitable for a small country. Data collection in the quantitative part (surveys) is carried out using questionnaires, whereas qualitative content analysis [8] with an induction approach is applied when dealing with expert interviews. Methodological triangulation of different methods to confirm findings ensured the study's validity. All interview participants were informed about the study's objectives and agreed to participate. According to GDRP regulation [37] and national legislation, the data is kept anonymous, with any private information removed. Interviews with participants were conducted according to the established ethical guidelines of the Code of Academic Ethics and Regulations of the Academic Ethics Commission of the Core Academic Units of Vilnius University. In compliance with Order, No. V-60 of the Ombudsperson for Academic Ethics and Procedures of the Republic of Lithuania, 2020 Section IV, paragraph 27, interviews qualified for an exemption of ethical review board approval. The participant group did not include vulnerable persons, and no intervention methods were applied. All ethical principles were assured, and written consent was received as voluntarily expressed declarations. All participants had the possibility to leave and stop interviews at any time. Gathered data were managed according to the data management plan approved by the Research Council of Lithuania. Research sample and statistics Cybersecurity is a relatively new field that arose naturally from the broader ICT field. It would not be surprising if the role of a cybersecurity specialist in the mind of an ordinary society member is therefore confused with the usual system administrator. A representative population survey (Omnibus) was the first step in our research, and we included several questions to identify the general population's understanding of the cybersecurity profession. The Omnibus survey was carried out in Lithuania in September-October 2021. In total, 1004 persons of ages 18 and older were surveyed in every region of the country. 
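The sampling-error figures quoted for the surveys in this study can be checked with the standard worst-case margin-of-error formula for a proportion. The sketch below assumes simple random sampling, which is only an approximation to the multi-stage Omnibus design described next; the executive-survey figures (208 respondents from a population of roughly 2820 larger companies) are taken from later in this section.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Worst-case 95% margin of error for a proportion estimated from a
    simple random sample of size n, optionally applying the finite
    population correction for a population of the given size."""
    moe = z * math.sqrt(p * (1.0 - p) / n)
    if population is not None:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

print(margin_of_error(1004))                  # ~0.031, consistent with the ~3% quoted
print(margin_of_error(208, population=2820))  # ~0.065, i.e. below 7% at 95% confidence
```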
They were chosen using a multi-stage statistically random selection process and individual interviews so that the distribution of respondents would closely match the population distribution according to gender, nationality, age, and area of residence. The maximum error of results is 3%, given the sample size. A public opinion polling company implemented the survey using a questionnaire designed by us. Table 1 presents the summary of data sample sizes used in producing the results. Complete questionnaires are provided as supplementary material for the paper. The tasks and responsibilities of CS personnel in an organisation depend on the resources allocated. Therefore the next stage of the research concentrated on the opinion of top managers (CEOs and HRMs) of various public and private organisations. The smallest companies rarely have a dedicated CS or IT specialist, so we intentionally limited their number in the survey. Companies with less than 50 employees constituted only 15% of the respondents. The remaining larger companies matched the size distribution of businesses in the country. In total, 1343 company managers were contacted, 252 completed the questionnaire via a phone call or a web form, and 246 responses were found valid. There were 2820 companies with more than 50 employees in Lithuania at the time, and the sample size of 208 respondents from those companies (85% of the responses) gives us a confidence interval below 7% for the 95% confidence level. The response rate was unusually high due to the relevance of the topic and the reputation of the institutions performing the research. A public opinion polling company was commissioned to carry out the survey according to our requirements. A small exploratory survey was designed and performed to determine the overlap of CS roles (specified in the two competence frameworks) among the functions of CS specialists in various organisations. The survey questionnaire consisted of job functions used in the NIST NICE framework and areas from ENISA ECSF. We chose 35 CS specialists via their professional public profiles on the LinkedIn professional network and asked them to specify CS tasks carried out in their companies. We received and analysed 29 responses. To further cross-correlate findings, we set up several semi-structured interviews with experts in the field. Researching online job postings is a way to identify the needs and requirements for a profession or a position in a chosen field. This analysis based on two dominant job posting platforms complements the surveys of CS experts, company executives, and the general public. It provides a more comprehensive picture of the CS sector. Similar joint studies are being carried out in other countries. [18] studied job advertisements, conducted surveys, and then identified sets of competences specific to Australia. Using CS-related keywords, we manually identified and analysed 100% of available CS job postings (175 out of more than 4000 ICT-related postings). Two focus group discussions and five expert interviews supported the results from the quantitative part. The experts were chosen based on their experience in senior work positions and active participation in national cyber defence exercises and the national cybersecurity community. Results of the public survey A representative survey of the population revealed the public opinion about the role of a CS specialist. The respondents were asked to identify the CS function and select the best applicable definition out of four choices: F1. 
Monitors activity of computer systems and reacts to security incidents. F2. Develops websites and information systems. F3. Manages computer networks and computers. F4. Develops mobile applications and/or computer games. F5. Did not know or did not answer. The respondents were expected to choose the F1 answer as the most likely function of a CS specialist because it had the exact keyword "security" as the role name, and it was the first of the possibilities. Whoever chose a different option was not familiar with the role. However, only slightly over half of the respondents (57.9%) chose F1 as an answer. The highest percentage of correct answers came from the youngest generation of men (under 25), with 70.5% choosing F1 (see Table 2). In total, more than a third of the population could not identify the functions of a CS specialist, and 12.4% selected the functions of a systems administrator. Therefore, according to public opinion, the role of a cybersecurity specialist is relatively young and unknown, with a tendency to confuse it with the role of a generic IT specialist. In the Omnibus survey, respondents were asked to specify the most critical science fields contributing to the education and training of CS specialists. Over 80% of the respondents selected technological and natural sciences (see Fig. 2). Moreover, the respondents identified social sciences as contributing to the development of the CS field but with less importance. The results indicate a common understanding that education in STEM subjects is a primary path into the CS field. On the other hand, the findings highlight the necessity to inform public society (particularly the younger generation) about less technologically focused CS roles, e.g. legal advisor, data protection officer, or physical security penetration tester. Survey of executive officers The survey of top managers in private and public organisations, including human resources managers, focused on the following four questions: (a) importance of CS to their organisation; (b) attitude towards the tasks of CS specialists; (c) opinion about the balance of hard vs soft skills of a CS specialist; (d) future demand for CS competences. In total, 64% of the executives stated that cybersecurity is very important for their business. An organisation's size was directly related to the expressed importance of CS (see Table 3). An averaged value of the CS importance on a scale from 1 (most important) to 5 (least important) grows uniformly with the yearly turnover of surveyed companies. A relatively low turnover and fewer employees in smaller organisations limit the resources allocated for CS. Such attitude of CEOs further increases the vulnerability of SMEs. The majority of CEOs (81% in total, whereas 84% in the public sector) expressed an opinion that an IT specialist could either fully or at least partially carry out the tasks of a CS specialist. In contrast, only 13% of the respondents think that dedicated CS roles should take the tasks. In general, 51% of surveyed companies have neither a CS nor an IT specialist. It is not surprising that a company assigns all IT-related functions, including all CS-related responsibilities, to an IT specialist whenever one can be afforded. Only the largest companies have enough resources for a dedicated CS specialist (or specialists). When asked to rate the priority of hard vs soft competences of a CS specialist on a scale of 10, where 1 denotes purely technical (hard), and 10-purely soft skills, the CEOs indicated (see Fig. 
3) that technical knowledge is more important. The average value of the responses was 4.3 (where the middle point between the hard and soft skills was 5.5). This result was independent of the size of the company or the number of its CS specialists. However, the results differed depending on the sector: the public sector average was 3.6 and the private sector average was 4.6, indicating a stronger preference for technical skills in the public sector (and possibly a more apparent separation of roles). The executives were also asked to rate the future demand (next 2-5 years) for CS competences in different field domains. Even though all areas received high grades (see Fig. 4), the least attention was paid to digital forensics, while systems support was deemed the most important. Thus, continuous business operations are prioritised. At the same time, managers expect to avoid CS incidents. The findings further explain their dominating opinion that an IT specialist (the one who takes care of continuous operations) should be able to take care of cybersecurity (and therefore avoid incidents). CS expert survey The exploratory study focused on the following elements to explore the perspective of CS specialists: (a) opinion about the balance of hard vs soft skills of a CS specialist; (b) demand for CS specialists in different CS areas; (c) CS roles present in organisations. The specialists put more emphasis on hard competences when asked about the balance of skills a CS specialist should have. The average score was 3.6 (where 1.0 would mean purely technical skills), substantially lower than the answer given by top executives (their average score was 4.3). Thus, managers expect the CS specialists to have more soft (communication, reporting) skills than the specialists themselves do. In general, the managers seem to have a lot of expectations from their technical staff.
It has been observed that the number of roles increases depending on the size of the organisation, but even in very small companies (where there is only one person in the CS role), the number of CS roles remains very high. As a result, most organisations have people working in multiple CS roles. Similar conclusions are reached if the NICE framework is used instead. The only difference was that in this case, the roles marked most often were classified as IT roles rather than CS roles according to the European classification. The role of System Administrator was selected 90% of the time, followed by three other roles: Incident Resolver (a role also included in the ENISA classification), Technical Support Specialist and Network Operations Specialist, each with 86% of the votes. The roles of Cyber Intelligence (21%) and Cyber Lawyer (26%) were mentioned the least often, as the role of the Lawyer is separate from the role of Data Protection Officer in this classification. To summarise, all CS roles are relevant in Lithuania organisations, whatever the classification, but even such a small sample shows a reduced focus on cyber threat intelligence/hunting, research (and potentially education), and cyber law. Investigation of job postings Job postings are a tool for attracting new employees, easily accessible to both small and large companies. Employers usually include detailed requirements for candidates and detailed job descriptions in their postings. We analysed the collected data not only qualitatively but also quantitatively. This analysis has provided a fairly accurate cross-section of the cyber and information security labour market. According to Eurostat, in 2021, ICT workers in Lithuania represented around 4% of the total workforce [12]. The National Statistics Department reports similar numbers: in the first quarter of 2022, more than 3% of the employed population worked in the ICT sector. In 2019, 8.6% of companies were hiring or looking for IT professionals, and as many as 58.8% of companies had job vacancies that were difficult to fill. The job postings were collected over one year in four rounds (see Table 4). In total, 4023 adverts were inspected in the "Information Technology" category on one of the largest national CV portals and the international professional social network LinkedIn. In the LinkedIn search, the results were filtered by the country. Of the total number of ICT job postings (4023), 171 were attributable to cyber and/or information security specialist positions. It should be noted that the demand for cyber security professionals varies from one date to another. These changes may be related to the COVID-19 pandemic and its impact on the labour market. For this reason, 94 job posting samples collected on April 10, 2022, were chosen for the detailed analysis. To this date, most of the COVID-19-related restrictions had already been lifted in most European Union countries. Around 5% of ICT vacancies were in cyber and information security. In the second quarter of 2022, around one in 20 new ICT professionals was expected to start working as a cybersecurity specialist or manager. ICT (33% of job advertisements) and finance (36%) companies were the most likely to be looking for cybersecurity professionals. There were also shortages in the financial management, insurance, information provision (public sector), transport, pharmaceuticals, auditing, energy, marketing, and manufacturing sectors (30%). 
It should be noted that in almost 10% of the job postings, companies were looking for a specialist to provide cyber or information security consultancy to external clients. The title of the posting often indicates the level of the future position, for example, "junior," "mid," and "senior." The latter category usually includes managers (lead or manager). The level of the job position can also be identified by the salary offered, requirements for experience, and qualifications. Job postings are dominated by openings for experienced and senior cyber and information security professionals (see Fig. 5). Entry-level positions accounted for only 9.6% of all job offers. Most companies are looking for experienced professionals (one or more years of practical experience in cyber/information security, sometimes in IT, see Fig. 6). Only 7 out of 94 job postings did not require applicants to have practical experience, and they were mostly offering an entry-level position. Sometimes experience is not defined in specific years but by adjectives such as "extensive" or "deep" (strong knowledge, excellent understanding). Many postings specify not only minimum experience but also a range of experience, for example, "3-5 years," or use a "+" sign (such as "3+") next to the number of years. The experience and level requirements for applicants in the postings suggest that companies are most likely to employ experienced professionals who have worked in the CS field for several years. Linking the job roles described in the postings to the NIST NICE competence framework shows that an average cyber/information security job applicant is expected to cover 7.8 NICE roles out of 41 possible (see Fig. 7). Only one employer looked for employees with a narrow specialisation (one specific NICE role per posting). There were also postings with 16 or even 18 mentioned roles in the description. The number of roles for beginners is usually lower than for experienced or senior positions. There is also a trend that the larger the private company or the more employees it is looking for at any time, the narrower the specialisation of the cybersecurity jobs. This trend is particularly evident when a company has a separate cyber or information security department. The analysis shows that the NIST NICE framework is too detailed for the organisations of a small country. The ENISA ECSF framework, by contrast, has just 12 roles. However, based on the job postings analysis, companies would like to reduce the list of CS specialisations by combining several ENISA roles. On average, 3.1 ENISA roles per cyber security specialist are mentioned in job postings. 77% of the job postings required the job seeker to perform the functions of two to four ENISA roles. The popularity of ENISA roles correlates with that of the NIST NICE roles. The most sought-after ENISA role is the Chief Information Security Officer (CISO); functions of this position were mentioned in 56% of all job postings. There is an interesting distribution of ENISA roles in frequency in the secondary job function descriptions. Recruits are most often expected to train colleagues (26%), assess/manage risks (19%), or investigate incidents (17%) in addition to their main job tasks.
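The study mapped each posting to framework roles manually. Purely as an illustration of how per-posting role counts, such as the averages of 7.8 NICE roles and 3.1 ENISA roles quoted above, could be approximated automatically, a keyword-based sketch might look like the following. The role names are taken from the frameworks discussed in this paper, but the trigger phrases and the mapping itself are hypothetical.

```python
import re

# Hypothetical keyword map: a few ENISA ECSF roles and trigger phrases.
ROLE_KEYWORDS = {
    "CISO": ["security strategy", "security governance", "security policies"],
    "incident responder": ["incident response", "incident handling", "soc"],
    "penetration tester": ["penetration test", "pentest", "red team"],
    "risk manager": ["risk assessment", "risk management"],
    "auditor": ["security audit", "compliance audit"],
}

def roles_in_posting(text):
    """Return the set of roles whose trigger phrases occur in a job posting."""
    text = text.lower()
    return {role for role, keys in ROLE_KEYWORDS.items()
            if any(re.search(re.escape(k), text) for k in keys)}

def average_roles(postings):
    """Average number of distinct roles mentioned per posting."""
    counts = [len(roles_in_posting(p)) for p in postings]
    return sum(counts) / len(counts) if counts else 0.0
```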
Expert and focus group interviews Results of interviews with selected experts from the CS field indicate that, due to the overlap of IT and CS profiles, the career route into CS begins by acquiring IT education or experience (the essential requirements for a CS specialist are highlighted in Table 5). Basic IT knowledge, the ability to read software source code, the ability to program, script, use a command line and develop tools, and knowledge of network administration and cloud and virtualisation technologies are considered a common basis for both IT and CS profiles. If an IT specialist understands CS threats and applies methods to prevent them, then he/she is considered competent by default in solving CS issues. Moreover, the results demonstrate that in recruitment procedures, a higher education degree is not an essential requirement, despite the recurring opinion of the experts that general IT studies provide a necessary "spectrum of knowledge" and basic competences. The main selection criteria refer to qualities acquired beyond formal education, namely experience and certificates. According to the experts, certificates indicate personal motivation and a set of abilities. Most experts recognised that a career path in the CS area could start as a system or network administrator. The initial position could be in the incident response (IR) team, performing analyst functions. A basic set of IT knowledge or experience in the IR team would lead to a further career step, e.g. implementing and managing security systems (Security engineer) or choosing tasks of a less technical nature, e.g. information security management, consulting, auditing, GDPR conformity assessment. CS tasks of a non-technical nature (ensuring regulatory compliance, taking care of procedures or processes) require abilities to ask for information and translate technical language into business language for cross-disciplinary coordination and communication between teams. Nevertheless, these non-technical positions also require a basic level of IT knowledge to address the technical staff. When asked to identify the roles of CS specialists in large companies, the experts named two groups of specialists: technical and non-technical. Non-technical roles cover data security officers, information security managers, regulatory compliance officers and auditors. Although compliance is separated from management in the international classifications, the overlap of roles is specific to Lithuania. According to the experts, the demand for non-technical roles and specialists has increased and is still high due to the EU GDPR regulation. Conversely, the roles responsible for incident management, security implementation and day-to-day assurance (DevOps), vulnerability management and testing, and system monitoring were assigned to the technical role group. Unexpectedly, none of the experts mentioned roles related to research & development, innovation, threat intelligence, or digital crime investigation. Also, risk management was not classified as a function of a separate role. When asked about the continuing education and certification of CS specialists, the experts emphasised individual responsibility. Discussion The surveys, interviews, and analysis of job postings give several consistent results. First of all, the role of a CS specialist is deeply associated with the role of a generic IT specialist. This trend is evident in the opinion of the general population. Most CEOs also think that an IT specialist may carry out the functions of a CS specialist.
Secondly, most Lithuanian companies are too small to have a set of separate CS roles. However, the activities needed in the CS area are similar regardless of the company size. As a result, a typical CS specialist in Lithuania is expected to perform the functions of several ENISA ECSF or NIST NICE roles, confirming our hypothesis. To summarise the observations, Fig. 8 maps the cybersecurity landscape in different organisations depending on their size. Small companies or individuals (typically with fewer than 50 employees) whose primary activity is not related to providing IT products or services tend to delegate CS roles to an ICT specialist within the organisation if they have one, or rely on their service providers. CS is based on good ICT literacy and secure use of IT resources and data. CS functions are uncoordinated. Medium enterprises (around 50-150 employees) whose primary activity is not related to the ICT sector, including small IT service provider companies, delegate CS tasks to their in-house software developers and implementers. CS is part of an organisation's core IT functions and is sometimes taken for granted as a responsibility of developers or administrators, for example, by using attractive names such as (Cyber-)DevOps, i.e. (Cybersecurity) Development and Operations. There is poor coordination of CS governance and regulation and limited coverage of incident response or other CS activities. Large or international enterprises, including state-owned organisations, use more detailed job descriptions. Work activities are usually linked in a hierarchy that has explicit CS activities and is well coordinated. Therefore, based on the research results, we propose a professional cybersecurity framework comprising six activity areas (see Fig. 9). The proposed framework is built on cyber activities, tasks, and responsibilities within the context of a small nation state. Strategic leadership. Involving senior management in mitigating cyber risks is a key success factor [6]. Establishing governance procedures, continuous performance monitoring, employee motivation, and other tools developed by executives build sustainable cybersecurity awareness. Therefore, this activity area is related to strategic management in cybersecurity. Ryttare [32] performed several semi-structured interviews with respondents from six organisations in different industry sectors to identify predominant factors in establishing a security culture. For example, one of the factors is the need for a leader interested in cybersecurity with a pedagogical approach to support employees in the continuous change towards a security-oriented culture. Dhillon [9] proposed three levels of organisational competence to harness IT, aiming to gain competitive advantage: strategic, exploitation, and supply. Information security must be developed in all three areas within the organisation [10]. For example, the strategic level includes the competence to clearly define roles (threats are mitigated before they become serious), and the exploitation level includes the competence to lead and influence others' awareness (the will to "sell an action plan"). Therefore, trustworthy managers in the CISO role, with its responsibilities, build credibility and capacity within the organisation [22]. Assurance & compliance. Digitalisation processes have enabled the development of legal regulation in cybersecurity to ensure data protection and support resilience against cybercrime.
Numerous policies and regulatory measures have been adopted [15] to protect fundamental rights and cover cyber issues independently of those rights. Technical multi-layered ecosystems raise challenges regarding security and safety levels [7]; for example, the European Union still lacks some regulations and mandatory requirements for manufacturers regarding IoT product security. Solutions and strategies might lead to risks, and organisations need to analyse not only their own risk but also industry-wide trends [23]. Therefore, the activities of this area support legal compliance, audit, and risk management. Research, innovation, and education. Mushtaq [27] performed semi-structured interviews with experts to define the role of the technology course at secondary schools. Experts emphasised that it is more important to build secure habits than to prepare citizens for cybersecurity jobs. After reviewing the literature, Mwim and Mtsweni [28] concluded that cybersecurity training and education is the top cybersecurity culture factor. Educational institutions focus on education and research as a primary function, but innovation can be fostered only through cross-sector partnerships, including business and academia [34]. Therefore, the area combines education in a broad sense (including internal training) and research (academia and industry) to define the national direction toward innovation development. CS engineering & development. Today, implementing security tools in compliance with standards is "basic hygiene" [33]. Therefore, this area considers any tool application and solution integration as an implementation in the infrastructure or software, starting from the architecture design to the operational support. The activity scope of this area is IT system security and all objects and entities interacting with these systems in the cybersecurity context, as proposed by Villalón-Fonseca [39]. It is important to emphasise that engineering and development are not limited only to IT systems but also cover industrial control systems. Of course, differences between SCADA security and IT security, such as communication protocols and fault tolerance levels, should be considered in the engineering processes [38]. Cyber defence & incident response. Incident response can be seen as a separate group of activities and responsibilities. The extensive availability of services via online systems and the widespread use of smart devices create the preconditions for possible attacks and compromises. Reporting processes to national CSIRTs are predefined by legislation, but some enterprises are required to maintain incident response procedures and test them for preparedness. Information officers are responsible for ensuring procedure compliance and training, while incident management involves teamwork. This activity's scope covers cyber defence from the incident response perspective, including digital forensics, which is a part of the incident response process [17]. Data gathering, intelligence & analytics. This activity area combines analytics of the data and testing of system vulnerabilities. Monitoring tools generate extensive amounts of data, and the question of how much of these data is actually translated into decisions remains open even today [33]. Therefore, proper data analysis provides a system-level view of daily status against which exceptional situations, unexpected behaviour, and publicly shared indicators of compromise can be compared.
Organisations can benefit from sharing information and using the available threat information. Intelligence-sharing communities are a powerful tool to get the solution and avoid the risk of being targeted via vulnerabilities [25]. Large data amounts require technological solutions to analyse and detect attack patterns or irregularities within systems. Therefore, the activity area relates to challenges and needs for real-time identification of vulnerabilities, for example, by applying artificial intelligence [26]. Conclusions and future work Our investigation's primary goal was to test and evaluate the application of currently well-known CS competence frameworks in a small nation state. We designed a comprehensive research methodology workflow, cross-examined the CS field in Lithuania using several data collection methods and an inductive approach, and proposed a new competence framework. We found that existing CS competence frameworks do not apply to the majority of organisations, hence the need for the activity-based generic framework. Our discussed findings directly link the national efforts to compete in the global cybersecurity job market. The proposed framework for the CS field applies to organisations for defining consistent job descriptions, communicating educational paths, and identifying key-stone issues related to developing organisational strategy regarding CS. The framework presents a balanced view of the cyber workforce categories and reflects the existing international standards. Focus on a small state creates limitations of the proposed framework. The framework's hierarchical structure balances workforce proportions found during the research. But small countries are sensitive to changes in local innovation and investment intensity, and a new big player can significantly impact workforce distribution. The business changes might prioritise some CS competences over others and shift the specialist profiles into specialisations at the country's level. However, we cannot say that the framework does not suit large countries, and further investigation is needed. Finally, the industry should validate and approve the framework for educational and professional purposes. As a future research direction, we envision other detailed analyses of data gathered during surveys and interviews. It is important to explore the gender balance in CS and how to promote women in the labour market. Continuing our research, we also intend to investigate possible options regarding early and late exposure to cybersecurity competences in education.
9,221.8
2023-01-01T00:00:00.000
[ "Computer Science" ]
Pre-eruption thermal rejuvenation and stirring of a partly crystalline rhyolite pluton revealed by the Earthquake Flat Pyroclastics deposits, New Zealand The Earthquake Flat Pyroclastics form a c. 10 km3 rhyolite deposit erupted at c. 50 ka from the margin of Okataina Volcanic Centre, immediately following the caldera-forming eruption of the Rotoiti Pyroclastics (c. 100 km3) from vents c. 20 km to the NE. Earthquake Flat Pyroclastics deposits display textural and compositional complexity on a crystal-scale consistent with rejuvenation of a near-crystalline pluton in the upper crust. Quartz and plagioclase crystals are resorbed, whereas hornblende and biotite are euhedral. Fe–Ti oxides indicate large variations in pre-eruption temperatures (702–805 °C). Differences of up to 70 °C within pumice lapilli show that crystals were chaotically juxtaposed during magma stirring and evacuation. Chemical zoning within hornblende crystals is consistent with rimward increases of c. 50 °C. These features are consistent with a convective self-stirring process. Previous isotope studies demonstrate a long (>100 ka) crystallization history for the magma. Resorption of crystals deep in the magma may have produced a Ca-, Fe- and Mg-enriched rhyolite melt that allowed the growth of reverse-zoned hornblende. Microdiorite lithic fragments in the Earthquake Flat Pyroclastics and Rotoiti deposits and a basaltic eruption that immediately preceded the Rotoiti eruption suggest that mafic underplating beneath Okataina Volcanic Centre provided a major thermal and volatile pulse to drive the caldera eruptions. The accumulation of magma bodies and the causes of their eruption are central to understanding the future activity of volcanoes and hence their impact as natural hazards. Recent studies have highlighted the importance of new inputs of heat and volatiles to pre-existing crystal-rich magma bodies that had stagnated before rejuvenation and eruption. Examples span the spectrum of volcanism and magma volumes, from small andesitic-dacitic volcanoes to rhyolite caldera supervolcanoes, in both arc and intra-continental settings (Hervig & Dunbar 1992;Murphy et al. 2000;Wark et al. 2007). Episodes of rejuvenation are also important in the construction of plutons (e.g. Wiebe et al. 2004). Some erupted products of magmas rejuvenated prior to eruption display obvious disequilibrium textural features such as heterogeneous mafic enclaves that record basalt intrusions that provided new inputs of heat and volatiles. In other examples, evidence for magma rejuvenation may be subtle, as indicated by quartz resorption and regrowth, and the compositionally complex zoning of plagioclase and mafic phenocrysts. The investigation of magmatic rejuvenation processes can therefore provide insights into the mechanisms of pluton growth and leakage to generate silicic eruptions. The Taupo Volcanic Zone ( Fig. 1) of New Zealand has erupted small to moderate volumes of rhyolite magma (1-10 km 3 ) at a millennial-scale frequency, punctuated by larger (.100 km 3 ) caldera-forming events at c. 20-100 ka intervals (e.g. Nairn 2002;Smith et al. 2005). Here we investigate the thermal record leading up to one such caldera-forming event. The Earthquake Flat Pyroclastics (Nairn 2002) form a crystal-rich rhyolite pyroclastic density current deposit (volume c. 10 km 3 ) erupted at c. 50 ka (Charlier et al. 2003;Shane & Sandiford 2003) from the SW margin of Okataina Volcanic Centre (Fig. 1) in the Taupo Volcanic Zone. 
The Earthquake Flat Pyroclastics eruption immediately followed the rather similar but much larger (c. 100 km 3 ) caldera-forming eruption of the rhyolitic Rotoiti Pyr-oclastics from vents c. 20-25 km to the NE (Nairn & Kohn 1973;Nairn 2002). The Earthquake Flat and Rotoiti deposits have been the subject of earlier petrographic, chemical and isotopic studies (Davis 1985;Schmitz 1995;Burt et al. 1998;Charlier et al. 2003;Schmitz & Smith 2004;Shane et al. 2005a). Most of these studies have concluded that the Earthquake Flat Pyroclastics magma body was coexisting with, but separate from, the Rotoiti magma. Zircon ages from Earthquake Flat Pyroclastics deposits demonstrate a prolonged magma storage history, requiring periodic rejuvenation and final reactivation from a largely crystalline pluton (Charlier et al. 2003). Although the Rotoiti rhyolite eruption was immediately preceded by a small basaltic subplinian eruption (Matahi Scoria; Pullar & Nairn 1972), implying a trigger via mafic intrusion, there has been little previous evidence for any mafic input to the Earthquake Flat Pyroclastics eruption. Here we use crystal chemistry to describe the final crystallization and thermal history of the Earthquake Flat Pyroclastics magma prior to eruption, examine its relationship to the Rotoiti magmas, and discuss implications for the precaldera magma dynamics. Deposits, samples and methods The Earthquake Flat Pyroclastics eruption occurred from six vents defining a 5 km long NW-SE-trending lineament (Fig. 1) overlying the inferred Okataina Volcanic Centre outer ring fracture (Nairn 2002). The eruption produced c. 10 km 3 of pumiceous pyroclastic flows and interbedded fall and surge deposits that form thick (up to 150 m) lowangle pyroclastic fans around the vent area. Deep sections expose up to 14 flow units, typically 0.5-7 m thick and containing pumiceous ash, lapilli and blocks to 50 cm in size, with a minor accessory lithic rhyolite content (Davis 1985;Nairn 2002). The intercalated fall beds extend beyond the pyroclastic flow fans, to form the distal 'Rifle Range Ash' tephra. Rifle Range Ash directly overlies the analogous distal fall deposit ('Rotoehu Tephra') of the Rotoiti Pyroclastics, with a sharp conformable contact that lacks weathering or soil formation (Nairn & Kohn 1973). The contact is consistent with sequential deposition of the two tephras, separated by a time interval as short as hours to weeks. The Rotoiti Pyroclastics, including a voluminous non-welded ignimbrite (Fig. 1), the widespread Rotoehu Tephra fall deposits, and the preceding Matahi Scoria basalt, are thought to have been erupted from vents on the northern part of the Haroharo Linear Vent Zone (Fig. 1), since buried by intracaldera lavas (Nairn 2002). The Earthquake Flat Pyroclastics deposit is non-welded and vitric, showing little evidence of post-emplacement alteration other than meteoric hydration. Pumice clasts and bulk ash samples were collected from five main sites representing different stratigraphic levels and azimuths from the vent area. No thick proximal or medial sections expose the complete Earthquake Flat Pyroclastics deposit sequence, so that the precise stratigraphic level for samples collected from near the middle of these sections is unknown. The Earthquake Flat Pyroclastics are chemically and mineralogically distinct from most of the Rotoiti Pyroclastics (Davis 1985;Schmitz & Smith 2004;Shane et al. 
2005a).
Whole-rock and glass chemistry
Earthquake Flat Pyroclastics pumice clasts are low- to medium-silica rhyolites and display a moderate compositional range in whole-rock anhydrous major element chemistry (i.e. SiO2 69.6-74.8 wt%, K2O 2.14-3.25 wt%; Fig. 3). On variation diagrams, Fe2O3, Al2O3, TiO2, MgO, CaO, Sr and Zr display linear inverse trends with SiO2, whereas K2O (Fig. 3) displays a positive linear trend. Other elements such as Rb (Fig. 3) and Ba lack linear trends. Earthquake Flat Pyroclastics pumices have moderate negative Eu anomalies, Eu/Eu* = 0.64-0.87 (and one outlier of 1.02), and are enriched in light rare earth elements (LREE), (Ce/Yb)N = 3.65-4.62 (Fig. 3). Ranges in high field strength element (HFSE) ratios are relatively narrow (e.g. Ta/Hf = 0.18-0.24). We found no evidence for the separate high-K-low-Zr magma reported by Davis (1985). In our reanalysis, two of the four pumice clasts that Davis (1985) classified as high-K-low-Zr samples lie within the total compositional cluster representing all samples. We found no correlation between compositional variation and sample stratigraphic position. Microprobe analyses of matrix glass within the pumices, and glass shards in bulk ash samples, reveal a compositionally uniform high-silica rhyolite melt with variation within analytical error (i.e. SiO2 77.29 ± 0.21 wt%; K2O 4.51 ± 0.10 wt%; Fig. 3). Thus, the whole-rock compositional variation must be controlled by variation in crystal content.
Fig. 3 (caption). Selected plots of the major and trace element composition of Earthquake Flat Pyroclastics pumice (whole rock) and glass (this study), compared with those of Rotoiti Pyroclastics pumice (Schmitz & Smith 2004) and glass (Shane et al. 2005a). Rotoiti Pyroclastics whole-rock data represent early, mid- and late deposits. To avoid instrumental differences no comparison is shown for REE data.
Mineral textures and chemistry
We focused our investigation on the compositional variability of hornblende in Earthquake Flat Pyroclastics and Rotoiti Pyroclastics deposits, but discuss each crystal phase in the former in order of abundance. Crystal phases in Rotoiti deposits have been described by Schmitz & Smith (2004) and Shane et al. (2005a).
Earthquake Flat Pyroclastics plagioclase and quartz
Plagioclase crystals are commonly large (>5 mm), subhedral to anhedral, tabular laths. A wide range of textural features are displayed, including oscillatory zoning and sieve textures (Fig. 2). Some crystals have cores with melt channels and embayments (Fig. 2e and f), whereas others display zones of fine pits on their outer margins (Fig. 2d). Randomly analysed crystal cores reveal a compositional range of An21-50 (Fig. 4). Analytical traverses across large crystals reveal compositional variations of up to c. An15 within crystals. There are no consistent trends or patterns revealed in the traverses. However, 20-50 µm wide zones of significantly higher An were encountered in some crystals (Fig. 5). Quartz occurs as large anhedral crystals up to 6 mm in size, with some relict bi-pyramidal forms evident. The crystals are often deeply incised by melt channels and embayments (Fig. 2a-c), and melt inclusions are common but many are devitrified.
Earthquake Flat Pyroclastics hornblende
Most of the Al variation occurs in the tetrahedral site (T Al) (range c. 0.4 a.p.f.u.). The amount of M1-M3 Al is minor, mostly <0.2 a.p.f.u. There are positive linear correlations between both T Al and A (Na + K), and between T Al and Ti (M2 site) (Fig. 6).
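Tetrahedral-site Al values such as these come from recalculating microprobe oxide analyses to a structural formula. The sketch below is not the authors' procedure; it is a minimal, illustrative recalculation of a hornblende analysis to atoms per formula unit (a.p.f.u.) on an anhydrous 23-oxygen basis, assuming all Fe as Fe2+ (no Fe3+ estimation) and a tetrahedral site filled to 8 by Si + Al. The example analysis is invented for demonstration.

```python
# Minimal sketch: recalculate a hornblende microprobe analysis (wt% oxides)
# to atoms per formula unit (a.p.f.u.) on a 23-oxygen anhydrous basis and
# split Al between the tetrahedral (T) and octahedral (M1-M3) sites.
# Assumptions: all Fe as FeO, no Fe3+ estimation, T site filled by Si + Al to 8.

# (cations per oxide formula, oxygens per oxide formula, molar mass in g/mol)
OXIDES = {
    "SiO2":  (1, 2, 60.08),
    "TiO2":  (1, 2, 79.87),
    "Al2O3": (2, 3, 101.96),
    "FeO":   (1, 1, 71.84),
    "MnO":   (1, 1, 70.94),
    "MgO":   (1, 1, 40.30),
    "CaO":   (1, 1, 56.08),
    "Na2O":  (2, 1, 61.98),
    "K2O":   (2, 1, 94.20),
}

def hornblende_apfu(wt_pct, n_oxygens=23.0):
    """Return cation a.p.f.u. plus the T-site and M1-M3 Al split."""
    cations, oxygens = {}, 0.0
    for oxide, value in wt_pct.items():
        n_cat, n_ox, mm = OXIDES[oxide]
        moles = value / mm
        cations[oxide] = moles * n_cat
        oxygens += moles * n_ox
    factor = n_oxygens / oxygens          # renormalise to 23 oxygens
    apfu = {ox: n * factor for ox, n in cations.items()}
    si, al = apfu["SiO2"], apfu["Al2O3"]
    t_al = min(al, max(0.0, 8.0 - si))    # Al needed to fill the T site to 8
    m_al = al - t_al                      # remainder assigned to M1-M3
    return apfu, t_al, m_al

# Purely illustrative analysis (wt%), not data from the paper
example = {"SiO2": 47.5, "TiO2": 1.3, "Al2O3": 6.8, "FeO": 16.5,
           "MnO": 0.5, "MgO": 12.5, "CaO": 10.8, "Na2O": 1.5, "K2O": 0.6}
apfu, t_al, m_al = hornblende_apfu(example)
print(f"Si = {apfu['SiO2']:.2f}, T Al = {t_al:.2f}, M1-M3 Al = {m_al:.2f} a.p.f.u.")
```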
The positive correlations between T Al and A (Na + K), and between T Al and Ti, are consistent with edenite (T Si + A-site vacancy = T Al + A (Na + K)) and Ti-Tschermak (T Si + M1-M3 Mn = T Al + M1-M3 Ti) exchange, respectively. Such atomic exchanges in hornblende are considered to be temperature controlled. There is no indication of pressure-sensitive atomic substitutions such as Al-Tschermak exchange (T Si + M1-M3 Mg = T Al + M1-M3 Al) (Fig. 6). Rim-to-core analytical traverses (Fig. 7) at 10-20 µm spot spacing reveal complex cyclic zoning patterns within some crystals. The compositional variability within a single crystal is comparable with that of the entire crystal population examined. Overall, the traverses show rimward trends of increasing Al2O3, TiO2 and alkalis (Fig. 7). Smaller cycles with <100 µm wavelengths are commonly superimposed on these trends. There are positive correlations between FeO, Al2O3, Na2O and K2O. These elements correlate negatively with MgO and SiO2. Other crystals show very little compositional variation (Fig. 7). In addition, a few display pronounced elemental spikes in Al2O3, TiO2 and (Na2O + K2O) rather than rimward trends (Fig. 7c).
Hornblende in Rotoiti late-stage ejecta
Previous studies have recognized a bimodal population of hornblende in Rotoiti deposits (Schmitz & Smith 2004; Shane et al. 2005a). Small acicular high-MgO (c. 14.5-15.5 wt%) hornblende occurs as a rare phase throughout the entire eruptive sequence, and is joined by a second population of large stubby (up to 5 mm in size), low-MgO (c. 10.5-13 wt%) hornblendes in mingled pumice in the uppermost fallout units (Re3 beds) (Fig. 4). In compositional traverses of the low-MgO hornblendes we found a wide range of compositions within crystals, including Al2O3 (4.25-10.22 wt%), FeO (15.27-18.23 wt%), MgO (9.81-13.39 wt%) and Na2O (0.90-2.23 wt%). These ranges are similar to and encompass those of the Earthquake Flat Pyroclastics hornblendes (Fig. 4), and show the same elemental correlations. Most of the Al variation occurs in the tetrahedral site (T Al) (range c. 0.6 a.p.f.u.). As with Earthquake Flat Pyroclastics, atomic substitutions in Rotoiti Pyroclastics low-MgO hornblendes are consistent with edenite and Ti-Tschermak exchange (Fig. 6). Some smaller (<300 µm wide) Rotoiti hornblende crystals show rimward increases in Al2O3, TiO2 and Na2O + K2O (Fig. 8). The largest crystals (c. 600 µm wide) show complex zoning in these elements, including high-Al2O3 zones (<100 µm wide) separated by abrupt transitions to low-Al2O3 zones.
Earthquake Flat Pyroclastics biotite
Biotite occurs as large (up to 5 mm in size) flakes and books with common micro-inclusions of apatite, zircon and Fe-Ti oxides. The crystals do not show evidence of optical zoning or resorption (Fig. 2g). The biotite crystals display little compositional variation (FeO 18.68-21.08 wt%; MgO 10.33-11.86 wt%), both within and between crystals (Fig. 4). Orthopyroxene occurs as a rare phase in the form of acicular euhedral to subhedral crystals up to 1.5 mm in length. Reconnaissance spot analyses reveal a moderate compositional range (En47-54, average 49 ± 2). The orthopyroxene crystals commonly contain Fe-Ti oxide micro-inclusions.
Earthquake Flat Pyroclastics intensive parameters
Estimates of temperature (T) and oxygen fugacity (fO2) were obtained using the algorithm of Ghiorso & Sack (1991) for Fe-Ti oxide pairs considered to be in equilibrium using the Mn-Mg distribution criteria of Bacon & Hirschmann (1988).
Fe-Ti oxide pairs attached to the same host crystal were used to ensure that oxide crystallization occurred within the same part of the magma chamber. Thirty-one oxide equilibrium pairs produced T-fO2 estimates in the range 702-805 °C and NNO −0.26 to +0.32 (where NNO is log units above or below the Ni-NiO buffer) (Fig. 9). T-fO2 values vary randomly in the eruption sequence; there is no relationship to geographical or stratigraphic locations of the samples. Wide ranges (i.e. values of T = 705-778 °C and fO2 = NNO −0.1 to +0.19) were obtained from a single pumice lapillus (Fig. 9a). Pre-eruption magma temperatures were also obtained from amphibole and plagioclase using the algorithm of Holland & Blundy (1994). The edenite-richterite algorithm was used because it is considered to be less influenced by pressure. The estimation assumes that the two phases are in equilibrium, which is difficult to assess because both phases display some degree of variability. For plagioclase, we used an average composition of Ab69 ± 4 based on the narrow unimodal population (Fig. 4). Atomic substitutions in hornblende suggest that Al variation is thermally controlled. Thus, we selected low- and high-Al2O3 analyses that correspond to rim and core areas in two zoned hornblendes (crystal 38050, Al2O3 5.55 and 6.70 wt%; crystal 1-2, Al2O3 5.32 and 7.21 wt%) to estimate temperatures. A pressure of c. 250 MPa was assumed (see below). Crystal 38050 produced temperatures of 732 °C (low-Al) and 769 °C (high-Al), and crystal 1-2 produced temperatures of 722 °C (low-Al) and 770 °C (high-Al). These temperatures are in good agreement with the range of values from Fe-Ti oxide equilibrium. It has been reported that the edenite-richterite algorithm is sensitive to changes in Ab: a decrease of Ab5 could result in a temperature increase of 20 °C. For Earthquake Flat Pyroclastics deposits, the exact plagioclase equilibrium composition(s) correlating with the low- and high-Al hornblende is uncertain. However, the narrow compositional mode in plagioclase analyses indicates that Ab68-70 are the dominant compositions, and thus the resulting temperature estimates may not vary greatly (i.e. by <15 °C) from those calculated. Equilibrium pressures can be estimated by plotting the glass (melt) composition on the Quartz-Albite-Orthoclase (-H2O) ternary of Tuttle & Bowen (1958), following the method of Cashman & Blundy (2000). An absolute pressure at the time of crystallization can be estimated for magmas that are saturated with silica and water. Without volatile abundance data, it is not known whether the Earthquake Flat Pyroclastics magma was water-saturated. However, Johannes & Holtz (1990) have shown that in water-undersaturated experiments the cotectics are not greatly displaced from those of water-saturated melts. The average Earthquake Flat Pyroclastics melt (glass) composition plots close to the 200 MPa cotectic (Fig. 10); we use this value for the Earthquake Flat Pyroclastics magma.
Earthquake Flat Pyroclastics magma zoning
The analysis of pumice clasts from different geographical sectors and stratigraphic levels within the Earthquake Flat Pyroclastics deposits reveals no consistent compositional or mineralogical trends either in space or time. This suggests that any pre-eruption zoning of the magma body was largely disrupted before or during eruption.
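The within-lapillus spread of Fe-Ti oxide temperatures noted above (up to c. 70 °C in a single clast) is straightforward to summarize once the oxide-pair temperatures are tabulated. The snippet below is not the authors' workflow; it is a hypothetical pandas summary assuming a table with one row per equilibrium oxide pair and invented columns named sample, clast and T_C.

```python
import pandas as pd

# Hypothetical table of Fe-Ti oxide equilibrium pairs: one row per pair,
# with the pumice clast it came from and its temperature estimate (deg C).
pairs = pd.DataFrame({
    "sample": ["EFP-1", "EFP-1", "EFP-1", "EFP-2", "EFP-2", "EFP-3"],
    "clast":  ["A", "A", "B", "C", "C", "D"],
    "T_C":    [705, 778, 764, 722, 751, 803],
})

# Within-clast spread: max minus min temperature for each pumice clast.
within = (pairs.groupby(["sample", "clast"])["T_C"]
               .agg(n="count", T_min="min", T_max="max"))
within["range_C"] = within["T_max"] - within["T_min"]

# Whole-deposit spread for comparison.
total_range = pairs["T_C"].max() - pairs["T_C"].min()
print(within)
print(f"Whole-deposit range: {total_range} deg C")
```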
Whole-rock (magma) major element analyses show some variation whereas glass (melt) analyses are homogeneous, pointing to crystal-content control on the bulk compositions. A wider variation noted in trace elements and REE (Fig. 3) may reflect crystal abundance variability on a sample (pumice-clast) scale (<5 cm). In particular, the spatial distribution of apatite and zircon, which have high partition coefficients for LREE and some trace elements, is not uniform. Apatite and zircon crystals are preferentially included in biotite phenocrysts that are not uniformly distributed on a 1-10 cm scale.
Thermal and crystallization history of Earthquake Flat Pyroclastics magma
Several features suggest the Earthquake Flat Pyroclastics magma had a complex thermal history involving late-stage cooling and crystallization followed by rejuvenation by input of heat and volatiles shortly before eruption. (1) Large quartz crystals display relict bi-pyramidal forms consistent with early phenocryst formation in a slowly cooling melt. However, they are deeply embayed and resorbed by remelting. Similarly, large plagioclase crystals display a range of internal and marginal resorption textures. (2) Glomerocrysts composed of all the main phenocryst phases are common, indicating that parts of the magma body had completely crystallized prior to eruption. (3) Hornblende and biotite are mostly euhedral and lack resorption textures, indicating they represent a separate and late stage of the crystallization history. Their growth continued until eruption. (4) Al and Ti variation in hornblendes is consistent with temperature-controlled atomic substitutions. Increasing abundance of these elements rimward in many crystals indicates their late growth in a regime of increasing temperature (Figs 6 and 7). (6) Fe-Ti oxide equilibrium pairs provide temperature estimates from 702 °C to 805 °C (Fig. 9), indicating that considerable temperature variation existed in the pre-eruption magma body. This variation was not accompanied by variation in magma composition or mineralogy. Crystal pairs within a single pumice clast (<5 cm in size) can provide temperature estimates that vary by up to 70 °C, indicating crystals from different temperature regions were juxtaposed. As Fe-Ti oxides can re-equilibrate within days to months (Devine et al. 2003), the thermal variation they record here was produced shortly before eruption. (7) Plagioclase composition can be controlled by temperature, pressure and water content (e.g. Housh & Luhr 1991), in addition to bulk composition. Ca-rich spikes in some plagioclase analytical traverses (Fig. 5) could result from one or more heat pulses, recharge by Ca-rich magma, or volatile loss or gain. (8) Charlier et al. (2003) reported a wide spectrum of U-Th single-crystal zircon ages (c. 70 to >350 ka) with a peak at 105 ka, interpreted as earlier crystallization events. Most of the characteristics described above for the Earthquake Flat Pyroclastics magma are remarkably similar to those documented for the Fish Canyon Tuff. Those workers concluded that the Fish Canyon Tuff was erupted from a voluminous crystal mush zone in the upper crust, after rejuvenation by mafic underplating. The Fish Canyon Tuff magma body lacked bulk compositional gradients, but the deposits display diverse textural and chemical complexities on a crystal scale, indicating that crystals of differing histories had been juxtaposed by convective stirring shortly before eruption.
As has been pointed out, the mineralogical features displayed by the Fish Canyon Tuff are consistent with models of convective 'self-mixing' (stirring) described by Couch et al. (2001), and do not require the mechanical mixing of two separate magmas. In the convective self-stirring model, a silicic magma body is heated at its base, for example by underplating with freshly injected mafic melt. This produces a relatively thin basal layer of hot silicic melt, which is unstable and buoyant, and rises as discontinuous plumes into the overlying cooler part of the magma, carrying crystals that may record the thermal history. Cooler pockets of crystal mush would descend into the hot boundary magma zone, where remelting of some crystal phases would occur (Fig. 11). The self-stirring model seems applicable to the Earthquake Flat Pyroclastics magma body (Fig. 11), where higher-temperature Fe-Ti oxide equilibrium pairs (c. 750-800 °C) would have originated in the basal hot boundary layer, along with the reverse-zoned hornblende crystals with Al-rich rims (c. 770 °C). Resorption of quartz and plagioclase crystals would have occurred where overlying pockets of crystal mush descended into the hot basal layer. In addition to convective stirring, further juxtaposition of different parts of the magma body could have occurred during syn-eruptive disruption as the magma chamber was evacuated, and during magma ascent in the conduit (Fig. 11). The apparent paradox inherent in the late-stage growth of Al-rich hornblende, and euhedral biotite, during heating of a highly evolved melt that had already undergone considerable quartz and plagioclase crystallization has been discussed previously. In the Earthquake Flat Pyroclastics magma body, mafic components may have been carried in fluids from the inferred underplating mafic magma, even though direct mafic-rhyolite magma mixing was minimal. Alternatively, resorption of crystals in a deeper (perhaps non-erupted) part of the silicic magma body produced a rhyolitic melt enriched in Ca, Fe and Mg from which late-stage ferromagnesian phases could grow (Fig. 11). In the Bishop Tuff, some quartz crystals display evidence of resorption followed by regrowth at higher temperature (Wark et al. 2007). This was considered to be due to the influx of CO2 from a mafic intrusion lowering the activity of H2O and allowing quartz to grow at a higher temperature following thermally induced resorption. In contrast, quartz grains in the Fish Canyon Tuff exhibit resorption with little evidence of late high-temperature regrowth (Wark & Bachmann 2005). This could be explained by input of H2O-dominant fluids acting to depress the solidus. We note that textural and crystal zoning features in Earthquake Flat Pyroclastics are similar to those of the Fish Canyon Tuff, and thus imply an Earthquake Flat Pyroclastics mafic intrusion containing fluids dominated by H2O rather than CO2. The contemporaneous Rotoiti magma was water-saturated, as shown by the presence of cummingtonite (Shane et al. 2005a), indicating the availability of water at Okataina Volcanic Centre immediately before the Earthquake Flat Pyroclastics eruptions.
Mafic intrusion and tectonic implications
The only direct evidence for the involvement of a mafic magma with the Earthquake Flat Pyroclastics (and Rotoiti) eruptions comes from rare glass-bearing microdiorite lithic fragments found in both deposits, and interpreted as quenched mafic magma (Burt et al. 1998).
However, the Rotoiti Pyroclastics directly overlie the basaltic sub-Plinian Matahi Scoria (<0.5 km3), demonstrating the presence of mafic magma at Okataina Volcanic Centre immediately before the Rotoiti and Earthquake Flat Pyroclastics eruptions. Neither of these rhyolite eruption deposits contains hybrid or mafic ejecta that would demonstrate mechanical mixing of basaltic and silicic magmas, but the absence of mafic enclaves in the rhyolitic eruptive rocks is consistent with the convective self-stirring process inferred for the Earthquake Flat Pyroclastics eruption. The Matahi Scoria records the rise of a basaltic dyke to the surface, locally unhindered by encounter with a less dense rhyolite magma body. The exact vent location for Matahi Scoria is unknown but is considered (Pullar & Nairn 1972) to be near the northern end of the Haroharo Linear Vent Zone (Fig. 1), and is now buried within Haroharo caldera. Further to the SW the basaltic intrusion may have ponded beneath the main Rotoiti magma body and triggered it into eruption (Shane et al. 2005a). Thermal rejuvenation of the Earthquake Flat Pyroclastics magma also implies mafic underplating, and thus basaltic intrusion may have extended (as a dyke swarm?) c. 20-30 km NE along the Haroharo Linear Vent Zone and into the Earthquake Flat Pyroclastics vent area (Fig. 1). The NW-SE trend of the Earthquake Flat Pyroclastics vents is orthogonal to the NE-SW trend of the inferred dyke swarm, but is subparallel to the adjacent Haroharo caldera boundary (Fig. 1).
Fig. 11 (caption). Step 1: mafic (black) underplating of the silicic magma body (grey) produces a hot basal boundary layer where extensive resorption of felsic crystals produces enriched silicic melt from which reverse-zoned hornblende crystals with high-Al rims grow. Overlying crystal mush descending into the boundary layer transports felsic phases that are partially resorbed. Plumes rising from the heated boundary layer mechanically mix the crystal mush, juxtaposing crystals with different thermal histories. Step 2: eruption entrainment causes chaotic mixing of the magma, disrupting the thermal gradients and melt plumes.
Together, the thermal, temporal and tectonic relationships suggest that the Earthquake Flat Pyroclastics magma body was primed by mafic underplating related to a regional dyke intrusion event, and triggered into eruption by lithostatic readjustments including external ring faulting that accompanied collapse of Haroharo caldera.
Relationships between Earthquake Flat and Rotoiti magmas
The most voluminous of the multiple rhyolite magmas erupted during the Rotoiti episode (T1 magma, Shane et al. 2005a) is characterized by a ferromagnesian mineral assemblage dominated by cummingtonite. The Rotoiti T1 and Earthquake Flat Pyroclastics magmas are clearly distinguished by their different chemical compositions (Figs 3 and 4), mineralogies and intensive parameters (Fig. 9). However, mingled pumices in the uppermost beds of the Rotoiti Pyroclastics contain a crystal sub-population similar to that of Earthquake Flat Pyroclastics (Fig. 4). This crystal sub-population includes large quartz grains, and plagioclase, hornblende, biotite and Fe-Ti oxides compositionally similar to Earthquake Flat Pyroclastics phases, with T-fO2 values that plot on a similar trend (Fig. 9). The crystal sub-population and its accompanying K2O-rich melt (Fig. 3) define the Rotoiti T2 magma of Shane et al. (2005a).
The lack of universal re-equilibration of the Rotoiti T1 and T2 Fe-Ti oxides (Fig. 9) in the mingled pumices indicates that contact of the two magmas was transitory, and probably occurred in the conduit. The similarity between crystal assemblages in Earthquake Flat Pyroclastics and late-stage Rotoiti Pyroclastics was first described by Schmitz (1995), who suggested that the Rotoiti eruption may have tapped the Earthquake Flat Pyroclastics magma body. This concept was subsequently dismissed on the basis of differences in whole-rock chemistry (Burt et al. 1998), isotope ratios (Schmitz & Smith 2004), and zircon single-crystal age spectra (Charlier et al. 2003). Biotite-bearing Rotoiti pumice displays a zircon age peak at 70-90 ka, compared with 105 ka for Earthquake Flat Pyroclastics. However, biotite-bearing Rotoiti pumices are hybrids comprising mingled T1 and T2 magmas, and thus provide a hybrid isotopic and chemical signature (Shane et al. 2005a). The confusion is compounded by the fact that some whole-rock chemical and isotope data of earlier workers were obtained by combining multiple pumices that may have variable to no T2 components. Although Rotoiti T2 and Earthquake Flat Pyroclastics melts are distinguished by their K2O contents (Fig. 3), it is possible that the Rotoiti T2 melt is itself a hybrid between Rotoiti T1 melt and an unsampled high-K melt similar to that of Earthquake Flat Pyroclastics. We conclude that existing data cannot rule out a genetic relationship between Rotoiti T2 and Earthquake Flat Pyroclastics magmas as proposed by Schmitz (1995). However, this does not require the existence of a laterally continuous melt body across the Okataina Volcanic Centre at the time of the Rotoiti eruption. The Rotoiti deposits also contain glass-bearing granitoid lithic fragments that display differing age spectra, mineralogy and isotopic ratios from Earthquake Flat Pyroclastics and Rotoiti (hybrid T1 + T2) magma (Burt et al. 1998; Charlier et al. 2003). Thus, we concur with Charlier et al. (2003) that multiple semi-molten bodies resided in the mid- to upper crust, and were disrupted and entrained in the Rotoiti caldera-forming event (Fig. 12). Earthquake Flat Pyroclastics-like melts may have occurred as isolated ponds residing in a highly crystallized mush zone. The similarities between Earthquake Flat Pyroclastics and Rotoiti T2 crystal populations and their compositions suggest that this mush zone may have been the crystallized residuum of a largely homogeneous magma body that had originally extended across the Okataina Volcanic Centre (Fig. 12). Although we lack at present a comprehensive dataset of zoning in Rotoiti crystals, our hornblende data are consistent with heating rejuvenation of Rotoiti T2 magma, based on Al variation consistent with temperature-controlled atomic substitutions (Fig. 6). Analytical traverses show rimward increases in Al2O3, TiO2 and Na2O + K2O in some crystals, implying growth with increasing temperature, similar to that in Earthquake Flat Pyroclastics crystals. In addition, the largest Rotoiti T2 hornblende crystals show even greater and more complex variations (Fig. 8), revealing rapid changes in temperature, consistent with magma heating events.
Fig. 12 (caption). Conceptual diagram of magma zones beneath Okataina Volcanic Centre immediately before the Earthquake Flat Pyroclastics and Rotoiti eruptions, and caldera formation. Mid-crustal plutons represent rhyolite plumes that ponded and largely crystallized at depth, with periodic rejuvenation by new basaltic or rhyolite intrusion. This was the source region for melts represented by EFP, Rotoiti T2 ejecta and glassy granitoid lithics. The voluminous Rotoiti T1 melt was rapidly extracted from the deep crust and ponded at mid-crustal depth for a short time before eruption. Widespread mafic underplating provided a thermal and volatile input to drive the eruptions. Horizontal and vertical scales are equal.
Implications for Okataina Volcanic Centre
A boundary between lower-velocity rocks (metasediments and silicic igneous rocks) and underlying high-velocity rocks (mafic cumulates and intrusions) occurs under the Taupo Volcanic Zone at c. 15 km (Harrison & White 2004).
Thus, the production and aggregation of silicic melts are likely to occur at shallower depths (e.g. Charlier et al. 2005). Partial melts may exist at present below c. 10 km, as indicated by magnetotelluric data (Ogawa et al. 1999). New silicic melts are thought to form as a result of mafic intrusion-induced crustal melting, and associated fractionation and assimilation processes (Graham et al. 1995; Charlier et al. 2005). Melts extracted from these depths would ascend until reaching neutral buoyancy (Fig. 12) (or erupt if tectonic conditions allowed direct passage to the surface). The Earthquake Flat Pyroclastics crystallization pressures of c. 200 MPa and temperatures extending down to <720 °C point to a mid- or upper-crustal environment (<8 km) for magma ponding prior to eruption. Many of the Earthquake Flat Pyroclastics zircons have ages >100 ka (Charlier et al. 2003), and thus represent relicts from previous magmatic cooling events. They provide evidence for a long-lived pluton that was periodically thermally rejuvenated and incrementally augmented by new melts, preventing its complete crystallization. The near-identical mineralogy and associated crystallization temperature and pressure (Figs 9 and 10) in Earthquake Flat Pyroclastics and Rotoiti T2 magmas suggest that this long-lived pluton may have been areally extensive, underlying much of the Okataina Volcanic Centre (Figs 1 and 12). Evidence for the existence of such plutons was first recognized from granitoid lithic fragments by Ewart & Cole (1967). There is little evidence for surface volcanism at Okataina Volcanic Centre for a prolonged period of uncertain duration prior to the Rotoiti episode (Nairn 2002). The Rotoiti Pyroclastics overlie a well-developed regional paleosol. Distal tephra records show a pronounced increase in rhyolite fall events from Okataina Volcanic Centre after the Rotoiti eruption, and very few for the preceding c. 100 ka (Shane & Hoverd 2002; Shane et al. 2006). During this time no large pyroclastic eruptions occurred from Okataina Volcanic Centre; eruptive activity seems to have been restricted to extrusion of minor rhyolite lavas now preserved at only a few sites around the margins of Haroharo caldera (Nairn 2002). This pre-Rotoiti quiet period coincides with peak periods of zircon crystallization in the Earthquake Flat and Rotoiti magmas (and the associated glassy granitoids) (Charlier et al. 2003), and represents the period when these magmas were assembling in the mid-crust. Interaction with parts of the semi-continuous mush zone would provide a source for relict crystals and could explain the variety of crystal populations and melt types erupted in the many post-Rotoiti eruption episodes at Okataina Volcanic Centre (e.g. Shane et al. 2005b; Smith et al. 2005).
Evacuation of the large volume of Rotoiti magma and formation of Haroharo caldera significantly changed the extrusive and intrusive regime at Okataina Volcanic Centre. Post-caldera rhyolite activity (c. 50-35 ka) was more frequent, less voluminous and involved hotter lower-Si magmas (Shane et al. 2005b). It is likely that caldera formation enhanced magma transport from greater depths by providing vertical conduits, and by depleting the crystal mush zone that had previously hindered magma ascent through mid-crustal depths. Conclusions The Earthquake Flat Pyroclastics magma was a crystal-rich body residing at mid-crustal depths (c. 8 km) before reactivation by thermal and volatile inputs from mafic underplating that involved little or no direct mafic-silicic magma mixing. Compositional and textural complexities on a lapilli scale demonstrate a long history of periodic crystallization followed by late-stage remelting. Earlier formed felsic phenocrysts were remelted, and euhedral ferromagnesian phases grew in a regime of increasing temperature. These characteristics are similar to those reported for the Fish Canyon Tuff . They are best explained by convective self-stirring (Couch et al. 2001) that produced the mixing of crystals with diverse thermal histories. The late-stage growth of reverse-zoned ferromagnesian crystals during heating of an already highly evolved and crystallized magma required new inputs of water and Ca, Fe and Mg components. These inputs could have come directly from the inferred underplating mafic magma, and/or from extensive crystal resorption in the heated basal silicic magma to provide a rhyolite melt that percolated into the overlying crystal mush. Other similar magma pockets occurred beneath Okataina Volcanic Centre prior to caldera formation at c. 50 ka. Mingled magmas and granitoid inclusions in Rotoiti ejecta provide evidence for the existence of a widespread mid-crustal zone of semi-continuous melts and crystal mush at this time. This zone had experienced a complex thermal history of cooling and reheating, as indicated by single-crystal zircon age spectra spanning .100 ka (Charlier et al. 2003), representing periodic mafic intrusion and amalgamation of new silicic melts that failed to reach the surface. This mid-crustal, partly crystalline 'protopluton' developed during a quiet eruptive period at Okataina Volcanic Centre lasting up to 100 ka before the Rotoiti eruption. Rejuvenation of the low-temperature crystal-rich magma bodies required a major thermal and volatile influx from mafic intrusion that rapidly led to the caldera-forming Rotoiti eruption at c. 50 ka. Relict, zoned and resorbed crystals in the Earthquake Flat Pyroclastics deposits reveal a complex history of temperature fluctuations that many other plutons and pre-eruption magma bodies must also experience. Such crystal-rich bodies (e.g. Vinalhaven Granite, Wiebe et al. 2004;Fish Canyon, Bachmann et al. 2002) were probably never completely molten or completely crystalline. Instead, they represent a zone that was intermittently recharged by mass, volatiles and heat throughout its history. The Earthquake Flat Pyroclastics magma was erupted only when allowed by the particularly favourable tectonic conditions that were induced by adjacent caldera collapse.
8,136.8
2007-12-01T00:00:00.000
[ "Geology", "Environmental Science" ]
Which Corporate Social Responsibility Performance Affects the Cost of Equity? Evidence from Korea
This study analyzes the effect of corporate social responsibility activities on the cost of equity in Korea. We find that firms with better corporate social responsibility (CSR) performance generally exhibit cheaper equity financing. Considering three dimensions of CSR separately, we find that higher "socially responsible management" significantly reduces the cost of equity by 1.13%-1.37% per annum and that "corporate governance" activity also marginally affects the cost of equity, while "environmental management" has no impact. Our result is robust to controlling for systematic risk, size, leverage ratio, and the number of analysts. These results imply that enhancing socially responsible management and corporate governance can increase firm value in Korea, but environmental management is not relevant for firm value. Put differently, investors tolerate a lower return from firms with more CSR activities because they expect them to provide sustainable incomes. Future research can extend our approach to examining the effect on the cost of debt and the cost of capital.
Introduction
Over the last decade, corporate social responsibility (CSR) has emerged as a dominant paradigm in business and scholarship, as a growing number of indices, institutional investors, and mutual funds make investment decisions depending on firms' CSR performance [1]. The CSR concept has been defined in various ways. McWilliams and Siegel [2] define it as action that appears to further some social good beyond the interests of the firm and legal requirements. Hill et al. [3] define it as firms' economic, legal, moral, and philanthropic actions. Corporate social responsibility is a crucial issue in financial markets. Many studies have examined the relation between CSR performance and firm value in advanced markets. However, the literature has failed to provide consistent empirical findings about whether CSR performance affects firm value. Feldman et al. [4] argue that investors perceive firms with superior environmental performance as being less risky. Guenster et al. [1] insist that environmental performance and firm value are positively related, and Jiao [5] and Orlitzky et al. [6] find evidence of a positive relationship between CSR performance and firm value. On the other hand, Brammer et al. [7] argue that the realized returns of firms with higher CSR performance are low, while Hamilton et al. [8] and Nelling and Webb [9] find that CSR performance does not affect financial performance. Most studies examine how CSR performance is evaluated by investors in stock markets, and few studies analyze how it is evaluated by participants in capital markets. Firms must be managed so as to maximize shareholder wealth. This necessarily incurs costs. Therefore, it is essential to examine the effects of CSR on financial performance explicitly. We thus analyze the relationship between CSR and financial performance by examining whether CSR performance affects the firm's cost of equity. We find that socially responsible management and better corporate governance play more important roles than environmental performance in the Korean stock market.
Theoretical Background
Previous studies have failed to provide consistent results concerning how CSR performance affects financial or capital markets. One set of results finds a positive relationship between CSR performance and financial performance.
Statman and Glushkov [40] show that, portfolios with high CSR performance generally generate higher returns, than portfolios with low CSR performances. Gompers et al. [41] also suggest that firms with effective governance outperform firms with poor governance. Kempf and Osthoff [42] find that, firms with better CSR performance show positive excess returns, and Eccles et al. [43] show a positive relationship between CSR and firms' financial performance. Halvarsson and Zhan [44] and Yoon et al. [20] argue that, firms with higher CSR performance have lower cost of capital in the Swedish capital market. Consistent with previous researches, Xu et al. [45] make the same conclusion in the Chinese market. Another set of results finds a negative relationship between CSR performance and financial performance. Brammer et al. [7] argue that firms with high CSR ratings tend to provide lower returns, but the results are insignificant. Renneboog et al. [10], and Hong and Kacperczyk [46] show that SRI funds or SRI-screened portfolios underperform benchmarks. Another view is that CSR performance has no impact on firm performance. Humphrey et al. [47] find no difference in risk-adjusted performance between portfolios comprising high corporate social performance firms and those comprising low corporate social performance firms. They find that firms with high CSR performance tend to be large, and that they can lower systematic risk. Several studies, including [8,48], and [49], find no difference between socially responsible investment (SRI) mutual funds/indices and conventional ones. Data This study analyzes firms that have earned an ESG grade, from the KCGS, between 2011 and 2017. We attempted to analyze as many firms as possible. The KCGS started announcing ESG grades in 2011. However, as estimating the cost of equity requires a consensus one year later, all the 2018 data are used. Specifically, the data used for this study are as follows: Daily and monthly dividend adjusted returns, prices, dividends per share, total assets, total debt, market capitalization, and analyst earnings forecasts. Market returns are calculated as the average of the returns of all stocks traded on KOSPI on day t. The three-year government bond yield is used as a proxy for the risk-free rate. All data except ESG grades are obtained from FnGuide. Data on ESG grades are taken from the KCGS. To minimize measurement errors and outliers, estimates corresponding to the upper 1% and lower 1% of the cost of equity are excluded from the sample. To reduce the influence of extreme values, all control variables are winsorized at 1% and 99% each year. CSR Performance It is difficult to measure CSR performance, since the scope and definition of CSR is very diverse [50]. Some databases such as Kinder, Lyndenberg and Domini (KLD), Bloomberg, and Thomson Reuters Eikon have provided data on CSR performance of international corporations. Among them, KLD is the most widely used data source [11,[51][52][53]. However, Yoon et al. [20] argued that ESG grades, published by these databases, have limitations in examining the Korean stock market, since they do not cover a wide range of ESG information on Korean firms. Thus, we use ESG grades by KCGS as a proxy for CSR performance. Previous studies, which analyze the relations between CSR performance and the firm values on Korean stock markets, have employed the KEJI index or ESG grades as proxy for CSR performance. Earlier studies mainly used KEJI index by the Korea Economic Justice Institute [12][13][14]. 
However, it has been pointed out that it is difficult to analyze specifically the relationship with CSR performance, since KEJI index provides information on only 200 firms. Recent studies have used the ESG grades announced by the KCGS [15][16][17][18][19][20]. KCGS covers the largest number of firms among those listed in KOSPI, as well as selected firms listed in KOSDAQ, resulting in reduction of the sample bias. The KCGS announces grades on environmental management, socially responsibility management, and corporate governance, every year, in order to help firms improve their level of sustainable management. KCGS uses seven grades (S, A+, A, B+, B, C, and D). However, for all areas, except governance, KCGS makes public the grade as "below B" for all grades of B, C, or D. Many governance indicators can be drawn directly from business reports, but other data can be evaluated only through voluntary disclosure. All KOSPI-listed firms are included in the evaluation. In addition, large-cap stocks (KOSDAQ 100), financial companies, companies belonging to large firms, and the firms that institutional investors request to be graded are also included in the evaluation. The KCGS evaluates CSR performance for three areas: Environmental responsibility, social responsibility, and governance performance. These three factors are evaluated separately. In assessing around 900 listed firms, the KCGS consults data available via corporate disclosure (e.g., business reports, sustainability reports, and firms' websites) and media coverage. After an initial evaluation on 13 major categories, an analysis, using 237 core evaluation criteria and in-depth evaluation, are conducted simultaneously. The final grades are the sum of the environmental, social, and governance scores. The ESG grades are published annually. Table 1 shows the statistics for firms that earned ESG grades from 2011 to 2017. A total of 4907 firms were evaluated over seven years, but only 4726 received ESG grades. Of these, 799 firms received an ESG grade at least once during the sample period. Only about 1% of firms earned A+, and about 80% earned B or lower. In 2017, Shinhan Holdings was the only company to receive an S grade. Cost of Equity The cost of equity, which is rate of return required by a firm's equity holders, is not directly observed in the market. Thus we need to estimate it with theoretical models. There are two approaches for estimating the cost of equity in academic fields [54]. The first one is the ex-post realized returns approach. This approach assumes that, realized returns are an unbiased estimator of the market's required return. However, several studies have pointed out that the realized returns may not be an appropriate measure of the cost of equity. For example, Fama and French [26] argue that asset pricing models, such as the CAPM, the Fama, and French three-factor model, do not estimate the cost of equity because the models, which use realized returns as a proxy for expected returns, have failed to provide the clear evidence of an inter-relationship between average realized returns and betas because the CAPM uses. Elton [55] also insists that the average realized returns cannot be a proxy for expected returns because an association between realized returns and expected returns are weak. The second one is the ex-ante implied approach. It considers that the cost of equity is implied in the relation between current market prices and analysts' forecasts [11]. 
Recent studies argue that the ex-ante implied cost of equity is a better proxy for the cost of equity. Analysts' forecasts are regarded as a rational proxy for the market's expected future cash flows, since expected future cash flows cannot be observed directly [29]. In addition, this approach makes an explicit attempt to isolate the cost-of-equity effects from the growth and cash flow effects [24]. It is also particularly useful for short sample periods, whereas the sample-average approach requires a long sample period to achieve reliable estimates [54]. Based on such discussions, we estimate the implied cost of equity using ex-ante implied approaches. We follow three different models suggested in previous studies [24,27,56,57]. In all three models, FEPS_t is the three-month average of the t-year-ahead EPS estimates announced by securities firms; we also re-estimate the cost of equity using the median analyst EPS forecasts. To save space, the results using the average value are reported here; the results using the median value are similar. FEPS is obtained from FnGuide. P_t is the price at t, and DPS is the dividend per share at t. Our first measure of the cost of equity is estimated by the Ohlson and Juettner [28] model (OJ model). The OJ model has two advantages. First, it does not need forecasts of book values or return on equity. Second, it is parsimonious; γ determines the perpetual growth rate as well as the decay rate of short-term growth [29]. As suggested by Gode and Mohanram [29], who empirically test the OJ model, we use the average of the forecast two-year growth and set λ − 1 equal to the risk-free rate less 3%. Specifically, for each firm i in year t, k_OJ is calculated following the OJ model, with λ − 1 = r_f − 0.03. The second measure is the PEG model suggested by Easton [30]. This model assumes no dividend payments and is based on short-term earnings forecasts. For each firm i in year t, k_PEG is calculated as in Easton [30]. The third measure is the MPEG model, also suggested by Easton [30]. This model has a clear forecasting horizon of two years, assuming earnings with a constant growth rate. For each firm i in year t, k_MPEG is calculated as in Easton [30]; Equation (3) requires FEPS_t+2 > FEPS_t+1 > 0. Table 2 reports the summary statistics of the cost of equity estimates. Estimates corresponding to the upper and lower 1% are excluded. The average annual cost of equity is 15.20% to 16.35%. The cost of equity is estimated to be lowest in the OJ model and highest in the MPEG model. This result is consistent with those in previous studies [22,30,58]. Figure 1 shows the average cost of equity by ESG grade. There is no pattern of monotonic increase or decrease in the cost of equity according to the ESG grade, but costs of equity below B are noticeably higher. Table 3 reports the summary statistics for the control variables used in this study. We include firm characteristics that are known to affect the cost of equity. The book-to-market ratio (BM) is used to capture a firm's growth opportunity [59]. The BM ratio is calculated by dividing the book value of equity in year t by the market value of equity in year t − 1. Firms with larger BM ratios can be expected to have higher capital costs. Firm size (SIZE) is used as a proxy for firms' information environments. Information asymmetry is mitigated for large firms, due to the high interest shown by media and analysts [60], leading to a negative relationship between firm size and the cost of equity.
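The equations for the three implied cost-of-equity estimators introduced above do not appear in the text, so the sketch below restates the standard published formulations of the PEG and MPEG estimators (Easton) and of the OJ estimator in the Gode and Mohanram implementation, rather than reproducing this paper's exact expressions. Function names, the simple short-term growth proxy, and the example numbers are purely illustrative.

```python
import math

def k_peg(p0, feps1, feps2):
    """PEG estimator (Easton): assumes no dividends, requires feps2 > feps1 > 0."""
    return math.sqrt((feps2 - feps1) / p0)

def k_mpeg(p0, feps1, feps2, dps1):
    """MPEG estimator (Easton): solves p0 = (feps2 + k*dps1 - feps1) / k**2."""
    # Positive root of: p0*k^2 - dps1*k - (feps2 - feps1) = 0
    return (dps1 + math.sqrt(dps1**2 + 4.0 * p0 * (feps2 - feps1))) / (2.0 * p0)

def k_oj(p0, feps1, feps2, dps1, rf):
    """OJ estimator in the Gode-Mohanram style implementation.

    Uses a simple short-term growth proxy g2 = (feps2 - feps1) / feps1 and
    sets the perpetual growth term to the risk-free rate minus 3%, as the
    paper describes.
    """
    g2 = (feps2 - feps1) / feps1
    perp_growth = rf - 0.03
    a = 0.5 * (perp_growth + dps1 / p0)
    return a + math.sqrt(a**2 + (feps1 / p0) * (g2 - perp_growth))

# Illustrative example (not data from the paper): price 20,000 KRW,
# forecast EPS of 2,000 and 2,400, dividend 500, risk-free rate 3.5%.
print(f"PEG : {k_peg(20000, 2000, 2400):.4f}")
print(f"MPEG: {k_mpeg(20000, 2000, 2400, 500):.4f}")
print(f"OJ  : {k_oj(20000, 2000, 2400, 500, 0.035):.4f}")
```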
Firm size (SIZE) is measured as the natural logarithm of total assets at the end of year t − 1. The market beta (BETA) is expected to be positively related to the cost of equity because it represents return sensitivity to systematic risk. BETA is calculated using the Fama and French three-factor model over a one-year period; if fewer than 120 observations are available, the estimate is treated as missing. For leverage (LEV), a positive relationship between leverage and the cost of equity is expected because large debt induces greater financial risk. LEV is calculated by dividing total debt by market capitalization at the end of year t − 1, following [61]. A high volatility of stock returns (IVOL) indicates greater risk of unfavorable earnings or returns. Park and Kim [16] argue that corporate CSR plays an important role in lowering the cost of equity by reducing idiosyncratic risks. Therefore, idiosyncratic volatility and the cost of equity are expected to be positively related. Idiosyncratic volatility is estimated as the standard deviation of the residuals from the Fama and French three-factor model, using daily returns over a one-year period; if fewer than 120 observations are available, the estimate is treated as missing. The number of analysts following a firm (ANAL) determines the amount of information available about it. Therefore, a large number of analysts can be expected to have a negative relationship with the cost of equity because they can lower transaction costs and estimation error. ANAL is calculated as the number of analysts who announced forecasts. By contrast, a large standard deviation of analysts' earnings forecasts (DISP) can be expected to be positively related to the cost of equity because it increases risk [58,59]. Thus, DISP is calculated as the standard deviation of earnings forecasts. Finally, we control for the long-term growth forecast (LTG). Stocks with high growth rates are generally considered riskier, and long-term growth is expected to be positively associated with the cost of equity [11,29]. LTG is calculated as the difference between the one- and two-year-ahead EPS forecasts divided by the one-year-ahead EPS forecast. Tables 3 and 4 report summary statistics for the control variables, and correlations, respectively.
Portfolio Analysis
We constructed portfolios based on ESG grades in order to capture the potential differences between the average costs of equity for each portfolio. The portfolios were formed in four ways: according to the overall ESG grade, the environmental management grade (E), the social responsibility management grade (S), and the governance grade (G). Specifically, the portfolios were constructed as follows. First, all stocks included in the sample are classified into either a "High" or a "Low" portfolio according to their grades at the end of December of year t. The High portfolio consists of stocks with grades above B+, while the Low portfolio consists of stocks with grades below B. We then observed the relationship between CSR performance and the cost of equity through the magnitude and significance of the cost of equity of the High and Low portfolios at the end of year t + 1. The cost of equity is estimated using the OJ, PEG, and MPEG models. All portfolios are reconstructed every year. Table 5 reports the magnitude and significance of the average cost of equity for the portfolios. Panel A presents the cost of equity of the portfolios formed based on ESG grades.
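A minimal sketch of this portfolio sort and the High-minus-Low comparison follows. It is not the authors' code: the column names (grade, coe), the reading of the cut-off as "B+ and above" versus "B and below", and the use of a two-sample t-test on a single formation year are assumptions for illustration.

```python
import pandas as pd
from scipy import stats

GRADE_ORDER = ["D", "C", "B", "B+", "A", "A+", "S"]

def high_low_test(cross_section):
    """cross_section: one row per firm with columns grade and coe (cost of equity).

    Grades of B+ or better form the High portfolio (an assumed reading of the
    paper's 'above B+' cut-off); B or below form the Low portfolio. The mean
    difference in next-year cost of equity is tested with a two-sample t-test.
    """
    rank = {g: i for i, g in enumerate(GRADE_ORDER)}
    portfolio = cross_section["grade"].map(
        lambda g: "High" if rank[g] >= rank["B+"] else "Low")
    high = cross_section.loc[portfolio == "High", "coe"]
    low = cross_section.loc[portfolio == "Low", "coe"]
    t, p = stats.ttest_ind(high, low, equal_var=False)
    return high.mean(), low.mean(), high.mean() - low.mean(), t, p

# Illustrative firm observations for one formation year (not data from the paper)
sample = pd.DataFrame({
    "grade": ["A", "B+", "B", "C", "A+", "B"],
    "coe":   [0.138, 0.145, 0.162, 0.171, 0.132, 0.158],
})
print(high_low_test(sample))
```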
In Panel A, we observed that the cost of equity of the High portfolio is lower, regardless of the estimation method used. The annual average cost of equity of the High portfolio ranges from 13.79% to 16.04%, which is 0.19%p to 1.09%p lower than the average cost of equity of the Low portfolio, which ranges from 14.89% to 16.23%. This result is consistent with previous studies' finding that firms with better CSR performance can obtain equity financing at a lower cost [11]. The results in Panel C show that firms that earned grades above B+ for social responsibility management can raise equity at the lowest cost. This result is observed regardless of how the cost of equity is estimated. High portfolio firms can finance at a cost of 13.41% to 15.69% per annum, while Low portfolio firms pay 15.30% to 16.52% per annum. The difference between the two portfolios is −0.83%p to −1.89%p, which is statistically significant at the 5% level. The results in Panel D are also significant. They show that firms graded above B+ for governance can finance at a cost 1.13%p to 1.37%p lower than firms with lower grades. This result is statistically significant at the 10% level and is consistent with the previous finding that better governance in the Korean stock market leads to a lower cost of equity [58]. However, the result for environmental management provides no evidence of a statistically significant difference. The portfolio analysis therefore indicates that firms with better CSR performance can raise equity at lower cost; firms that fulfill their social responsibilities and have better corporate governance enjoy particularly low costs.
Fama and MacBeth Cross-Sectional Regression
Next, we conducted a Fama and MacBeth [62] cross-sectional regression analysis at the individual stock level in order to check whether the results of the portfolio analysis persist after various factors that may affect the cost of equity are controlled for. The cost of equity at the end of year t + 1 is the dependent variable, and the other variables at the end of year t are independent variables. Table 6 reports the regression results. Models (1), (3), and (5) do not include the control variables, in order to allow a direct examination of the relationship between CSR and the cost of equity. Models (2), (4), and (6) examine the relationship between CSR and the cost of equity while controlling for various factors known to affect the cost of equity. Panel A of Table 6 shows that no statistically significant relationship is observed between ESG grades and the cost of equity, regardless of the regression model. Although differing in magnitude and significance, the other control variables show the predicted signs. Specifically, SIZE, BETA, LEV, and ANAL are significant: smaller firm size, larger market beta, higher leverage, and a smaller number of analysts following the firm all lead to higher costs of equity. Except for the result in Panel A, the results are similar to those of the portfolio analysis. We find that firms that fulfill their social responsibilities and have better corporate governance can raise equity at a lower cost. Environmental management does not affect the cost of equity even when the effects of other factors are controlled for.
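As a rough illustration of the Fama and MacBeth [62] procedure used here, the sketch below runs one cross-sectional OLS regression of the cost of equity on a CSR indicator and controls per year, then averages the yearly coefficients and computes t-statistics from their time-series variation. It is a generic sketch with assumed column names and randomly generated data, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(df, y_col, x_cols):
    """Fama-MacBeth: one cross-sectional OLS per year, then average the yearly
    coefficient estimates and compute t-stats from their time-series variation."""
    yearly = []
    for _, cross_section in df.groupby("year"):
        X = sm.add_constant(cross_section[x_cols])
        res = sm.OLS(cross_section[y_col], X).fit()
        yearly.append(res.params)
    coefs = pd.DataFrame(yearly)
    mean = coefs.mean()
    se = coefs.std(ddof=1) / np.sqrt(len(coefs))   # SE of the mean across years
    return pd.DataFrame({"coef": mean, "t_stat": mean / se})

# Illustrative panel (not data from the paper): cost of equity regressed on a
# High-CSR dummy and firm-level controls, year by year.
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "year": np.repeat([2012, 2013, 2014, 2015], 50),
    "coe":  rng.normal(0.15, 0.03, 200),
    "csr_high": rng.integers(0, 2, 200),
    "size": rng.normal(20, 2, 200),
    "beta": rng.normal(1.0, 0.3, 200),
    "lev":  rng.normal(0.8, 0.4, 200),
})
print(fama_macbeth(panel, "coe", ["csr_high", "size", "beta", "lev"]))
```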
The results of the empirical analysis are summarized as follows. In the Korean stock market, the integrated ESG grade does not directly affect the cost of equity. When firms are distinguished by social responsibility and by corporate governance, however, we find that high levels of social responsibility and corporate governance lead to lower costs of equity. Additional Analysis In this section, we verify whether the results of the empirical analysis are robust. We re-examine the sample after excluding firms for which fewer than five analysts report forecasts in the consensus. A large number of analysts indicates that more information about the firm is available, which is expected to mitigate information asymmetry and reduce the cost of equity. This effect should be particularly apparent for firms with low ESG grades, which have high costs of equity. Table 7 shows that the cost of equity decreases sharply for the Low portfolio; in some cases, it is even slightly lower than that of the High portfolio. This indicates that the statistically significant differences between the High and Low portfolios' costs of equity in the main analysis are mainly driven by the high cost of equity of Low-portfolio stocks with small analyst followings; when these stocks are excluded, the differences in the cost of equity between the High and Low portfolios are no longer statistically significant. This result is consistent with the Fama and MacBeth [62] cross-sectional regression analysis: Table 8 shows that, in most cases, CSR performance has no statistically significant effect on the cost of equity, although the second column of Panel D shows that governance still has a significant impact. The correlation coefficients reported in Table 4 show that CSR performance is highly correlated with firm size and the number of analysts. Large firms have excellent CSR performance and a large analyst following, which mitigates information asymmetry and lowers the cost of equity. Thus, differences in the cost of equity can be affected by information asymmetry. Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Conclusions This study presents empirical evidence on the role of CSR in the Korean stock market. We analyze how CSR affects the cost of equity by considering three dimensions of CSR activities: environmental management, socially responsible management, and corporate governance. The empirical results show that socially responsible management significantly reduces the cost of equity: firms with higher levels of socially responsible management pay costs of equity 1.13%p to 1.37%p lower per annum than firms with lower levels. Corporate governance also marginally affects the cost of equity, whereas environmental management has no impact. These results remain significant after several factors known to affect the cost of equity are controlled for. Our study provides guidance for financial managers of firms that are considering conducting, or are already conducting, CSR activities: enhancing socially responsible management and corporate governance can increase firm value in Korea by reducing the cost of equity, all else being equal.
Focusing on socially responsible management will produce the greatest reduction in the cost of equity, while environmental management is not a significant factor. Our findings also imply that investors tolerate lower returns from firms that are more highly engaged in CSR activities, because they can expect sustainable income from such firms. Thus, investors consider not only the potential for high returns in the short run but also the sustainability of firm performance over the long run. Finally, a reliable indicator such as the KLD index needs to be developed to measure firms' CSR performance precisely in Korea. Although the KCGS has measured and published the CSR performance of most stocks listed on the Korean stock market, its evaluation relies on corporate reports and voluntary disclosure, which makes it difficult to measure CSR performance accurately. This study contributes to the CSR-related literature by focusing on the Korean stock market, where interest in firms' CSR activities is growing. We analyze not only the relationship between CSR performance and firm value but also which CSR activities have more influence on firm value. This paper nevertheless has some limitations. First, although we tried to analyze as many firms as possible to reduce sampling bias, many firms are still excluded from the KCGS evaluation. Second, we analyzed only the effect of CSR on the cost of equity; future studies can extend our approach to the cost of debt and the cost of capital. Finally, we concentrated on the Korean stock market; further studies can extend the analysis to Asian emerging markets and compare the results. These limitations can be addressed in future research.
5,896.8
2019-05-23T00:00:00.000
[ "Business", "Economics", "Environmental Science" ]
Small slot waveguide rings for on-chip quantum optical circuits Nanophotonic interfaces between single emitters and light promise to enable new quantum optical technologies. Here, we use a combination of finite element simulations and analytic quantum theory to investigate the interaction of various quantum emitters with slot-waveguide rings. We predict that for rings with radii as small as 1.44 $\mu$m (Q = 27,900), near-unity emitter-waveguide coupling efficiencies and emission enhancements on the order of 1300 can be achieved. By tuning the ring geometry or introducing losses, we show that realistic emitter-ring systems can be made to be either weakly or strongly coupled, so that we can observe Rabi oscillations in the decay dynamics even for micron-sized rings. Moreover, we demonstrate that slot waveguide rings can be used to directionally couple emission, again with near-unity efficiency. Our results pave the way for integrated solid-state quantum circuits involving various emitters. I. INTRODUCTION The cutting edge of solid-state quantum optics is determined by our ability to realize phenomena such as photon-mediated cooperative effects between multiple quantum emitters [1][2][3][4], few-photon nonlinearities [5][6][7], and strong light-matter coupling [8,9], which provide the resources that are crucial for scalable quantum simulation and information processing [10][11][12].An important bottleneck in this endeavor is the efficient coupling of light and matter.Ideally, one would like a single photon to interact with a single quantum emitter such as a quantum dot, color center, ion, or molecule with 100% efficiency. Traditionally, cavities have been used to boost light-matter coupling by significantly increasing the time during which a photon and an emitter interact.A variety of resonators such as bulk Fabry-Pérot cavities, microspheres, microdisks, micropillars, or photonic crystal cavities can retain a photon for millions of optical cycles, increasing the probability (β) that a photon interacts with a quantum emitter to unity [13].In other words, a photon is always emitted into the cavity mode.The high quality factors (Q) required for this operation, however, usually pose severe technical challenges.In particular, it becomes imperative that one tunes the narrow resonances of the cavity and the emitter to each other and stabilizes the cavity length to down to a few picometers [14].Simultaneous coupling of several emitters or frequencies that can be individually addressed is, therefore, not within reach. 
As an alternative to cavity-enhanced interaction, several groups have investigated singlepass coupling via near-field optics [15], tight focusing [16][17][18][19] or a subwavelength waveguide (nanoguide) [20][21][22][23].The key concept in this approach is spatial mode matching between the photon and the emitter radiation pattern.Although the coupling efficiency β can theoretically reach unity [24], it is a challenge to identify well-behaved coherent transitions known in atomic physics in the solid state.Here, issues such as the quantum efficiency, phonon dephasing, or lossy transitions limit the scattering cross section of a given transition.For example, in the case of organic dye molecules the Frank-Condon and Debye-Waller factors reduce the overall efficiency by about 50 − 70% [15] while for nitrogen-vacancy centers in diamond, strong phonon wings and the quantum efficiency limit the efficiency to well below 10% [25].In cavity-coupling, one can hope to compensate for such photophysical deficiencies by strong enhancement of the interaction between the cavity mode and the emitter [26]. The central advantage of a cavity-free coupling is its immense bandwidth.In this work, we provide an example, where the advantages of single-pass and cavity couplings are combined through the design of feedback geometries with moderate Q.Of the different cavity-free approaches, the nanoguide geometry is particularly attractive for this purpose because it can be implemented on a chip and be used as the building block of quantum optical circuits. Moderate cavity feedback would also be particularly advantages for this platform because it is otherwise a great challenge to achieve a very large index contrasts between the nanoguide and its surrounding, necessary for reaching high β factors.So far, this issue has been addressed via slow light photonic crystals [27,28].Another proposal has been to use slot waveguides [29,30]. Here, we begin with a slot waveguide that couples well to single quantum emitters (β ≈ 0.6) and bend it into a ring, as shown in Fig. 1a.We model a realistic implementation of the slot-waveguide ring compatible with nanofabrication capabilities.By choosing a small radius r < 1.5 µm, we minimize the structure footprint, while keeping the balance between the Q (> 3000) of the ring and its performance as a quantum optical platform.As we show, this system maintains the broadband nature of waveguides relative to lifetimelimited transitions of solid-state emitter (Fig. 1b) and allows for near-unity β and even to enter into the strong coupling regime.We conclude by considering two specific implementations of our system: one where the emitter is sitting inside the slot, as would be the case for single organic molecules or colloidal quantum dots, and the other for an emitter such as a quantum dot or an NV center embedded in one of the high index bars of the slot waveguide (see Fig. 1c).In the latter, we explore the possibility of chiral-emission where the state of the emitter determines in which direction a photon is emitted.In either case, the robust performance of the slot-waveguide ring suggest that it is a powerful platform for future quantum nanophotonic experiments and applications. A. 
Solid-state quantum emitters in linear nanoguides Nanoscale waveguides (nanoguides) provide a flexible and scalable platform for quantum optics.The simplest version of a nanoguide consists of a nanoscopic rectangular channel that is surrounded by a lower refractive-index medium, and the efficiency with which it couples to emission depends on the magnitude of this refractive index mismatch.Simply put, a greater refractive index difference between the core and the surrounding results in larger field enhancement inside the nanoguide, and a more efficient interface with a quantum emitter.The optical properties of nanoguides depend on their geometry, meaning that mode profiles, light confinement and bandwidths are all easily tuned.This versatility ensures that nanoguides can be designed to interface with the many different quantum emitters, each of which has different optical properties and is suited for different applications [31].In practice, however, the coupling efficiency of the system is limited, as its constituent materials are often predetermined by the type of emitter used. Quantum emitters, in general, act as two (or three) level systems.A transition between these levels occurs as a result of a charge redistribution that can be described by a transition dipole d e , and is accompanied by the absorption or emission of a photon of angular frequency ω e .Each transition resonance is described by a homogenous linewidth γ hom (see Fig. 1b, for a comparison of emitter linewidth relative to a cavity resonance) that is typically distributed over an inhomogeneous spectrum that is typically greater than 1 THz in the solid state.This inhomogeneous broadening arises because each emitter experiences a slightly different local environment within its dielectric host matrix. Integration of various solid-state emitters in waveguides requires considerations specific to each system.In particular, some emitters such as single organic molecules or colloidal quantum dots are generally embedded in a fairly low index dielectric.Thus, a nanoguide made of the host matrix results in low coupling efficiencies around 0.1 to 0.2 [23], while inclusion of single molecules or colloidal quantum dots into standard nanophotonic structures fabricated out of high-n dielectrics is not trivial either.Rare earth ions, epitaxially grown quantum dots, or vacancy centers in diamond, on the other hand, are inherently embedded in high-n matrices that can form nanoguides; The coupling efficiency typically remains below 0.75, though it can be increased by introducing resonances to the structure [32,33]. B. Straight slot waveguides Here, we turn to a slot waveguide [29,30,34], whose cross-cut is outlined in Fig. 1c, showing an ultrathin low n region sandwiched between two higher index channels.This geometry leads to an effective coupling of the modes of the two high index channels, resulting in confinement of the propagating mode to the low index region, which is highly subwavelength in size.As a best case, we calculate that an emitter in a 60 nm wide, n = 1.6 dielectric between two gallium phosphide (GaP) channels (n = 3.2), will experience a coupling efficiency β = 0.75 and an enhancement of the emission χ = 3.25 (see Appendix A for the calculation of β and χ).These correspond to more than 4-fold increase of β and 3-fold increase of χ relative to a simple nanoguide geometry [23].The broadband nature of the waveguides results in enhancement of the resonant florescence at ω e , but equally of all red-shifted emission. C. 
Emission into slot waveguide rings By bending the slot waveguide into a ring, as shown schematically in Fig. 1a, we can add optical feedback to our system and increase β from 0.75 towards unity. The spectral response of a 1.44 µm radius ring, where bending losses dominate (see Sec. III A), is shown in Fig. 1b (other dimensions are given in the caption); here, we observe narrow modes separated by 10 THz. A zoom onto the 24th-order mode reveals a calculated spectral full-width at half maximum (FWHM) of 14.2 GHz, which is up to 1000 times broader than the natural linewidths of various typical emitter resonances. As examples, in the inset to Fig. 1b we overlay the zero-phonon-line linewidths of single molecules [35] and vacancy centers in diamond [36] (10-30 MHz FWHM, red curve) and of epitaxially grown quantum dots [37] (530 MHz, blue curve) over the ring resonance. For this ring geometry, the non-zero width of the resonances is due solely to bending losses, which result in complex eigenfrequencies. Here, for instance, the complex mode frequency is ω_cav + iγ_cav/2 = 2π (3.947 × 10^14 + i 7.082 × 10^9) rad/s (calculated using the commercial eigenfrequency solver COMSOL). These bending or radiative losses are the limiting factor of the quality of the resonator, given by Q = Q_rad = ω_cav/γ_cav = 27,900. Creating a resonator out of the slot waveguide also affects the optical eigenmode of the structure, albeit in a more subtle manner than the modification to its spectral response. In contrast to a straight slot waveguide, the mode of the ring is slightly asymmetric, as we see in Fig. 1c. This asymmetry is visible in all three field components, as the fields on the outside of the ring (right side) are slightly larger. As is the case for a straight slot waveguide, the field is largest in the slot, where it is also primarily radially polarized. This slot, then, is ideally suited for emitters with linear transition dipoles embedded in a low-n dielectric. A good position for such an emitter is marked by the red circle in Fig. 1c; note that this position is clearly off-center, due to the aforementioned mode asymmetry. The slot waveguide ring is also a good platform for emitters that require a high-n host dielectric. These, be they epitaxially grown quantum dots or defect centers in diamond, could be placed at the position of the blue square in Fig. 1c (see Sec. IV B for a detailed explanation of why such a placement is advantageous). To study the interaction of an emitter with the slot waveguide mode, we perform fully three-dimensional finite element method simulations of the ring structure (the ring has a 1.44 µm radius, and the cylindrical coordinate axes of our system are shown in Fig. 2). A radially oriented transition dipole that oscillates at the ring resonance frequency (i.e. ω_e = ω_cav) represents the emitter and is placed in the slot. We then look at the steady-state field generated by this dipole, as shown in the z = 0 and ϕ = π/2 planes in Fig. 2 (where ϕ = 0 at the position of the dipole), for a 1.44 µm radius ring. In this image the radiation is almost fully coupled into the photonic modes of the ring. To determine χ quantitatively, we use the field map, as explained in Appendix A, and compute the fraction of the power radiated by the dipole into our nanophotonic system. For the 1.44 µm ring, we calculate that χ = 1,330, a 400-fold enhancement of emission with respect to an identical emitter in a straight slot waveguide. Similarly, we calculate β by comparing the total emitted power to that found far along the waveguide. We find β = 0.995, meaning that only 1 in 200 photons leaks out of the waveguide, as compared to 1 in 4 in a straight nanoguide. A similar analysis reveals remarkably efficient coupling between the ring and quantum emitters even when they are placed well away from the mode maximum in the slot. An emitter embedded in the high-n channel, as shown by the square symbol in Fig. 1c, would experience χ = 56 and β = 0.99 (see Sec. IV B for more details). That is, slot waveguide rings are compatible with all manner of quantum emitters, including quantum dots or NV centers, which would necessarily be placed away from the field maximum. In addition to the improvement of the spatial coupling to a single mode, the slot waveguide ring will favor resonant emission on the zero-phonon line over Stokes-shifted coupling to vibrational states or phonon wings, thus improving the branching ratio (oscillator strength) of solid-state emitters. This renders a solid-state emitter inside such a ring akin to an ideal two-level emitter such as an atom, with an overall outcome of β ≈ 1. Clearly, even a ring with a geometric cross-section of only 6.5 µm² and a moderate Q = 27,900 acts as a near-ideal interface between an emitter and photons. D. Emission dynamics We now turn to the dynamics of the emitter-waveguide resonator interaction. In the framework of macroscopic quantum electrodynamics [38,39], an emitter that is prepared in its excited state will decay according to Eqs. (1) and (2) (see Appendix B), with K_0 = (ω_cav²/ω_e²) χ γ_cav γ_hom. In these equations Γ = i(ω_cav − ω_e) + γ_cav/2 contains both the loss rate of our nanophotonic resonance and a phase due to detuning between the ring and emitter resonances. In what follows, we assume ω_cav = ω_e, and hence Γ is simply the loss rate of our system. It follows that D depends on the difference between the rate at which energy is lost and the rate at which energy is exchanged between the emitter and the photonic mode. The dynamics of the emitter's decay are determined by the interplay between Γ and D, which, in turn, depend on the resonator quality factor, Q. Equations (1) and (2) allow us to quantify these dependencies and to calculate the probability of finding the emitter in its excited state, |C_e(t)|². The results of these calculations for rings with Q ranging from 49 to 27,900 are shown in Fig. 3(a) (see Sec. III B for an explanation of how Q may be varied by introducing losses into the system). Here, we take γ_hom = 30 MHz, which is typical for a single organic molecule. We observe markedly different dynamics for the different rings, ranging from clear oscillatory behavior for the large-Q rings to a slow exponential decay for small-Q resonators.
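To illustrate how decay curves such as those in Fig. 3a arise, the sketch below evaluates a generic damped single-excitation model in which an initially excited emitter exchanges energy with one lossy mode; it is a stand-in for Eqs. (1) and (2), not a transcription of them, and the coupling rate g used here is an assumed, purely illustrative value.

```python
import numpy as np

def excited_population(g, kappa, t):
    """|Ce(t)|^2 for an emitter coupled to a single lossy mode, using a
    generic damped single-excitation model (a sketch, not the paper's
    exact Eqs. (1)-(2)):  Ce'' + (kappa/2) Ce' + g^2 Ce = 0,
    with Ce(0) = 1 and Ce'(0) = 0."""
    D = np.sqrt((kappa / 4) ** 2 - g ** 2 + 0j)   # real: overdamped; imaginary: Rabi oscillations
    Ce = np.exp(-kappa * t / 4) * (np.cosh(D * t) + (kappa / (4 * D)) * np.sinh(D * t))
    return np.abs(Ce) ** 2

omega = 2 * np.pi * 3.947e14          # ring resonance frequency (rad/s), from the text
g = 2 * np.pi * 5e9                   # assumed emitter-mode coupling rate, illustrative only
t = np.linspace(0.0, 2e-9, 2001)      # 0-2 ns
for Q in (49, 8_300, 27_900):
    kappa = omega / Q                 # cavity loss rate, gamma_cav = omega / Q
    pop = excited_population(g, kappa, t)
    # oscillations (strong coupling) appear when g > kappa / 4
```

The closed-form solution makes the two regimes explicit: for small Q the square root is real and the decay is monotonic, while for large Q it becomes imaginary and the population oscillates.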
These decay dynamics show that slot waveguide rings can couple either strongly or weakly to quantum emitters, and that the coupling strength can be tuned by varying the resonator Q. We can, in fact, understand the different coupling regimes by considering Eqs. (1) and (2) and how Γ and D depend on Q [Fig. 3(b), which shows the different rates of the decay dynamics as a function of the resonance Q, normalized to γ_hom]. The transition from the strong to the weak coupling regime occurs when Γ² = K_0, where D evolves from being imaginary to real valued and C_e(t) loses its oscillatory nature. In our system, this change occurs at the moderate value of Q = 8,300. For larger-Q resonators, when Im D ≥ 2Γ, Rabi oscillations are clearly visible, as is the case in Fig. 3a. In contrast, when D is real valued, the emitter and the resonator photons are weakly coupled and we observe exponential decay dynamics in Fig. 3a. As expected, the decay constant approaches that of an emitter in a straight slot waveguide (dashed curve in Fig. 3a) as Q decreases. As we shall see below (Sec. IV A), well into the regime where the resonator and emitter are weakly coupled (where χ > 200, see Fig. 5), we still find β ≈ 1, demonstrating the power of this quantum optical platform. III. DESIGN CONSIDERATIONS The previous sections laid out the response of an idealized slot-waveguide ring, namely one with no losses beyond those associated with the bending of the waveguide. It is fair to ask how such a ring would fare under more realistic conditions. In this section, we answer this question, first examining the types of imperfections that can reasonably be expected for this type of nanophotonic structure and their consequences for the performance of the ring. A. Nanofabrication: Imperfections and materials Nanofabrication techniques typically result in imperfections, leading to additional loss channels such as scattering losses due to surface roughness. In addition, depending on the choice of material, doping or imperfections in the thin dielectric layers can cause absorption losses. The total quality factor of the ring resonator can then be expressed as a sum of three contributions, Q⁻¹ = Q_rad⁻¹ + Q_scat⁻¹ + Q_abs⁻¹, corresponding to the radiative, scattering, and absorption channels. For a realistic assessment of performance, it is important to examine the competition among these contributions. We choose to neglect absorption losses in this work, since several non-absorbing dielectric and semiconductor layers are available. We note, however, that it will be straightforward to extend our results to absorbing materials, because both absorption and scattering losses affect the optical properties of the ring by limiting the propagation length of the light.
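Assuming the reciprocal-addition rule stated above, the competition among loss channels can be checked with a one-line helper; the usage example combines the bending-loss-limited Q of the 1.44 µm ring with the surface-roughness scattering estimate discussed next.

```python
def total_q(q_rad, q_scat=float("inf"), q_abs=float("inf")):
    """Combine independent loss channels: 1/Q = 1/Q_rad + 1/Q_scat + 1/Q_abs.
    Channels that are neglected can be left at infinity."""
    return 1.0 / (1.0 / q_rad + 1.0 / q_scat + 1.0 / q_abs)

# Bending-loss-limited ring with a realistic surface-roughness estimate:
print(total_q(q_rad=27_900, q_scat=2.1e6))   # ~27,500: scattering barely degrades Q
```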
The deposition and nanopatterning of thin films typically results in surface roughness that is on the order of 2-5 nm for granular films (such as TiO 2 ) and can be much smaller for single-crystalline or amorphous materials (such as GaP).In the Rayleigh scattering model, Q scat is inversely proportional to both the square of the root-mean-square (RMS) size of surface features and their correlation length [40].A realistic value for the surface roughness of 2 nm, with a corresponding correlation length of 10 nm would result in Q scat = 2.1 × 10 6 at a wavelength of 760 nm.In fact, it is only when the RMS roughness approaches 10 nm (and the correlation length 100 nm) that Q scat drops below 20000. We also note that although we have used GaP as the high-index medium for our simulations, our findings can be readily transposed to other materials, which might be more suitable for various applications.Three such examples are diamond (n = 2.4), SiC (n = 2.5), and TiO 2 (n = 2.5).The lower refractive index contrast between the waveguiding region and the surrounding material lessens the confinement of the light in these cases and causes higher bending losses.To compensate for these, the ring radius can be increased.For example, for a diamond slot waveguide ring with w = 180 nm, h = 230 nm and r = 3.1 µm, we find that Q = 30000.Likewise, for a SiC resonator with w = 170 nm, h = 220 nm and r = 2.5 µm, Q = 29000.In other words, a change in the materials of the slot waveguide can be easily accounted for with a slight tuning of the geometry, allowing us to recover the functionality of our platform. B. Tunability The optical properties of slot waveguide rings can be changed by tuning the ring geometry (i.e.varying the size to alter Q rad ) or by introducing losses, either due to enhanced scattering (e.g.due to surface roughness) or absorption.While scattering is usually a static intrinsic feature, absorption losses can be introduced in either a passive or active manner, e.g., by doping the semiconductor layer during growth or actively by electrical or all-optical generation of free carriers.Active control, in particular, allows for the reshaping of the emitted photon's wavefunction, if it occurs on the time-scale of the emitter lifetime [41]. Here, we calculate the eigenmodes of rings (as was done in Fig. 1c), while varying either the ring radius or the imaginary part of the refractive index of GaP, κ GaP to introduce absorption losses.From these mode distributions and their corresponding complex eigenfrequencies, we extract both Q and V eff [42] as a function of the ring radius and the propagation length.The outcome is presented in Fig. 4. As expected, both shrinking the ring and introducing absorption losses leads to a monotonic decrease in Q (Fig. 4a), here by a factor of almost 300.These calculations show that even resonators with high losses (corresponding to propagation lengths in the 10's of micrometers) or with radii down to 1 µm still have Q's in the 1000's. Interestingly, while Q has a similar dependence on both the radius and the loss, changing these parameters has very different effects on V eff .Decreasing the ring size results in a smaller mode volume (bottom axis, Fig. 
4b) although in all cases we observe a small V eff of only a few (λ eff ) 3 .This decrease, however, is not proportional to the decrease of the geometric ring volume, V g = πh (r 2 o − r 2 i ) where r o,i are the outer and inner radii of the ring.For example, a decrease of V g by a factor of 2.8 results in a corresponding decrease of V eff by only 1.75 (in all cases, n eff ranges from 2.0 to 2.1).This difference occurs because as the ring shrinks, the bending losses increase, and the mode is pushed out of the slot and into the outer part of the ring.The situation is very different when r is held constant and losses are ramped up.In this case, the mode distribution is basically unaltered and V eff remains constant (Fig. 4b, top axis). IV. IMPLEMENTATIONS In this section we consider two different approaches to quantum optics in a slot waveguide ring.In the first, quantum emitters are embedded at the electromagnetic mode maximum in the low-n dielectric in the slot (see Fig. 1c).This approach is compatible with emitters such as single molecules and colloidal quantum dots, and it highlights the strong field confinement inherent to slot waveguides.Secondly, we consider emitters embedded in the high-n bars such as epitaxially grown quantum dots or defect centers in diamond.Here, we make use of the structured light fields of the slot waveguide ring to direct emission. A. Tunability of emission from low-n quantum emitters In Sec.II C we saw that slot waveguide rings act as remarkably efficient interfaces between emitters and photons, when the emitters are placed in the field maximum inside the low-n slots.Here, we explore the tunability of the emission when the ring properties are varied, as was done in Sec.III B. We begin by considering the case where absorption losses are introduced.In this case, Q is changed while the optical eigenmode remains unaltered.The resultant emission properties are displayed in Fig. 5(a).Clearly, both β and χ decrease as the losses increase (see Fig. 4a).The ring, however, maintains its efficient performance for κ GaP up to 0.004, resulting in Q = 600 and corresponding to a propagation length of 15 µm at λ = 760 nm.In this range, β > 0.95 while χ varies between 30 and 1,300. Changing the ring radius instead of introducing losses, affects χ and β in a similar way Q/V eff [26], we expect a smaller (lossless) ring to outperform a larger, lossy ring with a similar Q as is most readily noticeable when Q < ∼ 200. B. Chiral emission with high-n quantum emitters Quantum emitters such as epitaxially grown quantum dots or vacancy centers in diamond, which are embedded in high-n dielectrics, can also be interfaced with a slot waveguide ring. As we briefly touched on in Sec.II C, placing the emitter away from the optical mode maximum necessarily results in a decrease to the emission rate enhancement.For example, an emitter placed in one of the high-n channels of a 1.44 µm radius ring, as shown by the blue square symbol in Fig. 1c, experiences χ = 56 as compared to χ = 1, 330 at the mode maximum.The emitter does, however, maintain β = 0.99 even when placed in one of the high-n bars.We now show that at this position, the vectorial nature of light can be exploited to control the direction in which emission occurs.Such unidirectional coupling of emission to photonic pathways can be used as a basis for quantum architecture, and hence has been the focus of several recent studies [43][44][45][46]. 
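Before turning to directional emission, the effective mode volume used in the tunability discussion above (Fig. 4b) can be made concrete. The sketch below evaluates one common definition of V_eff on a discretized field map; the Gaussian "mode" in the usage example is a toy field for illustration only, not a ring simulation.

```python
import numpy as np

def effective_mode_volume(eps, E2, dV):
    """One common definition of the effective mode volume,
        V_eff = sum(eps * |E|^2 * dV) / max(eps * |E|^2),
    evaluated on a discretized field map. `eps` and `E2` (= |E|^2) are
    arrays on the same grid; `dV` is the cell volume (scalar or array)."""
    u = eps * E2                          # electric energy density, up to a constant factor
    return np.sum(u * dV) / np.max(u)

# Illustrative only: a toy Gaussian 'mode' on a 3D grid.
x = np.linspace(-1e-6, 1e-6, 81)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
E2 = np.exp(-(X**2 + Y**2 + Z**2) / (0.2e-6) ** 2)
eps = np.full_like(E2, 1.6 ** 2)
dV = (x[1] - x[0]) ** 3
V = effective_mode_volume(eps, E2, dV)    # in m^3; divide by (lambda/n_eff)^3 to express in (lambda_eff)^3
```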
Unidirectional emission has been investigated for transitions between different spin-states of emitters, whose charge redistribution is described by circular dipoles (e.g.d e = d/2 (r ± i φ) in the ring coordinates).For such directional emission to occur, the optical eigenmodes must contain regions of circular polarization, where the handedness of the light field depends on the direction of propagation.Placing a circular dipole in such a region ensures that it would only radiate in one direction, depending on its handedness.In Fig. 6(a) we show the ellipticity (see Appendix C) of the light field in the r = 1.44 µm ring [whose mode is shown in Fig. 1(c)].This quantity is a measure for how circular the light field is, peaking at ±1 where the light is right or left handed circularly polarized, while for linear light fields it is 0. From Fig. 6(a) it is clear that the areas where the light is most circular can be found outside the slot.In fact, since we want both a near-unity ellipticity, as well as a large field amplitude, a favorable position for directional emission is inside the high-index bars (solid circle in Fig. 6a); at this position the ellipticity peaks at 0.87. We repeat our calculations with an emitter placed at the position marked by the dot in Fig. 6(a).We vary the transition dipole to consider linear as well as right and left handed circularly polarized dipoles.We Fourier transform the line-trace of E r along the center of the slot to obtain the wavenumber spectrum, whose amplitude is shown in Fig. 6(b).In this transform, we observe two sharp peaks, centered about ±16.4 rad/µm, corresponding to light propagating in the m = ±24 modes, respectively.By comparing the area under these peaks, we determine the directionality of the emission.In the case of the linear dipole (black curve), the two peaks are almost identical and there is no directionality to within a 2% calculation error.In contrast, for the two circular dipoles (blue and red curves), one peak in each curve dominates, depending on the handedness of the dipole.We obtain a directionality of 0.87 ± 0.02, as expected from the ellipticity of the mode.For completeness, we also calculate the situation of a circular dipole placed in the low index material outside the slot waveguide (cross in Fig. 6a), finding a directionality of 0.75 ± 0.02. A slot-waveguide ring, therefore, ideally lends itself as an element in a chiral quantum network [12].An emitter such as a quantum dot would experience β = 0.99, which can be decomposed to β + = 0.86 and β − = 0.13 for the two counter-propagating modes.We expect that slight adjustments to the waveguide geometry would increase the ellipticity close to unity, and hence allow for perfect directional emission.Finally, we note that the χ = 56 calculated for such an emitter would, for example, sufficiently broaden the emission spectrum and overcome residual line broadenings often encountered in the solid state. V. CONCLUSIONS In this work we introduced the use of slot waveguide rings for quantum optics.We combined numerical models with analytic quantum theory to study the emission properties of single emitters coupled to the rings.We demonstrated that using rings that can realistically be fabricated, with a geometric footprint as small as 6.5 µm 2 , it is possible to strongly couple a solid-state quantum emitter such as an organic dye molecule to the photonic modes.In particular, Rabi oscillations can be clearly visible in the decay dynamics of such moderate Q rings. 
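The directionality extracted from the wavenumber spectrum can be reproduced schematically as follows. The sketch below Fourier transforms a field line trace around the ring and compares the power carried by positive and negative wavenumbers; the trace here is synthetic (two counter-propagating m = 24 waves with an assumed amplitude ratio chosen to give a directionality near 0.87), standing in for the simulated E_r cut.

```python
import numpy as np

def directionality(field_trace, ds):
    """Split a complex field line trace into forward- and backward-propagating
    components via its spatial Fourier transform and compare the power at
    positive and negative wavenumbers. Returns (P+ - P-) / (P+ + P-)."""
    F = np.fft.fftshift(np.fft.fft(field_trace))
    k = np.fft.fftshift(np.fft.fftfreq(len(field_trace), d=ds)) * 2 * np.pi
    p = np.abs(F) ** 2
    P_plus, P_minus = p[k > 0].sum(), p[k < 0].sum()
    return (P_plus - P_minus) / (P_plus + P_minus)

# Synthetic example: counter-propagating m = +/-24 waves of unequal amplitude,
# giving peaks near +/-16.7 rad/um (comparable to the +/-16.4 rad/um quoted above).
r = 1.44e-6
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
trace = 1.0 * np.exp(1j * 24 * phi) + 0.26 * np.exp(-1j * 24 * phi)
print(directionality(trace, ds=r * (phi[1] - phi[0])))   # ~0.87 for this amplitude ratio
```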
We also showed that it is possible to tune emission properties by changing the optical mode of the slot waveguide rings.Two different tuning mechanisms were identified: a change of the ring size or the introduction of absorption to the system.While the former is a passive effect, active control of the later at speeds up to ultrafast time scales offers a fascinating gateway to all-optical control of complex nanophotonic interactions.In either case, tuning the system led to β values ranging from 0.75 to near unity and χ spanning three orders of magnitude up to about 1300. In closing we discussed the unique potential of slot waveguide rings for unidirectional emission.As examples, positions both inside the high index bars and in the low index medium outside the slot waveguide ring, were identified.Interestingly, strongly directional emission can occur at positions where β > 0.99 and χ = 56.In summary, the combination of efficient emission enhancement with a high degree of tunability suggest a highly attractive platform for both investigations of fundamental quantum phenomena, and for future quantum optics technology. where E (r) is the field radiated by the dipole.In our scenario, we assume radially oriented transition dipoles and extract the out-of-phase component ImE r from simulations.We can then calculate χ and β, which both characterize emission properties in the presence of the nanophotonic structure, and are used in our analytic model to describe the decay dynamics of the emitter. The emission enhancement χ is the ratio of the power dissipated by the radiation of a dipole in a nanophotonic structure (subscript 'nano') to that of the same dipole, but in bulk media (subscript 'hom').That is, for a linear transition dipole, Eq.A1 allows us to write, We find ImE r, nano (r e ), by looking at a line trace from the results of the 3D simulations (e.g.Fig. 2), as shown in Fig. 7.For this ring, ImE r, nano (r 0 ) / |d| = −1.11× 10 18 V/A.Similarly, for this dipole in bulk naphthalene, we find (using simulations in a half-spherical space, as in the case of the slot waveguide ring) that ImE r, hom (r 0 ) / |d| = −4.55 × 10 15 V/A.Hence, for this particular ring, χ = 243, as reported in the main text.For completeness, we note that in the weak coupling limit, we can express χ in terms of the decay rates of the emitter, writing χ = γ nano /γ hom ; in the single mode limit this is equivalent to the Purcell Factor.[42] To find how well the molecule emits into the desired mode of the ring, we compare ImE r at the position of the dipole to that in the far-field of the emitter, but at the same radial distance.This comparison can be seen in Fig. 7, corresponding to the values shown by the dashed purple and dark-gray lines.For this ring (κ = 0.004) this difference is clearly visible, and it corresponds to β = 0.949. When large losses are introduced into the system, either due to absorption in the GaP bars or due to increased radiation in small rings, we need to take an extra step to correctly calculate β.In the presence of large losses, the radiated field decays away from its source, meaning that there is no constant line-trace to the far-field ImE r (r 0 ), as was the case in and the field is given by where G (r, r , ω) is the classical electromagnetic Green's function of the nanostructure, and ε (r , ω) is the position and frequency dependent, imaginary component of the relative permittivity of our structure. 
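The enhancement quoted above follows directly from the ratio of out-of-phase fields; as a check, the values given in this appendix can be plugged into a one-line helper (a numerical restatement of Eq. A2, nothing more).

```python
def emission_enhancement(im_E_nano, im_E_hom):
    """Emission enhancement from the out-of-phase field at the dipole position:
    chi = Im E_r,nano(r_e) / Im E_r,hom(r_e), both per unit dipole moment."""
    return im_E_nano / im_E_hom

# Values quoted in the text for the lossy (kappa = 0.004) ring:
print(emission_enhancement(-1.11e18, -4.55e15))   # ~244, i.e. chi ~ 243 as reported
```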
The wavefunction in the single-excitation limit is Here, the basis states of the system are |e, 0 where the emitter is excited and there is no photon, and |g, 1 (r, ω) where the emitter is in its ground state and there is a single photon. In this equation, ω e is the transition frequency of our emitter, and the complex coefficients C e (t) and C g (r, ω, t) can be used to find the time-dependent probabilities for the system to be in each state. We use this wavefunction when solving Schrödinger's equation, To proceed, we recall that the molecule is, initially, in its excited state, meaning that C g (r, ω, 0) = 0. Inserting this expression into Eq.B7, and using the fundamental theorem of calculus allows us to write Next, we make use of the fact that the modes of our slot waveguide ring are semi-discrete, meaning that for every mode k the FWHM Γ k is much shorter than the free spectral range. In this case, we may safely model the Green's function by a sum of Lorentzian resonances, where G (z A , z, Ω k ) is the Green's function amplitude at the central frequency, Ω k , of mode k. We can then insert Eq.B8 into Eq.B6, and use Eq.B9 to perform the frequency integral for a single mode, which allows us to write Ċe (t) = Here, we have used the following Green's function identity [38], and assumed that the molecule only interacts with the k = 0 mode of the structure. We now take the time derivative of Eq.B10, and then make use of Eq.B11, to write Ce (t) + Γ Ċe (t) − K 0 C e (t) = 0, (B14) where Γ = i (ω cav − ω e ) + γ cav /2, as defined in the main text.To solve this second-order differential equation we impose initial conditions of the system.That is, since the molecule is initially excited, C e (0) = 1 and Ċe (0) = 0.The latter condition follows from Eq. B7, and that if C e (0) = 1 then C g (r, 0) = 0.The solution provides Eq. 1 of the main text. Lastly, we show how we are able to rewrite Eq.B12 in the form of K 0 from the main text.First, we note that in Eq.B12 we do not know the value of the dipole moment of the emitter, d e .We do, however, know the experimentally measured linewidth of the emitter in a bulk environment, γ hom .This linewidth can be related to the Green's function of the bulk media by [38], where Ghom (r e , r e , ω e ) can be either calculated analytically [26] or extracted from simulations.Thus, we can express the dipole moment via the linewidth and write Im G (r e , r e , ω cav ) Im Ghom (r e , r e , ω e ) .(B16) Since the electric field radiated by a dipole defines the Green's function, we can rewrite Eq.A2 as χ = Im G (r e , r e , ω cav ) Im Ghom (r e , r e , ω e ) .(B18) Finally, using Eq.B18 in Eq.B16 allows us to arrive at the expression for K 0 that is given in the main text. FIG. 1 . FIG. 1. 
Slot waveguide rings for quantum optics.(a) Sketch of the slot waveguide ring, which is comprised of two high index dielectric bars (here, GaP with n = 3.2) that are separated by a low index material.This ring is sandwiched between two glass plates of n = 1.48 (here, we only show the bottom substrate, for clarity), and has air pockets to each side.(b) The spectral response of a slot waveguide ring with height h = 175 nm, width w = 135 nm, gap size d = 60 nm and radius r = 1.44 µm, as shown in (c).The central mode, corresponding to the 24 th order TE-polarized azimuthal mode, whose field profile is shown in (c), has a bandwidth of 14.2 GHz, and a free spectral range of 10 THz.The zoom in, shows this mode relative to the zero phonon line resonances of single molecules and NV centers in diamond (red curve, ≈ 10 − 30 MHz bandwidth) and epitaxially grown quantum dots (blue curve, 400 MHz bandwidth) at cryogenic temperatures.In (c), the dominant, radial component is shown in the main figure pane, while the azimuthal and out-of-plane components are shown in the sub-panels.The relative scaling of the different components are shown in the bottom right corners (e.g. the maximum of E r is 2.4 times that of E ϕ ).The (red) circle shows a favorable position for a quantum emitter, such as an organic molecule, which is embedded a low index dielectric, while the (blue) square is a favorable position for an emitter such as a quantum dot or NV center which is embedded in a high index material. FIG. 2 . FIG.2.Emission into a slot waveguide ring.A full three-dimensional calculation of the imaginary component of the radial electric field radiated by an emitter located in the slot (circle in Fig.1c). FIG. 3 . FIG. 3. Decay dynamics of emitters coupled to ring resonators.(a)The time-dependent probability to find the emitter in its excited state for Q ranging from 49 to 27900 for a 1.44 µm radius ring. FIG. 4 . FIG. 4. Tuning slot waveguide ring properties.(a) Resonance quality factor as a function of ring radius (when κ GaP = 0,bottom axis) or the propagation length of light in GaP (when r = 1.44 µm, top axis).The quality factor decreases monotonically as the ring shrinks, or as absorption in the GaP increases.(b)The corresponding effective mode volume, V eff , for these rings, when the emitter with a radially-oriented dipole is located at the position marked in Fig.1b.Here, V eff decreases as the ring shrinks, yet stays constant with increasing absorption.In both plots, the curves are guides for the eye. [ Fig. 5(b)]: in both cases, these metrics decrease with decreasing Q.However, a close inspection of Fig. 5 reveals a difference.The decrease in χ and β is more rapid if Q is lowered by absorption than if the lowering is caused by the increased radiation losses of the smaller rings because shrinking the ring results in a smaller mode volume, which counteracts the decrease in Q.In contrast, when absorption losses are introduced Q decreases while the mode volume remains constant.Since the emission properties depend on the ratio of FIG. 5 . 
FIG.5.Dipole emission into slot waveguide rings.Coupling efficiency β (left axis) and emission rate enhancement χ (right axis) as a function of the resonance quality factor Q, which is changed by (a), introducing absorption into the GaP (same propagation lengths as in Fig.4) and (b), changing the radius of the ring.In both plots, the performance of a lossless, straight slot waveguide is indicated by the dashed lines and the symbols are results of calculations while the solid curves are guides to the eye. 24 FIG. 6 . FIG. 6. Directional coupling of emission into a slot waveguide ring.(a) The ellipticity of the light field in the slot waveguide ring, where values of ±1 correspond to right and left handed circular polarization, and 0 corresponds to linearly polarized light.Symbols indicate possible locations of dipole emitters, as discussed in the text.(b) The Fourier amplitudes for a line trace of ImE r at the center of the slot, radiated by differently oriented dipoles that are placed at the position shown by the solid circle in (a).In all cases we observe two peaks at ±16.4 rad/µm, corresponding to the ±24 azimuthal-order modes.The relative size of each peak corresponds to the emission magnitude into the corresponding mode.The respective dipole orientation is shown above the curves, which have been shifted in both dimensions for clarity. Fig. 7 . Fig.7.In this situation, we fit a decaying exponential envelop function to the field trace using only the far-field field amplitude.The extrapolated value of this envelope function, at the position of the dipole, is then used to calculate β.
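The envelope-extrapolation step for lossy rings can be sketched as follows. The code below fits a decaying exponential to the magnitude of a (here synthetic) far-field section of the line trace and extrapolates it back to the dipole position; the final conversion of this value into β follows the comparison described in Appendix A and is not reproduced here, and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(phi, A, alpha):
    """Decaying exponential envelope of the field magnitude along the ring."""
    return A * np.exp(-alpha * phi)

def extrapolate_to_dipole(phi_far, field_far, phi_dipole=0.0):
    """Fit the far-field part of |Im E_r| along the ring and extrapolate the
    envelope back to the dipole position, as described for the lossy case."""
    popt, _ = curve_fit(envelope, phi_far, np.abs(field_far),
                        p0=(np.abs(field_far[0]), 0.1))
    return envelope(phi_dipole, *popt)

# Illustrative: a synthetic decaying trace standing in for the simulated line cut.
phi = np.linspace(0.5 * np.pi, 1.5 * np.pi, 200)            # 'far' section of the ring
trace = 3.0e17 * np.exp(-0.15 * phi) * (1 + 0.05 * np.cos(48 * phi))
E_far_at_dipole = extrapolate_to_dipole(phi, trace)
# beta then follows from comparing this extrapolated value with Im E_r at the
# dipole itself (Appendix A); that last ratio is not reproduced here.
```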
9,388.4
2016-10-11T00:00:00.000
[ "Physics" ]
Phylogenetic analysis of dematiaceous fungi isolated from the soil of Guangdong, China 1 Department of Dermatology, The Fifth Affiliated Hospital, Sun Yat-Sen University, Guangdong, Zhuhai 519000, China. 2 Department of Dermatology, The Second Affiliated Hospital, Sun Yat-Sen University, Guangdong, Guangzhou 510120, China. 3 Department of Endocrinology, The Fifth Affiliated Hospital, Sun Yat-sen University, Guangdong, Zhuhai 519000, China. 4 Zhuhai Entry-Exit Inspection and Quarantine Bureau of PRC, Guangdong, Zhuhai 519015, China. Dematiaceous, or darkly pigmented, fungi are uncommon causes of human disease but can be responsible for life-threatening infections in both immunocompromised and immunocompetent individuals (Revankar and Sutton, 2010; Schell, 1995). They are a heterogeneous group whose common distinguishing characteristic is the presence of melanin in their cell walls, which imparts a dark color to their conidia or spores and hyphae; their colonies are typically brown to black as well. Dematiaceous fungi are commonly found in soil and are distributed worldwide (Montenegro et al., 1996; Dixon et al., 1980; Lopez et al., 2004). They are the etiologic agents of phaeohyphomycosis, chromoblastomycosis and mycetoma. Over 100 species and 60 genera of dematiaceous fungi have been implicated in human disease (Matsumoto et al., 1994). As these diseases usually arise from penetration of the causative agent through skin wounds, it is important to search for these agents in nature and to clarify their habitats and the environmental circumstances under which they may infect humans. Agents such as Phialophora spp., Cladosporium spp., Exophiala spp., Sporothrix sp., Wangiella dermatitidis, Bispora betulina, and Scytalidium lignicola have been isolated (Dixon et al., 1980; Yegres et al., 1991), demonstrating the presence of pathogenic dematiaceous fungi in nature, although the identity of most of these strains has not been verified by molecular data. Nishimura (1994) and Nishimura et al. (1989) investigated the ecology of pathogenic fungi in natural and living environments in Colombia, Venezuela, Brazil, China and Japan and succeeded in isolating various species of pathogenic dematiaceous fungi, including Fonsecaea pedrosoi, Phialophora verrucosa and Exophiala spinifera. They did not, however, find Cladophialophora spp., the main causative agents of chromoblastomycosis in China. The taxonomy and identification of dematiaceous fungi are difficult owing to a lack of phenetic characters and a high degree of morphological plasticity. In the present study, we isolated 60 dematiaceous fungal strains from 367 soil samples and identified them further by molecular methods. Revealing the phylogenetic relationships and geographical distribution of dematiaceous fungi in the soil of Guangdong, PR China, will be useful for future studies.
Sample collection There are three climatic zones in Guangdong: the central subtropical (Nanxiong, Lianshan, Lianxian and Shaoguan), the southern subtropical (Yingde, Meixian, Shantou, Guangzhou, Yangjiang), and the northern tropical (Zhanjiang, Xuwen). A total of 365 samples were collected from the three climatic zones, where four to ten collecting sites were set up randomly (Table 1). The work was done during autumn and winter (October 2006 to January 2007), the dry season in Guangdong. Samples were collected from the top 15 cm of surface soil. Using a spoon rinsed with sterile water after each use, approximately 25 g of sample were placed in 100 ml plastic bottles containing a small crystal of paradichlorobenzene (for arthropod control) and returned to the laboratory for processing on the same day. Isolation of dematiaceous fungi Three grams of soil were transferred to a sterile 15 ml glass tube; 10 ml of sterile saline were added, mixed by agitation for 1 min, and left to settle for 20 min, after which the suspension was diluted to 1:100. Then 0.2 ml was collected from the middle part of the soil suspension and plated onto two media, potato dextrose agar (PDA) and Rose Bengal agar, both containing antibiotics (50 mg of chloramphenicol, 10^6 units of penicillin, 200 mg of streptomycin and 200 mg of cycloheximide per liter). Three plates were prepared for each medium; the plates were sealed with plastic film, leaving a vent hole of 3 to 5 mm. The plates were incubated at 26°C for 2 to 3 weeks. Suspected colonies were subcultured on Sabouraud dextrose agar (SDA) at 25°C and checked grossly and microscopically. All isolates were further evaluated by molecular methods. Ultrapure water was added to increase the volume to 50 μl. Each reaction mixture was heated to 95°C for 4 min, followed by 30 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 60 s, with a final incubation for 10 min at 72°C. Direct sequencing and phylogenetic analysis Direct sequencing of PCR products was performed with an ABI PRISM 3100 sequencer (ABI, USA) after labeling with the BigDye Terminator Cycle Sequencing Ready Reaction kit (Applied Biosystems, Foster City, California). The ITS sequences of reference strains from the GenBank collection (Table 2) and of the dematiaceous fungi isolated in this study (Table 3) were aligned using Clustal W. A phylogenetic tree was then constructed by the neighbor-joining (NJ) method in the Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0 (Tamura, 2007). Bootstrap analysis with the MEGA program was performed by taking 500 random samples from the multiple alignment (values > 50 are shown on the branches). The evolutionary distance between organisms is indicated by the horizontal branch length, which reflects the number of nucleotide substitutions per site along that branch from node to endpoint. The rDNA ITS regions (including part of 18S, ITS I, 5.8S, ITS II, and part of 26S) were successfully amplified from all the dematiaceous fungi using universal primers. The size of the PCR products ranged from 500 bp to 650 bp. Each strain of dematiaceous fungi tested was shown to have a unique ITS sequence, although some sequences were very similar.
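The tree-building step can be reproduced with open-source tools; the study itself used Clustal W and MEGA 4.0, so the Biopython sketch below is only an analogous workflow, and the input file name is hypothetical (an already-aligned ITS data set containing the isolates and the GenBank references).

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: a Clustal W alignment of the ITS sequences
# (isolates plus GenBank reference sequences) saved as 'its_aligned.fasta'.
alignment = AlignIO.read("its_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")      # pairwise distances from the alignment
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(calculator.get_distance(alignment))

Phylo.draw_ascii(nj_tree)                        # quick text rendering of the NJ tree
# Bootstrap support (e.g. 500 replicates, as in the study) could be added
# with Bio.Phylo.Consensus.bootstrap_consensus if desired.
```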
DISCUSSION Studies of infectious diseases usually begin with the pathogens, so the isolation and identification of pathogens are the most important steps. Conventional identification of fungi by morphological methods, combined with some biochemical approaches, relies on observation of colony texture, shape and color on culture media and on inspection of conidiophores and of the morphology and mode of formation of conidia. This is difficult, however, because many dematiaceous fungi are pleomorphic: an isolate may produce more than one kind of conidium, or its morphology may vary with culture conditions. It is therefore not easy to determine whether a conidium is produced by a pleomorphic fungus or by a mixture of fungi. Obtaining a highly pure isolate requires repeated subculture on similar media, and even then some colonies are difficult to purify. Furthermore, because many mitosporic fungi are taxonomically closely related, their morphological identification is even more difficult. Earlier studies have shown that molecular identification of fungi agrees with morphological identification (Pechere et al., 1999). In this study, a series of colony complexes was involved; we identified all strains by molecular methods and used morphology to classify some isolates. The results indicate that dematiaceous species are widely distributed in Guangdong soil, although not uniformly. Considered by climatic zone, the western part of the southern subtropical zone showed the highest abundance and density of species, and the eastern part of the southern subtropical zone the lowest. The most frequently found genus was Scolecobasidium, recovered from 17 samples and accounting for 28% (17/60) of the positive isolates; no detailed report of its pathogenicity has been found. Ten strains of Cladophialophora were found in 10 samples, 17% (10/60) of the positive isolates. Among them, the isolates of Cladophialophora carrionii came from a garden in Yuanshan, Qingyuan. This is the first time that C. carrionii, the common agent of chromoblastomycosis in China, has been isolated from environmental samples in Guangdong. Seven strains of Exophiala were recognized, including E. dermatitidis, E. xenobiotica, E. oligosperma, E. pisciphila, E. mesophila, Exophiala sp. and E. eucalyptorum; except for E. eucalyptorum, these are common pathogens. Phaeococcomyces is a black yeast that is hard to identify because of its confusing morphology; five strains were found, but the species could not be determined. Two isolates of Phialophora parasitica were found. The genus Phialophora contains 25 members, of which five species are human pathogens: P. verrucosa, P. richardsiae, P. repens, P. parasitica, and P. cyanescens (Park et al., 2005). P. verrucosa, a common agent of chromoblastomycosis, was not found in this study, possibly because it tends to be distributed in colder regions (Liu et al., 2004). Cladosporium is widely distributed; it is a saprophyte usually found in soil and on plants, is also a plant pathogen, and some Cladosporium species are associated with human infections (Chew et al., 2009; Gugnani et al., 2006). Seven strains were found in this study, including four strains of C. oxysporum, two strains of C. cladosporioides, and one strain of C. sphaerospermum. All three species can cause human phaeohyphomycosis. C. oxysporum occurs in warm conditions (McKemy and Morgan, 1991), consistent with the four strains obtained from farms in Zhuhai and Leizhou, Zhanjiang; these locations are warmer than the Chinese lettuce field in Meizhou, further to the north, from which the C. sphaerospermum strain was isolated. One strain of D. bryoniae, a plant pathogen, was isolated. Some studies have linked Didymella fungi to asthma (Pulimood et al., 2007), because their abundance increases during the storm season, which may trigger asthma attacks. No human disease has been reported for the remaining eleven isolates, including two strains of S. suttonii, one strain of Capnoclium sp., one melanized limestone ascomycete, two strains of Leptosphaeriaceae sp., two strains of Ascomycete sp., one strain of Mycosphaerella alistairii, one strain of C. lunatus, and one strain of Microdiplodia hawaiiensis. Fonsecaea pedrosoi, the major agent of chromoblastomycosis, was not found in this study. Its natural substrate may not be soil but other materials such as plants, litter or wheat stalks; alternatively, experimental factors such as the culture medium, temperature, or competing fungi may have inhibited its recovery. Further studies will be needed to address these questions. Using rDNA ITS sequences from the 60 newly isolated strains and 9 reference strains, we constructed a phylogenetic tree of dematiaceous fungi. Identification based on ITS rDNA sequences agreed with the morphological method while being quicker and more accurate. The NJ tree indicates relationships between genetic distance and some biological habits of the strains: for example, strains that can cause disease in humans and animals often group together (Cladophialophora carrionii, Cladophialophora devriesii, Cladosporium oxysporum, Cladosporium cladosporioides, E. dermatitidis, E. xenobiotica, E. oligosperma, E. pisciphila, E. mesophila, D. bryoniae), whereas strains that cause disease in plants usually cluster separately. There is no evidence that the clustering is related to geographical location; strains of the same kind from different regions show similar genetic distances. Therefore, strains within such a cluster, or with closer genetic distance, may also have potential pathogenicity. Development of the NJ tree will not only aid the identification of fungal strains but will also help guide studies of their biological habits. Figure 1. Phylogenetic tree constructed by the neighbor-joining (NJ) method. The tree was constructed using 500 bootstrap replications (values > 50 are shown on the branches). The evolutionary distance between organisms is indicated by the horizontal branch length, which reflects the number of nucleotide substitutions per site along that branch from node to endpoint. Table 1 .
Localities, sites, and number of soil samples collected in Guangdong, PR China. *: the north region of Guangdong, in the medio-subtropical zone; †: the middle region of Guangdong, in the south subtropical zone; ‡: the south region of Guangdong, in the north tropical zone; §: the east of the middle region of Guangdong; ¶: the central middle region of Guangdong; #: the west of the middle region of Guangdong. Within the south subtropical middle region, the western part is the area west of Zhaoqing Gaoyao, the central part extends from east of Zhaoqing Gaoyao to Heyuan, and the eastern part is the area east of Heyuan.

Table 2. List of reference sequences.

Table 3. List of strains isolated and their sources.
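The article does not state which software was used to build the NJ tree, so the following is only a minimal sketch of one way to construct such a tree with 500 bootstrap replicates from an already aligned set of ITS sequences, using Biopython; the alignment file name is a placeholder.

```python
# Minimal sketch: NJ tree with bootstrap support from aligned ITS sequences.
# Assumes Biopython and a pre-computed multiple alignment "its_alignment.fasta"
# containing the 60 isolates plus the 9 reference sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

alignment = AlignIO.read("its_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")                 # pairwise distance from percent identity
constructor = DistanceTreeConstructor(calculator, method="nj")

nj_tree = constructor.build_tree(alignment)                 # neighbor-joining tree

# 500 bootstrap replicates; attach support values to the NJ tree
# (values > 50 would be the ones shown on the branches in Figure 1).
replicates = list(bootstrap_trees(alignment, 500, constructor))
nj_tree_with_support = get_support(nj_tree, replicates)

Phylo.draw_ascii(nj_tree_with_support)
```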
2,881.2
2011-09-30T00:00:00.000
[ "Environmental Science", "Biology" ]
APCI Evaluation Method for Cement Concrete Airport Pavements in the Scope of Air Operation Safety and Air Transport Participants Life

Many factors have an impact on flight operation safety and the lives of air transport participants. This article addresses one of them: maintaining the airport infrastructure in good condition through proper infrastructure management, in particular of financial and human resources. At the beginning of the article, attention is paid to safety and human life in air transport. An overview of worldwide experience in assessing the technical condition of airport pavements is also presented, including the standard method of Pavement Condition Index (PCI) estimation. The authors then propose an innovative method of assessing the condition of airport pavements based on the Airfield Pavement Condition Index (APCI), which takes into account, apart from the extent of surface damage, parameters such as load capacity, evenness, roughness, and bond strength. This approach gives a broader picture of the actual condition of the airport pavement, which is of great importance for flight operation safety, including the lives of passengers and crew. Next, the described research method is experimentally verified under real conditions at Polish airports. Finally, an example of using the APCI method to assess selected airport pavements at Polish airports is presented. The results of tests performed on five functional elements of a military airfield are reported: a satisfactory result is obtained for three elements and an adequate one for two.

Introduction

Air transport is the most modern and dynamically developing branch of transport, having recorded a severalfold increase in the number of operations worldwide over the last decade, and the number of travelers increases from year to year. Despite the growing attention to the safety of air operations, accidents still happen, and in air transport an accident usually means fatalities. In the past, aviation incidents have occurred due to the ingestion of foreign objects originating, among other sources, from airport pavements. That is why it is so important to maintain airport surfaces in the best possible technical condition. Proper airport management is a key factor with a direct impact on the safety of flight operations and the lives of air transport participants. The management of airport functional elements (AFE) should be based on reliable information on the pavement's surface condition obtained systematically. This approach enables rational planning of airport pavement repairs and renovations. The experience of many countries confirms that proper management of both airport [1] and road [2][3][4][5] infrastructure must rely on detailed and up-to-date information on the condition of the pavement surface. Information about the current condition of the pavement, as well as the ability to predict and forecast its technical condition in the future, plays an important role in airport pavement management. At the beginning, the standard PCI method is described. Then, an innovative procedure for APCI determination is presented. Next, an example of using the APCI method is shown, including the results of pavement condition evaluation on a real airfield. Finally, a discussion and conclusions are presented.

Materials and Methods

Currently known methods of assessing the condition of airport pavements are mainly based on indicators describing the degree of pavement degradation.
In a book intended for engineers, government institutions, and universities [1], the author describes in detail the method for determining the PCI developed and used by the U.S. Army Corps of Engineers. It is a standard method used by many institutions around the world to assess airport and road surfaces as well as parking lots, including the Federal Aviation Administration, the U.S. Department of Defense, the American Public Works Association, and many more. The PCI determination method for airport and road surfaces has also been standardized and published in the American standards ASTM D5340 and ASTM D6433. The PCI indicator is a dimensionless number from 0 to 100, where 0 means the surface is completely degraded and 100 means the surface is in perfect condition. Figure 1 shows the standard scale of the PCI indicator and the simplified scale. The PCI indicator is determined from a visual inspection of the pavement with respect to distress type, quantity, and severity [13].

In order to inspect airport pavements, each AFE is divided into smaller elements constituting the test samples according to standard instructions. In the case of concrete pavements, a unit sample of the pavement is 20 ± 8 full-size concrete slabs. An asphalt pavement is virtually divided into areas of approximately 460 ± 180 m2 (5000 ± 1000 sq ft). Inventorying the deteriorations on each sample can be time consuming, which increases the costs incurred for this purpose. The method therefore allows the number of samples to be tested to be limited, thus reducing costs and inspection time; the disadvantage of this solution is a reduction in the quality of the results. Formula (1), specifying the minimum number of samples needed to obtain results at the assumed confidence level, uses: N, the total number of samples obtained after dividing the element; e, the allowable PCI estimation error; and s, the standard deviation of the PCI results obtained on single samples. The next step is to determine the interval at which samples will be selected for evaluation, so that the evaluated samples are evenly distributed over the element. The interval is the ratio of the total number of samples obtained after dividing the element to the minimum number of samples to be inspected. For example, when the total number of samples is 47 and the minimum number of samples is 13, the interval will be 47/13, i.e., 3.6 (rounded down to 3). Therefore, for deterioration inspection, samples number 3, 6, 9, 12, (...), 45 should be selected.
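As an illustration of this sampling step, the short sketch below computes the minimum number of sample units and the systematic selection of units. The minimum-sample formula is written in the form commonly quoted for ASTM D5340, since the article does not reproduce Formula (1) itself, and the default values of e and s are typical assumptions rather than values taken from the text.

```python
import math

def min_sample_units(N, s=10.0, e=5.0):
    """Minimum number of sample units to survey at the assumed confidence level.
    Commonly quoted form of Formula (1): n = N*s^2 / ((e^2/4)*(N-1) + s^2).
    N = total sample units, s = assumed PCI standard deviation, e = allowable PCI error.
    With the typical defaults e = 5 and s = 10, N = 47 gives 13, matching the worked
    example in the text."""
    n = (N * s ** 2) / ((e ** 2 / 4.0) * (N - 1) + s ** 2)
    return math.ceil(n)

def systematic_sample_numbers(N, n):
    """Evenly spaced sample units: the interval is N/n rounded down, as in the
    worked example in the text (47/13 = 3.6, rounded down to 3)."""
    interval = max(N // n, 1)
    return list(range(interval, N + 1, interval))

print(min_sample_units(47))                # 13
print(systematic_sample_numbers(47, 13))   # [3, 6, 9, ..., 45]
```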
Despite the possibility of reducing the number of test samples, it is suggested that at least 50% of the samples at critical points of the airport be checked; in terms of safety, it is very reasonable to evaluate every sample. Inspectors write down each observed damage in accordance with the legend, specifying its type, severity, quantity, and approximate place of occurrence. Based on the collected results, data tables containing the type of damage, its severity, its total quantity, and its density are created for each sample. Shahin [1] gives the next steps in the process of determining the PCI indicator for a single sample:
1. Determination of deduct values from the deduct value curves for each distress type and severity. Figure 2 shows a typical deduct value calculation curve.
2. Determination of the maximum allowable number of deducts (m), where HDVi is the highest individual deduct value for sample unit i.
3. Determination of the maximum corrected deduct value (CDV): a. determine q, the number of deducts with a value greater than 5.0; b. determine the total deduct value (TDV), i.e., the sum of all individual deduct values; c. determine the CDV from q and TDV using the correction curves for Portland cement concrete (PCC) surfaced airfield pavements; d. reduce the smallest individual deduct value greater than 5.0 to exactly 5.0; e. repeat steps a through c until q equals 1.0. The largest of the determined CDVs is the maximum CDV.
4. Calculation of the PCI from the maximum CDV.
In the case where the assessed functional element of the airport is divided into samples of the same area, the PCI for the whole element is calculated as the arithmetic mean of the PCI indicators estimated for the individual samples.
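To make the iterative CDV procedure concrete, the sketch below follows steps 1-4 in code. The correction curves are tabulated in the standard and are only stubbed here, and the closed forms used for the m limit and for PCI = 100 − max CDV are the commonly quoted ones, since the article's text does not reproduce its formulas.

```python
def max_allowable_deducts(highest_deduct_value):
    """Commonly quoted form for step 2: m = 1 + (9/98) * (100 - HDV), capped at 10."""
    return min(1.0 + (9.0 / 98.0) * (100.0 - highest_deduct_value), 10.0)

def corrected_deduct_value(total_deduct, q):
    """Placeholder: a real implementation interpolates the PCC correction curves for q."""
    raise NotImplementedError("look up CDV on the correction curves for PCC airfield pavements")

def pci_for_sample(deduct_values, cdv_lookup=corrected_deduct_value):
    """Steps 3-4 for one sample unit; deduct_values is assumed to be already limited
    to the m largest deducts from step 2."""
    deducts = sorted(deduct_values, reverse=True)
    max_cdv = 0.0
    while True:
        q = sum(1 for d in deducts if d > 5.0)          # step 3a
        tdv = sum(deducts)                              # step 3b: total deduct value
        max_cdv = max(max_cdv, cdv_lookup(tdv, q))      # step 3c
        if q <= 1:                                      # step 3e: stop after q reaches 1
            break
        # step 3d: reduce the smallest individual deduct value above 5.0 to exactly 5.0
        for i in range(len(deducts) - 1, -1, -1):
            if deducts[i] > 5.0:
                deducts[i] = 5.0
                break
    return 100.0 - max_cdv                              # step 4: PCI = 100 - max CDV

# Demonstration with a crude stand-in for the correction curves (not the real tables).
print(pci_for_sample([25.0, 18.0, 7.0, 3.0],
                     cdv_lookup=lambda tdv, q: tdv if q <= 1 else 0.7 * tdv))
```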
When the sample surfaces are not of the same size, the PCI value is calculated as the weighted average of the PCI indicators estimated for the individual samples, with the size of each sample surface taken as the weight. The authors developed a procedure for assessing the technical condition of AFE surfaces based on the measured pavement parameters. The novelty of the proposed method is that it considers both the deterioration and the repair inventory; load capacity, roughness, evenness, and bond strength are also included in the APCI model. Moreover, the main idea of the APCI model is that the pavement evaluation can be carried out only if each input parameter is greater than its minimum requirement; otherwise, the AFE should not be operated (a small computational sketch of the weighted averaging and of this admissibility check is given at the end of this overview).

Results

The assumptions and the proposed levels of the APCI (Airfield Pavement Condition Index), adapted to airports in Poland, are presented below. In addition, estimated APCI values for several selected AFEs are provided, based on real data obtained during surface visual inspections.

Procedure for APCI Evaluation

The procedure was created to standardize the assessment of the technical condition of AFE pavements on the basis of data obtained from various sources. The technical condition of the pavement is assessed on the basis of field tests and laboratory tests. Field tests include:
• deterioration and repair inventory,
• pavement load capacity assessment based on the elastic deflections obtained in the HWD (Heavy Weight Deflectometer) test,
• pavement roughness assessment,
• pavement evenness assessment based on the results obtained in the planograph test,
• surface bond strength by pull-off.
Laboratory tests include:
• structural tests of concrete,
• strength tests, including concrete compressive strength and concrete tensile strength,
• climate tests, including freeze-thaw resistance and resistance to de-icing agents.
The general scheme of the procedure is shown in Figure 3. The main idea of the methodology is the assessment of the individual parameters based on the results obtained. If a parameter's limit value is not met, corrective work should be undertaken and the parameters reassessed. When all the tests are completed, the condition of the AFE's pavement is determined taking into account the results of the field tests, and its technical condition is assessed taking into account both the pavement condition and the laboratory test results. The process of assessing the condition of the AFE's pavement is illustrated in Figure 4. An example of the output data is presented in Figure 5; the plot presents changes in the technical condition of five aprons from one of the Polish airports based on the APCI index.
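The following minimal sketch illustrates the two points referenced above: the area-weighted element PCI and the admissibility check implied by the APCI model. The parameter names and threshold values are illustrative assumptions, not values taken from the article.

```python
def element_pci(sample_pcis, sample_areas):
    """Element-level PCI as an area-weighted average of the sample-unit PCIs;
    this reduces to the arithmetic mean when all sample areas are equal."""
    total_area = sum(sample_areas)
    return sum(p * a for p, a in zip(sample_pcis, sample_areas)) / total_area

def afe_may_be_evaluated(parameters, minimum_requirements):
    """Admissibility check assumed from the APCI idea: every input parameter must
    exceed its minimum requirement, otherwise the AFE should not be operated."""
    return all(parameters[name] > minimum_requirements[name]
               for name in minimum_requirements)

# Illustrative values only (not taken from the article).
print(element_pci([72, 65, 80], [500, 500, 450]))
print(afe_may_be_evaluated({"PCN": 60, "friction": 0.6, "evenness": 0.9, "bond": 1.6},
                           {"PCN": 40, "friction": 0.5, "evenness": 0.7, "bond": 1.2}))
```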
Input Data

The model input data are the results of the assessment of load capacity, degradation, roughness, evenness, and concrete tensile strength. Tests and measurements are carried out in accordance with the applicable standard methods. The load-bearing capacity of the airport pavement is assessed in accordance with NO-17-A500:2016, Airfield and road pavements - Load capacity testing [24], which is the standard method for deflection measurement with a Heavy Weight Deflectometer (HWD) used for airport testing and is the basis for the analysis. The thickness and stiffness of the structural layers, the concrete tensile strength, and the soil base parameters directly under the AFE structure are taken into account; for this purpose, a full investigation of the assessed AFE structure is carried out. The result is the Pavement Classification Number (PCN) indicator and/or the number of aircraft flight operations. The level of surface deterioration is determined on the basis of a visual inspection of the surface. The deterioration and repair inventory is made taking into account the type and extent of each damage. The basic assessed element is a single 5 × 5 m slab, whose location is identified by numbers assigned in a strictly defined order. Based on the inspection results for the basic elements (concrete slabs), the inspected area can also be analyzed per hectometer, which corresponds to an area 100 m long and 5 m wide. Pavement friction is tested in accordance with the Polish defense standard NO-17-A501:2015, Airfield pavements - Friction testing [25], and with the requirements described in Annex 14 of the International Civil Aviation Organization (ICAO) [26], in Standard 9137-AN/898, Part 2, Airport Service Manual [27], and in Advisory Circular FAA 150/5320-12c [28]. The measurement is made with a device for continuous measurement of the friction coefficient in accordance with the above documents, ensuring that the thickness of the water film under the measuring wheel is at least 1 mm. The test can be carried out at a speed of 65 km/h or 95 km/h, and the results are compared with the values from the appropriate tables in the documents above. An example of such a device is shown in Figure 4 (PG-3.1.3).
Measurements of the unevenness of the assessed surfaces are made in accordance with NO-17-A502:2015, Airfield pavements - Evenness testing [29]. The measuring device measures and records the height of the clearance between the theoretical line connecting the bottoms of the device's wheels and the pavement. Unevenness amplitudes are measured as a function of the path increment, every 10 cm of the tested route, thus creating a set of numbers that are sent to a computer. Unevenness is measured with an accuracy of 0.3 mm. The measuring route is divided into 5 m sections (the most common panel dimension), so the pavement's evenness assessment is reduced to the assessment of 5 m long sections. The standard requirements allow surface deviations up to the values specified in the standard [29]. The assessment of the strength of the surface layer of concrete pavements is carried out in accordance with PN-EN 1542:2000, Products and systems for the protection and repair of concrete structures; the test method consists of measuring the bond strength by pull-off [30]. The test consists of gluing a metal disc with a diameter of 50 mm to a previously appropriately drilled (cored) test surface. Then, using a specialized device, the disc is pulled off with a constantly increasing force. The sought parameter is determined by dividing the maximum force causing the disc to detach from the structure by its surface area.

Process Analysis

An analysis of the condition of the cement concrete (pl. BC) airport pavement is made with the developed model indicator using the input data. The indicator value is calculated as a weighted combination of the parameter ratings, where w_i denotes the characteristic weight for each type of parameter, w is the weight sum, U is the load-bearing capacity, D the deterioration, S the friction, R the evenness, and Wod the bond strength by pull-off (a computational sketch is given below). The weights were selected using the expert method, taking into account the many years of experience of experts in airfield pavement construction. The results of previous studies carried out at airports throughout Poland were also taken into account; the functional elements of both military and civil airports were examined.

Output Data

The obtained APCI values are evaluated according to the criteria of the technical condition assessment. The criteria are presented on the following detailed scale:
• Very poor (APCI = 40 ÷ 27) - The surface has mostly medium and high severity damages, which cause significant maintenance and operational problems. Immediate intensive maintenance and repairs are needed.
• Serious (APCI = 26 ÷ 12) - The surface usually has high severity damages, which cause restrictions in its use. Immediate repair is needed.
• Unfit (APCI = 11 ÷ 0) - Deterioration of the pavement has reached a level at which safe air operations are no longer possible. Complete reconstruction is necessary.
The above APCI scale can also be presented in a simplified way. The limit values of the individual APCI levels were determined on the basis of many years of experience and the results of research work obtained over the years.

The Results of the Cement Concrete Airport Pavements Evaluation

The assessment of the surface condition based on the method described in Section 3.1 was made on the basis of tests and measurements carried out at one of the Polish military airports. All functional elements of the airport were examined; this article presents several of them, including the runway and four aprons. The measurements were made in a short period of time, thus ensuring satisfactory repeatability conditions.
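The sketch below illustrates the aggregation and rating steps just described. The weighted-average form, the example weights, and the example ratings are assumptions for illustration; the article does not publish the expert weights, and only the three lowest rating bands are quoted in the text.

```python
def apci(ratings, weights):
    """Assumed aggregation: APCI = sum(w_i * x_i) / w, where w = sum(w_i) and the
    ratings x_i cover load capacity (U), deterioration (D), friction (S),
    evenness (R), and bond strength by pull-off (Wod)."""
    w = sum(weights[k] for k in ratings)
    return sum(weights[k] * ratings[k] for k in ratings) / w

def apci_band(value):
    """Only the three lowest bands are quoted in the text; higher bands are omitted here."""
    if value <= 11:
        return "unfit"
    if value <= 26:
        return "serious"
    if value <= 40:
        return "very poor"
    return "above the quoted bands"

# Illustrative ratings (0-100) and weights -- not values from the article.
ratings = {"U": 85, "D": 70, "S": 90, "R": 80, "Wod": 60}
weights = {"U": 0.25, "D": 0.30, "S": 0.15, "R": 0.15, "Wod": 0.15}
score = apci(ratings, weights)
print(round(score, 1), apci_band(score))
```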
An inventory of deteriorations and surface repairs was made based on instructions created at the Air Force Institute of Technology; the inventory was carried out by experts with many years of experience in this field. Deflection measurements were carried out using an HWD (Heavy Weight Deflectometer) airport deflectometer, and PCN indices were then calculated. The thicknesses of the structural layers and the material characteristics of the pavement were identified on the basis of cylindrical samples taken from the pavement. The evenness of the pavement was assessed from measurements with the modernized P-3z planograph, while the roughness measurements were made with the ASFT T-10 friction tester. The bond strength of the surface layer was tested with a pull-off apparatus. The presented assessment results relate to five cement concrete functional elements. The results obtained in the tests are shown in Table 1. The charts (Figures 6-10) present the deterioration, load-bearing capacity, evenness, roughness, and surface-layer bond strength results for the runway (RWY) and four aprons (APRON1-APRON4), while Figure 11 shows the estimated pavement condition indicator. According to the analysis above, the APRON 1, APRON 3, and RWY surfaces can be classified as being in satisfactory condition. In contrast, the APRON 2 and APRON 4 surfaces qualify as being in adequate condition and will require routine repairs in the near future. The presented results apply to the entire AFE; to obtain greater accuracy, the AFE should be divided into smaller elements and the APCI analysis performed again.

Discussion

Many factors have an impact on operation safety and the lives of air transport participants. One of them is keeping the airport infrastructure in an appropriate condition, for which proper infrastructure management, in terms of financial and human resources administration, is important. As global experience shows, airport services are supported in this field by scientists, who provide them with specialized tools, including pavement assessment systems with PCI data sharing. The method for determining the PCI is standardized, and the procedure is presented in this article. The method was developed for the needs of the US Army by the Corps of Engineers. Currently, researchers are refining the calculation methods and, more importantly, methods of predicting the value of this parameter. New systems are being created that go beyond surface deterioration inspection alone; the new approaches include the IRI index or the measurement of elastic deflections with the FWD device, and others suggest adding a friction parameter to the model. Attempts are also being made to automate the process of determining and forecasting the PCI using artificial neural networks or statistical methods (ANOVA).
In this article, the authors have proposed a new method for evaluating the Airfield Pavement Condition Index (APCI). The innovative approach takes into account several factors at the same time, including the degree of pavement deterioration determined on the basis of an inventory of both pavement deteriorations and repairs. In addition, the model includes the pavement's load-bearing capacity, evenness, and roughness. The load-bearing capacity is determined from deflection measurements and enters the model as the PCN indicator. Evenness is considered in the model as a defectiveness parameter, while roughness is represented by the friction coefficient. Because jet aircraft maneuver on airport pavements, it is important to keep the surface clean. There must be no loose elements in the Foreign Object Damage (FOD) zones, so any fracturing of the surface is a potential threat to aircraft safety and to people's lives. In order to prevent, and react in advance to, excessive surface fracturing, the pavement is monitored with the bond strength pull-off test.
The proposed model also takes this parameter into account. Each type of parameter enters the model with a characteristic weight, and the weights were determined by the expert method. In order to adapt the scale to the conditions at Polish airports, the authors developed a detailed scale of APCI values according to which the technical condition of the pavement is assessed. The number of levels remained the same as in the standard PCI method, while the limit values changed slightly. In addition, a simplified scale containing three states (adequate, degraded, and unsatisfactory) was developed. The limit values were determined on the basis of many years of experience and the results of research work obtained over the years. The article presents an example of using the APCI method for an airport pavement evaluation. Five AFEs of an active military airport in Poland were assessed, taking into account the degree of degradation, load-bearing capacity, roughness, evenness, and concrete bond strength. Undoubtedly, pavement degradation had the greatest impact on the final value of the APCI index. The APRON 2 and APRON 4 surfaces, which were 25% and 19% degraded, obtained the lowest APCI ratings. Despite the fact that APRON 4 had the most degraded pavement, APRON 2 obtained the lowest APCI index of 68. The largest contribution to this situation came from the concrete bond strength test result, which significantly reduced the final value of the APCI. This behavior of the model shows that a surface that is visually in good condition may still pose a threat to aircraft, and thus to the safety of passengers and crew members. Only taking into account a wide spectrum of airport pavement parameters gives a real picture of its condition. For operational purposes, a single APCI indicator for the entire AFE is sufficient, showing whether or not flight operations can be performed on a given element. For maintenance purposes, attention should be paid to the APCI indicators of individual AFE sections. Due to the large size of the elements, they should be divided into smaller parts with dimensions adapted to the needs of the maintenance services; for example, the runway can be divided into 100 m sections. This narrowing of the area under consideration allows a more accurate assessment of the condition of the entire AFE, enabling repairs to be planned starting from the worst areas, those with an APCI lower than the average APCI for the whole AFE.

Conclusions

Maintaining the airport infrastructure in good condition is a key factor in increasing the level of flight safety and protecting human lives and health. The tool created by the authors is aimed at supporting airport services in managing the technical condition of airport pavements, enabling rational and effective use of the public funds intended for airport infrastructure maintenance. The article presents sample results of the evaluation of airport pavement condition at a Polish facility. Five airport functional elements were analyzed, including the runway and four aprons, and the values of each of the assessed parameters and the final APCI for the evaluated airport functional elements are presented. The method proposed by the authors for assessing the condition of airport pavements based on the APCI takes into account not only distresses but also other pavement parameters. In contrast to the methods used so far, the APCI method includes both the deterioration and repair inventory, load capacity, roughness, evenness, and bond strength.
A broad approach to pavement parameters and the application of weighting factors are the main advantages of this method, especially with regard to flight operation safety. In the preceding stage of this work, a methodology for airport pavement degradation assessment was developed that takes into account the repairs carried out and the harmfulness of the deteriorations. Work is currently underway to determine the impact of specific parameters on the APCI model. In the future, there are plans to use artificial neural networks to optimize the model and predict the surface condition in subsequent years of operation. Work is also ongoing on a system supporting Polish airport services in pavement condition management. Conflicts of Interest: The authors declare no conflicts of interest.
7,043.4
2020-03-01T00:00:00.000
[ "Engineering" ]
Combining Multi-Perspective Attention Mechanism With Convolutional Networks for Monaural Speech Enhancement

The redundant convolutional encoder-decoder network has been proven useful in speech enhancement tasks. This network can capture the localized time-frequency details of speech signals through its fully convolutional structure and the feature selection capability that results from the encoder-decoder mechanism. However, the extraction of informative features, which we regard as important for the representational capability of speech enhancement models, is not considered explicitly. To solve this problem, we introduce the attention mechanism into the convolutional encoder-decoder model to explicitly emphasize useful information from three aspects, namely channel, space, and concurrent space-and-channel. The attention operation is specifically achieved through the squeeze-and-excitation mechanism and its variants. The model can adaptively emphasize valuable information and suppress useless information by assigning weights from different perspectives according to global information, thereby improving its representational capability. Experimental results show that the proposed attention mechanisms use only a small fraction of additional parameters to effectively improve the performance of CNN-based models compared with their normal versions, and that they generalize well to unseen noises, signal-to-noise ratios (SNRs), and speakers. Among these mechanisms, the concurrent space-channel-wise attention exhibits the most significant improvement, and when compared with the state of the art, they produce comparable or better results. We also integrate the proposed attention mechanisms with other convolutional neural network (CNN)-based models and obtain performance gains. Moreover, we visualize the enhancement results to show the effect of the attention mechanisms more clearly.

I. INTRODUCTION

Speech enhancement aims to remove background noise from degraded speech without distorting the clean speech, thereby improving speech quality and intelligibility. This technique is widely used in many applications, such as speech recognition [1], hearing aids [2], and VoIP [3]. Common speech enhancement techniques fall under two major categories: traditional and machine-learning-based methods. Traditional methods mainly include spectral subtraction [4], Wiener filtering [5], statistical-model-based methods [6], and subspace-based methods [7]. These methods mainly use unsupervised digital signal analysis approaches and achieve separation by decomposing the speech signal to determine the characteristics of the clean speech and the noise. They can eliminate noise to some extent. However, their performance greatly degrades when dealing with nonstationary noises because they are based on the assumption of stable noises. To overcome these limitations, supervised methods that can automatically discover the relationship between noisy and clean speech signals are constantly being proposed.
Among these methods, deep-learning-based methods have dramatically boosted denoising performance recently, attracting numerous researchers and resulting in the proposal of many neural-network-based models [8]-[10], such as deep neural networks (DNN) [11]-[13], recurrent neural networks (RNN) [14]-[16], convolutional neural networks (CNN) [17]-[21], and other variants. Xu et al. [8] proposed a regression-based DNN for mapping the log-power spectra features of noisy speech to those of clean speech. This model achieves satisfactory results, thereby proving the effectiveness of deep-learning-based methods. However, a DNN is composed of several fully connected layers that have difficulty modeling the temporal structure of a speech signal [22]. In addition, the number of parameters increases rapidly with the number of layers and nodes, thereby raising the computational burden. In recent years, CNNs have been introduced into speech processing to capture implicit information in the speech signal while reducing the number of parameters. A CNN can tolerate small shifts of speech features in the frequency domain within a certain range, thus coping with speaker and environmental changes [22]. Fu et al. [18] proposed a signal-to-noise ratio (SNR)-aware CNN to estimate the SNR of an utterance and then enhance it adaptively, thus improving the generalization ability. Hou et al. [20] employed both audio and visual information for enhancement. Bhat et al. [21] proposed a multi-objective learning CNN and implemented it on a smartphone as an application. Recently, more encoder-decoder-based CNN models have been proposed. Park and Lee [23], considering the disadvantages mentioned previously, removed the fully connected layers in the CNN and introduced the fully convolutional network (FCN) into the field of speech enhancement. Many works have since been proposed based on the FCN. Tan and Wang [24] proposed the convolutional recurrent network (CRN), which inserts two long short-term memory (LSTM) layers between the encoder and the decoder of the FCN. Grzywalski and Drgas [25] added gated recurrent unit (GRU) layers into each building block of the FCN. These models improve the representational capability by exploiting the temporal modeling capability of RNNs. The max-pooling layers in the FCN extract the most active parts of certain areas, while detail information is lost. Therefore, the FCN can achieve good results in fields such as speech recognition, where obtaining the overall characteristics is enough. In speech enhancement, however, detail information is essential for restoring clean speech. To solve this problem, [23] also proposed the redundant convolutional encoder-decoder (RCED), which discards the max-pooling layers and the corresponding upsampling layers in the FCN to maintain the feature map size, thereby retaining the details and achieving improved performance. To further improve the performance of CNN-based models, many methods focus on the depth, width, and cardinality of the networks [26]. Unlike previous works, in this work we integrate the attention mechanism with the RCED to improve its representational capability. Attention [27] is a brain signal processing mechanism that allows the human brain to automatically assign different amounts of attention to each part of the input, thus effectively capturing informative features. The fusion of deep-learning-based models and the attention mechanism can help models emphasize informative features and suppress useless ones.
At present, attention has been widely applied in speech recognition [28], answer selection [29], and session prediction [30]. Although the attention mechanism is not yet common in speech enhancement, there are three reasons why we think it can play a role. First, in a noisy environment, the human auditory system can selectively focus on speech while suppressing noise through the attention mechanism [31]; applying attention may therefore help the model simulate the human auditory system and capture speech from noise, thus improving its expressive ability. Second, [32] introduced the attention mechanism into LSTM to assign a weight to each of the past several frames and then calculated their weighted sum as the context frame for each timestep. This model achieved satisfactory results, thereby demonstrating the efficiency of the attention mechanism in monaural speech enhancement. Finally, given that spectrograms have specific patterns, they can be treated as images and processed using image processing methodologies [33]. At present, most attention-based methods multiply two vectors point-to-point to calculate their similarity. In the field of image processing, Hu et al. [34] proposed a new type of attention mechanism for CNNs called squeeze-and-excitation (SE), which can summarize the information of all output channels with a small number of parameters and learn to give a weight to each channel according to the global information. The SE mechanism consists of two steps and is used to assign a weight to each channel according to all feature maps. The squeeze step integrates the global spatial information and generates a channel descriptor, each element of which summarizes the information of one feature map. In the excitation step, the descriptor is adjusted and the attention weight of each channel is determined. Finally, the weights are used to recalibrate the feature maps, through which the model can emphasize useful information. The recalibration benefits of the SE layers accumulate throughout the entire network. Recently, SE has been widely used in image processing and has obtained satisfactory results [33], [35], [36]. Later, Roy et al. [37] and Woo et al. [26] extended SE to the space and concurrent space-and-channel domains and achieved satisfactory results. Motivated by these works, we introduce SE as the attention mechanism and combine it with the RCED, thus addressing the problem that the RCED has difficulty effectively exploiting global information [25] or explicitly judging the importance of different features. Considering that the information in time-frequency points is also of great importance, we propose a spatial SE (SSE) mechanism that assigns weights spatially. Moreover, we exploit channel-wise and spatial information concurrently to achieve more accurate weight prediction. The representational capability of the original RCED can be improved by explicitly emphasizing useful information through SE. Considering the accumulation benefits of SE, we add one SE layer at the end of each building block in the RCED. Experimental results show that the SE mechanism can improve the performance effectively and shows good generalization ability. We also integrate it with other CNN-based models and find that this approach improves performance as well. The rest of this paper is organized as follows. The next section describes the general framework of the proposed model. Section III describes the proposed SE mechanisms.
Sections IV and V provide the configurations and results of the conducted experiments, respectively. Section VI presents the concluding remarks of this study.

II. MODEL DESCRIPTION

The original RCED is an encoder-decoder-based structure. Except for the last one, nearly all building blocks contain a convolutional layer followed by a batch normalization (BN) layer and a rectified linear unit (ReLU) layer. The last compression block contains only one convolutional layer to summarize the information obtained in the previous operations. Unlike a normal FCN, the RCED discards the max-pooling layers and the corresponding upsampling layers to alleviate detail loss. Therefore, the feature map size of each layer in these blocks remains consistent, while the number of feature maps changes. In this way, the encoder can be seen as generating many redundant features for each time-frequency point with an increasing number of filters, with each channel corresponding to one feature type. Given that some features are important to the mapping accuracy while others are not, the decoder is used to gradually remove the unwanted features. However, the decoder in the RCED compresses features simply by directly changing the number of convolutional kernels. This makes it difficult for the model to precisely identify whether a feature map is important, thereby limiting its representational capability. Considering that the attention mechanism can help the model focus on important features, we introduce it to solve this problem. For the attention mechanism, we choose the SE mechanism in [34], and we add SE to both the encoder and the decoder. The overall architecture of the proposed attention-based redundant convolutional network (ARCN) is illustrated in Fig. 1 (a). The ARCN consists of nine attention-based convolutional blocks (ACB) and a final convolutional layer. Each cube corresponds to an ACB block, and the parameter above it (i.e., ACB#id_inNum_outNum) indicates the id and the numbers of input and output feature maps. The ACB architecture is illustrated in Fig. 1 (b). In addition to the convolutional, batch normalization, and activation layers included in the building block of the RCED, the ACB also contains an attention layer at the end. In this manner, we increase the capability of the model to determine the importance of each feature. Moreover, instead of ReLU, we use Leaky ReLU [38] as the activation function to avoid zero gradients. For each input utterance, an enhanced utterance is generated by the model after processing.

III. ATTENTION MECHANISM

SE is a self-gating mechanism that can adaptively recalibrate the output feature maps. In this way, the model can selectively emphasize valuable features and suppress useless ones in consideration of the global information. However, the original SE in [34] only focuses on the channel aspect, whereas in the field of speech enhancement the information in time-frequency points also plays an important role. Moreover, in image segmentation, Roy et al. [37] successfully extended the SE mechanism to the space domain by point-wise convolution (i.e., squeezing along the channels and exciting spatially). Inspired by this work, we consider improving the expressive ability of the model by assigning weights according to the global information from the channel aspect, the space aspect, and both.
Let U ∈ R^(H×W×C) be the output feature maps of each building block F_tr(·) in the RCED, where H, W, and C represent the height, the width, and the number of feature maps, respectively. F_tr(·) consists of a convolutional layer, a BN layer, and an activation layer. We then apply an SE layer to U to select valuable information, thereby providing a better representation to the subsequent blocks.

A. CHANNEL-WISE SE

The structure of the SE mechanism is shown in Fig. 2 (a). As it assigns a weight to each channel, we name it channel-wise SE (CSE). We divide U into C feature maps according to the channels, each denoted as u_i ∈ R^(H×W), so that U = [u_1, u_2, ..., u_C]. First, we obtain the channel descriptor Z, where each element z_k aggregates the information of the corresponding channel using global average pooling. Then, the excitation operation is exploited to calculate the weights of all channels as S ∈ R^C according to the global information obtained in the previous operation, S = σ(g(Z)) = σ(W_2 δ(W_1 Z)), where g(·) and σ represent a gating mechanism and a sigmoid function, respectively. g(·) is formed by two fully connected layers and the ReLU function δ: W_1 ∈ R^((C/r)×C) is used for dimension reduction and W_2 ∈ R^(C×(C/r)) for dimension restoration, where r is the reduction ratio that can be used to vary the capacity and the computational cost of the SE layers; the value of r is set as needed. After obtaining the collection of weights, we apply channel-wise multiplication between each feature map in U and its corresponding weight in S to recalibrate the features.

B. SPATIAL SE

Roy et al. [37] argued that pixel-wise information is important for images and extended SE to the spatial aspect in image segmentation. Unlike the original SE, which calculates the importance of channels, the SSE assigns a weight to each pixel. Given that a spectrogram is similar to an image, we hypothesize that the information contained in the time-frequency points is also of great importance in speech enhancement. Therefore, the use of SSE can be perceived as removing invalid information from every time-frequency embedding. The SSE structure is illustrated in Fig. 2 (b). SSE squeezes along the channels and excites spatially. We first cut U into a total of H*W tensors of shape 1*1*C, denoted as u_{i,j}. Second, we implement the squeeze operation and obtain the weight for u_{i,j}. Extracting additional information helps to predict the spatial weights. Given that the average pooling layer can reflect the overall information of the feature maps, the max-pooling layer can detect certain features, and the dilated convolutional layer can effectively capture contextual information at different scales, we use all of them to process U when computing the spatial weights. The outputs of these operations are denoted accordingly, where dlt x indicates that the dilation rate of the dilated convolutional layer is x. Then, we concatenate them and integrate them with a convolutional layer to obtain the spatial weight, as in Equation (3), where Conv_n is the convolution operation and n is the number of output feature maps. The kernel sizes on the time and frequency axes are 11 and 9, respectively. As the attention weight should lie in [0, 1], a sigmoid function is then applied, yielding the tensor weight ∈ R^(H×W), which is used in the excitation operation. Each recalibrated time-frequency point is obtained as û_{i,j} = weight_{i,j} · u_{i,j}, where u_{i,j} is the embedding representation that corresponds to the point at time i and frequency j, and
weight_{i,j} is the relative importance of u_{i,j}. In this way, the model can concentrate on the informative features from the time-frequency aspect.

C. SPACE-CHANNEL-WISE SE

In addition to assigning weights to the feature maps alone or to the time-frequency embeddings alone, we also explore the use of both to simultaneously recalibrate U spatially and channel-wise. We use four ways of combining them: parallel addition, parallel concatenation, a sequential channel-space operation, and a sequential space-channel operation. These combinations can encourage the model to extract important features accurately by exploiting different aspects of the information.

1) PARALLEL CONCATENATION. In the parallel space-channel-wise concatenation method (SCconcat), we first obtain the space-weighted output U_sp and the channel-weighted output U_ch concurrently according to SSE and CSE. Then, we concatenate them along the channel dimension. Finally, we use a point-wise convolution to integrate these two aspects of information and obtain an output of the same size as the original input, output = PConv_C([U_sp, U_ch]), where PConv_n is the point-wise convolutional layer, n is the number of output feature maps, C is the number of input feature maps, and output is the output of the SE layer.

2) PARALLEL ADDITION. In the parallel space-channel-wise addition method (SCadd), we obtain U_sp and U_ch in the same way as in the parallel concatenation method. Because they have the same shape, we can directly add them point-to-point to obtain the output of each ACB.

3) SEQUENTIAL CHANNEL-SPACE. In the sequential channel-space method (S1C2), we apply the CSE and SSE operations sequentially to the input U, output = SE_sp(SE_ch(input)), where input is the input of the SE layer, and SE_sp and SE_ch represent the SSE and CSE operations, respectively.

4) SEQUENTIAL SPACE-CHANNEL. C1S2 is similar to the sequential channel-space method, except that it executes the SSE operation first and the CSE operation next, output = SE_ch(SE_sp(input)).

IV. EXPERIMENTAL CONFIGURATION

In this work, we select the TIMIT corpus [39] as the clean speech. TIMIT contains 6300 sentences, 10 of which are spoken by each of 630 speakers from 8 major dialect regions of American English. We remove the dialect sentences (the SA sentences) from its training set and use the remaining 3696 utterances for training. The TIMIT core test set containing 192 utterances is used as the test set. The training set is mixed with four kinds of noises (babble, factory1, destroyerops, and destroyerengine) at three SNR levels (−5, 0, and 5 dB). In the test set, we additionally use the nonstationary factory2 noise, stationary white noise, and two other SNRs (−10 and 10 dB). All noises come from Noisex92 [31]. For evaluation, we choose the short-time objective intelligibility (STOI) [29] and the perceptual evaluation of speech quality (PESQ) [30]. STOI is positively related to subjective speech intelligibility, with a value range of 0 to 1; the larger the value, the better the speech intelligibility. PESQ focuses on evaluating the subjective quality of the perceived speech, with values between −0.5 and 4.5 [31]; like STOI, a larger value indicates clearer speech. We choose the short-time Fourier transform (STFT) to compute the spectral vectors. The STFT uses a Hanning window with 256 points and an overlap of 128 points. Given that the 256-point STFT magnitude vector is symmetric, we only use half of it.
All data are resampled at 8 kHz. Considering that the window length, the shift, and the typical length of vowels are approximately 32, 16, and 99 ms, respectively [40], we set the convolutional kernel size on the time axis to 11 in all building blocks. Ultimately, the convolutional kernel can cover approximately 192 ms, which is approximately twice the vowel length. The kernel size on the frequency axis is 11. The filter numbers of the layers are 12-24-36-48-60-48-36-24-12-1 (a compact configuration sketch is given at the end of this subsection). In the training phase, the Adam optimizer [33] is used for parameter optimization, with a learning rate of 0.0002. We use a mini-batch size of 8 at the utterance level for training, with the mean squared error (MSE) as the loss function. For a fair comparison, each model is trained for the same number of epochs, and the best model is then selected for testing. ARCN-CSE, ARCN-SSE, ARCN-SCconcat, ARCN-SCadd, ARCN-S1C2, and ARCN-C1S2 denote the addition of CSE, SSE, SCconcat, SCadd, S1C2, and C1S2 layers, respectively, at the end of each building block in the RCED. We choose [32], which is also an attention-based model, as the state-of-the-art baseline, and we modified the embedding dimension of its middle layer to make its number of parameters similar to that of ARCN-SCconcat. That work differs from ours in two main respects. First, [32] combines the attention mechanism with LSTM and assigns weights to the embedding at each timestep, whereas our work combines the attention mechanism with a CNN. Second, the attention mechanism is used only once in [32], while our work calculates and assigns weights after each convolution.

A. ENHANCEMENT RESULTS

In Table 1, we show the STOI and PESQ scores for the different models; optimal values are marked in bold. Overall, the performance of all proposed models is better than that of the RCED, which shows the effectiveness of all the SE operations. In most scenarios, the four space-channel-wise SE models achieve higher PESQ scores than adding SSE only or CSE only, which indicates that both channel and spatial information are helpful for the performance gain. Among them, the RCED with SCconcat yields the most significant improvement over the noisy utterances in terms of the STOI and PESQ metrics in all cases. For example, in babble noise, SCconcat provides average STOI and PESQ improvements of 8.9% and 0.44, respectively, over the noisy utterances. We then compare the best-performing ARCN-SCconcat with the state-of-the-art [32]. At lower SNRs (e.g., 0 dB), ARCN-SCconcat obtains significantly higher STOI values than [32]. However, as the SNR increases, the gap between ARCN-SCconcat and [32] gradually decreases, and the differences are negligible at 10 dB. Similar trends can be observed in terms of PESQ. Our analysis is that, at low SNRs, after the LSTM transformation, the embedding of the t-th frame and its z frames before and after (i.e., t − z to t + z) still include some noise components. The mask obtained by calculating the correlation between these noisy embeddings and the following transformations may deviate from the ground truth, so that the enhanced spectrogram still contains noise. When the SNR is high and the speech component dominates the embedding, the obtained mask can be much closer to the ground truth, thus improving the model performance. Our work, in contrast, can continuously filter out noise through multiple SE layers, so it can obtain satisfactory results at low SNRs.
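For concreteness, the following is a minimal, PyTorch-style sketch of the configuration referenced above: an attention-based convolutional block (convolution, batch normalization, Leaky ReLU, and a channel-wise SE gate as in Section III-A) stacked with the filter sequence 12-24-36-48-60-48-36-24-12-1 and 11×11 kernels. The paper does not name a framework, and details such as the padding, the Leaky ReLU slope, and the reduction ratio r are assumptions made for illustration.

```python
# PyTorch-style sketch of the ARCN-CSE configuration (framework and minor details assumed).
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    """Channel-wise squeeze-and-excitation gate (Section III-A)."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, max(channels // r, 1)),      # W1: dimension reduction
            nn.ReLU(inplace=True),
            nn.Linear(max(channels // r, 1), channels),      # W2: dimension restoration
            nn.Sigmoid(),                                    # channel weights in [0, 1]
        )

    def forward(self, u):                                    # u: (batch, C, H, W)
        z = self.pool(u).flatten(1)                          # channel descriptor Z
        s = self.fc(z).view(u.size(0), -1, 1, 1)             # excitation weights S
        return u * s                                         # channel-wise recalibration

class ACB(nn.Module):
    """Attention-based convolutional block: Conv -> BN -> Leaky ReLU -> SE."""
    def __init__(self, in_ch, out_ch, kernel=(11, 11)):
        super().__init__()
        pad = (kernel[0] // 2, kernel[1] // 2)               # keep the feature-map size
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel, padding=pad),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),                 # slope is an assumption
            ChannelSE(out_ch),
        )

    def forward(self, x):
        return self.body(x)

def build_arcn_cse():
    """Nine ACBs plus a final compression convolution: filters 12-24-36-48-60-48-36-24-12-1."""
    widths = [12, 24, 36, 48, 60, 48, 36, 24, 12]
    layers, in_ch = [], 1                                    # input: 1-channel magnitude spectrogram
    for w in widths:
        layers.append(ACB(in_ch, w))
        in_ch = w
    layers.append(nn.Conv2d(in_ch, 1, (11, 11), padding=(5, 5)))
    return nn.Sequential(*layers)

# Example: a batch of 129-bin magnitude spectrogram segments, 100 frames long.
net = build_arcn_cse()
out = net(torch.randn(2, 1, 129, 100))
print(out.shape)    # torch.Size([2, 1, 129, 100])
```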
B. GENERALIZATION CAPABILITY For supervised training methods, generalization ability is an important aspect of performance evaluation. The generalization of the model is evaluated from three perspectives, namely noise, speaker, and SNR generalization; the three are analyzed separately below. First is noise generalization. Table 1 shows the evaluation of the different models on unseen noises (i.e., non-stationary factory2 noise and stationary white noise). A trend similar to that under seen conditions can be observed: the models combined with SE mechanisms perform better than the RCED itself. For example, in the factory2 noise condition, ARCN-CSE, ARCN-SSE, ARCN-SCconcat, ARCN-SCadd, ARCN-S1C2, and ARCN-C1S2 achieve average STOI improvements of 0.38, 0.55, 1.38, 0.61, 0.2, and 0.94, respectively, and average PESQ improvements of 0.05, 0.04, 0.1, 0.05, 0.05, and 0.03, respectively. To show the SNR generalization capability of the proposed models, we report the percentage improvement over the unprocessed utterances under trained and untrained SNRs for RCED, ARCN-CSE, ARCN-SSE, and ARCN-SCconcat (the best of the four space-channel-wise models). We use −10 dB and 10 dB as the untrained SNRs. Given that the noisy utterances are already clear enough at 10 dB, the models have little room to improve the STOI and PESQ metrics, so we only list the results at −10 dB. For the seen SNRs, we select −5 dB for comparison. The results are shown in Table 2. Under most noises, the percentage improvement at −10 dB is slightly lower than that at −5 dB. This is expected, because the model can remember features that appeared during training, and noisy utterances at −10 dB contain far more noise components, which makes enhancement more difficult. In babble noise, however, the improvement ratio at −5 dB is much lower than that at −10 dB; we hypothesize that this is because babble noise is more complicated and thus more difficult to eliminate. Noticeably, in some cases, such as factory2, the growth rate at the unseen −10 dB is higher than at the seen −5 dB, which demonstrates the SNR generalization capability of the proposed mechanisms. As for speaker generalization, the TIMIT core test set contains all the SX and SI sentences read by 24 speakers (2 male and 1 female from each dialect region), so all test utterances are read by untrained speakers. The experimental results therefore also demonstrate that the models generalize well to unseen speakers. C. THE LOCATION OF SE In this work, we add the attention mechanism to both the encoder and the decoder of the RCED for two reasons: first, SE can excite informative features in the early layers and becomes more specialized in later layers; second, its benefit accumulates through the network, so more SE layers can lead to better performance [34]. To verify the effect of the SE module on the encoder and the decoder for speech enhancement, we add the proposed SCconcat mechanism to the encoder of the RCED only (denoted RCED-SCconcat-en), to the decoder only (denoted RCED-SCconcat-de), and to both (denoted RCED-SCconcat-ende).
The results are given in Table 3; each value corresponds to the average result across five SNR levels (−10, −5, 0, 5, and 10 dB) and six noise types (babble, destroyerops, destroyerengine, factory1, factory2, and white). From the table, the overall performance from low to high is RCED, RCED-SCconcat-en, RCED-SCconcat-de, and RCED-SCconcat-ende. This means that SE is effective whether it is added to the encoder or to the decoder. It is easy to understand why SE helps the decoder: it filters more valuable information into the decoder and thus improves the expressive power of the model. As for the encoder, we analyze the reason for the performance improvement as follows. The encoding process inevitably generates redundant information. In the limiting case, when the information generated by the encoder is excessively redundant, then no matter how well the decoder filters, it is difficult to fully recover the useful information from so much useless information, and the generated spectrogram still contains some noise components. Therefore, it is necessary to add SE to the encoder as well. D. SE GENERALIZATION By the generalization of the SE mechanisms we mean that they not only play a role in the RCED but can also improve the performance of other CNN-based models. To verify this, we test the performance of a simple CNN and its SE-augmented equivalents. A detailed description of the CNNs used with and without SE mechanisms is given in Table 4 (a) and (b), and the STOI and PESQ results are shown in Fig. 3 (a) and (b), respectively. Considering that many state-of-the-art CNN speech enhancement methods use shortcuts [24], [25], we also investigate the effect of the proposed SE mechanisms when combined with shortcut-based CNNs. We build a shortcut-based convolutional network (SCN) that adds shortcuts between the corresponding layers in the encoder and decoder of the CNN used before, and evaluate its performance with and without SE mechanisms. The detailed description is presented in Table 4 (c) and (d), and the STOI and PESQ scores of all models are shown in Fig. 3 (c) and (d). The results show that the models with SE outperform their normal versions in most cases, indicating that the introduction of the proposed SE is beneficial. SE can therefore be combined with a wide range of CNN-based models for performance gain. E. SSE 1 VS C The output shape of the SSE mechanism is H * W, and each value corresponds to a time-frequency point. The attention operation multiplies this value with all the values of the corresponding time-frequency embedding, which means that every dimension of the embedding is filtered with the same weight. However, each dimension of the time-frequency embedding represents a different kind of feature; some of these features are important and some are not. If they are treated equally, the model cannot make full use of the informative features and suppress the redundant ones, which limits its expressive power. To solve this problem, we change the output of the SSE operation (of shape H * W) from 1 attention map (ARCN-SSE-1) to C attention maps (ARCN-SSE-C) by setting the subscript n in (3) from 1 to C. For example, the convolutional layer in the first building block has 12 output feature maps (i.e., C = 12 in that block); taking them as the input of the following SE layer then generates 12 attention maps. That is, C in each SE layer is 12-24-36-48-60-48-36-24-12, consistent with the number of output feature maps.
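The change from ARCN-SSE-1 to ARCN-SSE-C only alters how many attention maps the 1x1 convolution produces. A minimal sketch (illustrative PyTorch with placeholder shapes, written by us rather than taken from the paper) is:

```python
import torch
import torch.nn as nn

def spatial_se(channels, out_maps):
    """1x1 conv producing `out_maps` attention maps of shape H x W (sigmoid-gated)."""
    return nn.Conv2d(channels, out_maps, kernel_size=1)

x = torch.randn(8, 12, 129, 100)             # (batch, C = 12, freq bins, frames)
sse_1, sse_c = spatial_se(12, 1), spatial_se(12, 12)

y1 = x * torch.sigmoid(sse_1(x))             # SSE-1: one map shared by all channels
yc = x * torch.sigmoid(sse_c(x))             # SSE-C: one map per channel / embedding dim
print(y1.shape, yc.shape)                    # both keep the input shape (8, 12, 129, 100)
print(sum(p.numel() for p in sse_1.parameters()),   # 13 parameters (12 weights + 1 bias)
      sum(p.numel() for p in sse_c.parameters()))   # 156 parameters (144 weights + 12 biases)
```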
In Table 5, we evaluate the performance of ARCN-SSE-1 and ARCN-SSE-C under seen and unseen noises. Each value corresponds to the average result across five SNR levels (−10, −5, 0, 5, and 10 dB). In all cases, ARCN-SSE-C yields significant improvements over ARCN-SSE-1 in terms of STOI and PESQ scores. For example, under seen babble noise, the STOI and PESQ scores improve by 0.86 and 0.05, respectively; under unseen factory2 noise, they improve by 0.45 and 0.04, respectively. This result confirms our intuition that each dimension of the embedding has a different importance for the time-frequency prediction. Notably, although the performance of ARCN-SSE-C improves significantly, its number of parameters also increases. The parameters of ARCN-SSE-1 and ARCN-SSE-C are 1.171 and 1.315 million, respectively, a growth of 12.3%, which is low. In models with many channels, however, the amount of computation may increase significantly. Therefore, when the performance requirements are high, we consider the use of ARCN-SSE-C; in circumstances that require high processing speed, such as real-time applications, we opt for ARCN-SSE-1, which has fewer parameters. F. PARAMETER COMPARISON Table 6 shows the number of parameters of each proposed model and its growth rate compared with the RCED. ARCN-SCconcat, the network with the most parameters, increases the parameter count by only 5.51%. The introduction of SE mechanisms therefore increases the computational burden by only a small fraction, so they can be regarded as lightweight mechanisms. Besides, CNNs use weight sharing and local connections, which keeps the number of parameters small. As mentioned previously, combining SE with CNN-based models may therefore have great potential in application scenarios with constraints on model size, such as embedded systems. G. QUALITATIVE ANALYSIS 1) ENHANCEMENT RESULTS To show the denoising effect of each model more clearly, we select a TIMIT utterance spoken by an unseen speaker, corrupted by babble noise at 0 dB, and draw the spectrograms for each case (clean speech, noisy speech, and speech enhanced by RCED, ARCN-CSE, ARCN-SSE, and ARCN-SCconcat), as shown in Fig. 4. The figure shows that all models can effectively remove the noise components. However, some background noise remains in (c), and some of the recovered speech structures, especially at high frequencies, are somewhat rough. With CSE only (d) and SSE only (e), both models eliminate the background noise and restore the speech components effectively, which shows that adding SE enables the model to pay more attention to speech. The enhancement results of (d) and (e) are similar but differ in details. Therefore, when SCconcat, which combines both, is added, the model obtains the information captured by CSE and SSE at the same time, and the two complement each other. In this way, the model achieves both noise elimination and the preservation of speech details, making the enhancement better. Comparing all the figures, the spectrogram obtained by RCED (c) contains the most noise components, and the spectrogram of ARCN-SCconcat (f) is closest to that of the clean utterance. This finding matches the results in Table 1, where RCED and ARCN-SCconcat produce the lowest and the highest STOI and PESQ scores, respectively.
2) CHANNEL-WISE SE VISUALIZATION Each channel can be regarded as a set of certain characteristics of all time-frequency points. Some channels concentrate on speech, while others concentrate on noise. The purpose of channel-wise SE is to give greater weights to the channels corresponding to speech and smaller weights to those corresponding to noise. To verify whether channel-wise SE achieves this purpose, in Fig. 5 we visualize the feature maps assigned the minimum and the maximum weights in the first and the last building block of ARCN-CSE, using the same input utterance as in 1) ENHANCEMENT RESULTS. Panels (c) and (d) are the feature maps with the minimum and maximum weights in the first building block. Although (c) is dominated by noise components, some speech components still exist inside the black solid frame, and in (d) clear speech texture can be seen, even if it is broken. This is because, in the early layers, SE has a recalibration effect but it is not yet strong enough. In the last building block, the feature maps with the minimum and maximum weights should resemble the noise and the clean speech, respectively, because the recalibration effect accumulates through the entire network; the patterns of (e) and (f) confirm this. Most of (e) is noise, with almost no speech component, whereas in (f) we can observe a more coherent and clearer outline of the clean speech. 3) SPATIAL SE VISUALIZATION Spatial SE assigns a weight to each time-frequency point. Therefore, for a noisy speech input, the generated attention map (of shape H * W, where each value corresponds to a time-frequency point) should resemble the distribution of the clean speech. In Fig. 6, we visualize the attention map in each intermediate building block of ARCN-SSE, using the same input utterance as in 1) ENHANCEMENT RESULTS. In the lower-level layer (c), although the color of the part corresponding to speech is slightly deeper than its surroundings, the colors of the whole map are similar, so speech and noise are still not distinguished. In (d), the speech component appears sporadically but is still very fragmented. Obvious speech patterns then begin to appear in the following layers (e and f), though they remain incoherent. The vertical bars in (g) resemble the outline of the clean speech but contain little of its texture, which becomes clearer in (h) and (i). In general, Fig. 6 shows that in lower-level layers the spatial SE mechanism is still unable to separate speech and noise very accurately; as multiple spatial SE layers accumulate, the ability to distinguish between speech and noise is enhanced, which is consistent with the conclusion in [34]. In this way, the model can filter out the noise components and retain the speech components, and thus better restore the enhanced speech spectrogram. VI. CONCLUSION In this paper, we propose ARCN, which combines the attention mechanism and the RCED model for speech enhancement. Attention weight assignment is achieved through the SE mechanism, which improves speech enhancement performance by emphasizing valuable information. The original SE mechanism assigns weights to channels according to global information. Considering that spatial information is also of great importance, we propose to assign a weight to each time-frequency point through SSE.
We further boost the performance by concurrently exploiting both CSE and SSE in four different ways. The experimental results show that all proposed SE mechanisms can effectively improve the model performance without adding a heavy computational burden and generalize well to untrained noises, SNRs, and speakers. The best results are obtained by concatenating the two aspects of information (i.e., space and channel). In addition to the RCED, we also combine SE with other CNN-based models and likewise achieve performance improvements. This means that the SE mechanism can be treated as a plugin that can be introduced into other CNN-based models for performance gain. WENZHENG YE is currently pursuing the master's degree in software engineering with the University of Electronic Science and Technology of China (UESTC). His research interests include speech enhancement, speech recognition, and machine learning. GUOQIANG HUI is currently pursuing the master's degree in software engineering with the University of Electronic Science and Technology of China (UESTC). His research interests include speech recognition and speech enhancement.
8,664.6
2020-01-01T00:00:00.000
[ "Computer Science" ]
Development and Evaluation of Artificial Neural Networks for Real-World Data-Driven Virtual Sensors in Vehicle Suspension Vehicle comfort, handling, and stability can be improved by a semi-active suspension with advanced control algorithms that use the vertical velocities of the sprung mass (SM) and the unsprung masses (UMs) as inputs. Displacement and acceleration sensors are often used to estimate the vertical velocities of UMs. However, these sensors are expensive and susceptible to degradation. Virtual sensors (VSs) have been proposed as a solution, and previous research using simulation data has shown that artificial neural networks (ANNs) can provide usable UM vertical velocity estimates. This study aims to find the ANN structure and input sample window size that achieve the best performance on real-world data. A novel dataset was created and used to test VSs based on eight ANN structures combining multilayer perceptron, convolutional neural network, long short-term memory (LSTM), and bidirectional LSTM layers. This article presents the results of 104 combinations of ANN structure and sample window size, which required 6240 training sessions. A Bayesian search was used to tune the hyperparameters of the ANN layers by minimizing the root-mean-square error (RMSE) of the estimates on the validation data, while a grid search was used to select the sample window size that minimizes the RMSE on the test data, ensuring that the selected combinations generalize well. The VS based on an ANN with convolutional layers achieved the lowest RMSE of 0.0210 m/s and a processing time of 0.421 ms with a window size of 23 samples while estimating the vertical velocities of vehicle UMs from real-world data. I. INTRODUCTION Nowadays, the automotive industry is strongly focused on automated driving technologies [1], so ride comfort is becoming increasingly important. In particular, fatigue and motion sickness may be more critical in automated vehicles because the driver is less involved in vehicle control [2]. Occupants may also engage in various activities while driving, such as working on a laptop or reading, leading to higher demands on ride comfort [3]. Some studies show that it would be beneficial to achieve a comfort level in a car similar to that of trains [3], [4]. Nevertheless, various control algorithms are being developed for driver assistance systems that can be used in conventional vehicles to improve comfort. These algorithms involve adaptive, semi-active [5], [6], [7], and proactive suspension with road surface preview [8].
The ongoing challenge to improve comfort, stability, and handling without increasing hardware costs requires continuous enhancement of chassis control performance [9].For example, semi-active and active actuators must replace passive vehicle suspension components to provide satisfactory comfort and contribute to better vehicle handling [10].Implementation of active components requires advanced suspension control and state measurement algorithms.Active and semi-active actuators can change the system characteristics by changing the damping force.Commonly, industrial control solutions such as Skyhook or hybrid and other state of the art algorithms use vertical velocities of the unsprung masses (UMs) and the sprung mass (SM) as inputs, and they are estimated instead of direct measurement [11], [12].No robust sensor is available at a reasonable price for vertical velocity measurements of SM and UM.Vertical velocities of SM are estimated using IMU's measurements and Kalman filters.IMU sensor placed on SM is used by other systems such as dynamic stability control (DSC).Displacement [13] and acceleration sensors [14] are commonly used and velocity of UM is estimated by fusing measurements of these sensors.This requires installation of additional sensors at UMs, leading to higher overall costs, packaging issues, and robustness problems.UM displacement sensors are sensitive to environmental influences and wear with use.Acceleration sensors measurements are prone to integration bias, and installation of such sensors onto each wheel hub is complex.A possible solution to this problem is the use of UM virtual sensors (VSs) [15].Such sensor could estimate UM vertical velocity using only data from sensors installed on SM. A VS is a software algorithm that generates signals by combining and processing data received from physical sensors and estimators [16].The generated data is fed into complex functions or applications [17].The VS can be model-based and data-driven [18].In model-based approaches, mathematical models are used to define the relationship between input and output variables.Automotive applications often use variations of the Kalman filter (KF) for this type of VS [19].For example, the KF in [20] was used to estimate the vehicle sideslip angle, and the tire forces were estimated in [21]. The study [22] used the KF for suspension state estimation.However, due to the nonlinearity of vehicle components, it can be challenging to use mathematical equations to create an accurate VS model [23].On the other hand, the data-driven VS relies solely on recorded data obtained from observation of system operation [24].Practical implementations of data-driven VS include multivariate statistical methods and artificial intelligence methods. Data-driven VS developed for vehicle suspensions has been investigated in previous studies [15], [23], [24], [25].In [24], the authors presented a data-driven approach based on deep learning (DL) for estimating the road profile height and state variables of vertical displacement and velocity of vehicle UMs using onboard sensors.The proposed VS was compared with the extended KF and the static nonlinear autoregressive exogenous model.The performance results were in favor of the proposed VS model.However, the algorithm was only tested in a simulation environment with a narrow range of driving conditions.Therefore, it is unclear how such sensor would perform with real-world data. 
Previous studies [23], [25] provided promising results using data-driven VS based on artificial neural networks (ANNs) and specifically deep neural networks (DNNs) for estimation of UMs' vertical velocities. The motivation behind this research rises from the observed deficiency in the development and testing of the data-driven VS for UM vertical velocity using realworld datasets.Furthermore, we aim to compare and select ANN structure and determine the most suitable input sample window size to achieve the highest estimation accuracy. Thus, the objective of this study is to develop and compare data-driven ANN-based VSs (described in Section II) on a real-world dataset.This algorithm is part of vehicle state estimation, which is required as input for feedback control algorithms, that are not part of this article.The research presented in this article includes determining the optimal sampling window length and hyperparameter combinations for each of the selected ANN structures.In order to deal with the lack of datasets, a new dataset was created using raw data collected at the proving ground and under urban driving conditions from a vehicle demonstrator equipped with a semi-active suspension.The ANN training process was repeated 60 times for each combination of structure and sample window size, to select the best hyperparameters using a Bayesian search.Overfitting of the ANN is prevented by selecting the model iteration with the best root mean squared error (RMSE) on validation data from 60 training iterations. The contributions of this study include: • Preparation of a real-world dataset for the development of a VS for the vertical velocity of UMs: A major contribution lies in the careful compilation of a real-world dataset tailored for the development and rigorous testing of estimators for the vertical velocity of UM. • Systematization of tasks and methodologies for the creation of data-driven VSs: This provides a clarification of tasks and outlines a comprehensive methodology for the creation of data-driven VSs for vehicle suspension and sheds light on new approaches in this domain. • Development and selection of best type and optimal structure of ANN for VS of UMs vertical velocity: Through extensive experimentation, this research identifies the most appropriate type and structures of ANNs that are best suited for the prediction of UM vertical velocity. • Determination of the optimal input window size for real-time VS performance: The critical dimension of input sampling window size for the tested ANN structures is introduced, which significantly affects the estimation accuracy. • Analysis of the influence of the input signals on the RMSE of output: In this study, the influence of input signals on the resulting RMSE of the estimations is rigorously investigated and quantified. • Testing the impact of input signals' means compensation on output RMSE: The significance of input signal means compensation is evaluated, providing valuable insight into the impact of such compensation on the performance and estimation accuracy of VS.These contributions collectively advance the state of knowledge in the field of UMs' vertical velocities estimation, offering novel perspectives and methodologies that significantly enhance task understanding and solving capabilities. 
The rest of this paper consists of four sections, excluding the introduction.Section II describes experimental setup and experiments for the real-world data collection, collected signals and dataset preparation, and development of VS, including testing the importance of inputs, the eight ANN structures, and hyperparameter search methods.In Section III, the experimental results are presented and analyzed, and the best solution is found.In Section IV, the main results of the research are discussed and the conclusions are drawn. II. DATA COLLECTION AND NEURAL NETWORK STRUCTURES A. EXPERIMENTAL SETUP FOR DATA COLLECTION This research aims to create VS that would replace UM displacement and acceleration sensors (see Figure 1) currently used for estimation of UMs' vertical velocities that are used in suspension control algorithms.There is a lack of a real-world dataset suitable for supervised learning of ANN and performance evaluation.Therefore, the original data needs to be collected using vehicle demonstrator and dataset prepared. Audi A6 (2019 model year) with a semi-active suspension prototype developed by Tenneco was used in this research.This demonstrator vehicle was equipped with a dSPACE real-time target machine (RTTM) that runs a suspension controller that uses UM vertical velocity as input.RTTM is also capable of recording sensor, estimated, and algorithms' output data.Therefore, it is used for experimental data acquisition.The data is fed to the connected computer that records it into files.The data is logged at a fixed sample rate of 100 Hz into the recording file. In order to enable the collecting of required data, additional sensors were installed on the vehicle; they included an IMU placed in the center of gravity of the vehicle, optical flow sensor (OFSs) mounted on the rear left side door for sideslip angle measurement, and UM vertical displacement sensors were mounted between lower suspension links and vehicle body in parallel to dampers.Furthermore, access to the vehicle's controller area network (CAN) channels was granted, this provided access to all standard invehicle sensors, which were in their standard locations in the vehicle. An overview of the system is given in Figure 1.There the sensors are connected to the RTTM which implements suspension controller, signal recording, and proposed UMs' vertical velocities VS.It shows how front-left (FL), frontright (FR), rear-left (RL), rear-right (RR) UM acceleration and displacement sensors can be replaced or duplicated by VS for FL, RF, RL, RR. Proposed UM vertical velocity VS is meant to use in RTTM instead of the currently used algorithm for estimation from measured UM vertical displacement.This would allow to remove displacement sensors if estimation using VS on real-world data proves to be good enough compared to current estimation. B. EXPERIMENTS FOR DATA COLLECTION The dataset was created based on test data collected on the Lommel proving ground using vehicle demonstrator.The procedures below describe the tests performed to collect the data.These tests are commonly used for vehicle handling, comfort, and stability studies, therefore recorded in-vehicle sensor signal data should provide enough information for training, validation and testing datasets.The data was recorded for different types of tests: acceleration and braking; skid pad, step steer, double-step steer, obstacle avoidance, sine with dwell, sinusoidal steer, and comfort tests.Also, data was collected under urban driving conditions. 
In the acceleration and braking test, maximum acceleration is applied from a standstill in a straight line, and at 100 km/h, the vehicle is brought to a standstill with maximum braking force. In the skid test (ISO 7975:2019), the driver controls the vehicle at a constant radius of turn.The tests started from velocity of 10 km/h.The vehicle velocity gradually increased.This test was performed up to the velocity level at which the driver could no longer keep the vehicle on the target trajectory. In the step steering test (ISO 7401:2011), the driver accelerated the vehicle in a straight line until a velocity of 100 km/h was reached.After that, the accelerator pedal was held constant, and the driver performed a stepped steering wheel input (counterclockwise or clockwise) of 100 • with a rate of 400 • /s.The steering wheel angle was held at 100 • for at least 2 s after the first actuation. In the double-step steering test (ISO 17288-1:2011), the driver accelerated the vehicle in a straight line until a velocity of 100 km/h was reached.Afterwards, the accelerator pedal was held constant, and the driver performed a stepped steering wheel input (counterclockwise) of 100 • at a rate of 400 • /s.The steering wheel angle was held at 100 • for 2 s after the initial actuation.After 2 s, the steering wheel was rotated to -100 • and maintained for 2 s.After that, the driver set the steering wheel back to 0 • . The obstacle avoidance test (ISO 3888-2:2011) is a dynamic maneuver in which a vehicle moves rapidly from its original lane to adjacent road lane and returns to the original lane without exceeding lane limits.The goal was to ensure that the vehicle achieves a specific sequence of alternating high lateral acceleration values.During the test, the driver holds the accelerator pedal constantly.The initial longitudinal velocity of the vehicle was measured to ensure the repeatability of the test procedure. In the sine-with-dwell test (ISO 19365:2016), the vehicle was accelerated to a velocity of just over 80 km/h.Afterwards, a constant accelerator pedal position was held.The steering wheel input has a waveform of a sine wave with a frequency of 0.7 Hz that pauses for 0.5 seconds after reaching the second peak. The sinusoidal steering test procedure includes driving conditions where the vehicle reaches a lateral acceleration of about 6 m/s 2 , which is achieved with a steering wheel amplitude of about 50 • at a vehicle velocity of about 80 km/h.Stronger inputs can also be made up to and beyond the handling limit, e.g., steering wheel angle of 100 • for a more aggressive maneuver, but not at the handling limit, steering wheel angle of 150 • for a maneuver at handling limit. The driver accelerates the vehicle in a straight line until a constant velocity of 80 km/h is reached.The accelerator pedal is held at a constant value.After that, the driver gives a sinusoidal, wave-like steering wheel input with a predefined magnitude.The frequency of the sine wave is approximately 1 Hz.The steering input lasts for two cycles. The comfort tests included driving on a road with Belgian pavement, driving on a road with bumps at different velocities, and driving on a road with high-class (D-F) pavement irregularities.Driving is performed at a constant velocity in range of 25-70 km/h.These tests represent good and bad driving conditions. 
The collected data serves as a comprehensive basis for building DL dataset due to the wide range of real-world driving scenarios.The diversity of these tests, provide valuable data, that is suitable for development of data-driven VS not only for UM vertical velocity but also for other signals that are recorded. C. SIGNALS AND DATASET PREPARATION In this paper, estimation of UM vertical velocity using VS is determined as a time series regression task with multiple inputs and multiple outputs (for each wheel simultaneously) when a current and a certain number of previous samples are available. The main characteristics of each input signal must be described to understand the relationship between input and output and their usability.The values of the signals have different characteristics in terms of continuity, scale, and mean value.Discontinuous signals can be useful as switching signals for changing the behavior of the ANN; however, most ANNs that produce continuous output signals will benefit more from continuous input signals that can be used directly to form the output signal.The mean values of the signals are used to offset that signal towards 0, because it is the point of nonlinear part of selected activation functions.The standard deviation shows the extent to which the signal varies and is a good metric of the signal scale.The scales of the signals must be similar to speed up the DL process; therefore, the signals are normalized by dividing them by the standard deviation.Next, the signals are described and main characteristics are provided in Table 1. Driver torque demand is a signal available from the driveby-wire gas pedal.The value of this signal is important to engine and DSC systems.Although not directly, it does affect the longitudinal acceleration that is measured by IMU.The acceleration induces change of pitch angle, thus unloads and loads the suspension and affects the vertical displacement and velocity of the UM. The DSC regulation signal indicates the system's engagement in braking or engine torque reduction.Active DSC may change the load on opposite sides and ends of suspension, thus influencing the UM vertical velocity and displacement. The master cylinder pressure signal indicates the applied braking pressure that directly correlates with deceleration.This deceleration modulates the vertical load distribution, thereby affecting the UM's vertical displacement and velocity.The steering angle and direction provide information about the steering input of the driver.This information is also used for the lateral acceleration and the distribution of the load on the left and right sides of the vehicle suspension. The optimized steering angle is the target steering angle calculated by the vehicle taking into account user input and the requirements of other onboard systems. Vehicle velocity information is important for ride control when the vehicle overcomes surface irregularities, bends, potholes, and turns.It provides information on the extent to which lateral and vertical acceleration can be expected and how quickly the road impact on the front suspension reaches the rear suspension. The wheels' velocities are provided by the anti-lock braking system over CAN.These signals partly duplicate vehicle velocity, but also adds information about friction. The X, Y, and Z axes accelerations, roll, and yaw rates obtained from IMU provide information about the state of the vehicle: orientation, acceleration, and rotation due to driver input and external factors. 
The body sideslip angle provides information about the difference between the vehicle's heading and its actual direction of travel and is usually used in DSC. These parameters, along with the vertical velocities of the SM and UMs, are used in advanced control algorithms for semi-active suspensions. However, not all inputs are equally important for estimating the desired output values. Therefore, it is necessary to evaluate their significance when processed by the ANN-based VS. This was done using value replacement methods and evaluation of the RMSE change on the test dataset (see Section III (b)). The output signals are the UM vertical velocities for each wheel. The ground-truth UM vertical velocities were estimated from the vertical wheel displacement, referenced to the full suspension extension point on the experimental vehicle. First, the signals from the vertical wheel displacement sensors were filtered at 50 Hz using a MATLAB Butterworth low-pass filter. Then, the difference between the current and the previous sample was calculated, and this difference was divided by the sampling time to obtain the vertical velocity of the wheel. This is the same filter used in the real-time processing of vehicle suspension control units. Because it uses the current and previous samples, a delay of about half a sample period is introduced into the filtered signal. Multiple past values of the input signals are needed to estimate the UM velocity at a given time, as they provide information about how the signals change. The window length of the input samples is therefore an important parameter, since it determines the amount of signal memory available to the ANNs; the presence of past samples allows signal features at lower frequencies to be learned. At a fixed sampling rate of 100 Hz, the tested sample window sizes of 3 to 51 samples provide a signal buffer of 30-510 ms. This allows real-time signal processing using first-in-first-out buffers, with the newest data point added and the oldest removed at each sampling period. The results in Section III show that the processing time is considerably shorter than the sampling period. Compared with low-pass filters, the proposed ANNs avoid delays that depend directly on the sample window size, because the ANNs predict the current state from the current and past input signals based on the learned model. The total number of samples included in the dataset is about 393 thousand. The samples are randomly divided, at the level of experimental recordings, into 272 thousand (70%) for the training set, 61 thousand (15%) for the validation set, and 60 thousand (15%) for the test set. Thus, all samples belonging to one recording were assigned to only one of the parts and were not seen in the others, which prevents very similar samples from occurring in all parts due to slow signal changes relative to the sampling frequency. This is also the only feasible way to split the data, as most of the driving action happens in the middle of each recording. The validation samples were used for RMSE minimization during training and the Bayesian search, and the testing samples were used to compare combinations of ANN structure and sample window size.
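A sketch of the ground-truth and input preparation described above (our own Python/NumPy/SciPy illustration rather than the authors' MATLAB pipeline; the Butterworth order and cutoff are assumptions, with the cutoff kept just below the 50 Hz Nyquist limit of the 100 Hz data, and the array layouts are placeholders):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 100.0                                    # sampling rate of the recordings, Hz

def ground_truth_velocity(displacement, cutoff_hz=45.0, order=2):
    """Low-pass the UM displacement, then differentiate: v[k] = (x[k] - x[k-1]) / Ts."""
    b, a = butter(order, cutoff_hz, btype="low", fs=FS)
    x = lfilter(b, a, displacement)           # causal filter, as on the control unit
    return np.diff(x, prepend=x[0]) * FS      # backward difference, ~half-sample delay

def normalize(signal, mean, std):
    """Offset towards zero and rescale with statistics computed on the training set only."""
    return (signal - mean) / std

def make_windows(inputs, targets, window):
    """inputs: (T, M) matrix of M normalized signals; targets: (T, 4) UM velocities.
    Returns (N, window, M) sliding windows and the (N, 4) target of each window's last sample.
    At run time, the same window is kept in a first-in-first-out buffer updated every 10 ms."""
    X = np.stack([inputs[t - window:t] for t in range(window, len(inputs) + 1)])
    return X, targets[window - 1:]
```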
A grid search was applied to select the sample window size for each ANN structure in order to systematically explore the different combinations and determine the best configuration. Grid search is a widely used method for structure and parameter tuning in machine learning. The grid values for the sample window size were selected based on the observed performance variation, so that the tested range included window lengths near the minimum-RMSE point. Window sizes of 3, 5, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 35, and 51 samples were selected, because three samples are the smallest number that can be used to predict a change in the direction of UM movement and 51 samples cover the lowest frequency that is important for suspension control. Window sizes larger than 51 samples were not tested, as they would require even more computational time while providing diminishing gains in accuracy. D. STRUCTURES OF ARTIFICIAL NEURAL NETWORKS The application domain and hardware impose several requirements on the selection of an appropriate network type. First, the ANN-based VS algorithm should be implemented using MATLAB and Simulink. Second, it should be compiled for a specific RTTM. Third, the compiled algorithm should run one cycle in less than 10 ms, because the input signals are sampled at 100 Hz. These requirements limit the types of neural layers and the scale of the ANN (including the number of layers and the number of neurons in them). Therefore, small task-specific ANNs were developed instead of using well-known backbone networks such as ResNet [26]. For the development of the VS, ANNs with multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM) layers, as well as an MLP-LSTM combination, were implemented and compared. The structures of the networks are shown in Figure 2. There, hatched blocks belong to variants of the same network type with a different number of layers. In Figure 2, only the fixed properties of the layers are shown; other important parameters (neurons/units, convolutional kernel size and stride) are provided in Tables 2-9 and 11. The window size and network hyperparameters were selected using Bayesian and grid searches; this process is described further in Section II (e). Next, each ANN type is explained. The MLP structure is the simplest and most flexible (see Figure 2 a)).
It includes one or more hidden layers of neurons, and each neuron has a weighted connection to all neurons in the previous layer. The connection weights are changed during the learning process. The disadvantage of this flexibility is the high computational complexity, since each connection requires an additional multiplication. Many multiplications and additions are performed in the planned task; therefore, it is more efficient to use a matrix form of the equations, which can be accelerated by single-instruction multiple-data operations in modern processors, especially in graphical processing units (GPUs). In this context, an MLP layer with an arbitrary number of neurons can be expressed by the following equation:

y = σ(W x + b), (1)

where y is the output vector; W is the learnable input connection weight matrix of shape (k, n), with k the number of neurons and n the length of the input vector; x is the input vector of shape (n, 1); b is the bias vector; and σ is the nonlinear activation function. The hyperbolic tangent (tanh) function was used as the activation function for all MLP layers except the last one, which did not use an activation. Equation (1) can be simplified by concatenating the bias vector with the weight matrix and appending a one to the input vector x, which reduces it to a single matrix multiplication:

y = σ(W' x'), (2)

where W' is the learnable matrix of input connection and bias weights, of shape (k, n + 1), with k the number of neurons and n the length of the input vector; and x' is the input vector of shape (n + 1, 1), with the value 1 appended in place of a separate bias variable. This form can be processed faster because only a dot product and an activation operation are required. Most machine learning frameworks abstract the related mathematical operations into high-level functions or ANN building blocks and use optimized code for processing; the described combination of the weight matrix and bias vector is done within such functions, and the input vector is a single input to them. An MLP layer in a complex ANN is often referred to as a fully connected (FC) layer because the neurons in the layer have weighted connections to all neurons in the previous layer. MLPs with up to 3 FC layers and different numbers of neuron units were tested, and the best combinations were selected for comparison with the other architectures.
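The bias-folding simplification of Equations (1)-(2) can be checked numerically with a short sketch (our own NumPy illustration, not part of the paper's code):

```python
import numpy as np

def fc_layer(W, b, x):
    """Plain fully connected layer: y = tanh(W x + b)."""
    return np.tanh(W @ x + b)

def fc_layer_folded(W_aug, x):
    """Folded form: the bias column is absorbed into W and a constant 1 is appended to x."""
    x_aug = np.concatenate([x, [1.0]])
    return np.tanh(W_aug @ x_aug)

# The two forms are equivalent:
k, n = 4, 6
W, b, x = np.random.randn(k, n), np.random.randn(k), np.random.randn(n)
W_aug = np.hstack([W, b[:, None]])
assert np.allclose(fc_layer(W, b, x), fc_layer_folded(W_aug, x))
```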
CNNs have emerged as a powerful class of DL models, particularly due to their exceptional performance in image analysis and recognition [28]. In recent years, their applicability has been extended to various domains, including signal processing and VSs, owing to their ability to extract relevant features from input data [29]. They are particularly effective at capturing local patterns and hierarchical representations in such data [30]. Therefore, it was decided to test a CNN in the VS. In a CNN, each convolutional layer consists of a set of adaptive filters that are convolved with the input data to extract relevant features. These filters capture local patterns in the data by performing element-wise multiplication and aggregating the results [31]. Pooling layers can then be used to shrink the feature maps and reduce their spatial extent while preserving the most relevant information [32]. The extracted features are then passed to one or more fully connected layers, which perform classification or regression tasks based on the learned representations [33]. A CNN layer with any number of units can be described by the following equation [34]:

X_{n,i,j} = Σ_{c=1}^{C} Σ_{h=1}^{H} Σ_{w=1}^{W} W_{n,h,w} A_{si+w, sj+h, c} + B_n, (3)

where X_{n,i,j} is the value of the output feature map of the convolutional layer (before the activation function) at index n, i, j (n is the index of the convolutional kernel, i the row index and j the column index of the feature map); W_{n,h,w} is the value of the kernel filter at index n, h, w (h the row index and w the column index within the convolutional kernel); A_{si+w, sj+h, c} is the value of the input array at index si + w, sj + h, c (s is the stride of the kernel and c the channel of the input array); B_n is the bias of the convolutional neuron with index n; N is the number of convolutional neurons in the layer; H and W are the numbers of kernel rows and columns; and C is the number of channels in the filter, which equals the number of channels of the input array. Each signal has its own channel. In this study, a CNN-based DNN architecture tailored to this application is proposed to meet the real-time data processing requirements of a VS. The DNN consists of 2 or 3 convolutional layers, each followed by a leaky rectified linear unit (LReLU) nonlinear activation function, to extract the essential features of the signals in the time domain and the relationships between signals, treated as images in 2D space (see Figure 2 b)). The LReLU can be described by the following equation:

f(x) = x for x > 0, and f(x) = a·x for x ≤ 0, (4)

where a is the settable coefficient of the steepness of the activation function in the negative region of the input values (a was set to 0.1). LReLU avoids the problem of ANNs failing to learn that can occur with the rectified linear unit [35], whose zero output for negative inputs causes a zero-gradient problem. The output of the CNN layers is converted to a vector and fed to FC layers, allowing the network to learn and make predictions at a higher level, similar to an MLP. In the case of the VS, the inputs to the MLP and CNN are provided as W×M arrays, where W is the sample window size and M is the number of sensor inputs. These inputs are passed to the ANN at each sampling step and yield the output corresponding to the last sample. In this way, the output delay is reduced to the ANN processing time.
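A hedged sketch of one convolutional block with the LReLU activation of Equations (3)-(4) (illustrative PyTorch; the kernel size, unit count, and input shape here are placeholders, the tuned values being those reported in the results tables):

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One convolutional layer followed by LReLU with a = 0.1."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=kernel, padding=kernel // 2)
        self.act = nn.LeakyReLU(negative_slope=0.1)

    def forward(self, x):
        return self.act(self.conv(x))

# Input windows are W x M arrays (window size x number of sensor signals),
# passed as a single-channel "image" of shape (batch, 1, W, M).
x = torch.randn(32, 1, 23, 17)          # placeholder batch: 23-sample window, 17 signals
y = ConvBlock(1, 16)(x)                 # -> (32, 16, 23, 17)
```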
Unlike the MLP or CNN, the LSTM stores its last state and output and uses them as inputs for decision making when processing the next sample in the sequence (see Figure 2 c)). The LSTM inputs were supplied as a sequence of W vectors, each of length M, corresponding to the number of sensor inputs. The LSTM consists of a forget gate, an input gate, an output gate, and a cell input gate (see Figure 3). An LSTM layer with any number of units can be described by the following equations:

f_t = σ(W_f x_t + U_f h_{t−1} + b_f),
i_t = σ(W_i x_t + U_i h_{t−1} + b_i),
o_t = σ(W_o x_t + U_o h_{t−1} + b_o),
g_t = tanh(W_g x_t + U_g h_{t−1} + b_g),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t,
h_t = o_t ⊙ tanh(c_t),

where x_t is the input vector at step t; h_t and c_t are the hidden state and the cell state; f_t, i_t, o_t, and g_t are the forget, input, output, and cell input gate activations; W, U, and b are the learnable weights and biases of each gate; σ is the sigmoid function; and ⊙ denotes element-wise multiplication. The LSTM can be unrolled in time to observe the interfaces between successive input samples: the same LSTM unit processes each next data item of a sequence of length W with an input vector of length N, so t varies from 0 to W−1. BiLSTM uses the same basic structure as the LSTM (see Figure 2 d)). Unlike an LSTM unit, a BiLSTM unit uses two connected LSTM structures: one processes the sequence in the forward direction, from past samples toward the current sample, while the other processes it in the backward direction, from the following samples toward the current sample. The outputs of the two LSTM structures for the same sample are combined, so the output at each step depends not only on the past samples in the sequence but also on the following ones. Such processing is possible only if all samples of the input sequence are available. The last structure type tested was MLP-LSTM, a combination of MLP and LSTM (see Figure 2 e)). In this model, the sequence data is first preprocessed by MLP layers before being processed by the LSTM. This structure was tested to determine whether an additional feature extraction stage could be combined advantageously with the memory capability of the LSTM. E. ANN TYPE AND STRUCTURE SELECTION The eight ANN structures described above, with various types and numbers of layers, were selected for investigation. After determining all possible combinations of the selected sample window sizes and structures, a Bayesian search is applied to fine-tune the hyperparameters of the convolutional and fully connected layers. Bayesian search is a probabilistic optimization method that iteratively updates a probability distribution over hyperparameters based on the observed performance of the model. By leveraging prior knowledge and iteratively refining the distribution, Bayesian search aims to efficiently explore the hyperparameter space and identify the most promising hyperparameter values. The advantage of Bayesian search for hyperparameters lies in its ability to handle a limited amount of data and to use computational resources efficiently. It adapts to the information gained during the search process and dynamically focuses on promising regions of the hyperparameter space. This adaptive nature allows Bayesian search to converge to optimal hyperparameters with fewer evaluations than other methods, such as grid search. However, it is important to note that Bayesian search typically requires more computational resources and time than grid search because of its iterative nature and the need to update the probability distributions [27]. Additionally, the performance of Bayesian search depends heavily on the choice of the prior distribution and the acquisition function used to guide the search. The training process was repeated at least 60 times for each combination of ANN structure and window size, using the Adam optimization algorithm to learn the connection weights of the networks and the parameters of the LSTM and BiLSTM memories, while the Bayesian search selected the hyperparameters. Each training session lasted 30 epochs.
One epoch is a training cycle over all the training dataset samples. The selected initial learning rate was 0.001, with one learning rate drop by a factor of 0.2 after 20 epochs. The learning rate reduction enables more accurate minimization around the selected minimum point of the cost function. A gradient threshold of 1 was also set to reduce the effect of large errors at the start of the learning process. The data samples were shuffled before each epoch to improve the stability of the learning process. This article presents the results of 104 combinations of ANN structure and sample window size, which required 6240 training sessions. The RMSE and processing duration were tested for each pair of ANN type and sample window size after training. The test set is processed using a central processing unit (CPU) only and a batch size of 1, which contains the sensor data that fills the selected sample window. One Nvidia GeForce RTX 2080 Ti GPU was used for training with mini-batch sizes of up to 10922 samples; the mini-batch size was limited by the amount of GPU memory and was therefore calculated by dividing 32 768 by the selected sample window size. One computing core of the Intel i7-8700K central processor was used for processing duration testing with single-sample mini-batches, as in real-time processing. One core was used because it closely resembles the processing on the RTTM if no multi-threading is supported. The RTTM processor has lower core clocks and a different CPU architecture and instruction set, so the results are comparable only among these tests. A Simulink implementation on the same CPU would further reduce the computing duration, while an implementation on the RTTM would increase the processing duration. The average processing duration is calculated by dividing the test set processing time by the number of samples in it. Overfitting is possible when training ANNs. Overfitting the training data usually results in an RMSE increase on the validation data while the RMSE on the training data decreases. Several measures were adopted to prevent overfitting. First, the checkpoint with the best RMSE on the validation data is recorded for each Bayesian iteration. Second, dropout layers were introduced before the output FC layer in each ANN, and earlier in the MLP-LSTM structure. Thus, the risk of overfitting is reduced as much as possible for the current dataset. The MATLAB code for VS training and testing is available at: https://github.com/eldux/UMvelocityVSresearch.
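The stated training settings map onto a standard training loop. The sketch below is our own PyTorch-style paraphrase of the MATLAB setup (Adam, initial learning rate 0.001, a single drop by a factor of 0.2 after 20 epochs, a gradient threshold of 1 approximated here by gradient-norm clipping, reshuffling every epoch, 30 epochs, and checkpointing on validation RMSE); it is not the authors' code.

```python
import torch

@torch.no_grad()
def evaluate_rmse(model, loader):
    model.eval()
    se, n = 0.0, 0
    for x, y in loader:
        se += torch.sum((model(x) - y) ** 2).item()
        n += y.numel()
    return (se / n) ** 0.5

def train(model, train_loader, val_loader, epochs=30):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[20], gamma=0.2)
    loss_fn = torch.nn.MSELoss()
    best_rmse, best_state = float("inf"), None
    for _ in range(epochs):                        # the loader reshuffles the data each epoch
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            opt.step()
        sched.step()
        rmse = evaluate_rmse(model, val_loader)    # checkpoint on validation RMSE
        if rmse < best_rmse:
            best_rmse = rmse
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    return best_state, best_rmse
```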
A. ANN TYPE AND SAMPLE WINDOW SIZE TESTING RESULTS The performance metrics achieved by the simplest MLP, with one hidden FC layer, after hyperparameter search and learning with window sizes ranging from 3 to 51 samples are shown in Table 2. For these networks, the RMSE ranged from 0.0235 to 0.0261 m/s, and the processing time ranged from 0.339 to 0.411 ms/sample. The processing duration is not strongly correlated with the number of neuron units in the FC layer or with the window size. This is due to hardware limitations caused by the latency of data transfer between memory and the computational units. The best performance for an MLP with one hidden layer was achieved using a window size of 51 samples: an RMSE of 0.0235 m/s and a processing duration of 0.391 ms/sample. This model contained 752 hidden neurons. Next, an MLP with two hidden layers was tested. Its performance metrics are listed in Table 3. The RMSE ranges from 0.0218 to 0.0256 m/s and the processing duration from 0.404 to 0.507 ms/sample. The highest accuracy, an RMSE of 0.0218 m/s, was achieved using a window size of 51 input samples with a processing time of 0.404 ms/sample. The model contained 1118 hidden neurons. Next, an MLP with three hidden FC layers was tested. Its performance is shown in Table 4. The RMSE ranges from 0.0218 to 0.0246 m/s, while the duration ranges from 0.350 to 0.6 ms/sample. The best model achieved an RMSE of 0.0218 m/s and a processing duration of 0.350 ms/sample with a window size of 51 samples. This model has 1222 hidden neurons and is faster than the MLP with 2 hidden layers while providing the same RMSE. The number of neurons in the second layer (FC2) is 50, which is noticeably lower than in the first (FC1) and third (FC3) layers. This hourglass-like shape reduces the number of connections between layers and leads to faster processing. Next, a DNN with 2 CNN layers was tested.
It should be pointed out that, for all tested structures, the RMSE difference across input window sizes from 3 to 51 samples was within 18% of the minimum RMSE. The performance of the DNN with 2 CNN layers is shown in Table 5. The RMSE ranges from 0.0210 to 0.0234 m/s and the processing duration from 0.365 to 0.620 ms/sample. The best RMSE of 0.0210 m/s was achieved for window sizes of 21, 23, and 35 samples; among them, the one with an input sample window size of 23 has the shortest processing duration of 0.421 ms/sample. This model achieved a 3.67% reduction in RMSE but was 20.3% slower than the MLP with 3 hidden FC layers. It has 195 convolutional units and 94 FC neuron units. The processing time was within acceptable limits, while the performance was improved. Next, the 3-layer CNN was tested. The performance of the DNN with LSTM is shown in Table 7. The RMSE ranges from 0.0226 to 0.0244 m/s and the processing duration from 0.512 to 1.079 ms/sample. The best accuracy of 0.0226 m/s was achieved for a 35-sample window with a processing duration of 1.079 ms/sample. This RMSE is 0.0005 m/s higher than that of the best CNN-based DNN. The model has 189 LSTM and 222 FC neuron units. Its overall performance is worse than that of the best DNNs with 2 or 3 convolutional layers and 2 or 3 hidden FC layers, but better than that of the ANN with one hidden FC layer. Next, the BiLSTM was tested. The performance of the BiLSTM is shown in Table 8. The RMSE ranged from 0.0225 to 0.0244 m/s and the duration from 0.564 to 1.019 ms/sample. The best accuracy of 0.0225 m/s was achieved for a sample window size of 35 with a processing duration of 0.986 ms/sample. This model has 98 BiLSTM and 238 FC neuron units. Its RMSE is 0.0001 m/s lower than that of the best LSTM but 0.0015 m/s higher than that of the best DNN with 2 convolutional layers. It estimates more accurately than an ANN with one hidden FC layer. It appears that bidirectional signal propagation provides only a small RMSE advantage of 0.0001 m/s in the case of the VS, so the BiLSTM was not investigated further. The performance of the DNN based on the MLP-LSTM combination is shown in Table 9. The RMSE ranges from 0.0229 to 0.0240 m/s and the duration from 0.478 to 1.151 ms/sample. The best accuracy of 0.0229 m/s was obtained for 21- and 23-sample window sizes, with the shortest processing duration of 0.568 ms/sample for the 21-sample window. This model has 511 FC neuron units and 68 LSTM units. The processing time of the MLP-LSTM appears to be longer than that of the MLP with 2 hidden FC layers without improving the RMSE. The Bayesian search resulted in lower LSTM unit counts for all investigated window sizes compared with the LSTM and BiLSTM, and on average in shorter processing times. The accuracy was worse than that of the best ANNs with LSTM/BiLSTM, CNN, and 2 or 3 hidden FC layers, but better than that of the ANN with one hidden FC layer. Because the developed VS for the UM uses data from the SM, high-frequency oscillations are not reconstructed, as they are not transmitted to the SM (due to tire and suspension damping). Lower-frequency signals are more important for suspension control, so the performance of the developed VS is sufficient. In summary, the best-performing VS was based on the DNN with 2 convolutional layers using an input window size of 23 samples, and the second-best was based on the DNN with 3 convolutional layers using an input window size of 19 samples.
B. RESULTS OF INPUT IMPORTANCE TESTING The importance of the inputs was tested by setting each input, one by one, to 0 or 1 and recording the change in the RMSE on the test set. A value of 0 is used because it corresponds to no input, and 1 is used because it equals one standard deviation after normalization. The results are shown in Table 10. They show that the IMU roll rate is the most important input signal for estimating the UM vertical velocity. Other important signals include the IMU yaw rate, the accelerations along the X, Y, and Z axes, the optimized steering angle, and the front-left wheel velocity. The slip angle, the longitudinal and transversal velocities, the FR, RL, and RR wheel velocities, the driver torque request, and the master-cylinder pressure are less important. The DSC regulation signal could be omitted, as it has no measurable impact. However, there is no signal that, when set to zero or one, improves performance. Redundant velocity readings from the wheel velocity sensors and transversal velocity readings from the OFS sensor reduce the effects of signal and measurement noise, as removing these signals increases the RMSE. Yet if the OFS cannot be used, it can be discarded, as the impact on performance would be manageable and the model could rely more on the IMU and CAN data instead. A minimal code sketch of this procedure is given at the end of this section.

C. RESULTS OF NO INPUT MEAN COMPENSATION To better understand how mean-value compensation of the input signals affects performance, additional tests were performed in which each signal was normalized only by its standard deviation. In the baseline procedure, the mean and standard deviation were calculated using only the training data; the mean was then subtracted from the validation and test data, and the result was divided by the standard deviation to bring all signals to approximately the same scale. Table 11 shows the performance obtained with various sample window sizes, compared with the results in Table 5. The lowest RMSE was obtained with window sizes of 21 and 27 samples; the processing duration for the 27-sample window was lower, 0.425 ms/sample. The RMSE range of 0.0210 to 0.0233 m/s was similar to that achieved by the DNN model with 2 convolutional layers trained on data with mean subtraction. The computation time was not affected by omitting the mean compensation. These results show that there is no significant difference between using and not using mean compensation.
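The input-importance procedure of Section B can be expressed compactly as follows. The sketch assumes a trained model and test tensors shaped (samples, signals, window); the signal names are placeholders, not the exact channel list of Table 1:

```python
# Minimal sketch of the input-importance test: each input channel is
# overwritten with a constant (0 = "no input", 1 = one standard deviation
# after normalization) and the change in test RMSE is recorded.
# `model`, `x_test`, and `y_test` are assumed to come from the training pipeline.
import torch

def rmse(model, x, y):
    with torch.no_grad():
        pred = model(x).reshape(-1)
        return torch.sqrt(torch.mean((pred - y.reshape(-1)) ** 2)).item()

def input_importance(model, x_test, y_test, signal_names, fill_values=(0.0, 1.0)):
    baseline = rmse(model, x_test, y_test)
    rows = []
    for idx, name in enumerate(signal_names):
        for v in fill_values:
            x_mod = x_test.clone()
            x_mod[:, idx, :] = v           # overwrite one signal everywhere
            rows.append((name, v, rmse(model, x_mod, y_test) - baseline))
    # A larger positive delta means the model depends more on that signal.
    return sorted(rows, key=lambda r: r[2], reverse=True)

# Example call (placeholder signal names):
# for name, fill, delta in input_importance(model, x_test, y_test,
#                                           ["roll_rate", "yaw_rate", "acc_z"]):
#     print(f"{name:>12s} fill={fill:.0f}  dRMSE={delta:+.4f} m/s")
```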
IV. DISCUSSION AND CONCLUSION The real-world dataset developed for this study provides a valuable resource for future research and industrial implementation of ANN-based VSs. It addresses the previous lack of such datasets for supervised DL. The study of the optimal sampling window length for the input signals and of the ANN hyperparameters contributes to understanding how ANN models can be tuned to achieve maximum VS efficiency and accuracy in the estimation of the UM vertical velocity. The results suggest that DNNs with convolutional layers have the greatest potential to achieve this goal and outperform the other DNN types, providing the lowest RMSE of 0.0210 m/s and a sufficiently short processing time of 0.421 ms/sample with a window size of 23 samples. A closer examination of the ground-truth and estimated signals showed that the CNN-based VS rejects higher-frequency oscillations together with process and measurement noise while preserving average values. This is because SM signals, which are already damped by the suspension system, are used as inputs. In addition, it was found that the VS relies primarily on the roll rate provided by the IMU. None of the signals can be removed to achieve better performance, although the DSC regulation signal can be removed without loss of accuracy. Other tests showed that input-signal mean subtraction has little impact on performance.

In summary, our results represent a significant advance in data-driven VSs for vehicle suspension control, especially for UM vertical velocity estimation. As one of the most important inputs for suspension control, accurate estimation of the vertical velocities of the UM and SM can significantly improve vehicle comfort, handling, and stability without the need for costly physical sensors. The findings of this research can accelerate the development of VSs for UM vertical velocity estimation.

Future research should further optimize the ANN with convolutional layers to achieve even better accuracy and implement it on an in-vehicle RTTM, such as the dSPACE MicroAutoBox, while maintaining a processing time of less than 10 ms. Such integration would allow real-time testing of the created algorithms and analysis of the impact on comfort when using the VS instead of physical sensors. For this purpose, a quarter-car test setup or a demonstrator vehicle can also be used.

FIGURE 3. The internal structure of one LSTM unit.

FIGURE 4. UM vertical velocity estimation RMSE using various sample window sizes with all tested ANNs. The RMSE difference between LSTM and BiLSTM was small, and the dependence on window size was very similar. The estimated FL UM vertical velocity and the ground-truth signals are shown in Figure 5; the estimation was made using the VS based on the DNN with 2 convolutional layers. The estimate of the lower-frequency, high-amplitude UM vertical velocity is shown in Figure 5(a), and the estimate of the higher-frequency UM vertical velocity in Figure 5(b). The VS estimation tracks the changes at lower frequencies and higher amplitudes more closely while rejecting high-frequency oscillations.

FIGURE 5. Virtual Sensor and ground truth comparison: a) lower-frequency signal example; b) higher-frequency signal example.

TABLE 1. Input signals, sources, and statistical characteristics.

TABLE 2. Metrics of 1 hidden layer MLP with various sample window sizes.

TABLE 3. Metrics of 2 hidden layers MLP with various sample window sizes.
TABLE 4. Metrics of 3 hidden layers MLP with various sample window sizes.

TABLE 5. Metrics of 2-layer CNN with various sample window sizes.

The performance of the DNN with 3 CNN layers is shown in Table 6. The RMSE ranges from 0.0215 to 0.0231 m/s, and the processing duration from 0.429 to 0.604 ms/sample. The best accuracy of 0.0215 m/s was achieved for window sizes of 19, 21, 27, and 35 samples, with the shortest processing duration of 0.525 ms/sample for the 19-sample window. That model has 324 convolutional neurons and 200 FC neuron units. The overall performance was worse than that of the DNN with 2 CNN layers; therefore, DNNs with more than 3 CNN layers were not tested. Next, the DNN with LSTM was tested.

TABLE 6. Metrics of 3-layer CNN with various sample window sizes.

TABLE 7. Metrics of LSTM with various sample window sizes.

TABLE 8. Metrics of BiLSTM with various sample window sizes.

TABLE 9. Metrics of MLP-LSTM with various sample window sizes.

Figure 4 compares the RMSE for all window sizes in one graph. The most promising window sizes are 19 to 35 samples. MLP, LSTM, and BiLSTM show decreasing RMSE up to a 35-sample window.

TABLE 10. Input signal importance by change of test RMSE.

TABLE 11. Metrics of 2-layer CNN with various sample window sizes and no input mean subtraction.
10,879.4
2024-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Using Virtual Laboratories as Interactive Textbooks : Studies on Blended Learning in Biotechnology Classrooms Virtual laboratories, an ICT-based initiative, are a new venture that is becoming more prevalent in universities for improving classroom education. With geographically remote and economically constrained institutes in India as the focus, we developed web-based virtual labs for virtualizing wet-lab techniques and experiments with the aid of graphics-based animations, mathematical simulators, and remotely triggered experimentation. In this paper, we analysed the perceived usefulness of Biotechnology virtual labs amongst student groups and their role in improving students' performance when introduced as a learning tool in a blended classroom scenario. A pedagogical survey, via workshops and online feedback, was carried out among 600 university-level students and 100 remote users from various Indian universities. Comparing learning groups that used a blended learning approach against a control group (traditional classroom methods) and an experimental group (teacher-mediated virtual labs), our studies indicate augmented academic performance among students in blended environments. Findings also indicated that the use of remotely triggered labs helped enhance interaction-based lab education, enabling anytime-anywhere student participation scenarios.

Introduction The current challenging era of technological innovations has provided a new pedagogical and economic platform that creates a synergy between teaching and learning in the educational system [1]. The style of learning in universities has experienced a paradigm shift from the conventional method of knowledge transfer, where teachers typically use blackboard-and-chalk and textbook-based teaching [2]. Education systems now look to computer-instructed modes to promote an active blended learning process, allowing Information and Communication Technology (ICT)-enabled techniques to be integrated into traditional classroom learning [3], [4]. Developing e-learning platforms for visualizing complex biological concepts is thought to be a promising approach for effective perception of biological processes [5]. Such technologies have become a tool to overcome geographical barriers and thus help everybody to learn anytime-anywhere in the absence of an instructor [6], [7]. It has been shown that students in higher educational institutions who additionally used e-learning tools to aid their education performed better than those in a face-to-face traditional classroom scenario [8]. Previous studies revealed that e-learning played an important role in diverse regions such as India, where traditional lab facilities at universities were not well localized to suit the requirements of all subregions [6], [9].

ICT-enabled virtual labs, an innovation in technology, have proven to be a powerful tool that offers innovations in learning through shared interactions to promote the learning process [10]. The use of virtual laboratories in education has been reported for over 20 years, but their application has grown prominently in the last 5 years to overcome the difficulties faced in a traditional classroom scenario [11], [12]. Several studies on virtual laboratories have been reported recently [13][14][15][16][17].
Governments and educational organizations are now taking initiatives to set up virtual laboratories as e-learning repositories to augment current learning infrastructure [18]. To students, virtual labs are seen as a personalized learning environment via ICT-mediated visual graphics such as animations and user-interactive simulations [19]. From a teacher's perspective, the use of advanced computer technologies in a traditional classroom provides a prominent platform for modelling student participation, where teachers can monitor the constructivist learning of the students in a better way [20]. It plays a pivotal role in bridging the lack of lab facilities and devising individual experience at a low cost, and it thus increases the chances of self-organized learning strategies [21]. This ultimately imparts analytical thinking skills to the learners [22].

Blended learning, which combines the traditional classroom scenario with the use of computer technologies, is becoming a new approach in university education [23]. Integration of a 'blended learning' approach in the curriculum has been shown to extract the advantages of both traditional and e-learning environments, as teachers and researchers of various educational institutions have reported [24]. Such learning processes have emerged as a novel trend and are described as a new educational paradigm. Blended learning has been reported as an alternative solution for overcoming problems faced in a traditional classroom, such as time constraints, distance, sharing of costly equipment, shortage of chemicals and reagents, etc. [25]. Students using blended education in universities have been observed to gain advantages in problem-solving skills, time management, and sharing of information, which improves the quality of their learning [26]. VL environments also provide improved individualized learning that helps to meet the needs of both urban areas and economically and geographically challenged rural areas with a high level of flexibility, and they reduce concerns regarding the cost of laboratory set-up [27].

Many universities and research centres have included their own virtual laboratories in both science and engineering fields to facilitate an autonomous learning model [6], [13], [28]. Biotechnology, a growing field in the life sciences, is becoming more popular and has led to new advancements in many areas such as medical diagnostic tests, industrial-scale testing, and agricultural research [29]. Through the virtual learning platform, biologists can tackle the challenges in universities through quantitative experiments and mathematical models [13]. These cost-effective virtual laboratories train students on the sophisticated and complicated instruments routinely employed in modern biological and chemical laboratories. Multi-campus scenarios, as in some universities offering cross-disciplinary courses, also need to exploit extensive e-learning facilities [30][31].

Typical methods to predict user acceptance and behaviour in information technology and e-learning are TAM- and OER-based questions [32]. Such questions include a set of constructs to study student behavioural intentions, attitude, and other cognitive constructs, namely perceived ease of use and perceived usefulness of the e-learning platform. OER-based feedback surveys also helped to assess usage roles in applying virtual labs amongst different users [33].
In this paper, we focus on the role of virtual labs in blended learning and on an online platform through which freely available content-rich materials, including animations, simulations, and remotely triggered experiments, were accessed by student groups so that their learning behaviour could be analysed. We also tested a hybrid approach of including virtual labs in the curriculum for enhancing students' academic performance in a blended learning classroom environment.

Virtual and remote triggered laboratories in Biotechnology Like most science courses, biotechnology courses require continuous syllabus updates and the use of complex laboratory techniques, sophisticated instruments, and well-standardized protocols [13], [34]. The most common courses on which biotechnology programs focus at the university level in India include immunology, cell biology, molecular biology, microbiology, biochemistry, population ecology, and biophysics. Significant advances in research have made laboratory experience a key factor of active learning for biotechnology education [35]. However, previous studies [36] reported many limitations in performing real labs, especially in developing countries. Real lab courses in a curriculum are limited to about 2-3 hours per week, which is a major concern for students trying to grasp the experiment correctly [37][38]. In addition, inadequate power supply, lack of costly reagents and equipment, issues with the use of experimental animals, and other personnel-safety-related issues [27] were the most critical problems in universities that were not well localized. Recent pedagogical surveys have reported that adopting ICT-enabled e-learning tools such as virtual labs in education could overcome some of the difficulties faced by traditional labs to a great extent [13] [32]. The techniques included are in the form of animations, simulations, emulations, haptics, videos, and remotely triggered experiments [39][40], which help to give the virtual laboratory a close semblance to a traditional laboratory setup (Figure 1A and Figure 1B). This realistic representation, together with technological advancement, has found various applications in the modern education system [40][41][42]. Remotely triggered laboratories are a new venture to enhance laboratory education. These labs are a hybrid approach that provides real access to costly lab equipment and experiments through the internet. Remotely triggered experiments are designed in such a way that users can control the remote lab setup with the aid of an interface window that can be viewed through browsers (Figure 3A and Figure 3B). In order to reduce inconsistencies, a slot-booking system was implemented to reserve a particular remotely triggered experiment for a specific user and time slot.

Methodology This work was conducted on a group of 600 undergraduate and postgraduate students (500 students who participated in virtual lab workshops conducted in 2014-2015 and 100 online student users) of different Indian universities offering a Biotechnology course.
Analysis of the impact of virtualization on the learning process In this case study, the data to estimate the impact of visualization were collected via a one-day seminar and workshop on "The use of ICT in education" on February 27, 2015. The demonstration and hands-on session were followed by a set of questionnaires to evaluate their significant impacts on the learning process. Both qualitative and quantitative analyses of content quality, ease of use of the material, and extended use of technologies in education were carried out amongst the student groups of a geographically remote and financially challenged institute in India. The feedback survey included the statements listed in Table 1. The participants gave their responses by marking Yes/No to the respective analysis questions. The responses from the students were tabulated and recorded for further study.

Analyzing the impact of different learning approaches (traditional learning, virtual lab learning, and blended learning) - a case study based on Microbiology virtual labs The study was performed via organized workshops to analyze the impact of the different approaches on enhancing students' performance in a classroom.

User Data We employed pre-test (a test before performing the VL) and post-test (a test after performing the VL) analysis in this study to evaluate the participating groups.

Student Groups The participants were divided into three groups: a Control Group (CG) comprising 250 students who were subjected to traditional classroom-based learning, and an Experimental Group (EG) comprising 250 students who accessed the virtual lab platform for their learning process without the help of an instructor. The third group is the Blended Learning group, which comprises 500 students (including CG and EG) who were subjected to both traditional learning and a teacher-mediated virtual lab learning process.

Our preliminary study was to analyze whether the students (control group) could learn theoretical and experimental concepts through a traditional method of teaching. The overall time period for the learning process was limited to 3 hours. As a first step, a 30-minute chalk-and-board lecture on the experiment "Carbohydrate Fermentation Test" in the Microbiology Lab was provided to the group of students. A further 30 minutes were provided to follow a standard laboratory textbook for learning the theoretical and practical background of the experiment. In the next step, they practiced real laboratory techniques, learning the different fermentation patterns of carbohydrates produced by different classes of microorganisms such as Staphylococci, E. coli, Proteus, etc., within the time limit provided (2 hours). They were then subjected to a class test (pre-test) with a set of questions regarding the fermentation media preparation and the fermentation patterns produced by different microorganisms. The individual performance reports were noted and tabulated for further analysis. During this study, the difficulties/problems reported by individual students were also recorded. Later, the same group of students (control group) was asked to learn the same experiment using virtual labs as a learning platform, without the help of an instructor. After the virtual lab experience, a post-test was conducted with the same set of questions as in the pre-test. The performance level of students in each test was noted for further analysis.
In the next step, the experimental group comprising 250 students was allowed to learn the experimental concepts and procedures using the virtual lab learning platform, without the help of an instructor. The study participants were asked to perform the 'Carbohydrate Fermentation Test' using the content-rich virtual lab material (Figure 4).

Figure 4. Content-rich learning resources available in virtual laboratories

The time for completing the virtual lab exercises was 2 hours (Figure 5). An examination was conducted for the participants with the same set of questions as given to the control group. The individual scores were tabulated for further analysis.

In the next step, both the CG and EG groups were subjected to a blended learning approach. In this scenario, the participants were first subjected to a traditional way of teaching, followed by a teacher-mediated virtual-lab-based learning method. An examination was conducted to evaluate the students' performance, as in the control group and experimental group. The performance level of students in each test was noted for further analysis (see Table 2 for the examination questions for the control group, experimental group, and blended learning group).

Analyzing the role of remote labs in supporting science education As a part of the study, a solar panel experiment was used as a learning exercise for a group of 100 online student users from various regions of India. They were allowed to use the solar panel over the internet. A slot-booking system was enabled to reduce time-management issues related to remotely controlled systems. Questionnaire-based online feedback was collected to analyze the effectiveness of adding remote labs to the curriculum and their significant impact on enhancing students' learning process. The questions for analysis are shown in Table 3. The participants marked their choices, and individual ratings were recorded.
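The pre-test/post-test design described above lends itself to a simple paired comparison. The following sketch uses hypothetical scores and standard t-tests purely for illustration; it is not the analysis actually reported in the Results section, which is presented as score ranges and percentages:

```python
# Minimal sketch (not the authors' analysis): comparing pre-test and post-test
# scores of the same student group with a paired t-test, and two different
# groups with an independent-samples t-test. All scores are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre  = rng.normal(55, 12, size=250)          # control-group pre-test marks (%)
post = pre + rng.normal(10, 8, size=250)     # same students after virtual-lab use

t_paired, p_paired = stats.ttest_rel(post, pre)
print(f"paired t = {t_paired:.2f}, p = {p_paired:.3g}")

experimental = rng.normal(62, 11, size=250)  # virtual-lab-only group marks (%)
blended      = rng.normal(70, 10, size=500)  # blended-learning group marks (%)
t_ind, p_ind = stats.ttest_ind(blended, experimental, equal_var=False)
print(f"independent t = {t_ind:.2f}, p = {p_ind:.3g}")
```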
Visualization techniques using virtual labs improve student attention in a classroom To analyze the content quality of virtual labs, a feedback survey was conducted amongst the student participants (Table 1). Analysis of the feedback survey showed that 90% of the students indicated that the virtual labs (VL) learning platform helped them to integrate theory and experiment in a better way. Also, 88% of them suggested that they gained a clear understanding of the experiment and related topics by using the VL system in their learning process. Moreover, 85% of the participants agreed that VL provided a higher level of engagement in their studies, making it a complementary tool for making education more interesting and easier. Nearly 95% of them reported that they could easily interpret the results of the experiment through user interaction and that it reduced their problems in laboratory education (Figure 6). However, 10-15% of the participants reported facing difficulties in using virtual labs due to a lack of computer literacy (comments from the feedback). (See Table 1 for the analysis questions.)

Feedback data were also collected from the participants to analyze the ease of use of the learning material provided by the virtual lab education platform (see Table 1 for the analysis questions). Among the student participants, 95% suggested that they could easily repeat all the steps in the animation and simulation techniques. 90% indicated they were able to follow the virtual lab experiments even without the help of a teacher or lab instructor. Nearly 87% of the students indicated that the virtualization of the experimental procedure was motivating enough to engage them in their laboratory education. Also, 90% indicated that they noticed or made mistakes while performing the virtual experiment and suggested that this could reduce the typical mistakes that could occur in a real lab scenario (Figure 7).

Figure 7. Analysis of ease of use of the virtual lab material

In addition, the feedback reports suggested that the extended use of ICT-enabled technologies has a significant role in enhancing the education process. Feedback from the surveys amongst the students showed that the clicking component (interactive sessions) of the simulator was self-indicative and helped them to improve their laboratory skills. Also, 100% of them agreed that the animated experiments presented the details of the experiment in a step-by-step approach that helped them to understand the experimental setup in a better way. This indicated that most of the students preferred both animations and simulations in their learning process. 95% of them completed the simulation-based experiments despite making mistakes, since they could repeat the experiment several times without loss of chemicals, equipment damage, etc. Moreover, 90% indicated that animations and simulations gave them an actual feel of a real lab, thus facilitating laboratory experiences anytime-anywhere (Figure 8).
Blended learning with ICT-enabled virtual labs - role of explicit interactions in enhancing students' academic performance Blended learning with ICT-enabled virtual labs improved the students' performance level. The statistics showed a significant difference between the pre-test and post-test scores of the control group. 4% (n=10) of the control group were able to score marks in the range of 90-99% in the post-test, whereas the same users did not score as much in their pre-test evaluations. Most of the control group (n=225) scored more than 60% in the post-test evaluations, thus improving the class average over the pre-test scenario (Table 4). The study suggested the role of virtual labs as an augmented laboratory education tool for making education more effective.

The study was extended to analyze the significant impact of including virtual labs in the university curriculum. A comparative analysis of the percentage of marks scored by the students in the experimental group and the blended learning group was also tabulated (Table 5). The statistics showed that the students performed better with the blended learning approach than with the virtual-lab-only learning platform. This suggested that they could use virtual labs as an interactive textbook in addition to traditional classroom learning, which helped them to improve their average performance level in the examinations.

Remote labs as better online interactive textbooks Questionnaire-based feedback data were collected after deploying the remote solar panel experiment. The survey (see Table 3 for the analysis questions) amongst the students showed that 75% of them were able to operate the remote equipment easily without the help of an instructor, while 15% indicated that they faced difficulties while performing the experiment. Moreover, 10% of the participants suggested that further improvements in the remote experimentation are needed to make it more functional for their learning process (Figure 9). Also, 35% of the participants rated the remote experiments as similar to the real lab scenario, which could help them in their learning process remotely. 30% indicated that remote labs provide an alternative working environment with controls over the internet, while 25% of the students suggested that more improvements are needed to bring the experiment into closer proximity to the real lab setup. 10% indicated that remote labs can never replace real labs, owing to the difficulty in operating the remote experiments and equipment because of server issues (Figure 10). The feedback analysis also indicated students' preferences for using remotely triggered labs in their learning process. From the participant feedback, 87% suggested that remotely triggered labs are useful as pre-lab material for making laboratory education more effective, while 13% suggested that they could use them as post-lab material after completing the real lab experiments (Figure 11). This would help them to repeatedly use the remote laboratory equipment without any damage- or cost-related issues. Moreover, 72% of the participants confirmed that they were able to operate the remote experiments without the physical presence of instructors, whereas 28% reported that
they needed an instructor to operate the equipment (Figure 12), since they experienced several issues during the experimental process (Kumar et al., manuscript submitted). Among the students who participated in the online remotely triggered lab workshop, 82% suggested that technologies like remote triggering of lab equipment were helpful in their classroom education scenario, whereas 18% did not favour the use of such tools in blended learning due to the lack of internet access at remote places (Figure 13).

Discussion In this diffusion-based case study, we analyzed the impact of virtualization techniques such as animation and simulation on enhancing the learning process of the student groups, specifically across various Indian universities. Overall, the studies indicated that the virtual lab demonstrations were a new venture in the field of education, as they add a new dimension to visual learning. We also used a pre-test and post-test examination pattern to analyze whether the virtual lab platform has a significant role in increasing students' performance levels in classroom education. The results suggested that the content-rich learning materials provided by the virtual labs helped the students to understand the basic concepts of the experiments in a better way. The blended learning approach was tested because of the direct interaction between teachers and students in a classroom. The evaluation scores of the experimental group and the blended learning group showed that the use of virtual labs as supplementary learning material, and their subsequent use in the curriculum, ensured better performance during evaluations. The results suggested that the students who used virtual labs in addition to traditional laboratory studies performed better in the examination than those who went through the typical classroom approach alone. The incorporation of hybrid approaches using virtual labs along with classroom laboratory education allowed significant improvements in student academic performance. During the workshops, some of the students gave comments such as: "The opportunity for manipulating or changing the parameters of the experiment according to our wish is something that I found very interesting in virtual labs." "Virtual labs provided a wide range of possible results, as in the experiment Selective and Differential media of microorganism, which gave a feeling of trying out different possible reactions of microorganisms. Such possibilities would be less in a wet lab, augmenting virtual labs as a supplementary education tool." "Vivid presentation in an interactive way itself is interesting. Curiosity helps to complete the experiment in an easy way." Also, a professor who participated in the teacher workshop indicated that "The workshop was very useful and I am sure it will attract students also. Less time, no chemicals, no pollution involved. Hence a green approach, a well-organized platform for learning. The system is highly beneficial for improving the quality of education in rural areas."
User interaction and learner satisfaction were the primary challenges in constructing successful remotely triggered laboratories. We also evaluated the role of remotely triggered labs in a blended learning system. Most users suggested that the use of advanced technologies like remote triggering was helpful in the classroom education scenario, serving as pre-lab material for enhancing laboratory education. Most students were able to control the remote experiments without any difficulties, thus giving a user-friendly outlook to modern laboratory education. The repeated use of the lab equipment has a potential role in adaptive learning, improving the level of performance in the classroom. This suggested that implementing remote labs in a blended learning approach could reduce many of the economic and financial issues faced by universities in India. Several internet-related issues were also a major concern for the successful delivery of such ICT-enabled techniques, especially in remote locations. Although these initial results suggested that virtual labs are effective, the study is being extended to understand the interaction of social, cognitive, and teaching presences in a virtual scene and within traditional blended learning environments.

Conclusion Virtual labs are becoming a predominant classroom component for experimental studies in many universities, including those in India and other developing nations. The emerging achievements of virtual labs in creating online Biotechnology courses need further research. We foresee this virtual lab system as a version of an interactive textbook. Interactive mechanisms in such environments seem to aid student usage and enhance student participation in blended and remote learning scenarios. The virtual lab project is already freely available online for public use via http://amrita.vlab.co.in/ and can be accessed after signing up with a Google or OpenID account.

Figure 1A. Graphical illustration of quadrant streaking. Figure 1B. Quadrant streaking in a real lab scenario. Simulation-based lab platforms need a high degree of interactivity between users and computers. Such labs are thought to be bio-realistic models that create a synergy between biology and mathematical equations (Figure 2A and Figure 2B).

Figure 2A. Simulated light microscope. Figure 2B. Light microscope in a real lab scenario.

Figure 3A. Hysteresis loop remote live video. Figure 3B. Performing the hysteresis loop in a real lab.

Figure 5. UG student using virtual labs as a learning platform.

Figure 6. Analysis of content quality of virtual labs (see Table 1 for the analysis questions).

Figure 8. ICT-enabled technologies (animation and simulation) in laboratory education.

Figure 9. Analysis of remote equipment control.

Figure 10. Students' responses comparing the remote labs to a real lab scenario.

Table 1. Analysis of virtualization techniques in enhancing education.

Table 2. Examination questions for control group, experimental group, and blended learning group.

Table 3. Analyzing the role of remote labs in the education platform.

Table 4. Control group users' performance in pre-test and post-test examinations.

Table 5. Experimental group and blended group users' performance in the examination.
5,681.2
2015-07-14T00:00:00.000
[ "Biology", "Computer Science", "Education" ]
Simulation on the Effect of Bottle Wall Thickness Distribution using Blow Moulding Technique The aim of this study is to assess the deformation behavior of a polymeric material during a blow moulding process. Transient computations of a two-dimensional model of a PP bottle were performed using the ANSYS Polyflow computer code to predict the wall thickness distribution for four different parison diameters: 8 mm, 10 mm, 18 mm, and 20 mm. Effects on the final wall thickness and the time steps are studied. The simulated data show that the inflation performance degrades with increasing parison diameter. It is concluded that the blow moulding process using the 10 mm parison successfully meets the product processing requirements. Factors that contribute to the variation in deformation behaviour of the plastic during the manufacturing process are discussed.

Introduction In today's competitive foundry industry, manufacturers aim for reliable products with characteristics such as light weight, high-quality parts, defect-free output, and minimal lead time, all at a lower level of investment. One of the obstacles to its emergence as a major product on the market is the added cost of fabrication and a largely non-uniform thickness distribution [1]. Blow molding technologies are used in this study to improve the quality of plastic products as a replacement for conventional blow molding techniques. These techniques have many advantages over conventional blow molding, such as higher part quality with a more uniform wall thickness distribution, maintained mechanical properties, lower regrind content, and lower flash weight. Temperature control, blow pressure, time control, thermal properties, and die design are important variables that need to be considered in production to obtain a better final product [2,3]. The design of blow molds and parisons and the specification of process parameters are important and combine science, art, and skill. A small change in die and mold design, die temperature, or blow pressure can greatly affect the molding results, plastic forming behaviour, material parameters, fluid viscosity, and quality of the products [4,5]. To validate these parameters and accelerate design approval, prototype tooling is needed, which is very costly and time-consuming. To reduce lead time and expense, Finite Element Modelling (FEM) analysis is needed [1]. It can predict and virtually assist the blow molding process design and is very useful in supporting the foundry industry, especially in designing new products, redesigning existing products, and detecting defects. By inputting blow pressure and temperature characteristic data, this analysis is able to simulate and visualize the blow molding process for achieving a uniform wall thickness in the final product [6,7]. In blow moulding simulations, numerical models have to take into account large deformations of the material, the evolving contact between tools (mould and stretch rod) and polymer, and temperature gradients [8]. Using this information, the manufacturer can improve the quality of its products, reduce lead time, and avoid unnecessary costs, which eventually makes it more competitive and more profitable [9]. The analysis focuses on simulating the wall thickness distribution and the stress contour [10].
Simulation Work In this study, the ANSYS Polyflow software was used to predict the blow molding behavior for four different parison diameters (8 mm, 10 mm, 18 mm, and 20 mm) in terms of the final wall thickness distribution and the resulting stress contour. Polypropylene (PP) ASTM D4101 was used as the parison material and aluminium as the mould material. PP has excellent moisture resistance, low density, good fatigue resistance, high flexural strength, and good impact strength [11,12]. The PP properties are shown in Table 1. The simulation task is divided into five stages. First, the 2-D models of the mould and of the parison with a circular cross section were developed at the geometry stage. The model is formed as one half of the mould. The bottle geometry was sketched according to the design by Pepliński and Bielinski, 2009 [13]. The designed cavity should meet the requirements of equations 1 and 2. The diameter of the bottle is 100 mm, the depth D is 0.05 to 0.08 mm, the blowing pressure p is 2.0 MPa, and the density is 2.7 g/cc. The simulation continues with the generation of the mesh, with the element size set to 0.001 m; meshing divides the geometry into cells or control volumes. Next, the simulation parameters and the specific materials for the mould and parison were set up. Table 2 shows the assumed parameters considered in this simulation. The next step was to set up the operating conditions of the analysis at the setup stage. Lastly, the behavior of the plastic during the process, the wall thickness distribution, and the stress contour results can be viewed as 2-D graphics at the results stage.
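Once thickness results are exported from the results stage, they can be compared across parison diameters with a short post-processing script. This is a hypothetical sketch: the CSV file names and the two-column layout (axial position, thickness) are assumptions, since the native ANSYS Polyflow output format is not described here:

```python
# Hypothetical post-processing sketch: comparing wall-thickness distributions
# exported from the blow-moulding simulation for several parison diameters.
# File names, column layout, and units are assumptions.
import csv
import statistics

def thickness_stats(path):
    """Read one exported profile: axial position [mm], thickness [mm]."""
    with open(path, newline="") as f:
        rows = [(float(z), float(t)) for z, t in csv.reader(f)]
    values = [t for _, t in rows]
    return min(values), max(values), statistics.pstdev(values)

for d in (8, 10, 18, 20):
    tmin, tmax, spread = thickness_stats(f"thickness_parison_{d}mm.csv")
    print(f"parison {d:2d} mm: min={tmin:.3f} max={tmax:.3f} "
          f"std={spread:.3f} mm (lower std = more uniform wall)")
```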
Table 3 shows the minimum and maximum thickness value.The agreement is fair.Figure 1 (d) shows the distribution of wall thickness in the final bottle and 10mm diameter parison shows the uniform wall distribution and has full contact with mould cavity's wall.The parison is not fully inflated for parison diameter 18 and 20mm because the blow up pressure is too low and blow time is too short.To solve this problem the blow up pressure and blowing time must be increased.Figure 2 shows the graph of comparison thickness between different parison diameters.The distributions graph moving high thickness and then decrease the thickness because it is currently at the middle of the bottle and the wall thickness is uniformly.Then the thickness increase when the parison contact at the bottom of the mould because the bottom of bottle is much strength from the middle of the bottle. Conclusion A 2-D model has been developed for the simulation of blow molding using ANSYS Polyflow software and optimization of the perform shape to minimize the weight of the bottle.The simulation works allows for minimizing the consumption of plastic on the product while retaining some structural assumption and the higher dimension of parison effected the parison thickness at the bottom of bottle because the pressure push the parison to the following shape of bottle.The 10mm diameter shows the best results compare to others diameter because the parison is fully contact on the mould cavities and the shape and cavity of mould appropriate with the material used.Based on the results it can see the bottle wall thickness depends primarily on the shape and dimensions of the cavity mould and varying degrees of individual areas stretching of parison and at different times of contact parison with the mould.At the end it can exception may arise from the relationship between the parison diameter and the material waste obtained in the upper and lower zone of the bottle and also minimization the product thickness in these simulation process. Table 2 . The parameter input for simulation Table 3 . Summary of the results obtained the simulation
1,783.6
2016-02-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Atopy manifestations in pediatric patients with acute lymphoblastic leukemia: correlation assessment with interleukin-4 (IL-4) and IgE level Background Acute lymphoblastic leukemia (ALL) is the most common type of cancer in children under 15 years old and accounts for 25-30% of all childhood cancers. Although conventional chemotherapy regimens are used to improve the overall survival rate, they have been associated with some complications, amongst which allergic manifestations with unknown mechanisms are more common. Methods Our study compared serum IgE and IL-4 concentrations, as hallmarks of allergic responses, in pediatric ALL patients before and after 6 months of intensive (high-dose) chemotherapy, to show whether changes in the levels of these markers may be associated with atopy. Serum levels of IL-4 and IgE were measured using the enzyme-linked immunosorbent assay (ELISA) method. Results The results showed that the levels of IgE and IL-4 increased following chemotherapy in ALL patients both with and without atopy. In addition, post-chemotherapy IgE and IL-4 levels were significantly elevated in patients with atopy compared to those without it. The difference between baseline and post-chemotherapy levels of IgE and IL-4 was significantly higher in patients with atopy compared to those without it. Conclusions To the best of our knowledge, this is the first study to show a connection between post-chemotherapy allergic manifestations in pediatric ALL patients and IL-4 and IgE levels. Flow cytometry analysis of T-helper 2 (Th2) lymphocytes and other allergy-related T cell subsets like Tc2 and Th9, as well as the study of genetic variations in atopy-related genes like IL-4/IL-4R, IL-5, IL-9, IL-13, the high-affinity FcεRI IgE receptor, and HLA genes, is necessary to clearly define the underlying mechanism responsible for post-chemotherapy hypersensitivity reactions in pediatric ALL patients.

Background Acute lymphoblastic leukemia (ALL) is the most common childhood cancer (accounting for about 25-30% of cancers in children under 15 years old) and also the most common type of leukemia (about 80%), characterized by malignant transformation of lymphoid precursors in the bone marrow [1]. Usually, chemotherapy is used as the standard first-line treatment for pediatric ALL. The established treatment protocol includes induction, consolidation, and long-term maintenance, along with CNS prophylaxis given at specified intervals during therapy [1,2]. Although chemotherapy has greatly improved the clinical outcome of patients, the main barrier is post-chemotherapy adverse events, which potentially affect the efficacy of treatment [1,2]. Hypersensitivity is the major infusion reaction observed after chemotherapy, occurring as a result of immune system activation against chemotherapeutic agents [3,4]. However, the rate of these reactions has been reduced remarkably by administration of less immunogenic forms of chemotherapy drugs [3,5,6]. Nevertheless, some patients still develop hypersensitivity reactions for unknown reasons. The underlying mechanism has not been clearly defined, but production of allergy-promoting mediators by the immune system might be implicated in this phenomenon.
IL-4 is the most common cytokine produced by T-helper 2 (Th2) lymphocytes and the key cytokine that regulates Th2 cell polarization [7,8]. Signaling delivered through IL-4/IL-4R promotes STAT3 activation, followed by activation of the c-Maf and GATA-3 Th2-polarizing transcription factors, which further stimulate Th2 cell differentiation and the production of IL-5 and IL-13 as well as IL-4, thereby potentiating Th2 responses [7,8]. In addition, IL-4/IL-4R signaling promotes B cell proliferation and stimulates immunoglobulin class-switching to the IgE antibody, the major antibody in allergic reactions [7,8]. Production of these cytokines by Th2 lymphocytes and other cells accounts for the activation of mast cells, basophils, and eosinophils and for smooth muscle cell contraction, as well as for the stimulation of B cell differentiation into IgE-producing plasma cells, thus promoting several allergic reactions including allergic rhinitis, anaphylaxis, atopic dermatitis, and asthma [7][8][9]. Until now, there has not been enough data on whether the hypersensitivity events in ALL patients depend on IL-4 and IgE production. Therefore, in this study, we aimed to evaluate the allergic manifestations in pediatric patients during intensive (high-dose) chemotherapy and their association with changes in the serum IgE and IL-4 levels during this period.

Patients' characteristics and study design This is a cohort study in which 39 newly diagnosed, untreated pediatric ALL patients, admitted between May 2019 and January 2021 to Amir Oncology Hospital affiliated to Shiraz University of Medical Sciences, were enrolled. All participants had a confirmed diagnosis of ALL (B-ALL/T-ALL) by bone marrow aspiration, biopsy, and flow cytometry and had received a standard-risk or high-risk chemotherapy protocol, adjusted according to the age and total white blood cell count at presentation. All patients received chemotherapy drugs including vincristine, doxorubicin, peg-asparaginase, methotrexate, cytosar, mercaptopurine, thioguanine, and cyclophosphamide during the first 6 months of intensive therapy. Inclusion criteria were newly diagnosed, untreated ALL patients with a negative history of atopy in themselves or their first-degree relatives. Exclusion criteria were a history of previous treatment with chemotherapy agents for any reason and/or a previous history of any rheumatologic or other chronic disease requiring regular medical treatment, as well as congenital or acquired cellular or humoral immunodeficiency disorders. Patients were followed through the first 6 months of intensive (high-dose) chemotherapy for any allergic manifestations, including allergic rhinitis (AR), upper airway hypersensitivity reaction, asthma, urticaria, and eczema. Accordingly, among the included patients, those who showed allergic symptoms at the end of 6 months of high-dose chemotherapy were considered the atopy (+) group. The remaining patients, who did not present allergic symptoms, were considered the atopy (-) group. The laboratory data, including white blood cell (WBC) and platelet (Plt) counts, the percentages of neutrophils, lymphocytes, and eosinophils, and the serum hemoglobin (Hb) level, were measured in all patients at diagnosis and after 6 months of therapy.

Sample collection Five milliliters of peripheral blood were collected prior to chemotherapy onset and before maintenance therapy (about 6 months after the start of intensive chemotherapy treatment).
The serum specimens were isolated from the blood samples by centrifugation (Sigma-Aldrich, USA) at 3000 rpm for 5 min; they were then kept at -80 °C until use.

Quantification of serum IgE Serum IgE was measured using the enzyme-linked immunosorbent assay (ELISA) method (AccuBind®, Monobind Inc., Lake Forest, USA). The sensitivity of the kit was 0.1424 U/ml. The concentration of IgE antibody in unknown samples was calculated based on the standard curve. The OD value at 450 nm was measured for all samples with a spectrophotometer (BioTek Epoch, UK).

Quantification of serum IL-4 cytokine Serum IL-4 was measured by the enzyme-linked immunosorbent assay (ELISA) method (Invitrogen, USA), according to the manufacturer's instructions. The sensitivity of the kit was < 2 pg/ml with an assay range of 7.8-500 pg/mL, and the specificity was 3% (intra-assay) and 4.5% (inter-assay). The OD value at 450 nm was measured for all samples with a spectrophotometer (BioTek Epoch, UK). The concentration of IL-4 cytokine in the serum of patients was calculated according to the standard curve.

Statistical analysis All data were analyzed using IBM Statistical Package for the Social Sciences (SPSS) version 23. Descriptive data are presented as mean ± standard deviation (SD) and percentages. Comparisons of qualitative and quantitative variables between the two groups of patients were performed with the Chi-square test and Student's t-test, respectively. Comparison of the serum levels of IgE and IL-4 at baseline and 6 months after treatment was done with the paired t-test in each group. The Pearson correlation coefficient was calculated for the relationships between quantitative variables. A P-value less than 0.05 was considered statistically significant.

Alteration in the IgE and IL-4 levels after chemotherapy in the atopy (+) and atopy (-) patients The baseline and 6-month values of serum IgE were calculated and compared in the atopy (+) and the atopy (-) patients. The results showed that the level of serum IgE significantly increased in the atopy (+) group after 6 months compared to the baseline level (446.67 ± 113.56 vs. 25.85 ± 20.51, respectively; *P < 0.001) (Fig. 1). The IgE level was also elevated after 6 months in the atopy (-) group, although to a lesser extent (153.94 ± 79.14 vs. 21.05 ± 12.85, respectively; *P < 0.001) (Fig. 1). Comparison of the IgE levels 6 months after chemotherapy indicated that the post-chemotherapy level of IgE was significantly higher in the atopy (+) than in the atopy (-) patients (446.67 ± 113.56 vs. 153.94 ± 79.14; *P < 0.001). The concentration of serum IL-4 was compared between baseline and 6 months after chemotherapy in both groups. Analysis of the results revealed that, similar to serum IgE, the level of IL-4 significantly increased post-chemotherapy in comparison to its baseline level in the atopy (+) group (51.2 ± 35.22 vs. 21.86 ± 7.75, respectively; *P = 0.001) (Fig. 1). Moreover, the concentration of serum IL-4 in the atopy (-) group was significantly raised 6 months after treatment (23 ± 6.04 vs. 20.84 ± 5.86, respectively; *P < 0.001) (Fig. 1). Consistent with serum IgE, the post-chemotherapy level of IL-4 was significantly higher in the atopy (+) patients than in the atopy (-) ones (51.2 ± 35.22 vs. 23 ± 6.04; *P = 0.001).
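The within-group (baseline vs. 6-month) and between-group comparisons described above can be reproduced in a few lines. The sketch below uses hypothetical serum IgE values and arbitrary group sizes; it only illustrates the paired and independent-samples t-tests, not the authors' SPSS workflow:

```python
# Illustrative sketch of the statistical comparisons with hypothetical values:
# a paired t-test for baseline vs. 6-month levels within a group, and an
# independent-samples t-test between the atopy(+) and atopy(-) groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ige_base_pos = rng.normal(26, 20, size=15).clip(min=1)    # atopy(+) baseline IgE (U/ml)
ige_post_pos = rng.normal(447, 114, size=15).clip(min=1)  # atopy(+) after 6 months
ige_post_neg = rng.normal(154, 79, size=24).clip(min=1)   # atopy(-) after 6 months

t_paired, p_paired = stats.ttest_rel(ige_post_pos, ige_base_pos)
t_groups, p_groups = stats.ttest_ind(ige_post_pos, ige_post_neg, equal_var=False)
delta = ige_post_pos - ige_base_pos                        # "change" as defined in the paper
print(f"within atopy(+): t={t_paired:.2f}, p={p_paired:.3g}")
print(f"atopy(+) vs atopy(-) post-chemo: t={t_groups:.2f}, p={p_groups:.3g}")
print(f"mean change in atopy(+): {delta.mean():.1f} U/ml")
```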
Comparison of the change in serum IgE and IL-4 between the atopy (+) and atopy (-) groups The change in the concentration of serum IgE and IL-4 was calculated by subtracting the initial value from the post-chemotherapy value and was considered the "difference" or "change" in serum IgE and IL-4 level over this period. The difference in the concentration of serum IgE and IL-4 was then compared between the atopy (+) and atopy (-) patients. The results demonstrated that the serum IgE level changed significantly more in the atopy (+) than in the atopy (-) patients following chemotherapy (420.82 ± 124.03 vs. 132.89 ± 81.62, respectively; *P < 0.001) (Fig. 2). Similarly, the difference in serum IL-4 was significantly higher in the atopy (+) patients than in the atopy (-) group (29.33 ± 5.58 vs. 2.16 ± 1.71, respectively; *P = 0.001) (Fig. 2). No correlation was observed between the changes in the serum IgE and IL-4 levels in either the atopy (+) or the atopy (-) patients (P > 0.05).

Change in the serum IgE and IL-4 levels and demographic and laboratory data The difference in the levels of IgE and IL-4 was compared between males and females. The serum IgE and IL-4 alterations did not differ significantly between males and females in patients with or without atopy (P > 0.05). No positive correlation was observed between the difference in IgE and IL-4 levels and the age of patients with or without atopy (P > 0.05). In addition, the IgE and IL-4 changes did not correlate with the laboratory data, including the WBC and platelet counts and the percentages of neutrophils, lymphocytes, and eosinophils.

Discussion For many decades, the conventional chemotherapy regimens used to improve the overall survival rate in children with ALL have been connected to different adverse events, amongst which allergic manifestations have received particular attention [3,10]. Even though the effector mechanisms are not clearly identified, IgE antibody (antibody-dependent allergic reactions) and other allergy-related mediators, including the IL-4 cytokine, might be involved in the pathogenesis of chemotherapy-related allergic manifestations. In this study, serum IgE and IL-4 levels were evaluated at baseline and after 6 months of chemotherapy, as hallmarks of post-chemotherapy allergic susceptibility, to show whether changes in their levels are associated with hypersensitivity presentations in pediatric ALL patients during high-dose intensive chemotherapy. Our results showed that the amounts of IgE and IL-4 increased after 6 months in ALL patients both with and without atopy compared with the baseline level in each group, but the 6-month post-chemotherapy levels of both IgE and IL-4 were significantly higher in the atopy (+) than in the atopy (-) group. In addition, the changes in the IgE and IL-4 levels after 6 months were significantly higher in the atopy (+) than in the atopy (-) group. Post-chemotherapy hypersensitivity reactions are a commonly observed feature in cancer patients. It is not clear whether the changes in IL-4 and IgE levels in our study are secondary to immune dysregulation in these patients or whether they are a general reaction against chemotherapy drugs. Although both atopy (+) and atopy (-) ALL patients received the same main treatment protocol, the reason why atopy is limited to some patients is unknown.
Obviously, genetic factors such as particular variants of the IL-4, IL-4R, and IL-13 genes may play a prominent role in the development of allergy [11-14]; therefore, their contribution should be considered carefully. Consistent with this, studies have shown that cytokine variants, including TNF-α −308 G/A, IL-13 and IL-4RA, as well as genetic variation in the IgE receptor, are associated with predisposition to drug-induced allergy [15,16]. Interestingly, recent studies revealed that, in addition to IgE-mediated drug-induced allergic reactions, differences in major histocompatibility complex (MHC) molecules are the main contributor to T cell-dependent drug-induced allergic manifestations [16]. The type of drug as well as repeated exposure to chemotherapeutic agents are other factors that play a fundamental role in antibody-mediated allergic reactions and, thus, should be taken into account in patients' management [3,17]. Therefore, the study of polymorphisms in atopy-related genes, including IL-4/IL-4R, IL-5, IL-9, IL-13 and the IgE receptor, and of genetic variations in HLA molecules should not be underestimated and might provide additional data on the exact role of these factors in the development of allergic manifestations in ALL patients. In addition, analysis of IL-4 and IgE concentrations at different time points post-chemotherapy, especially when patients enter the maintenance phase, is required to specify the role of chemotherapy in this phenomenon.

Fig. 2 Comparison of the changes in the serum IgE (A) and IL-4 (B) levels between atopy (+) and atopy (−) pediatric ALL patients. The graph was created with GraphPad Prism 8. Data are presented as mean ± SD. P < 0.05 is considered statistically significant. Atopy (+): patients with atopy; Atopy (−): patients without atopy.

T-helper 2 (Th2) cells, a subtype of CD4+ T cells, are a subgroup of lymphocytes that contribute mainly to allergic reactions and to immune responses against parasites and helminths by producing the cytokines IL-4, IL-5, IL-9 and IL-13, which promote B cell proliferation and immunoglobulin class-switching to immunoglobulin E (IgE) [7]. Data on Th2 responses and their related cytokines are very limited in ALL patients, and their mechanism of action is poorly understood in these patients. In a study by Zhang et al. on the IL-4-producing CD4+ (Th2) and CD8+ (Tc2) subpopulations, it was demonstrated that the Th1/Th2 and Tc1/Tc2 ratios were significantly decreased in the peripheral blood T cells of ALL patients (n = 30) compared to healthy controls, suggesting dysregulated differentiation of Th2 and Tc2 cells in these patients [18]. Also, Horacek et al. reported a higher IL-4 level in the serum samples of newly diagnosed ALL patients compared to healthy controls [19]. Stachel et al. showed increased expression of IL-4 mRNA in the bone marrow of 49 pediatric patients with B cell precursor ALL with late relapse, proposing that ALL leukemic cells mediate a shift toward Th2 responses and thus influence the relapse risk [20]. Consistent with this, Cardoso et al. revealed that IL-4 positively stimulated the proliferation and growth of T-cell ALL cells by activating mTOR signaling, which affects the disease outcome [21]. However, Pérez-Figueroa et al. showed a polarized Th1 cytokine profile (IFN-γ and IL-12) in children with newly diagnosed ALL, while the levels of Th2 cytokines (IL-4 and IL-13) were similar to those of the healthy control group [22].
Our study confirmed that both atopy (−) and atopy (+) ALL patients developed higher IgE and IL-4 levels after chemotherapy compared to their corresponding baseline levels, albeit to a greater extent in the atopy (+) patients. Although the reason for this finding is not clear, it could be assumed that the increase in IL-4 and IgE production in both atopy (+) and atopy (−) patients is the result of dysregulated Th2 responses in these ALL patients. In addition to Th2 lymphocytes, CD8+ T cells as well as cells of the innate immune system, including mast cells, eosinophils, basophils, NKT cells and innate lymphoid cells, are also responsible for IL-4 production and IgE class-switching [9,23]. Accordingly, flow cytometry analysis of Th2 lymphocytes at baseline and after 6 months of chemotherapy would be highly informative and may be necessary to clarify whether Th2 lymphocytes are implicated in the elevation of IL-4 and IgE production in ALL patients and, consequently, in the post-chemotherapy allergic manifestations of the atopy (+) group. In line with this scenario, comparison of the frequency of other allergy-linked CD4+ subsets, such as Th9 cells, between the atopy (−) and atopy (+) patients could help define the mechanisms underlying the allergic symptoms in atopy (+) patients. Moreover, mast cells are another compartment of the immune system known, along with IgE, as a key driver in the pathophysiology of allergic reactions [24,25]. Engagement of the FcεRI IgE receptor on the surface of mast cells leads to mast cell activation and degranulation and, thereby, to the release of inflammatory mediators such as histamine, prostaglandins, leukotrienes, cytokines/chemokines, and neutral proteases (including chymase and tryptase), which promote allergic responses [24,25]. It is tempting to speculate that differences in mast cell characteristics might be responsible for the allergic manifestations in some ALL patients post-chemotherapy. This assumption needs to be verified by further studies. The small number of ALL patients is another limitation of our study that should be taken into account. Accordingly, multi-center studies with higher numbers of ALL patients could be helpful for better understanding the biological role of IL-4 and IgE, as well as other allergy-related mediators, in the pathogenesis of post-chemotherapy atopy in ALL patients.

Conclusion
To our knowledge, this is the first time that higher concentrations of IL-4 and IgE have been shown to be associated with post-chemotherapy allergic manifestations in ALL patients. A larger number of ALL patients, along with specific analysis of Th2 lymphocytes and of other allergy-related subsets such as mast cells, Tc2 and Th9 cells, is necessary to clarify the role of these cells in post-chemotherapy hypersensitivity reactions in pediatric ALL patients. In addition, the study of genetic variation in the IL-4/IL-4R, IL-5, IL-9 and IL-13 cytokine genes, the high-affinity FcεRI IgE receptor and HLA genes, as well as evaluation of the cytokine levels at different time points post-chemotherapy, could assist in delineating the underlying mechanism responsible for the atopic manifestations seen post-chemotherapy in some ALL patients.
4,345.6
2022-03-21T00:00:00.000
[ "Medicine", "Biology" ]
Routing of Electric Vehicles With Intermediary Charging Stations: A Reinforcement Learning Approach
In the past few years, the importance of electric mobility has increased in response to growing concerns about climate change. However, limited cruising range and sparse charging infrastructure could restrain a massive deployment of electric vehicles (EVs). To mitigate the problem, the need for optimal route planning algorithms emerged. In this paper, we propose a mathematical formulation of the EV-specific routing problem in a graph-theoretical context, which incorporates the ability of EVs to recuperate energy. Furthermore, we consider the possibility to recharge on the way using intermediary charging stations. As a possible solution method, we present an off-policy, model-free reinforcement learning approach that aims to generate energy feasible paths for an EV from source to target. The algorithm was implemented and tested on a case study of a road network in Switzerland. The training procedure requires low computing and memory demands and is suitable for online applications. The results achieved demonstrate the algorithm's capability to make recharging decisions and produce the desired energy feasible paths.

INTRODUCTION
The importance of electric vehicles (EVs) has increased steadily over the past few years with growing concerns about climate change, volatile prices of fossil fuels and energy dependencies between countries. The transportation sector accounts for 27% of greenhouse gas emissions in the EU, 72% of which are contributed by road transport (European Environmental Agency, 2019). Therefore, switching to electric mobility is seen as a primary means of reaching emission-reduction targets. Although EV deployment is growing fast around the world (+40% in 2019), with Europe accounting for 24% of the global fleet, specific barriers to a massive uptake of EVs still exist (International Energy Agency, 2020). Researchers in (Noel et al., 2020) identify technical, economic, social and political barriers to EVs' broad adoption, with limited cruising range and sparse charging infrastructure prevailing at present. These barriers lie at the heart of the "range anxiety" problem, defined as the fear that an EV will not have sufficient charge to reach its destination. However, optimal EV route planning, together with higher-range EVs entering the market, can mitigate this problem. Route planning strategies have been widely researched for conventional fossil-fuel vehicles. However, to solve the same problem for EVs, one should consider specific characteristics of this technology, such as limited battery capacity and the ability to recuperate energy. Moreover, inadequate charging infrastructure and long charging times call for a selective choice of charging stations. Significant factors influencing this choice include the price of electricity, expected charging power, distance from the EV to the charging station, the current state of charge, expected waiting and charging times, and incentives from electricity providers. Another difficulty in route planning for EVs lies in the choice of the optimization goal. Conventional routing algorithms, such as Dijkstra's (Dijkstra, 1959), yield either the least travelled time or distance. However, neither of these options guarantees the generated route's energy feasibility. Therefore, a need for EV-specific routing algorithms that strive for energy efficiency emerged.
The algorithms in the field vary significantly by the EV-specific features considered, the complexity of the methodology and the application use cases. The first group of algorithms uses detailed energy consumption models respecting the EV's ability to recuperate energy. Concurrently, these algorithms neglect the possibility of battery recharges on the way. Researchers in (Cauwer et al., 2019) used the shortest path algorithm to find the optimal energy route on a weighted graph with a data-driven prediction of energy consumption. Authors in (Abousleiman and Rawashdeh, 2014) deployed ant colony and particle swarm optimization to generate the most energy-efficient route. Despite being fast, the solution is tedious to formulate and requires adaptation to different EV usage cases. An interesting approach based on learning from historical driving data is demonstrated in (Bozorgi et al., 2017). The proposed solution aims at minimizing both energy consumption and travel time while accommodating particular driving habits. The second group of algorithms focuses on the EV's interaction with charging stations while assuming constant energy consumption without energy recuperation. (Sweda and Klabjan, 2012) used approximate dynamic programming to minimize traveling and recharging costs. (Daanish and Naick, 2017) deployed a nearest-neighbour search-based algorithm to find the shortest energy-efficient path. Researchers in (Schoenberg and Dressler, 2019) and (Tang et al., 2019) proposed algorithms to reduce the total travel time. The former suggested a multi-criterion shortest path search with an adaptive charging strategy. The latter solved a joint routing and charging scheduling optimization problem that additionally minimizes the monetary cost. The third group demonstrates an improvement in EV routing by considering both energy recuperation and battery recharging. A dynamic programming approach was proposed in (Pourazarm et al., 2014) to minimize total travel time in a road network defined as a graph. Despite successful application to the case of one car, the approach showed poor scalability in terms of convergence speed when the number of vehicles increased. (Morlock et al., 2019) suggested a trip planner that solves a mixed integer linear program to reduce the overall trip time. The authors introduced the driving speed as an additional degree of freedom and forecasted energy consumption from historical data. However, their approach works only along the desired route without considering alternative trajectories. Although the majority of the proposed algorithms deal with route planning for casual EV driving, efforts are being made to adapt EVs to specific use cases of customer serving and delivery operations. Researchers in (Schneider et al., 2014) deployed a hybrid heuristic search algorithm to minimize the total time consisting of travel time, recharging time and time spent at the customer. Authors in (Mao et al., 2020) aimed for the same goal with battery swapping and fast charging options using improved ant colony optimization. (Felipe et al., 2014) used simulated annealing to find a feasible route while determining the amount of energy to be recharged at the charging station along with the type of charging technology. Despite considering the recharging possibilities on the way, these works neglect the EV's ability to recuperate energy by assuming constant energy consumption proportional to the travel distance.
This paper aims to address the highlighted drawbacks in EV-specific route planning by proposing a novel problem formulation suitable for solving with reinforcement learning (RL) techniques. To the best of our knowledge, it is one of the first applications of this area of machine learning to the field of EV path planning. Previously, the success of using RL, namely the policy gradient algorithm, was demonstrated in (Nazari et al., 2018) to minimize the total route length of a conventional fossil-fuel vehicle. Additionally, researchers in (Zhang Q. et al., 2020) used actor-critic RL to minimize the route's energy consumption without recharging opportunities. In (Zhang C. et al., 2020), a deep RL approach was proposed to reduce both travel time and distance while different charging modes and the occupation of charging spots were considered. In this research, we formulate the EV-specific routing problem in a graph-theoretical setting as a Markov decision process (MDP) and suggest a possible model-free RL algorithm to solve it by generating energy feasible paths for an EV from source to target. Specifically, we take into account recharging possibilities on the way through intermediary charging stations and the ability of the EV to recuperate energy by considering the elevation profile of the road network.

METHOD
Two main components are required to frame the problem of EV routing with intermediary charging stations. First, the environment where an EV operates, namely the road network, has to be described mathematically. In this research, EV routing is analyzed in a graph-theoretical context. Second, the problem has to be formulated as an MDP to provide modelling capabilities of the EV movement and its way of making decisions.

Environment
The road network can be modelled as a simple undirected weighted graph G = (V, E) as follows:
• V = {1, . . . , n} is the set of n nodes representing the points of interest on the map. The subset of these nodes C = {1, . . . , m} ⊂ V can provide recharging capabilities to EVs. Each of the nodes v_i ∈ V can serve both as a source v_0 and as a target v_f, which are the EV's starting and destination points, respectively. To consider the EVs' ability to recuperate energy when moving downhill, we characterize each node v_i ∈ V by its elevation z_i.
• E is the set of weighted edges that connect the nodes on the graph, with real-valued weights. Each edge can be defined as an unordered pair (v_i, v_j). There are no multiple edges incident to the same two nodes. As the graph G is undirected, the edges are equivalent to two-way roads in the real world. The weights of the edges correspond to the energy costs required to traverse the edge.
The definition of the edges' weights was adapted from (Bozorgi et al., 2017). Therefore, the energy cost between two nodes v_i and v_j can be determined as E_ij = E_flat,ij + E_inclined,ij + E_other,ij (Equation 1), where E_flat,ij and E_inclined,ij represent the EV's energy consumption on flat and inclined surfaces, respectively. The term E_other,ij signifies additional energy costs depending on road type, urbanization, weather conditions and the usage of auxiliary components (Li et al., 2016). For the sake of simplicity, E_other,ij = 0 here. The basic energy consumption on a flat road can be determined according to Equation 2, E_flat,ij = h · d_ij / 100, where h is the EV's specific energy consumption per 100 km and d_ij is the distance between the nodes in km. The value of h is determined experimentally for different models of EVs according to typical driving cycles such as the WLTP (European Automobile Manufacturers Association, 2017).
The contribution of an inclined surface to the EV's energy consumption is proportional to the potential energy, E_inclined,ij ∝ m · g · Δz, scaled by the EV's transmission efficiency η (Equation 3), where m is the combined mass of the EV and its payload, g is the acceleration of gravity, and Δz = z_j − z_i is the elevation difference between the nodes. The value of E_inclined,ij is responsible for the EV's energy recuperation ability. Downhill, Δz < 0, therefore E_inclined,ij < 0 and the EV can recuperate energy if |E_inclined,ij| > E_flat,ij. In contrast, Δz > 0 when the EV moves uphill, thus E_inclined,ij > 0 and additional energy has to be spent. If two nodes have no edge connecting them, the weight E_ij = ∞ makes it impossible for the EV to traverse the graph in this direction.
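A minimal sketch of the edge-weight model above is given below, assuming h in kWh per 100 km, distances in km and elevations in metres. The exact placement of the transmission efficiency η and all numeric values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the edge energy cost E_ij = E_flat,ij + E_inclined,ij
# (E_other,ij = 0). Units: kWh; the placement of eta is an assumption of this sketch.
G_ACC = 9.81  # m/s^2, acceleration of gravity

def edge_energy_kwh(d_km, z_i, z_j, h_kwh_per_100km=12.6, mass_kg=1200.0, eta=0.9):
    e_flat = h_kwh_per_100km * d_km / 100.0              # Equation 2
    dz = z_j - z_i                                        # elevation difference (m)
    e_inclined = mass_kg * G_ACC * dz / eta / 3.6e6       # potential-energy term, J -> kWh
    return e_flat + e_inclined                            # negative => net recuperation

# Example: a 2 km segment descending 80 m yields a slightly negative (recuperative) cost
print(edge_energy_kwh(d_km=2.0, z_i=580.0, z_j=500.0))
```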
Markov Decision Process
To formulate the EV-specific routing problem, we use the MDP mathematical framework, which provides a principled way to generalize optimal behaviour problems under uncertainty. An MDP model (S, A, P, R, γ) consists of the following elements: a finite set of states S, each of which obeys the Markov property, a finite set of actions A, a state transition probability matrix P, a reward function R, and a discount factor γ. The definition of states and actions is related to the graph-theoretical context of the problem and can be represented as a matrix, as depicted in Figure 1.

State space S contains all possible states s that an agent can have when interacting with a given environment. For the case of EV routing, a state can be described as a vector s = (location, charge), where location ∈ V and charge corresponds to the battery energy level. The latter is constrained by the battery's operational limits, bat_min ≤ charge ≤ bat_max. The upper bound bat_max is imposed by the battery capacity and the lower bound bat_min is determined by the advised discharging policy. As most rechargeable batteries are not meant to be fully discharged, a minimum allowed state of charge is set to avoid battery damage. In this research we assume bat_min = 20% of bat_max. Contrary to location, charge is a continuous variable requiring discretization, which can be achieved through binning. The number of bins is determined experimentally through uniform binning, where the bin's lower bound defines the new state once the action is executed. The discretization procedure is discussed further in Section 4.1.

Action space A contains all possible actions that an agent can perform in the environment. An action can be described as a vector a = (next_location, decision), where next_location ∈ V and decision indicates the charging intention at this location. If next_location ∈ C, the agent can choose whether to charge (decision = 1) at this node or not (decision = 0). If next_location ∉ C, the agent has no choice and decision = 0. However, at any state s not all actions are available to the agent. The action a is considered available at state s only if charge_s − E_sa ≥ bat_min, where E_sa is the energy cost to move from location to next_location.

The reward function R is a measure to encourage particular behaviour of an agent. While interacting with the environment, the agent takes an action from the current state, observes the new state and receives a reward. By continually getting feedback from the environment in the form of rewards, the agent learns the desired behaviour through maximizing its discounted cumulative reward. In the EV-specific routing problem, we mainly want to incentivize only one type of behaviour by setting the reward equal to +1 for reaching the target v_f from the source v_0 with a charge level charge ≥ bat_min. Rewarding the arrival at the final destination is essential for the agent's understanding that it has to explore the graph in a specific direction and not just wander around the environment. However, not all rewards have to be positive. Sometimes, rewards are used to penalize particular behaviour. In the current case of EV routing, the agent receives a negative reward equal to −1 when there are no available actions at the current state. In the real world, this means that the EV has exhausted its battery capacity and thus got stuck on its route before reaching the destination.

The discount factor γ is used to weigh the importance of rewards achieved in the future. The agent selects actions to maximize the cumulative discounted reward G_t at time point t according to Equation 4, G_t = Σ_{k=0}^{n−1} γ^k · R_{t+k+1}, where R_t signifies the reward's value at time t and n defines the number of steps to complete the task. The discount rate γ obeys 0 ≤ γ ≤ 1; therefore, one needs to find a balance between caring about immediate rewards only (γ = 0) and caring about the distant future (γ = 1).

In this research, we do not calculate explicitly the state transition probability matrix P due to the following assumptions made in formulating the EV-specific routing problem. First, we do not consider specific traffic conditions. It is common for drivers to plan their routes according to traffic congestion and even change them while driving. Therefore, the probability of choosing a particular road would need to be adjusted dynamically. Second, as we aim to solve the routing problem for energy feasibility, we do not take into account the occupation of the charging stations and the time required for charging. Third, we assume that there are no partial recharges and that all EVs leave the charging station with a full battery. Moreover, although the behaviour of an EV driver is presumed to be rational, in the real world it is still stochastic. Drivers are free to choose the next points on their path according to unforeseen events or their personal beliefs. Considering all the points discussed above, calculating a state transition probability matrix P that would accurately reflect real-world environment dynamics does not seem possible. Therefore, a model-free RL algorithm that operates regardless of any representation of P should be selected to solve the suggested MDP. To find the target policy that fully defines the agent's desired behaviour, we deploy an off-policy learning method, which allows doing so independently from the followed exploratory policy.

Algorithm
As one of the possible methods to solve the suggested MDP formulation of the EV-specific routing problem, we choose the Q-learning algorithm, which is a specific instance of temporal difference learning that looks only one step ahead. Moreover, it is suitable for discrete state and action spaces and is easily interpretable. The idea of Q-learning is to allow improvements of both the target and exploratory policies. The target policy is a greedy policy that obeys the definition π(s′) = argmax_{a′} Q(s′, a′) (Equation 5), where π is the policy, Q is the action-value function, s′ is the next state and a′ is some alternative action that maximizes the Q-value. The real behavioural policy that the agent follows is an ε-greedy policy, which ensures continual exploration.
The ε-greedy policy is defined by Equation 6: π(a|s) = 1 − ε + ε/m if a = a*, and ε/m otherwise, where s and a are the current state and the action taken at this state, ε is a parameter that governs the exploration-exploitation trade-off, m is the number of actions available at the current state, and a* is the best (greedy) action. The Q-value function is updated according to Bellman's optimality equation as Q(s, a) ← Q(s, a) + α[R(s, a) + γ max_{a′} Q(s′, a′) − Q(s, a)] (Equation 7), where Q(s, a) is the Q-value of the current state-action pair, R(s, a) is the observed reward after the action a is taken, and α is the learning rate bounded by 0 ≤ α ≤ 1. The latter determines to what extent newly acquired information overrides old information. The complete Q-learning algorithm is described in Figure 2.

RESULTS
Case Study
To validate the proposed method for solving the EV-specific routing problem, we created a case study within the framework of the Digitalization project (SCCER, 2020). The case study deals with a section of the road network of the Val d'Hérens alpine region in Switzerland. Figure 3 depicts the graph representation of the road network. The environment encompasses 66 nodes and 223 edges, which represent the points of interest and the connecting roads, respectively. The thickness of the edges varies depending on the relative remoteness of the nodes. Each node is characterized by its geographical coordinates: latitude, longitude, and elevation. The agent is an EV defined by its battery capacity, energy consumption rate, and mass. In our case study, we use a Citroen C-Zero with a 16 kWh battery and an average energy consumption of 12.6 kWh per 100 km (Electric vehicle database, 2020).

Training
The training procedure in RL is defined as a sequence of episodes. One episode represents the movement of an agent along the path from source to target. The episode is considered complete when the target is reached. The number of episodes should be sufficient to achieve a stable matrix of Q-values, which is initialized to zeros at the beginning of the training procedure. Such a Q-matrix represents the maximum expected future reward for each action at each state. Training convergence is achieved when the updates of the old Q-values become insignificant. Therefore, the agent has learned the optimal policy once the algorithm converges. The parameters that govern the training process are set to the following values: discount factor γ = 0.9, learning rate α = 0.8, and ε = 0.1. The values are tuned experimentally to ensure convergence and satisfactory execution speed. Figure 4 depicts an example of a learning curve of the algorithm's training process, where the x-axis denotes the number of episodes, while the y-axis represents the training score. The episode's training score is determined by the mean of the scores obtained at each step of the episode. The step's score is calculated as the sum of the Q-values in the Q-matrix. Therefore, the learning curve arrives at a plateau when the Q-matrix stabilizes. In the demonstrated example, the algorithm converges after 250,000 episodes, which takes approximately 6.2 min. The Q-learning was programmed in Python, and the training procedure was executed on a personal laptop (Intel i7-7600, 16 GB RAM). One has to note that training uses a fixed target while the source is chosen arbitrarily. Therefore, the algorithm requires retraining when the destination is changed.
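A compact sketch of the tabular training loop implied by Equations 6 and 7 and the parameter values above is shown below. The environment interface (reset, available_actions, step) is a simplified stand-in for the road-network environment, not the authors' implementation; the reward convention follows the text (+1 at the target, −1 when the EV is left stranded, 0 otherwise).

```python
# Sketch of tabular Q-learning with an epsilon-greedy behavioural policy
# (Equations 6-7); the environment object is an assumed, simplified interface.
import random
from collections import defaultdict

GAMMA, ALPHA, EPSILON = 0.9, 0.8, 0.1
Q = defaultdict(float)                         # Q[(state, action)], zero-initialised

def epsilon_greedy(state, actions):
    if random.random() < EPSILON:              # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # exploit (greedy action a*)

def train_episode(env, source):
    state = env.reset(source)                  # state = (location, charge_bin)
    done = False
    while not done:
        actions = env.available_actions(state)     # only moves with charge - E_sa >= bat_min
        if not actions:                            # no feasible move from this state
            break
        action = epsilon_greedy(state, actions)    # action = (next_location, decision)
        # step: reward +1 if target reached, -1 if the move leaves the EV stranded, else 0
        next_state, reward, done = env.step(state, action)
        next_actions = env.available_actions(next_state)
        best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])  # Eq. 7
        state = next_state
```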
Notably, any topological modifications of the road network, such as introducing additional nodes or removing existing ones, would equally require retraining of the algorithm.

Validation
A series of experiments, in which each node sequentially serves as a target, was carried out iteratively to test the consistency of the policy learned by the agent with the energy feasibility goal. Each experiment simulates an EV trip starting at a random node of the graph with a fully charged battery and finishing when the final destination is reached. For each target, the number of experiments equals N − 1, where N = 66 is the number of nodes in the selected road network. Thus, the total number of experiments is 4,290. Besides verifying the EV's capability of arriving at the target without violating the bat_min constraint, we aim to observe whether the EV stops to recharge only when it is strictly necessary. Although not accounted for in the design of the reward function, excessive charging behaviour is not preferable. Thus, observing the frequency of unnecessary charging stops contributes to further improving the solution. The results demonstrate that 100% of the generated routes are energy feasible, while 92% of them represent near-optimal charging decisions. The latter means that the recharging schemes suggested by the algorithm give the agent the possibility to arrive at a destination that would otherwise be unreachable without charging, and avoid charging when the destination can be reached without violating the battery constraints. Moreover, the results show that in 80% of cases the optimal number of charging stops was selected, thus avoiding excessive charging. This number is calculated using a verification procedure that analyzes the route with all possible combinations of the charging stations proposed by the algorithm. Although we did not aim to optimize for the route length, an interesting observation occurred: in 83% of cases, the algorithm generated the shortest possible path when recharging is not required, which was confirmed by the Dijkstra algorithm.
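The verification procedure is only described in general terms above; the sketch below is one plausible reading of it, in which every combination of the charging stops proposed along a fixed route is tested for energy feasibility and the smallest feasible number of stops is reported. The helper names, the edge_energy callback and the battery bounds are illustrative assumptions, not details taken from the paper.

```python
# One plausible reading of the verification step: test all combinations of the
# proposed charging stops along a fixed route and return the minimal number of
# stops that keeps the route energy feasible. All names are assumed helpers.
from itertools import combinations

def min_charging_stops(route, proposed_stops, edge_energy, bat_max, bat_min):
    for k in range(len(proposed_stops) + 1):             # try 0 stops first, then 1, ...
        for stops in combinations(proposed_stops, k):
            charge, feasible = bat_max, True
            for u, v in zip(route, route[1:]):
                charge = min(charge - edge_energy(u, v), bat_max)   # cap recuperation
                if charge < bat_min:
                    feasible = False
                    break
                if v in set(stops):
                    charge = bat_max                      # full recharge, no partial charges
            if feasible:
                return k                                  # optimal number of stops
    return None                                           # infeasible even with every stop
```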
To summarize, we validated the possible use of a Q-learning algorithm to solve the proposed formulation of the EV-specific routing problem. The following section discusses the advantages and limitations of the suggested approach and defines the directions for future research.

DISCUSSION
The MDP formulation of the EV-specific routing problem and the proposed model-free RL approach have certain advantages in comparison to previous works in the literature. First, our method considers both main properties of EVs: the possibility to recharge on the way and the energy recuperation ability. Although these features are crucial to model agent behaviour close to real-world driving habits, taking both of them into account is uncommon, as shown in Section 1. Moreover, compared to previous RL works, the former (recharging on the way) was neglected in (Zhang Q. et al., 2020), while the latter (energy recuperation) was considered in (Zhang C. et al., 2020) only through estimating energy consumption from rarely available historical data. Second, a trained RL agent requires less computing effort and less memory space than model-based techniques and mixed integer non-linear programming formulations (Mocanu et al., 2018) of the EV routing problem such as (Pourazarm et al., 2014). Thus, it can be deployed for online applications if successfully transferred to the real world. Third, the problem formulation in a graph setting and the usage of the Q-learning algorithm, which employs a Q-matrix, make the results' interpretation more intuitive. Last but not least, the off-policy temporal-difference method continuously evaluates the returns from the environment and makes incremental updates using bootstrapping. Therefore, unlike the Monte-Carlo approach, it is not necessary to wait until the episode terminates to judge the agent's behaviour.

Limitations
Although the suggested approach has some inherent advantages, it also has certain limitations influencing its performance. The first limitation comes from the choice of the algorithm. Q-learning is suitable for problems with small to medium state-action spaces, as it stores information in the form of Q-tables. Once the dimensions of the problem increase, the algorithm scales poorly. In the proposed framework, the growth of the state-action space can come from the expansion of the road network and the state discretization procedure. The selected binning method represents a simple way to discretize the continuous battery variable, where the number of bins is chosen as a trade-off between the level of detail at which we model the problem and the size of the state space. With a large number of states and actions, the probability of visiting a particular state and performing a specific action decreases dramatically, thus deteriorating the performance, slowing down the training process, and increasing memory demands. To solve the scaling issue, one can use function approximators, such as neural networks or tile coding, or switch to policy-based RL. The second limitation comes from fixing the minimum required battery charge at the target v_f to bat_min. As some destinations might not have charging stations, the EV can get stuck without sufficient battery charge to start a new trip. Therefore, one has to introduce an additional parameter bat_f that depends on v_f and ensures that the battery charge at the destination is sufficient to arrive at the closest charging station. The third limitation of the method's applicability is the need to retrain the algorithm when the destination is changed or any topological modifications occur to the road network. This should be clearly addressed to improve the method's convenience for end-users. Finally, the agent's evaluation on the same environment model used for training puts into question its real-world performance and its ability to handle stochastic perturbations.

Future Work
The assumptions made in formulating the EV-specific routing problem define the directions for future improvements. First, the goal of the learning process can be tailored to the desired application by altering the reward scheme. One can diversify the routing problem towards minimizing travel time, travel distance, total energy consumption, or the number of recharging stops. Second, specific characteristics of the charging process, such as charging time and charging intensity, can be considered. Moreover, one can differentiate charging stations by their slot availability and the suggested price of electricity, thus introducing additional decision variables. Another improvement can be realized by including partial recharges. The agent will then have to choose not only the charging station but also the amount of recharge. Third, one can consider dynamic traffic conditions to build an environment that resembles the real world.
The inclusion of traffic will affect the actions' availability and the agent's energy consumption model. The latter can be improved by accounting for the type of terrain, the use of auxiliary loads, and weather conditions. Fourth, the suggested approach to EV-specific routing can be extended towards a multi-agent RL problem. Although this area of artificial intelligence is still in its infancy, attempts at modelling road networks with multiple agents can foster developments in the field and can help to build improved foundations for autonomous green mobility. Finally, one should devote effort to benchmarking the suggested methodology against other popular approaches for solving the routing problem. Moreover, further investigation of the agent's validity in the real world, beyond simulations, is required, preferably supported by experimental results in practice.

CONCLUSION
In this work, we proposed a mathematical formulation of the EV-specific routing problem, and we demonstrated a possible solution using a model-free RL approach. We defined the problem as an incomplete MDP in a graph-theoretical context. To generate energy feasible paths, we implemented an off-policy temporal-difference algorithm with a one-step lookahead. Notably, our framework considers recharging possibilities at intermediary charging stations and the ability of EVs to recuperate energy. We demonstrated in a case study that the algorithm always produces energy feasible paths. The training procedure of the algorithm requires low computational and memory demands and is suitable for online applications.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
MD and NW conceptualized and designed the research; MD designed and implemented the methodology; NW and CB supervised the work; MD wrote the paper; NW and CB reviewed and edited the paper.
6,519.8
2021-05-26T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Static and dynamic structure of liquid metals: Role of the different parts of the interaction potential
The influence of different parts of the interaction potential on the microscopic behavior of simple liquid metals is investigated by molecular dynamics simulation. The role of the soft-core repulsive, short-range attractive, and long-range oscillatory forces on the properties of liquid lithium close to the triple point is analyzed by comparing the results from simulations of identical systems but truncating the potential at different distances. Special attention is paid to dynamic collective properties such as the dynamic structure factors, transverse current correlation functions, and transport coefficients. It is observed that, in general, the effects of the short-range attractive forces are important. On the contrary, the influence of the oscillatory long-range interactions is considerably less, being most pronounced for the dynamic structure factor at long wavelengths. The results of this work suggest that the influence of the attractive forces becomes less significant when temperature and density increase.

I. INTRODUCTION
A detailed knowledge of the role played by the different parts of the interatomic potential on the microscopic behavior of liquids is of great interest for understanding the basis of liquid state properties and is a useful guide for obtaining more refined potential models. This sort of information cannot be deduced from experimental measurements on real systems but can be obtained from suitable computer simulation "experiments." Molecular dynamics (MD) simulation is one of the most useful tools for this kind of investigation. MD studies on the influence of the repulsive and attractive forces on the microscopic properties of dense simple liquids have been carried out since the early stages of the development of computer simulation. However, the vast majority of these studies were devoted to structural and single-particle dynamical properties [1-7]. To our knowledge, the only papers about the effects of the different parts of the potential on the dynamic collective properties of simple liquids are a paper on the dispersion of sound modes in liquid Rb and Lennard-Jones (LJ) fluids [8] and two recent papers on soft-sphere fluids [9] and hard-core fluids with a Yukawa tail [10].

The aim of this paper is to analyze the influence of the different parts of the interaction potential on the properties of liquid metals close to the triple point, paying special attention to the dynamic collective properties. To this end, the MD results using the full potential (in practice, with a relatively long cutoff) are compared with those for identical systems but with the potential truncated at shorter interatomic distances. The typical interionic potential functions for liquid metals show a soft repulsive wall, a deep attractive well, and a long-range oscillatory tail. Potential functions truncated either at the first minimum or at the first maximum have been considered in this work. The former correspond to soft-core potentials, whereas the latter include both the soft-core and the short-range attractive parts but not the long-range oscillations.
MD simulations of Li atoms have been carried out assuming two effective pair potential models. The former (LM1) is a potential with no adjustable parameters deduced from the neutral pseudoatom method [11]. Both the structural and dynamical properties resulting from MD simulations using this model are in satisfactory agreement with the available experimental data for liquid Li [12-14]. Potentials for Li alloys (Li-Mg, Li-Na) deduced by the same method reproduce satisfactorily the experimental structure of these systems [15]. The second potential (LM2) [13] was obtained by assuming an empty-core pseudopotential with a core radius determined by fitting the calculated main peak of the static structure factor to the experimental value. These two potentials show marked differences. The attractive well of LM2 is shallower and located at higher r than that of LM1 (the two potentials are compared in Fig. 1 of Ref. [13]). For a simple characterization of the potential functions two parameters are ordinarily used, i.e., the position of the first zero (σ) and the depth of the first minimum (ε). The σ and ε parameters corresponding to LM1 and LM2 are gathered in Table I. Since the effective atomic size for the second potential is larger, the atomic cores are notably closer. A quantitative measure of the atomic close packing is given by the packing fraction η ≡ πρσ³/6. The packing fraction for LM1 (η = 0.4) is markedly smaller than that for LM2 (η = 0.6). It should also be noticed that for LM1, ε > k_BT, while for LM2, ε < k_BT. Despite their marked differences, the majority of the properties obtained from LM1 and LM2 do not show considerable discrepancies [13]. This fact will be the subject of discussion in this paper. We want to point out that a careful comparison of the MD results with experimental findings indicates that simulations using LM1 are more realistic than those using LM2 [13,14].

Two effective pair potentials somewhat different from those considered in this work have been used in recent MD simulations of liquid Li; their σ and ε parameters differ from those of LM1 and LM2, though they are noticeably closer to those for LM1 (Table I). According to these findings, the LM1 and LM2 potentials may be considered as two extreme models of liquid alkali metals near the melting point, though MD simulations using LM1 are closer to actual systems and the resulting properties may be taken as being representative of simple liquid metals close to the triple point. Moreover, findings using LM2 provide information on the tendency of properties when T and ρ increase.

II. MD SIMULATION DETAILS
We carried out MD simulations of systems made up of 668 atoms with the mass of ⁷Li enclosed in a box with periodic boundary conditions. The density and temperature of the simulated systems (ρ = 4.4512 × 10⁻² Å⁻³; T = 470 K) correspond to liquid ⁷Li near the triple point. The properties were calculated from the configurations generated over runs of about 10⁵ time steps. The time step was 3 fs. The k-dependent properties were calculated for ten different k values between 0.25 and 4.08 Å⁻¹. The potentials were truncated at r_c = 9.25 Å and r_c = 10.62 Å in the simulations corresponding to the full LM1 and LM2 potentials, respectively. In the simulations with the purely repulsive cores (RLM1 and RLM2) the potentials were truncated at their first minima, r_c = 3.05 Å and r_c = 3.30 Å, respectively. In the simulations with the truncated potentials including a repulsive core and an attractive well (TLM1 and TLM2), the cutoffs were at the position of the first maxima, r_c = 4.75 Å and r_c = 4.66 Å, respectively.
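As an illustration of how the RLM and TLM variants can be derived from a tabulated pair potential φ(r), the sketch below locates the first minimum and the following maximum and truncates there. Shifting the potential so that it goes continuously to zero at the cutoff is an extra assumption of this sketch; the paper only specifies where the truncations are applied.

```python
# Sketch: build truncated versions of a tabulated pair potential phi(r).
# Cutting at the first minimum keeps the soft repulsive core (RLM-like);
# cutting at the following maximum also keeps the attractive well (TLM-like).
import numpy as np

def truncation_radii(r, phi):
    dphi = np.gradient(phi, r)
    crossings = np.diff(np.sign(dphi))
    minima = r[1:][crossings > 0]          # slope changes from negative to positive
    maxima = r[1:][crossings < 0]          # slope changes from positive to negative
    r_min = minima[0]                      # first minimum of phi(r)
    r_max = maxima[maxima > r_min][0]      # first maximum beyond it
    return r_min, r_max

def truncate_and_shift(r, phi, r_cut):
    phi_cut = np.interp(r_cut, r, phi)     # shift so that phi -> 0 continuously at r_cut
    return np.where(r < r_cut, phi - phi_cut, 0.0)
```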
III. STRUCTURE AND SINGLE PARTICLE DYNAMICS
It is generally accepted that "geometric" factors associated with the packing of atoms have a dominant influence on the structure of dense simple liquids, which is largely determined by the repulsive part of the pair interactions. This fact is at the basis of the perturbation theories that have been successfully applied to the study of the equilibrium structural and thermodynamic properties of different liquids [22,23]. The influence of the repulsive forces on the structure of liquid metals has recently been analyzed by integral equation calculations. Matsuda et al. [24] observed that the radial distribution function [g(r)] of liquid Cs is considerably influenced by the part of the potential beyond its first minimum; especially important is the contribution of the short-range attractive part, which enhances the oscillations of g(r) but does not modify their location. Similar findings were obtained by Bretonnet and Jakse [25] for expanded liquid Rb and Cs. The g(r)'s resulting from our simulations (Fig. 1) corroborate these conclusions. The maxima and minima of the g(r) for TLM1 are more marked than for RLM1, whereas the g(r)'s for LM1 and TLM1 are quite close and the only noticeable difference is a slight shift of the oscillations after the first maximum. The effects of the short-range attractive forces in LM2 and TLM2 are markedly smaller. As may be observed in Fig. 1, the g(r) for RLM2 is very close to that for LM1 (it should be noted that the g(r)'s for LM2 and LM1 are almost identical [13]). The differences between the g(r)'s corresponding to LM2 and TLM2 are still smaller than those between LM1 and TLM1.

We have also determined the statistical distributions of the bond angle between a central particle and particles in both the first and the second coordination shells [26]. It is expected that these functions can be more sensitive to the details of the potentials since they are directly related to the three-body correlation functions. Nevertheless, the results for LM1 and LM2 do not show considerable discrepancies [13]. Moreover, the influence of the different parts of the potential on the bond-angle distributions is not significantly different from that on g(r). For the sake of brevity these distributions are not reported in this paper.

The resulting velocity autocorrelation functions [C(t)] are displayed in Fig. 1. As with g(r), the C(t) functions corresponding to LM1 and LM2 are very close and only the former is shown in Fig. 1. In accordance with earlier results of Schiff [2], we observe that the changes in C(t) when the potential is truncated at its first maximum are very small. However, the negative region of C(t) is considerably reduced when the attractive forces are left out. The discrepancies between the C(t)'s for TLM1 and RLM1 are much more important than those for TLM2 and RLM2. The self-diffusion coefficients (D) are proportional to the time integral of the C(t) functions. The qualitative dependence of the resulting D coefficients (Table II) on the different parts of the potential is consistent with that observed for the C(t)'s.
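Since D is obtained as the time integral of C(t), the Green-Kubo estimate can be written compactly as in the sketch below; the array layout (n_steps × n_atoms × 3 velocities) and the correlation length are assumptions for illustration, not details given in the paper.

```python
# Illustrative Green-Kubo estimate of the self-diffusion coefficient,
# D = (1/3) * integral_0^inf <v(0).v(t)> dt, from MD velocities of shape
# (n_steps, n_atoms, 3). Array layout and n_corr are assumptions of this sketch.
import numpy as np

def self_diffusion(velocities, dt, n_corr):
    n_steps = velocities.shape[0]
    c = np.empty(n_corr)
    for lag in range(n_corr):
        dots = np.sum(velocities[:n_steps - lag] * velocities[lag:], axis=2)
        c[lag] = dots.mean()               # <v(0).v(t)>, averaged over time origins and atoms
    return np.trapz(c, dx=dt) / 3.0, c     # D and the unnormalised C(t)
```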
The results discussed in this section corroborate that both g(r) and C(t) of liquid metals are noticeably influenced by the attractive part of the first well of the potential. This influence, which can be more clearly observed in the case of C(t), is much more marked for LM1 than for LM2 (it should be noted that the former model may be considered as representative of liquid metals close to the triple point). However, the oscillatory tail of the potential has a rather small influence on g(r) and C(t), although it can have considerable effects on the low-k limit of the structure factor (see Sec. IV B). If one compares the results for RLM1 and RLM2, one observes that the former produces a less structured g(r) and a C(t) with less pronounced negative values, as corresponds to a system with a lower close-packing fraction. Nevertheless, as a result of the strong attractive forces in LM1 these differences vanish when the full potentials are considered.

A. Transport coefficients
The Green-Kubo relations have been used for the calculation of various transport coefficients. According to these relations, the shear viscosity (η_S), longitudinal viscosity (η_L), and thermal conductivity (κ) coefficients are proportional to the time integrals of the correlations of the nondiagonal and diagonal elements of the stress tensor and of the energy current, respectively [23,27]. These time correlation functions decay monotonically to zero, as may be observed in Fig. 2 for the functions corresponding to η_S and κ. The resulting coefficients are reported in Table II. The long-range oscillatory tail of the potential has a rather small influence on the values of the viscosity coefficients. However, the contribution of the short-range attractive forces to these properties is more important, especially in the case of LM1 and TLM1. When the attractive part of TLM1 is removed we observe a substantial increase of η_S and η_L, whereas in the case of TLM2 the changes are qualitatively the same but markedly smaller. We observe small but noticeable contributions to κ from both the attractive and oscillatory parts of LM1, whereas these contributions are appreciably smaller for LM2. It is interesting to note that the different values of κ for LM1, TLM1, and RLM1 correspond to the integrals of very similar correlation functions. As with other properties, the transport coefficients as well as the corresponding time correlation functions for LM1 and LM2 do not show noticeable discrepancies [13].
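The Green-Kubo route to the shear viscosity mentioned above amounts to integrating the autocorrelation of an off-diagonal stress-tensor element; a minimal sketch is given below, with the prefactor V/(k_B T) written explicitly. The time series p_xy and its units are assumed inputs from the MD run, not values from this work.

```python
# Minimal Green-Kubo sketch: eta_S = V/(k_B T) * integral <P_xy(0) P_xy(t)> dt.
# p_xy is an assumed 1D time series of an off-diagonal stress-tensor element (SI units).
import numpy as np

K_B = 1.380649e-23  # J/K

def shear_viscosity(p_xy, dt, volume, temperature, n_corr):
    n = len(p_xy)
    acf = np.array([np.mean(p_xy[:n - lag] * p_xy[lag:]) for lag in range(n_corr)])
    return volume / (K_B * temperature) * np.trapz(acf, dx=dt)

# eta_L and kappa follow analogously from the diagonal stress elements and the energy current.
```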
B. Dynamic structure factor
The spectrum of the density fluctuations [S(k,ω)], usually called the dynamic structure factor, is directly related to the spectrum of the longitudinal-current fluctuations [23,28]. At large k values (kinetic regime), S(k,ω) approaches its free-particle expression and, for a given k, has a Gaussian shape. At short k values (hydrodynamic regime), S(k,ω) approaches the expression for a continuum and, for a given k, consists of three Lorentzian lines: the Rayleigh line at ω = 0 and two shifted Brillouin lines. The presence of the Brillouin peaks reflects the propagation of density fluctuations (longitudinal modes), and they disappear when the k values approach those of the kinetic regime. The maximum value of k for which S(k,ω) shows noticeable Brillouin peaks (k_L^max) has been determined for each system and the results are listed in Table II [it should be pointed out that we cannot determine k_L^max very accurately since S(k,ω) was only calculated for ten k values, i.e., those in Fig. 3 and k = 0.51, 0.88, 1.25, and 1.44 Å⁻¹]. A correlation between k_L^max and η_L can be observed, so that liquids with higher viscosity can sustain propagating longitudinal waves up to higher wave numbers. The persistence of the Brillouin peaks as k increases has been related to the softness of the interaction potential and, consequently, to the isothermal compressibility of the liquid [23]. However, the results of this work show that the attractive forces also play a significant role in the propagation of longitudinal modes in liquid metals. Thus, k_L^max for LM1 is markedly different from that for RLM1, despite the soft-core repulsive part being the same in the two potentials. In the case of LM2, the influence of the attractive forces is less important. According to our results, the long-range oscillatory tail does not have any noticeable influence on k_L^max. Moreover, we cannot observe any significant correlation between the k_L^max values for LM1, TLM1, RLM1 and S(0) (Table II), which is proportional to the isothermal compressibility coefficient.

The S(k,ω) results for LM1, TLM1, and RLM1 are shown in Fig. 3. At small k, S(k,ω) for LM1 is quite close to that for RLM1 but markedly different from S(k,ω) for TLM1, which shows a considerably higher Rayleigh peak. These findings indicate that, in the hydrodynamic region, the contributions to S(k,ω) of both the short-range attractive part and the oscillatory tail of the potential are important and act in opposite directions, but do not significantly affect the location of the Brillouin peaks. At large k, the influence of the potential tail is almost negligible, whereas the short-range attractive forces produce a considerable increase of S(k,0). S(k,ω) for LM2 is not reported in this paper but is shown in Fig. 11 of Ref. [13] […]. As with other properties, the results for LM2 show little change when different cutoffs of the potential are used, whereas the results for LM1, RLM1, and TLM1 show noticeable discrepancies. It is interesting to note that the short-range attractive and the long-range oscillatory parts of LM1 produce changes of similar magnitude but opposite sign in S(0). These results are in accordance with those from integral equation calculations [24,25], which showed that the low-k limit of S(k) in liquid metals at low densities is strongly dependent on the cutoff distance of the interaction potential. As in this work, it was observed that the attractive forces increase S(0). This is physically reasonable since S(0) is proportional to the isothermal compressibility coefficient.
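Both S(k,ω) and the C_T(k,ω) of the next subsection are obtained in practice as spectra of time correlation functions accumulated in the MD run. The sketch below shows one common way to compute such a spectrum numerically from a tabulated, even-in-time correlation function; the window used to damp truncation ripples is a choice of this sketch, not something specified in the paper.

```python
# Sketch: one-sided cosine transform of a tabulated time correlation function
# corr(t) (e.g. the intermediate scattering function at fixed k), giving
# S(w) = (1/pi) * integral_0^inf corr(t) cos(w t) dt. The Hann window is an assumption.
import numpy as np

def spectrum(corr, dt):
    t = np.arange(len(corr)) * dt
    window = np.hanning(2 * len(corr))[len(corr):]         # decaying half of a Hann window
    omegas = 2.0 * np.pi * np.fft.rfftfreq(2 * len(corr), d=dt)
    s = np.array([np.trapz(corr * window * np.cos(w * t), dx=dt) for w in omegas]) / np.pi
    return omegas, s
```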
C. Transverse current correlation functions
The power spectra of the transverse-current correlation functions [C_T(k,ω)] provide information on how the system responds to shear disturbances with wave vector k and frequency ω. In the kinetic regime, C_T(k,ω) approaches the free-particle expression and, for a given k, has a Gaussian shape. In the hydrodynamic regime, C_T(k,ω) approaches the expression for a continuum and, for a given k, has a Lorentzian form. In these extreme regimes the fluid cannot support the propagation of transverse modes, but MD findings have shown that, in intermediate k ranges, C_T(k,ω) for a given k can show a peak at nonzero frequency that can be associated with propagating shear waves. The appearance of shear waves can be accounted for by incorporating viscoelastic effects into the hydrodynamic model for the transverse current [23,28]. For the systems in this work we have determined the k interval for which C_T(k,ω) shows a noticeable peak. The extreme k values of these intervals (k_T^min and k_T^max) are gathered in Table II. As may be expected, the propagation of transverse waves and the shear viscosity coefficients are correlated, and the narrower k intervals correspond to the systems with lower η_S.

We have not observed any significant influence of the long-range oscillatory forces on C_T(k,ω). On the contrary, the effects of the short-range attractive forces may be important. Thus, the C_T(k,ω) curves for LM1 and RLM1 show large discrepancies. The C_T(k,ω)'s for RLM1 show higher initial values and maxima at lower ω than those for LM1. Moreover, for RLM1 we can only observe a maximum for a narrow interval of k, whereas for LM1 a maximum can be observed for all the values of k considered in this work (Table II). In the case of LM2 the influence of the short-range attractive forces is clearly smaller, and we have only found noticeable discrepancies between the C_T(k,ω) results for LM2 and RLM2 at the lowest k (0.25 Å⁻¹). It should be pointed out that the results for LM2 have not been represented in Fig. 5 since they are almost identical to those for LM1 [13].

The S(k,ω) and C_T(k,ω) findings for RLM1 and TLM1 show that the contribution of the short-range attractive forces to the propagation of both longitudinal and shear waves in liquid metals is important. However, the results for RLM2 and TLM2 show that the effect of the attractive forces in systems at high packing fractions is screened out, and the propagation of longitudinal and shear waves should be mainly associated with the almost continuous collisions among the close atomic cores.
V. CONCLUDING REMARKS
The influence of the different parts of the potential on the collective dynamic properties of the systems analyzed in this paper is qualitatively similar to that observed for the structure and single-particle dynamic properties. The results for the LM1 potential using different cutoffs indicate that the collective dynamic properties of simple liquid metals close to the triple point are considerably influenced by the short-range attractive forces corresponding to the first well of the effective pair potentials (which are very deep for these systems). However, the effects of the long-range oscillatory interactions corresponding to interatomic distances beyond the first maximum of the potential are considerably smaller. The only significant contribution of these interactions has been found for S(k,ω) at small k values and, consequently, for S(0) and the isothermal compressibility. In the case of LM2, the influence of the part of the potential beyond the first minimum is markedly smaller than for LM1. As was commented in the Introduction of this paper, MD simulations with LM2 correspond to a system at higher reduced density and temperature than those for LM1 (see Table I). Thus, the results presented in this work suggest that the influence of the short-range attractive forces diminishes as the density and temperature increase.

Finally, we want to point out that the similar results obtained for the majority of properties by using the full LM1 and LM2 potential models [13] indicate that the discrepancies between the RLM1 and RLM2 results, mainly associated with the different effective atomic sizes, are balanced by the effects of the strong attractive forces corresponding to the deep potential well of LM1.

FIG. 2. Normalized time correlation functions of the off-diagonal stress tensor elements (upper panel) and the energy current (lower panel). Solid circles, LM1 potential; solid line, RLM1 potential; long-dashed lines, RLM2 potential.

Torcini et al. [16] adopted the potential implemented by Price et al. [17], which considers an empty-core pseudopotential adjusted to reproduce the microscopic properties in the solid phase. Nowotny et al. [18] proposed a potential based on a nonlocal pseudopotential, and Kambayashi et al. [19] constructed a potential according to the quantal hypernetted-chain theory. These potentials also produce results in reasonable accordance with experimental data [16,18,19], and the corresponding σ and ε parameters are markedly closer to those for LM1 than to those for LM2 (see Table I). The reduced densities (ρ* = ρσ³) and temperatures (T* = k_BT/ε) are useful quantities for comparing the properties of different systems. Discussions on the possible application of the corresponding-states picture to the analysis of the properties of liquid metals go back to the 1970s [20].
Summary of the results from the MD simulations using different potential models. The definitions of properties and potentials are given in the text. (a) k_T^min may be smaller than 0.25 Å⁻¹, which is the lowest k value in this work. (b) k_T^max may be greater than 4.08 Å⁻¹, which is the highest k value in this work.
4,786.6
1997-01-01T00:00:00.000
[ "Physics" ]
Optimization of FSW parameters on bio-inspired jigsaw suture patterns to improve the tensile strength of dissimilar thermoplastics This study aimed to enhance the Tensile Strength (TSFSW) of dissimilar thermoplastic joints by utilizing a bio-inspired jigsaw suture and optimizing the Friction Stir Welding (FSW) parameters, namely Traverse Speed (TS), Plunge Depth (PD), and Rotational Speed (RS), each at three levels. Statistical analysis, response surface methodology (RSM), and experimental validation were involved in achieving the research objectives. The outcomes showed that the TS and PD parameters had a higher significance on Tensile Strength compared to RS. The RSM predictions were validated through experiments, achieving a maximum Tensile Strength of 11.1 MPa with a low error percentage. The best values of the FSW parameters were found to be a Rotational Speed (RS) of 1200 rpm, a Plunge Depth (PD) of 0.37 mm, and a Traverse Speed (TS) of 49.39 mm min−1. The formulated mathematical model, with a regression coefficient R² of 0.96, and RSM proved effective in predicting the optimal FSW parameters and achieving superior TSFSW. These findings show that the combined design can be reliably applied for optimisation at a 95% confidence level. The optical microscope and SEM morphological observations of the tensile fracture zone of the joint are consistent with these predictions. These findings contribute to advanced FSW techniques for dissimilar thermoplastic joints, providing insights for industrial applications requiring strong and reliable joints. Introduction FSW has been applied to polymer materials in the laboratory to improve tool performance, optimize welding methods, and fabricate composite materials. It has the potential to produce defect-free joints and high-quality composite materials in the polymer industry. However, more research is needed to fully understand the materials' mixing and flow, microstructure, and properties [1]. 3D printed thermoplastic parts were successfully joined using FSW, with optimized process parameters to improve weld strength (WS) and geometric qualities [2]. Using a stationary hot shoe, FSW of ABS sheets was performed by varying the RS, TS, and tool shoe temperature. A full factorial design of 3³ experiments was used to study these parameters' effect on the welds' tensile strength [3]. Tool temperature was the primary factor affecting tensile strength, and the most dominant parameter affecting elongation was RS. The welds with the highest elongation had no crazes on the fracture surface, while the welds with the minimum elongation had a fracture surface with corrugations and crazes [4]. A new technique called i-FSW was introduced for joining thermoplastic plates using induction heating. The technique was demonstrated using high-density polyethylene plates, and the optimal settings for achieving supreme joint strength were determined to be a tool pin temperature of 45 °C and an RS of 2000 rpm [5]. Friction stir spot welding (FSSW) with a consumable tool has been studied to join similar or dissimilar thermoplastics. The tensile properties of the joints made using this process were comparable to those of virgin acrylonitrile butadiene styrene, making it suitable for maintenance and repair applications [6]. The effects of different FSW parameters on the properties of polycarbonate sheets were thoroughly investigated. Under specific processing circumstances, the maximum tensile, flexural, and impact strengths were reached, and these factors also had an
impact on the stir zone's size and structure. Divergent lines were visible on the surface, as revealed by fractographic examinations [7]. The joint efficiency of FSW nylon-6 sheets was optimized by an RSM-based regression model and the particle swarm optimization method. The supreme FSW joint efficiency of 49.68% was achieved using a square pin [8]. The optimal parameters for avoiding defects in the polypropylene joints were found to be a threaded pin profile, an FSW speed of 40-50 mm min−1, and an RS of 1500-2250 rpm [9]. By applying pressure and heat with a pinless tool, a new FSW method was proposed for attaching AA6082-T6 aluminium sheets to self-reinforced polypropylene. The joints were formed by changing hole diameter and pitch, and their mechanical resistance was evaluated [10]. The effects of dwell time, tool plunge rate, and RS on the forces and torque during FSSW of thin polycarbonate pieces were analysed. The dwell duration had a nominal effect on the generated forces but greatly influenced the material temperature, weld area size, and mechanical characteristics of the weld joint [11]. The goal of that study was to improve the mechanical properties of friction stir-welded polymethyl methacrylate joints by optimizing the process variables. The extreme TSFSW was attained at a tool RS of 1000 rpm and a feed of 40 mm min−1 [12]. FSW was used to join aluminium and thermoplastic sheets. The process produced mechanical locking between the materials and did not require special surface preparation. The impact of several parameters, such as RS, TS, and distance to backing, was studied, and optimal values were identified [13]. Friction stir welding was used to join 3D printed thermoplastic composites made of Al metal particulate reinforced with ABS and PA6. The process was optimized for mechanical, morphological, and thermal properties using a consumable tool at 1400 rpm, a feed of 50 mm min−1, and 4 mm PD [14]. In this investigation, the influence of incorporating multi-walled carbon nanotubes (MWCNTs) on the structure of dissimilar thermoplastic FSW joints was explored. The results demonstrated that the introduction of MWCNTs effectively decreased the presence of defects within the joints, highlighting the promising potential of these nanotubes in enhancing dissimilar joint quality and performance [15]. RSM was used to optimize the flexural strength of thermoplastic joints made with friction stir welding. The WS was influenced by RS, TS, and the temperature of the tool shoe. The ANOVA and RSM showed that higher RS and lower TS resulted in stronger weld joints with fewer defects [16]. The optimal mechanical attributes of the PMMA-PC welded joint were achieved at an RS and TS of 2100 rpm and 8 mm min−1 and by maintaining a heater temperature of 120 °C. At these conditions, the welded joint had a strength of up to 98% of the PC material and a superior hardness value compared to the PC material [17]. Submerged FSW of dissimilar thermoplastics and the effect of various process factors on the microstructure and TSFSW of the joints were studied. The researchers found that adding MWCNTs to the ABS/High-Density Polyethylene (HDPE) joint improved the TSFSW of the joint [18]. The Yield Strength (YS) of the welded pipes was analyzed using Taguchi's approach, which examines the impact of several FSW parameters such as tool offset, RS, and TS. The result showed that RS was the parameter with the highest impact on YS, followed by tool offset and TS [19].
Friction stir welding's (FSW) impact on Cu-brass joint properties was explored through various tool rotational speeds. While higher speeds degraded properties, lower speeds (1000 rpm) showed improved microstructure and mechanical strength. Corrosion resistance and heat distribution were also investigated [20]. Another study investigated friction welding's impact on enhancing intermetallic surface morphology in Al7075-SiC metal matrix composites. Varying forging pressure and rotational speed improved mechanical properties, and artificial neural networks efficiently predicted outcomes [21]. Bio-inspired interlocking structures are those that mimic the mechanically interlocking structures found in nature, such as those found in certain plant and animal tissues. These structures are characterized by their ability to lock together firmly and provide strong interfacial bonds. They can be divided into two primary categories: regulable interlocks, which provide tunable and reversible attachment, and static interlocks, which improve interfacial strength [22]. The utilization of fractal design significantly enhanced the load-withstanding ability of Koch fractal structures with interlocks, as discovered by the researchers. However, the researchers observed that deficiencies, such as gaps between rounded tips and interlocked pieces, had a notable impact on the mechanical responses of these structures [23]. One study aimed to increase the reliability and strength of the metal-polymer joints in biomedical and dental prostheses by developing a mechanical interlocking technique that utilizes optimized AM characteristics on the surface of the metal. The process combined an optimization method with the mesh adaptive direct search algorithm and Finite Element Analysis (FEA) to identify the best magnitudes of the interlocking properties; tested through tensile experiments, the technique was found to increase the metal-polymer interface strength by 85% and significantly improve the overall performance of the prostheses [24]. A new type of suture material was developed that utilizes jigsaw-like morphologies to create a bistable system that can lock into dual stable positions. AM and design exploration were used to fine-tune the architecture of the tabs to tailor the mechanical response of the material. The resulting materials exhibited up to ten times increased toughness compared to the base polymer. These materials showcased desirable characteristics, including significant re-manufacturability, damage tolerance, and reversible deformations. These materials have potential applications in the development of new architectured materials [25]. The utilization of carbon fiber reinforced PLA (Poly Lactic Acid) and bio-inspired interlock sutures led to enhanced strength and rigidity in 3D-printed components. By employing PSO optimization, the optimal printing parameters were predicted and tested, confirming that the combination of carbon fiber reinforced PLA and spline-shaped interlock sutures enhances the Bending Strength (BS) of the AM components compared to PLA [26]. Another investigation explored a bionic approach inspired by natural sutured architectures, revealing that incorporating sinusoidal centre-lines in interlocking interfaces enhances the load-carrying capacity and toughness of joints, offering insights for robust engineering design [27]. The effect of spline interlock angles on tensile strength was explored, and the research reveals that the interlock angles play a crucial role in sutures and sutured materials. The
optimization reveals that an interlock angle of 35 degrees offers the best pullout performance, enhancing the tensile strength and mechanical properties of inherently brittle materials [28]. The study investigates the influence of nano silica addition and rectangular weave geometry on the mechanical and microstructural properties of friction stir welded nylon 6-6 thermoplastic. Optimal conditions of 1500 rpm, 2 mm step size, and 2 wt.% nano silica improve tensile strength, microhardness, and overall weld quality, offering the potential for efficient thermoplastic joining [29]. In another investigation, FSW was utilized to join thermoplastic materials, with a focus on optimizing the welding factors to maximize weld quality. The parameters studied included tool RS, TS, and axial force, and diverse tool profiles with different shapes, such as square, cylindrical, and triangular with a threaded pin, were used. The weld quality was evaluated through TSFSW testing and microstructural investigation. It was found that proper control of the welding factors was essential in achieving strong, high-quality welds [30]. Further research explored the effect of several FSSW factors on the WS of HDPE sheets. The Taguchi method, an orthogonal array, the S/N ratio, and ANOVA were used to determine the most significant factors and the best combination of welding factors for maximizing WS. The efficiency of the strategy for determining the factors that had the most effect on WS was supported by experimental data [31]. Related research focused on the influence of FSW factors on the WS of thermoplastic materials, such as HDPE and PE sheets. The Taguchi method was employed as a statistical DoE to identify the optimal welding factors, and the S/N ratio and ANOVA were used to analyse the results. The experiments were organised by an orthogonal array, and the results were confirmed through further testing. The ultimate goal of the research was to improve the strength of the welds in order to increase the use and performance of thermoplastic joints [32]. The effect of several input weld factors on the TSFSW of welded samples was studied using diverse pin profiles. The temperature and load-displacement behaviour of the welds were also considered to understand the welding behaviour of PLA [33]. The Taguchi method was applied to optimize FSW process factors in order to increase the Tensile-Shear Strength (TSS) of lap weld joints in composite PE sheets reinforced with carbon fibers. The results exhibited that WS, RS, and tilt angle had a vital impact on lap WS, and an optimal TSS of 6.06 MPa was attained with a WS of 25 mm min−1, an RS of 1250 rpm, and a tilt angle of 1 degree [34]. FSSW was used to join polycarbonate (PC) sheets of 3 mm thickness. An artificial neural network (ANN) was also formulated to forecast the mechanical behaviour of the weld joints. The results of the study provide insights into the potential utilization of FSSW in the joining of thermoplastic materials [35]. The literature survey revealed that there is no established interlock pattern for joining dissimilar materials in friction stir welding (FSW), and no previous research has investigated how interlock patterns affect the strength of welded joints. This study focuses on implementing a jigsaw interlock suture in FSW of dissimilar thermoplastics to determine its impact on the strength of the weld joints.
Methodology A 3D model of a spline interlock suture was created using Fusion 360 software. This model was then used to fabricate 16 samples using desktop FDM with specific parameters such as an infill of 30% and a wall thickness of 1.5 mm. The samples were then welded using FSW with a round pin tool according to a DoE. The TSFSW of all samples was tested, and the results were used to develop a regression model. The model was further optimized using RSM to predict the best FSW factor values for maximizing the TSFSW of the bio-inspired interlock sutures. The predicted results were then experimentally validated. Materials and setup 3.1. Materials FDM commonly uses thermoplastic filaments as raw material to convert 3D models into physical components. PETG (Polyethylene terephthalate glycol) was used for the sample fabrication in this research due to its strength, durability, and temperature tolerance. Specimen fabrication The specimen fabrication setup for the FSW process includes the Work-Bee CNC router 1010 machine, with a work volume of 1145 × 1125 × 510 mm and a maximum spindle speed of 30,000 rpm, and a customized fixture. The fixture for FSW is designed to securely hold the samples while welding in a single-pass square butt welding configuration. In this study, the dissimilar thermoplastic materials PLA (Polylactic Acid) and PETG, whose material properties are shown in table 1, were subjected to FSW using a single tool, a tapered cylindrical pin, as depicted in figure 1. As shown in figure 2, specimens with dimensions of 5 mm thickness, 100 mm length, and 40 mm width were fabricated with a jigsaw-like spline pattern created using the FDM process. To improve the TSFSW of the dissimilar thermoplastics PLA and PETG, a novel approach was proposed, which involved the implementation of a jigsaw suture interlock pattern consisting of a radius of 2.5 mm with an angle of 35 degrees [27,28]. FSW of dissimilar thermoplastics A specialized fixture was designed to hold both PLA and PETG samples firmly in place and maintain precise alignment between the male and female spline patterns to weld the dissimilar thermoplastics. A custom circular tool was used to execute the welding process. The novel spline interlock pattern allowed for uniform distribution of both materials in the welded region, reducing the concentration of stress points and enhancing the strength of the joint. This research focused on investigating the spline interlock pattern's effect on the tensile strength of the FSW process for the dissimilar thermoplastics PLA and PETG. The fabricated samples are shown in figure 3. Tensile test To measure the TSFSW of welded joints, a total of 16 samples were subjected to a gradual load using a Universal Testing Machine (UTM) with a capacity of 100 kN (AILM 100 kN). The load was applied at a rate of 1 mm min−1 until the specimens fractured.
Design of experiment To attain the maximum TSFSW in the 3D-printed bio-inspired jigsaw suture pattern, significant process parameters of FSW, such as RS, TS, and PD, were considered through a literature survey. The process parameters for the jigsaw suture interface were varied at levels −1, 0, and +1. To avoid insufficient melting of PETG and void formation in PLA during FSW of dissimilar thermoplastics, the rotational speed (RS) is maintained between 800 and 1200 rpm. This provides sufficient heat generation for effective PETG melting and minimizes the risk of excessive heat leading to voids in PLA. A TS of 30 mm min−1 to 50 mm min−1 allows for an appropriate interaction time between the tool and the samples. This ensures that both materials are adequately softened and mixed, facilitating good intermolecular bonding. It helps prevent incomplete melting or over-melting, which can compromise the quality of the weld joint. Similarly, the plunge depth in FSW is typically between 0.2 mm and 1 mm. This range ensures controlled material mixing and heat generation, minimizing the risk of void formation while maintaining a reliable weld of PLA and PETG. The range values of the selected process parameters listed in table 2 were identified through experiments. The DoE utilised a Central Composite Design, which included 2³ factorial points, 2 × 3 axial points, and two central points. The design, generated in Design Expert software, resulted in a total of 16 experiments. For the spline interlock jigsaw suture interface, a total of 16 specimens were fabricated following the DoE, as displayed in table 3. Regression model A mathematical model was developed to forecast the TSFSW for dissimilar thermoplastic 3D-printed jigsaw sutures. The model takes into account the process parameters of RS, TS, and PD. The input and output factors of the FSW process are illustrated in figure 4. The model allows for accurate estimation of the TSFSW based on these process parameters. To predict the maximum TSFSW, a quadratic model is developed for the selected FSW factors RS, TS, and PD; its general form is shown in equation (1):

Y = β0 + β1(RS) + β2(TS) + β3(PD) + β11(RS)² + β22(TS)² + β33(PD)² + β12(RS)(TS) + β13(RS)(PD) + β23(TS)(PD) (1)

where RS, TS, and PD are the factors of the response Y. The developed quadratic model of the dissimilar thermoplastic with a jigsaw suture interface, given in equation (3), was obtained by estimating these coefficients from the experimental data. The developed model was assessed for its adequacy and demonstrated a high correlation, with an elevated regression coefficient R² of 0.96. This indicates a strong relationship between the variables in the formulated model. Additionally, the F-value (16.46) of the model indicates its significance, as shown in table 4. The comparison chart depicted in figure 5 clearly demonstrates that the experimental and predicted values exhibit a similar pattern. This confirms that the formulated model possesses the capability to accurately predict TSFSW, as it aligns well with the observed experimental values. Result and discussion This research aimed to improve the TSFSW of dissimilar thermoplastic joints by incorporating a bio-inspired jigsaw suture and optimizing the FSW parameters. The optimization was successfully carried out by employing RSM techniques. The maximum TSFSW value predicted by the optimization technique was evaluated through experiments to validate its accuracy [36].
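To make the regression step described above concrete, the following is a minimal sketch of fitting a second-order response surface to a face-centred CCD layout using Python and scikit-learn; the design matrix and the synthetic response are illustrative placeholders (not the measured values of table 3), and the fitted coefficients are not the Design Expert model of equation (3).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Face-centred CCD-style layout over RS (rpm), TS (mm/min), PD (mm):
# 2^3 factorial points, 6 axial points and 2 centre points (16 runs).
X = np.array([
    [800, 30, 0.2], [1200, 30, 0.2], [800, 50, 0.2], [1200, 50, 0.2],
    [800, 30, 1.0], [1200, 30, 1.0], [800, 50, 1.0], [1200, 50, 1.0],
    [800, 40, 0.6], [1200, 40, 0.6], [1000, 30, 0.6], [1000, 50, 0.6],
    [1000, 40, 0.2], [1000, 40, 1.0], [1000, 40, 0.6], [1000, 40, 0.6],
], dtype=float)

# Synthetic placeholder response so the sketch runs end to end; substitute
# the measured TS(FSW) values when reproducing the paper's model.
rng = np.random.default_rng(0)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
y = 9.5 + 0.8 * Xs[:, 1] + 0.6 * Xs[:, 2] - 0.7 * Xs[:, 2] ** 2 + rng.normal(0, 0.2, len(X))

# Full quadratic model: linear, squared and interaction terms, as in equation (1).
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 =", model.score(quad.transform(X), y))   # the paper reports 0.96
```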
RSM results RSM was utilized to determine the optimal FSW input variables for achieving enhanced tensile properties in the FSW joints of dissimilar thermoplastic bio-inspired jigsaw sutures. The predicted optimum FSW parameter values to maximize the TSFSW, obtained through Design Expert software, are presented in table 5. Figure 6 depicts the desirability ramp function graph, which provides insights into the optimization process for maximizing the TSFSW in the FSW of dissimilar thermoplastic joints. The graph showcases the predicted optimum values of the Rotational Speed (RS) at 1200 rpm, Traverse Speed (TS) at 49.39 mm min−1, and Plunge Depth (PD) at 0.37 mm, as predicted by Response Surface Methodology (RSM). The desirability ramp function graph combines the optimized parameter values of RS, TS, and PD using the desirability approach: the ramps show how each parameter's desirability changes and how close the predicted values are to the desired targets, while the final desirability value provides an overall measure of how well the parameters align with the optimization goal. The optimization highlights an achieved output response Tensile Strength of 11.299 MPa. Influence of RS and TS on TS(FSW) The relatively lower F-value of Rotational Speed (RS) compared to other factors, such as TS and PD, suggests that RS has a less significant influence on the TSFSW joints of dissimilar thermoplastics. When the jigsaw suture is present, at a maximum rotational speed of 1200 rpm, greater friction arises where the tool shoulder and base plate contact. This enhanced friction provides an optimal heat input in the weldment region, causing the thermoplastics (PLA and PETG) to reach a plastic state. During this stage, the stirring action of the selected tool promotes the easy flow of material around the tool pin, resulting in a uniform distribution. This uniform distribution contributes to a better-plasticized state of the dissimilar thermoplastic materials, allowing the tool to travel easily near the trailing edge of the sample. Although the 3-dimensional surface graph in figure 7 indicates a slight increase in TSFSW with a slight increase in RS, the lower F-value of RS suggests that its influence on TSFSW is relatively less significant compared to the other parameters. The F-value represents the statistical significance of a parameter, and in this case, the lower F-value for RS indicates that other factors, such as TS and PD, have a more dominant influence on the overall strength of the TSFSW joints.
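As a sketch of the numerical search that sits behind a desirability-style optimisation (a simplified single-response analogue, not Design Expert's own algorithm), the fitted quadratic model from the earlier sketch can be maximised within the experimental ranges:

```python
import numpy as np
from scipy.optimize import minimize

# Maximise the fitted response surface within the experimental ranges.
# `model` and `quad` are the fitted objects from the regression sketch above.
def neg_ts(x):
    return -model.predict(quad.transform(x.reshape(1, -1)))[0]

bounds = [(800.0, 1200.0), (30.0, 50.0), (0.2, 1.0)]   # RS, TS, PD
starts = [np.array([1000.0, 40.0, 0.6]), np.array([1150.0, 48.0, 0.4])]
best = min((minimize(neg_ts, x0, bounds=bounds, method="L-BFGS-B") for x0 in starts),
           key=lambda r: r.fun)
print("optimum (RS, TS, PD):", best.x.round(2), "predicted TS(FSW):", round(-best.fun, 2))
```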
Influence of TS and PD on TS(FSW) The statistical analysis reveals that TS and PD have a higher significance on the TSFSW joints compared to Rotational Speed (RS). This is evident from the higher F-values of 24.62 for TS and 28.41 for PD. These findings indicate that TS and PD are more influential in determining Tensile Strength. The surface plot analysis shown in figure 8 provides further insights into the relationship between TS and Tensile Strength. It clearly demonstrates that increasing TS directly corresponds to an increase in Tensile Strength. This can be attributed to lower TS settings resulting in more friction and longer stirring times, leading to the formation of voids and reduced Tensile Strength. Conversely, higher Traverse Speed (TS) settings reduce stirring time, promoting a more uniform distribution of materials and consequently enhancing Tensile Strength. PD also significantly affects Tensile Strength, as indicated by its higher F-value. The surface plot illustrates that increasing PD initially enhances Tensile Strength. This is because greater PD allows for deeper penetration of the FSW tool, facilitating better material mixing and bonding. However, beyond a certain point (0.6 mm in this case), further increases in PD can have a negative impact on Tensile Strength. Excessive PD may also create voids in the weldment zone and reduce the thickness of the weld interface, ultimately diminishing Tensile Strength. Maintaining an optimal combination of these parameters is crucial for achieving superior Tensile Strength in dissimilar thermoplastic joints. Influence of RS and PD on TS(FSW) The surface plot shown in figure 9 illustrates that RS, although less significant than the other parameters, influences the Tensile Strength to some extent. Increasing RS results in increased friction and generates more heat between the tool shoulder and base plate. This improved heat input optimizes the plasticization of the dissimilar thermoplastic materials within the weldment zone of the jigsaw interlock suture. The enhanced plasticization facilitates the smooth flow and uniform distribution of the materials around the tool pin. As a result, better bonding between the materials is achieved, ultimately contributing to improved Tensile Strength. In contrast, PD is more prominent in influencing Tensile Strength, as indicated by its higher significance. Increasing PD initially enhances the Tensile Strength by enabling more extensive material mixing and improved bonding. The increased plunge depth allows for better penetration of the FSW tool, leading to a thorough blending of the dissimilar thermoplastic materials. Consequently, the joints exhibit improved mechanical properties and higher Tensile Strength. However, it is crucial to note that excessively high PD can negatively affect Tensile Strength. Beyond a certain threshold, typically around 0.6 mm, excessive PD can result in the formation of blowholes in the weldment zone. These blowholes weaken the joint structure and reduce the overall Tensile Strength.
Additionally, excessive PD may lead to a reduction in the thickness of the weld interface, further compromising the joint's mechanical integrity.Optimizing both RS and PD parameters is essential for achieving maximum Tensile Strength in FSW joints.Adjusting RS influences the plasticization and material flow, while controlling PD ensures proper material mixing and bonding.By setting the RS to a maximum of 1200 rpm and the PD to 0.37 mm demonstrates the ability to achieve robust and dependable joints with exceptional Tensile Strength (TS). Optical microscope and SEM analysis Figure 10(a) presents an optical microscope image demonstrating the successful mixing of PLA and PETG in the FSW joint, providing visual confirmation of a high-quality weld.This visual evidence solidifies the conclusion that the weld has achieved a favourable level of integrity and material blending.To further validate the mechanical performance, figure 10(b) displays an image from the cross-section of the fracture zone obtained during the tensile test.The presence of clear evidence of pull-out signifies a robust tensile strength, reinforcing the notion that the weld has attained satisfactory mechanical properties.In addition, figure 10(c) Showcases the top surface of the FSW joints captured using scanning electron microscopy (SEM), revealing a uniform and wellblended composition of the dissimilar thermoplastics PLA and PETG.This SEM image offers a higher level of detail, confirming the homogeneous mixture of the materials in the joint.Furthermore, figure 10(d) Exhibits the fractured surface of the Friction Stir Welded samples under the optimized parameters, also captured through SEM.The occurrence of distinct dimples on the fracture surface signifies a ductile fracture mode, indicating high tensile strength properties.The presence of microvoids observed in the SEM image (d) indicates a pull-out mechanism, which can contribute to increased tensile strength.These voids act as stress concentrators, causing energy dissipation and hindering crack propagation, thus enhancing the overall mechanical performance.The identification of cleavage facets in the image suggests a brittle nature, indicating potential areas of weakness.However, it is essential to note that the presence of cleavage facets does not necessarily indicate a decrease in tensile strength.In fact, these facets can help absorb and distribute stress, leading to improved fracture resistance and higher energy absorption capabilities.The detection of oxide precipitation spots in the SEM image reveals the formation of oxides on the fracture surface.While oxide precipitation can indicate a chemical reaction or environmental exposure, its effect on tensile strength may vary.In some cases, oxides can act as strengthening agents by reinforcing the material's structure.However, excessive oxide formation may lead to localized weakening and reduce the overall tensile strength.These figures, obtained from both optical microscopy and SEM, collectively provide strong evidence supporting the successful mixing and favourable mechanical performance of the FSW joints. Evaluation of RSM result with experiment result The best FSW factors obtained from RSM are presented in table 6. 
Experimental validation of these parameters resulted in a maximum TSFSW of 11.1 MPa. Furthermore, the experimental data deviate from the RSM predictions by less than 1%, indicating high accuracy. These findings strongly support the effectiveness of the formulated model and RSM in predicting the optimal FSW process parameter values for achieving superior Tensile Strength in the jigsaw-suture-interfaced dissimilar thermoplastic joints. Conclusion This research focused on enhancing the tensile strength of dissimilar thermoplastic joints by utilising a bio-inspired jigsaw suture and optimising Friction Stir Welding (FSW) parameters. • The bio-inspired jigsaw suture interface played a crucial role in improving the tensile strength of the joints. The jigsaw suture facilitated enhanced material flow, bonding, and distribution during the FSW process. • By creating an optimal heat input and promoting plasticization of the thermoplastic materials, the jigsaw suture interface enabled better mixing and blending of the materials. This resulted in a more uniform distribution of the dissimilar thermoplastic materials and ultimately contributed to increased tensile strength in the joints. • In addition to the jigsaw suture, the optimization of FSW parameters also played a significant role in enhancing the tensile strength. The statistical analysis revealed that the Traverse Speed (TS) and Plunge Depth (PD) parameters had a higher significance on the tensile strength compared to the Rotational Speed (RS). • Increasing the Traverse Speed (TS) led to improved tensile strength by reducing stirring time and promoting a more uniform distribution of materials. On the other hand, the Plunge Depth (PD) influenced tensile strength by facilitating better material mixing and bonding. However, excessive PD beyond a certain point had a negative impact on tensile strength due to the creation of voids and reduced thickness in the weld interface. • Through the utilization of Response Surface Methodology (RSM), the optimum values of the FSW parameters were predicted. The experiments conducted to evaluate the predicted FSW parameters resulted in a maximum tensile strength of 11.1 MPa, with an error percentage of less than 1% compared to the RSM-predicted result. This validates the accuracy and reliability of the developed mathematical model and demonstrates the effectiveness of RSM in predicting the optimum FSW process parameter values. In summary, in conjunction with the optimized FSW parameters, the bio-inspired jigsaw suture interface successfully enhanced the tensile strength of dissimilar thermoplastic joints. This research contributes to advancing the understanding and application of FSW techniques in various industries and applications, offering improved mechanical properties and performance in joint structures.
Figure 2. Schematic of spline suture interfaces and geometric parameters.
Table 1. PETG and PLA material properties.
Figure 5. Comparison of experimental and predicted values of TS(FSW).
Figure 7. Surface plot of RS and TS on TS(FSW).
Figure 8. Surface plot of TS and PD on TS(FSW).
Figure 9. Surface plot of RS and PD on TS(FSW).
Figure 10. Optical microscopic images showing (a) the top surface of the FSW joint and (b) the fracture surface after the tensile test; SEM images depicting (c) the top surface of the FSW joint and (d) the fracture surface after the tensile test.
Table 2. FSW parameters and levels.
Table 3.
Experimental and predicted TS(FSW) values and error (%) derived from the regression model. Columns: S.No, RS (rpm), TS (mm/min), PD (mm), Experimental TS(FSW) (MPa), Predicted TS(FSW) (MPa), Error (%). P-values of the model terms, including TS, PD, RS*TS, RS*PD, TS*PD, RS², TS², and PD², were found to be less than 0.0500, further confirming their significance. The effectiveness of the formulated model was confirmed by comparing it with experimental values. The experimental and predicted TS(FSW) values, together with the average error difference defined in equation (4), clearly demonstrate the model's ability to predict the response accurately.
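A minimal sketch of the error column above, i.e. the per-run percentage deviation between measured and predicted strength; the numbers used are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Percentage error between measured and model-predicted tensile strength,
# computed per run and averaged (placeholder values, not the table 3 data).
experimental = np.array([10.8, 11.1, 9.7])   # measured TS(FSW), MPa
predicted = np.array([10.7, 11.0, 9.8])      # regression-model predictions, MPa
error_pct = 100.0 * np.abs(experimental - predicted) / experimental
print(error_pct.round(2), "average error %:", error_pct.mean().round(2))
```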
6,920.2
2023-09-27T00:00:00.000
[ "Materials Science" ]
Efficient Deployment of Conversational Natural Language Interfaces over Databases Many users communicate with chatbots and AI assistants in order to help them with various tasks. A key component of the assistant is the ability to understand and answer a user's natural language questions for question-answering (QA). Because data is usually stored in a structured manner, an essential step involves turning a natural language question into its corresponding query language. However, in order to train most natural language-to-query-language state-of-the-art models, a large amount of training data is needed first. In most domains, this data is not available, and collecting such datasets for various domains can be tedious and time-consuming. In this work, we propose a novel method for accelerating the training dataset collection for developing natural language-to-query-language machine learning models. Our system allows one to generate conversational multi-turn data, where multiple turns define a dialogue session, enabling one to better utilize chatbot interfaces. We train two current state-of-the-art NL-to-QL models on both SQL- and SPARQL-based datasets in order to showcase the adaptability and efficacy of our created data. Introduction Chatbots and AI task assistants are widely used today to help users with their everyday needs. One use for these assistants is asking them questions on various areas of knowledge or how to accomplish different tasks (Braun et al., 2017; Cui et al., 2017). Because data is usually stored in a structured database, in order to answer a user's questions, it is essential that the system should first understand the question and convert it into a structured language query, such as SQL or SPARQL, to fetch the correct answer. While much research has focused on translating natural languages into query languages (Ngonga Ngomo et al., 2013; Braun et al., 2017; Dubey et al., 2016; Giordani and Moschitti, 2009; Finegan-Dollak et al., 2018; Giordani, 2008; Xu et al., 2017; Zhong et al., 2017), the state-of-the-art systems typically involve a large amount of training data. Therefore, in order to fully utilize these models that translate a natural language (NL) question into a query language (QL), one would need to collect a large number of NL-QL pairs. Although there are works which involve the collection of NL-QL pairs in different domains (Hemphill et al., 1990; Zelle and Mooney, 1996; Zhong et al., 2017; Yu et al., 2018, 2019b), data is still not available in most domains, and thus this collection process can be both time-consuming and expensive. In this work, we address the problem of having insufficient data collection methodologies by proposing a novel approach that accelerates the data collection process for use in NL-to-QL models. Additionally, our approach focuses on generating conversational data, where the context of a dialogue turn is used to generate a subsequent pair. In this way, we better simulate the data necessary for real-world chatbots and voice assistants, as exemplified in Figure 1. Our contributions are as follows: • We develop a novel approach that accelerates the creation of NL-to-QL data pairs. Primarily, our approach tackles the problem in the conversational domain. • We showcase our data collection system on two different QLs, SQL and SPARQL, demonstrating the flexibility of our system. • Finally, we demonstrate the use of current single-turn state-of-the-art approaches on these two domains to prove the adaptability of our system to current models.
Though our data collection implementation focuses on conversational data, the models we deploy are single-turn. Our main focus here is to give a demonstration of the generated data. Section 3 and Section 4 show the adaptability of our data collection scheme to these kinds of models. The rest of this paper is structured as follows: Section 2 surveys prior work in both the NL-to-QL and data collection space, Section 3 details our novel conversational data collection approach, Section 4 walks through examples in both the SQL and SPARQL domain, Section 5 describes the current models we have trained and tested on the generated data, Section 6 gives the results on the data and models, and Section 7 concludes our work. Related Work In the field of natural language interfaces for structured data there are bodies of work that 1) focus on translating natural language to a specific query language and that 2) relate to collecting semantic parsing data for natural language interfaces. NL-to-QL NL-to-QL models have worked to transform natural language queries into their respective logical form (LF) representations (Dong and Lapata, 2016), SQL queries (Xu et al., 2017; Zhong et al., 2017; Finegan-Dollak et al., 2018; Cai et al., 2018), or SPARQL queries (Ngonga Ngomo et al., 2013; Dubey et al., 2016). While work in the SPARQL domain first normalizes and matches the queries, state-of-the-art work in translating NL to SQL involves neural architectures. Dong and Lapata (2016) utilize an encoder-decoder framework to translate NL questions into their LF representation. Xu et al. (2017) propose a sketch-based model where a neural network predicts each slot of the sketch. The architecture built by Zhong et al. (2017) uses policy-based reinforcement learning in order to translate NL to SQL. While Finegan-Dollak et al. (2018)'s main takeaway is how different evaluations affect the generalization problem in translating NL to SQL, they approach the problem with a seq2seq model. Because of the volume of data needed to fully utilize these models, it can be difficult to adapt to different domains. In the multi-turn domain, Saha et al. (2018) approach the problem of complex sequential question-answering (CSQA) by first building a large-scale QA dataset made to answer questions found in Wikidata. However, their data collection process was extremely laborious, as their process required in-house annotators, crowdsourced workers, and multiple iterations. Additionally, their approach was end-to-end, meaning the output was an expected answer. Nevertheless, because their approach incorporates the query representation, we plan to further incorporate their approach into our data collection process in future work. Yu et al. (2019a) also develop the first general-purpose DB querying dialogue system. However, their system dialogues focus on clarifying a NL question for user verification, before returning an answer. Our work focuses on generating conversational data about specific database entities and properties. Data Collection for Semantic Parsing NL question semantic parsers have been developed for single-turn QA in order to translate simple NL questions into their respective LFs (Wang et al., 2015). In their approach, Wang et al. (2015) first begin with a domain, building a seed lexicon of that domain. Next, they find the corresponding LF and canonical utterance templates based on the lexicon. Wang et al. (2015) then paraphrase their canonical utterances via crowd-sourcing. Iyer et al.
(2017) learn a semantic parser via an encoder-decoder model by using NL/SQL templates. This model is tuned through user feedback, where incorrect queries are annotated by crowd-workers. Paraphrasing is accomplished through the Paraphrasing Database (PPDB) (Ganitkevitch et al., 2013). While the two previously mentioned works are single-turn semantic parsers, Shah et al. (2018) begin with a task schema and API which is used to create dialogue outlines for the provided domain. These dialogue outlines involve a user and system bot that simulate a scenario. The dialogues are then paraphrased via crowd-sourcing. However, Shah et al. (2018) use the logical-form representation of the utterances rather than their query language representation. In our work, we re-incorporate the paraphrases into the dialogue generation phase. Data Collection System Our conversational data collection strategy is developed to efficiently collect NL/QL pairs for training data in models which translate NL into QL in a multi-turn setting. Because domain data is required when training a chatbot to query a database when converting from NL to QL, our approach is generalized so that one can easily collect data for their respective domain. Overview Our approach to collecting data is made up of the four following steps: 1) first, we generate the dialogue represented as LFs, forming the abstract representations of NL questions, 2) next, we convert the LFs into NL templates and QL templates, 3) we then collect paraphrases of the natural language templates, and 4) finally, we use these paraphrases to further develop our dialogue generator. In generating our dialogue, the context of each previous turn is taken in order to develop the current turn. Figure 2 presents our data deployment system. We divide and expand upon the steps further in the next sections. Definitions We first define the following notations in our data collection system: • U_n: an utterance in the dialogue. • LF_n: the n-th logical form in the dialogue. • NL_n: the NL utterance corresponding to LF_n. • QL_n: the QL utterance corresponding to LF_n. Input Module The input to our data collection system consists of a domain ontology, lexicon, and database. These should be provided by the user and vary depending on the type of data one requires. The domain ontology defines the <object, relation, property> triples of a given dataset, where each object has a set of properties connected through a relation, e.g. <ACL 2020, has location, Seattle>. The lexicon file defines each data field, along with its NL and QL representation, important in the NL-QL Generator step. The database is the data in structured form. Logical Form Dialogue Generator In order to appropriately simulate a conversation between a user and chatbot, the synthetic dialogue must first be generated. This is done by first outlining the dialogue via LFs, where the system generates LF_1 through LF_n. These outlines are an abstract but understandable representation of the dialogue taking into account the type, entity, and relation of a question. Thus, our parser builds a dialogue based on a domain ontology, lexicon, and domain database.
The LFs take the form of three predicates: Retrieve-Objects, Inquire-Property, and Compute, each taking its own arguments. For the Retrieve-Objects predicate, the LF fetches an instance that satisfies a condition. As arguments, Retrieve-Objects takes an entity type t_n from the ontology, a boolean condition c_n, and a property value p_n from the DB. For the Inquire-Property predicate, given an anchor entity ae_n, a target instance ti_n, and an inference path ip_n from the entity to that instance, the LF finds the property along that path of the anchor entity. The Compute predicate denotes a computation comp_n over a set of given objects; thus its arguments are comprised of the Retrieve-Objects arguments and an operation to be performed. For our work, we focus on using the COUNT aggregate function. Future work can easily adapt more aggregate functions into our model, such as MAX or MIN, depending on the values contained in the database. More formally, the three LFs can be written as Retrieve-Objects(t_n, c_n, p_n), Inquire-Property(ae_n, ti_n, ip_n), and Compute(comp_n, t_n, c_n, p_n). At the start of a dialogue, a random LF predicate is selected, given the database schema, lexicon, and domain ontology. The subsequent turns in the dialogue are built conditionally on the previous turn. Therefore, given LF_{n-1}, when generating LF_n the context of LF_{n-1} is further taken into consideration, including its arguments, type, and answer. The subsequent predicate is also chosen at random; however, its values are conditioned on the arguments and answer(s) of the current predicate. For example, if LF_{n-1} is a Retrieve-Objects predicate and another Retrieve-Objects predicate is chosen as LF_n, this LF can further filter the answer of LF_{n-1} by using an additional condition. Table 1 summarizes the types of LFs, along with an explanation and example of each, both in LF and NL, which we discuss in the next section. NL-QL Generator Once the LF generator is complete, the data collection system generates an NL utterance along with its corresponding QL. To generate such pairs, the NL-QL generator takes in each LF from the LF dialogue as input. Based on the predicate type, an NL-QL pair is selected and filled with the corresponding arguments of the predicate. Thus, the system uses NL seed templates for the Retrieve-Objects, Inquire-Property, and Compute predicates to create the initial training data for the conversational dialogue. For example, one NL template for turns after NL_1 can be "How about <entity>?" The aforementioned seed templates are hand-crafted based on the type of data and are thus left to the user to create. These data are hand-crafted to increase the quality of the seed templates in terms of coherency and utility, important features not only for quality training data, but also when performing the paraphrase task. Because we hand-crafted the query language templates, we also guarantee that the queries are executable for their corresponding QLs, SQL or SPARQL in this work. For the QL, we fill in slots for field names, aliases, and values, utilizing the information in the domain ontology, lexicon, and database schema. Note, 'field' refers to column names in relational DBs (queried with SQL) and type names in graph DBs (queried with SPARQL). To reiterate, the NL-QL generator takes each LF_n, with its respective arguments, and the seed templates as input, and outputs an NL_n-QL_n pair, where U_n → (NL_n, QL_n). Section 4 goes through detailed examples of various NL-QL pairs.
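A minimal sketch of the LF dialogue generation just described (an illustration under assumptions, not the authors' implementation): the first turn draws a predicate from a toy ontology, and each later turn is conditioned on the previous turn's predicate and arguments.

```python
import random
from dataclasses import dataclass

@dataclass
class RetrieveObjects:
    entity_type: str
    condition: tuple            # (field, operator, value)

@dataclass
class InquireProperty:
    anchor_entity: str          # an entity name, or "Answer" = previous turn's result
    target_property: str

@dataclass
class Compute:
    operation: str              # e.g. "COUNT"
    entity_type: str
    condition: tuple

def first_turn(ontology):
    # ontology: list of (entity_type, field, example_value) triples.
    entity, field, value = random.choice(ontology)
    return RetrieveObjects(entity, (field, "=", value))

def next_turn(prev, ontology):
    # Condition the new LF on the previous turn's arguments/answer.
    entity, field, value = random.choice(ontology)
    if isinstance(prev, RetrieveObjects):
        # e.g. ask a property of the objects just retrieved, or count them.
        if random.random() < 0.5:
            return InquireProperty("Answer", field)
        return Compute("COUNT", prev.entity_type, prev.condition)
    return RetrieveObjects(entity, (field, "=", value))

def generate_dialogue(ontology, n_turns=3):
    turns = [first_turn(ontology)]
    for _ in range(n_turns - 1):
        turns.append(next_turn(turns[-1], ontology))
    return turns

# Toy ontology in the spirit of Figure 3 (names are illustrative only).
ontology = [("employee", "dept_name", "Marketing"), ("employee", "works_in", "IT")]
print(generate_dialogue(ontology))
```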
Paraphrase The final step involves the paraphrasing of the seed NL templates given in the NL-QL Generator step. To paraphrase the seed NL templates, we first provide crowdworkers from Amazon Mechanical Turk (AMT) with the instantiated templates, the output from the first iteration of the NL-QL generator. We ask the workers to paraphrase the seed templates while keeping the meaning/intent of the original questions. After collecting these paraphrased questions, we further abstract them and link them to their respective predicate representation. In this way, the paraphrases can be utilized in further iterations of the NL-QL Generator step and instantiated when generating new dialogues for training data. While abstracting the templates, we manually scan them for quality control purposes. Furthermore, we ran multiple trial runs in presenting the problem to the AMT workers. Previous work (Wang et al., 2015; Shah et al., 2018) also uses similar crowd-sourced paraphrasing, and, following Shah et al. (2018), we input the paraphrases back into our NL-QL generation step. Figure 2 illustrates this through the "+" symbol, signifying that the paraphrases are appended to the seed templates mapping to LF, creating the final NL-QL pairs. This approach can take multiple iterations, as the user sees fit for the NL question generation task in their data domain. Data Examples In this section we will showcase examples in both the SQL and SPARQL domains and traverse through each stage of our Data Collection System. We first begin with SQL, used to query relational databases, and then demonstrate our system with a graph querying language, SPARQL. By doing so, we show the extendability of our approach to various structured QLs. Moreover, we confirm the importance of generating executable queries in a conversational data collection system. SQL Through our data collection system for conversational QA, we are able to produce context-dependent NL-SQL pairs. For the SQL example, suppose a user wants to produce data for an employee directory relational database. Figure 3 gives an example of possible input files needed to produce this kind of conversational data with our data collection system, including a domain ontology with two entities, Employee and Department, a lexicon to map NL and QL instances, and a database containing Employee and Department data. Thus, given the input files in Figure 3, possible LF_n values with each predicate are: (i) Retrieve-Objects(employee(ALL), (employee.dept_name, '=', Marketing)), (ii) Inquire-Property(James, dept_name), and (iii) Compute(COUNT, employee(ALL), [('works in', 'IT')]). In (i), the logical form represents a retrieval of the employee objects who work in the Marketing department. (ii) asks about the department name of James. (iii) computes the total number of employees who work in the IT department. During the generation of LF_1, one of these LFs can be generated. Then for LF_2 to LF_n, the context is passed along to generate the LFs. Here n denotes the number of turns a dialogue can take. As an example, given that LF_1 is (i) from the aforementioned LFs, LF_2 can be Inquire-Property(Answer, phone_num), where Answer denotes the objects returned by LF_1. Our dialogue generation system allows one to tune the number of turns and the number of dialogues generated from the given input. For the NL-QL step, our input includes the dialogues represented as LFs along with the NL-QL seed templates described in Section 3.5. Possible templates are given in Table 2.
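As an illustration of how such a seed template is instantiated into an NL-SQL pair (a sketch under assumed table and column names, reusing the RetrieveObjects dataclass from the earlier sketch; this is not the system's actual schema or code):

```python
# Toy lexicon in the spirit of Figure 3: maps data fields to NL and QL names.
LEXICON = {
    "employee": {"nl": "employee", "ql": "employee"},       # table name
    "dept_name": {"nl": "department", "ql": "dept_name"},   # column name
}

NL_SEED = "Which {entity} have {field} equal to {value}?"
SQL_SEED = "SELECT * FROM {table} WHERE {column} = '{value}';"

def instantiate_retrieve(lf):
    # Fill the Retrieve-Objects seed templates from the LF's arguments.
    field, _, value = lf.condition
    nl = NL_SEED.format(entity=LEXICON[lf.entity_type]["nl"],
                        field=LEXICON[field]["nl"], value=value)
    sql = SQL_SEED.format(table=LEXICON[lf.entity_type]["ql"],
                          column=LEXICON[field]["ql"], value=value)
    return nl, sql

# instantiate_retrieve(RetrieveObjects("employee", ("dept_name", "=", "Marketing")))
# -> ("Which employee have department equal to Marketing?",
#     "SELECT * FROM employee WHERE dept_name = 'Marketing';")
```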
Note that we refer to a column in a relational DB as a field. Taking our previous Retrieve-Objects example, the filled seed template would read: "Which employee have department equal to Marketing?" The lexicon from Figure 3 is utilized here, as the instance name is mapped to its NL name. Similarly, its QL name (table name) is mapped in the SQL query. Finally, in the final step, as explained in Section 3.5, the NL seed templates are paraphrased via crowdsourcing; for example, "Which employee have department equal to Marketing?" can be paraphrased into "Who works in the marketing department?".

Table 2: Examples of seed templates with their respective predicates. <entity> refers to an entity type. <field name> corresponds to a column in a relational DB or a relation in a graph DB. <instance> refers to the value of that field in the DB. <entity value> is an instance of an entity in the DB.

Figure 4: An example of a subgraph in the Photoshop Knowledge Graph. The Layer (red node) can be seen connected to its objects (blue nodes) through relations. Here we can see that the Layer entity is connected to the various actions associated with "Photoshop Layers", such as "flatten", "lock", and "use", where the object nodes show how they can be performed.

SPARQL SPARQL is used to query graph databases, where entities are linked together through relations. These graph databases usually take the form of triples of the form <subject, relation, object>. Because both the LF-Generator and the NL-QL Generator remain the same as in Section 4.1, here we examine the main differences in the system data when utilizing SPARQL instead of SQL. As a guide, we refer to the example given in Figure 4. Figure 4 gives an example of a subgraph found in the Photoshop Knowledge Graph (KG). This KG contains the various tools, dialogs, shortcuts, and options found in Photoshop, connected to their options and definitions through relations. The KG is extracted from the Photoshop Wiki. Similarly to the SQL example above, we input a domain ontology, lexicon, and database to the conversational data collection system. However, in the case of a graph database, the entities found in the ontology are more clearly defined. Additionally, instead of a table structure, the database is in the form of <subject, relation, object> triples, where each entity belongs to a type defined in the ontology. While the types of LFs generated in the LF-Generator are equivalent, a field now corresponds to the relation found in a triple, while its value corresponds to the object of the KB triple. For example, an entity such as the one found in Figure 4 may have various properties, including "has shortcut" and "has option". When generating NL-QL pairs, the generator again takes the output of the LF-Generator, the lexicon, and the seed templates, where the QL template is SPARQL-based instead of SQL-based. Paraphrases are collected in the same way. Thus, an example Photoshop Retrieve-Objects LF, template question, and paraphrase may look like: "LF: Retrieve-Objects(tool(ALL), (tool.hasshortcut, =, H))", "Template: Which entities have relation equal to object?", and "Paraphrase: What's the tool with the H shortcut?" Experiments We will now examine our experiments in a relational and graph database setting. We first briefly discuss the data used in constructing the conversational dataset and then describe the various models utilized in translating the NL questions into their respective structured queries.
Data For our experiments involving SQL data, we construct an NL-QL conversational dataset based on a proprietary web analytics tool. In our results table, we refer to this dataset as Web-Analytics. For the graph database, we construct an NL-QL conversational dataset based on the Photoshop KB, such as the one exemplified in Section 4.1. As previously noted, this KB contains various entities found in Photoshop, connected to their properties through predicates which define the properties. In total, the KB contains 15,381 triples, with 3,410 triples that correspond to how-to type queries. After running our conversational data collection system on both sets of data, we collected 288 and 73 NL-QL template pairs for the Photoshop and Web-Analytics datasets, respectively. Table 3 summarizes these statistics. Additionally, we configured our system to give 3-turn dialogues. Models In our experiments we utilize single-turn NL-QL models. Specifically, we utilize the baselines defined by Finegan-Dollak et al. (2018). The first baseline is a seq2seq model with attention-based copying, originally proposed by Jia and Liang (2016). This model takes an NL utterance as input and outputs a structured query. Included in the output is a COPY token, which signifies the copying of an input token. In the copying mechanism model, the loss combines the probability distribution over the output tokens with the probability of copying from an input token. This copying probability is calculated as the categorical cross-entropy of the attention scores distributed across the input's tokens, where the token with the maximum attention score is chosen as the output token. The second baseline is a template-based model developed by Finegan-Dollak et al. (2018). This model takes in natural language questions, along with query templates, to train. Since our data collection system directly utilizes templates to generate the data, this model is easily adaptable to our setting. We simply use the templates we collect from both the seed-template and paraphrasing tasks, as well as the slot values extracted from the source DB when creating the dialogue data, to train the model. In the template-based model, there are two decisions being made. First, the model selects the best template for the input. This is done by passing the final hidden states of a bi-LSTM through a feed-forward neural network. Next, the model selects the words in an input NL question which can fill the template slots. Again, the same bi-LSTM is used to predict whether an input token is used in the output query or not. Thus, given a natural language question, the model jointly learns the best template for the given input, as well as the values that fill the template's slots. Figure 5 depicts this model applied to an input from our SPARQL domain, where the selected template corresponds to the query SELECT ?entity ?property WHERE { ?entity rdf:type ontology:ps_entity . ?entity ontology:sharpen ?property . ?entity rdfs:label "ps_entity0"@en }.
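A minimal PyTorch sketch of this two-headed architecture (an illustration of the idea only, not Finegan-Dollak et al.'s implementation; all sizes and names are assumptions):

```python
import torch
import torch.nn as nn

# One bi-LSTM encoder; its final hidden states feed a template classifier,
# and its per-token outputs feed a binary "is this word a slot value?" tagger.
class TemplateModel(nn.Module):
    def __init__(self, vocab_size, n_templates, emb_dim=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.template_head = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                           nn.ReLU(), nn.Linear(hidden, n_templates))
        self.slot_head = nn.Linear(2 * hidden, 2)   # copy token into a slot: yes/no

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        states, (h_n, _) = self.encoder(self.embed(tokens))
        sentence = torch.cat([h_n[-2], h_n[-1]], dim=-1)   # final fwd/bwd states
        return self.template_head(sentence), self.slot_head(states)

# Usage: the template logits select which query template to emit, and the
# per-token slot logits decide which input words fill its slots.
model = TemplateModel(vocab_size=5000, n_templates=288)
template_logits, slot_logits = model(torch.randint(0, 5000, (2, 12)))
```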
Note that while this model is well suited to our dataset, it does not generalize well to data outside the training domain, due to the template selection task. Although our dataset collection system generates multi-turn data, because of the immaturity of multi-turn NL-to-QL models we leave the use of multi-turn models for future work. We do, however, note the model developed by Saha et al. (2018), which answers complex sequential natural language questions over KBs and could be integrated in future work.
Settings We experimented with both the seq2seq and template-based models on the SQL-based and SPARQL-based datasets previously discussed. For the Photoshop SPARQL dataset, we generated 2,100 single-turn data pairs utilizing our data collection system, while generating 3,504 single-turn data pairs for the Web-Analytics dataset. Experiments all used a 90/10 train/validation split.
Results We evaluated the models on our generated datasets for exact-match accuracy of the SQL/SPARQL output queries. The results are shown in Table 4 and discussed below. We also investigate how the accuracy of the models increases as the number of samples generated by our data collection system increases. Figure 6 shows that for our best-performing model (seq2seq), the accuracy increases as the number of dialogue sessions (or data points) increases. While this is expected, it also shows that, through our dialogue creation system, one can improve an NL-to-QL application's performance by configuring the data creation system with more dialogues and templates. Though the models use synthetic data generated by our system, our system allows one to accelerate the data collection process and quickly deploy an NL-to-QL system that gives reasonably accurate results. This deployed system can then collect data from real application users, with the application logging whether a correct or incorrect response was returned. Iyer et al. (2017) explore this kind of work, which learns from user feedback: users marked utterances as correct or incorrect, and the accuracy of the semantic parser increased as a result.
Conclusion In this work, we propose a conversational data collection system which accelerates the deployment of conversational natural language interface applications that utilize structured data. We describe the three main processes of our system: the LF Dialog Generator, the NL-QL Generator, and the Paraphrase component. By taking a domain ontology, lexicon, and structured database as input, our system generates NL-QL multi-turn pairs which can be used to train systems that translate NL to QL. Each component of our system is examined in both the SQL and SPARQL domains. We then validate our data by training state-of-the-art NL-to-QL models on single-turn utterances. Our experiments show promising results in both the SQL and SPARQL domains, while providing an efficient method to generate data for the development of multi-turn models.
Figure 1: Example illustrating a three-turn dialogue, featuring the natural language (first column) and query language (second column) representations.
Figure 2: An overview of our conversational data collection deployment system. Blue shapes denote the input/output data at each stage, while green diamonds denote the processes of the system. The "plus" sign denotes the concatenation of both seed templates and paraphrase templates.
Figure 3: Example ontology schema, lexicon, and database. The two tables in the Database are used throughout our SQL example.
Figure 5: The template-based model developed by Finegan-Dollak et al. (2018), where the blue boxes represent LSTM cells and the green box represents a feed-forward neural network. 'Photos' is classified as a slot value, while the chosen template (Template 42) is depicted above the model. In the template, the entity slot is highlighted in yellow and the properties which make the template unique are in red. Figure 5, inspired by Finegan-Dollak et al. (2018), shows an example of the template-based model with our own input in the SPARQL domain.
Figure 6: As the dialogue session count increases for both the Photoshop SPARQL (left) and Web-Analytics SQL (right) datasets, the accuracy also increases. The y-axis of each graph marks the accuracy, while the x-axis marks the number of dialogue sessions for each dataset.
Table 1: LF predicate summary with an explanation and example of each, both in NL and LF.
Table 3: Number of templates for each dataset, where the Photoshop dataset is SPARQL-based and the Web-Analytics dataset is SQL-based.
Table 4: Results on the accuracy of the NL-to-QL task on the generated single-turn Photoshop and Web-Analytics datasets.
The results in Table 4 indicate that in both cases the seq2seq model outperforms the template-based model. While the seq2seq model achieves accuracies of .726 and .738, the template-based model reaches .305 and .641. Furthermore, the template-based model performs better on the Web-Analytics SQL-based dataset. This may be because the SQL dataset contains almost four times fewer templates than the Photoshop SPARQL dataset (73 compared to 288), leaving the template classifier fewer templates to choose among.
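For reference, the exact-match evaluation used in the Results section can be sketched in a few lines; the whitespace normalization step is an assumption, since the paper does not state how queries are canonicalized before comparison.

```python
# Sketch of the exact-match metric used above; the whitespace normalization is an
# assumption, since the paper does not say how queries are canonicalized.
def normalize(query: str) -> str:
    return " ".join(query.strip().split())

def exact_match_accuracy(predictions, references):
    assert len(predictions) == len(references)
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["SELECT * FROM employee WHERE dept = 'Marketing';"]
refs  = ["SELECT * FROM employee  WHERE dept = 'Marketing';"]
print(exact_match_accuracy(preds, refs))  # 1.0 after whitespace normalization
```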
6,319.6
2020-05-01T00:00:00.000
[ "Computer Science" ]
Protein biomarkers in cystic fibrosis research: where next? Cystic fibrosis is one of the most common life-limiting inherited disorders. Its clinical impact manifests chiefly in the lung, pancreas, gastrointestinal tract and sweat glands, with lung disease typically being most detrimental to health. The median age for survival has increased dramatically over the past decades, largely thanks to advances in understanding of the mechanisms and consequences of disease, leading to the development of better therapies and treatment regimes. The discovery of dysregulated protein biomarkers linked to cystic fibrosis has contributed considerably to this end. This article outlines clinical trials targeting known protein biomarkers, and the current and future contributions of proteomic techniques to cystic fibrosis research. The treatments described range from those designed to provide functional copies of the mutant protein responsible for cystic fibrosis, to others addressing the associated symptoms of chronic inflammation. Preclinical research has employed proteomics to help elucidate pathways and processes implicated in disease that might present opportunities for therapy or prognosis. Global analyses of cystic fibrosis have detected the differential expression of proteins involved in inflammation, proteolytic activity and oxidative stress, which are recognized symptoms of the cystic fibrosis phenotype. The dysregulation of other processes, such as the complement and mitochondrial systems, has also been implicated. A number of studies have focused specifically on proteins that interact with the cystic fibrosis protein, with the goal of restoring its normal proteostasis. Consequently, proteins involved in synthesis, folding, degradation, translocation and localization of the protein have been identified as potential therapeutic targets. Cystic fibrosis patients are prone to lung infections that are thought to contribute to chronic inflammation, and thus proteomic studies have also searched for microbiological biomarkers to use in early infection diagnosis or as indicators of virulence. The review concludes by proposing a future role for proteomics in the high-throughput validation of protein biomarkers under consideration as outcome measures for use in clinical trials and routine disease monitoring. therapy trials are underway aiming to deliver functional CFTR genes to the epithelial cells of the CF airway. In phase 1, treatment with aerosolized compacted DNA nano particles containing the CFTR gene (Copernicus Therapeutics, Cleveland, Ohio, USA) induced nasal chloride current changes in CF patients, suggesting increased CFTR functionality, but gene expression was not detected [6]. The UK Cystic Fibrosis Gene Therapy Consortium [7] is currently performing a phase 1/2 safety study using pGM169/GL67A [8] and will proceed to a multidose study in July 2011. This system utilizes lipo somes to promote the aerosolized delivery of a DNA plasmid containing the CFTR gene. Two drugs from Vertex Pharmaceuticals (Abingdon, UK), VX770 and VX809, aiming to promote the activity of mutant CFTR by increasing channel opening and trafficking to the membrane, respectively, are currently in clinical trials [6]. In phase 2, VX770 improved measures of CFTR function such as nasal potential difference and sweat chloride concentration [6]. 
Ataluren (formerly PTC124; PTC Therapeutics, South Plainfield, New Jersey, USA), which is designed to increase synthesis of full length functional CFTR, improved CFTR function for some patients in phase 2 trials [9] and is currently in phase 3 [6]. Sildenafil, which corrects F508delCFTR traffick ing and increased chloride transport in F508delCFTR mice [10], is the subject of phase 1/2 clinical trials [8]. Lung disease, resulting from chronic infection and inflammation, is the most common cause of death in the CF population and thus its treatment is a key goal of CF therapy. In the CF lung, activation of the nuclear factor (NF)κB signaling pathway leads to enhanced production of proinflammatory mediators, including interleukin (IL)8. IL8 is a potent neutrophil chemoattractant result ing in neutrophil recruitment and accompanying tissue damage through the release of neutrophil proteases and reactive oxygen species. Drugs are being developed to treat various proteins involved in this inflammatory cycle. Digitoxin has been shown to suppress hyper secretion of proinflammatory IL8 by CF lung epithelial cells in vitro [11] and its effect on sputum IL8 and neutrophil counts is currently being assessed in a phase 2 clinical trial (ClinicalTrials.gov Identifier: NCT00782288 [8]). GSK SB 656933 (GlaxoSmithKline, Uxbridge, UK) is an antagonist of the neutrophil IL8 receptor CXCR2, which mediates neutrophil migration. It has demon strated safety in a phase 1 trial [6] and is now being evaluated in a phase 2 study (ClinicalTrials.gov Identifier: NCT00903201 [8]) for pharmacodynamics and efficacy, including the reduction of sputum neutrophil elastase and neutrophil counts. Pioglitazone, already approved for treatment of other clinical conditions, is being assessed for safety and antiinflammatory action in phase 1 clinical trials against CF lung disease [6]. Its target, peroxisome proliferatoractivated receptor γ, which is reduced in CF [12], exerts an antiinflammatory effect by negatively regu lat ing NFκB activation [13]. The sputum protease matrix metalloproteinase9 has also been linked to poor lung function and airway inflammation in CF children [14] and its activity is being targeted by the antibiotic doxycycline in a current trial (ClinicalTrials.gov Identifier: NCT01112059 [8]). Various proteins and protein degradation products have been explored as candidate biomarkers of clinical outcome, such as neutrophil elastase and IL8 [15], degra dation of lung surfactant protein SPA [16], urinary desmosine [17] and prolineglycineproline [18]. However, as yet, none of these markers has been proven sufficiently robust for routine adoption in clinical trials [19]. Proteomic contributions to CF research Preclinical medical research is increasingly adopting a systems rather than a reductionist approach to under standing and treating disease, with clinical proteomics contributing to the characterization and measurement of pathophysiological stages. Proteomicbased CF research has employed techniques such as twodimensional gel electrophoresis (2DE), liquid chromatography, mass spectrometry (MS) and antibody/protein microarrays to analyze secretions, cells and whole tissues from in vitro or in vivo disease models, human subjects and infecting microorganisms. Laser capture microdissection, cell fractionation and coimmunoprecipitation have been used to limit analyses to the subproteomes of interest. 
Global comparative analyses of CF versus nonCF samples have been used to identify differentially ex pressed proteins in human bronchoalveolar lavage fluid (BALF) [20,21], sputum [22], bronchial biopsy tissue [23], serum [24] and cultured epithelial cells [25,26], and in mouse lung and colonic tissue [2729]. Many of the proteins highlighted by global analyses can be related functionally to biological processes and pathways known to contribute to CF disease pathogenesis, including chronic inflammation, proteolytic activity and oxidative stress response proteins. Chronic neutrophilmediated inflammation typifies the CF lung, and comparative proteomic studies have provided data to support and improve our understanding of the mechanisms involved. Srivastava et al. [24] have detected in CF serum differential levels of proteins belonging to the NFκB pathway, which is known to enhance production of inflammatory mediators, while Sloane et al. [30] have found that sputum from adults with CF is characterized by inflammationrelated proteins, including increased production of IL8. Neutrophil proteins, including αdefensins and S100 proteins, have been shown to be differentially expressed in CF BALF [31] and sputum [22]. Also, lower levels of antiinflammatory proteins Clara cell secretory protein [22] and annexin A1 [29] have been observed in CF nasal epithelial cells and sputum, respectively. Additionally, the absence of annexin A1 has been associated with upregulation of the proinflammatory cytosolic phospholipase A2 in the colonic crypts of CF mice [29]. Chronic inflammation of the CF lung is thought to induce the overexpression of proteases, thus perturbing the protease/antiprotease balance and resulting in tissue damage and disease. Through the application of shotgun proteomic methods, Gharib et al. [20] detected increased levels of 22 proteases and peptidases in human CF BALF, including neutrophil elastase, cathepsin G and proteinase 3. They also identified increased expression of human monocyte/neutrophil elastase inhibitor [20], which when applied as an aerosolized treatment to rats has been shown to reduce inflammation [32]. Extensive proteolytic degradation, including truncation of the antiprotease α 1 antitrypsin and degradation of IgG, has been observed in CF sputum [30]. High levels of toxic reactive oxygen species and oxidative stress are characteristic of the CF lung. Reduced glutathione, which acts as an antioxidant and in detoxifi cation, has been observed at a lower level in CF lung lavage fluid [33]. In support of this finding, RoxoRosa et al. [26] have detected, via 2DE comparative proteomics of CF and nonCF mouse nasal epithelial cells, reductions in the levels of glutathionerelated proteins: glutathione Stransferase, which catalyses the glutathionemediated detoxification of oxidative stress products; peroxiredoxin 6, a glutathionedependent peroxidase involved in defense against oxidative stress; and Hsp27, a heat shock protein that can increase intracellular levels of gluta thione and acts as a chaperone for detoxification. Other proteomic studies have identified differential expression of myeloperoxidase, superoxide dismutase, catalase and glutathione reductase in CF BALF [20], and increased levels of myeloperoxidase in CF sputum [30]. Together these data help elucidate mechanisms that are likely to contribute to oxidative stress in the CF lung. 
Other biological processes and proteins where func tional links to CF disease are less well established or absent have also been implicated by global comparative proteomics. Differential expression of mitochondrial proteins has been reported in human CF nasal epithelial cells [26] and bronchial tissue [23], implicating a CF associated reduction in mitochondrial metabolism, and the recent mapping of the CF BALF proteome [20] has implicated dysregulation of the complement system as a novel CF phenotype that may impact lung disease pathogenesis by impairing response to chronic infections. Investigation of the response of murine CF airway epi thelial cells to injury detected reductions in enzymes involved in prostaglandin and retinoic acid metabolism; this implicates these pathways in the CF abnormal injury response, although no functional role has been determined. More focused comparative proteomic studies have concentrated on specific protein subgroups, such as those involved in pathways of interest or executing certain roles. An investigation by Chen et al. [34] of the mechanisms triggering the overproduction of cytokines IL6 and IL8, associated with excessive CF lung inflammation, identified a regulatory pathway that is significantly reduced in CF. Moreover, they demonstrated that correction of the pathway reduced IL6 and IL8 production [34]. Individual protein families have also been studied, such as lung surfactant proteins and mucins, both of which are involved in pathogen clearance from the airways. The structural modification of lung surfactant proteins SPA and SPD [35] and degradation of mucins MUC5B and MUC5AC [36] have been detected in CF BALF and sputum, respectively, and are thought to be relevant to lung disease. Additionally, mucin glycosylation has been highlighted as a possible predictor of lung condition [36]. Particular attention has been directed towards identifying proteins that interact with CFTR with the goal of understanding and restoring normal CFTR proteostasis through the correction of CFTR synthesis, folding, aggregation, degradation, trafficking and stable localization. F508del, the most common mutation of the CFTR gene, gives rise to incorrectly folded CFTR that is translocated from the endoplasmic reticulum to the cytosol for proteosomal degradation. An investigation by Goldstein et al. [37] of proteins that coprecipitate with F508delCFTR has identified interaction with valosin containing protein (VCP)/p9, a component of the trans location machinery, as being associated with inefficient processing of the mutant CFTR. GomesAlves et al. [38] used 2DE to compare protein profiles of cell lines expressing wildtype or F508delCFTR at 37°C and 26°C, and have identified mechanisms, including the induction of the unfolded protein response and downregulation of degradative proteins, which may contribute to the cold shockinduced rescue of F508delCFTR. By comparing the CFTR interactomes of bronchial epithelial cells expressing chemically and genetically repaired F508del CFTR, Singh et al. [39] identified a set of Hsp70 family proteins as implicated in rescue of the mutant protein. Additionally, Wang et al. [40] showed the importance of Hsp60 cochaperones in CFTR folding and demonstrated rescue of F508delCFTR by partial small interfering RNA silencing of the Hsp60 cochaperone Aha1. Understanding the mechanisms that can contri bute to F508delCFTR rescue may suggest potential thera peutic targets. 
Also, study of the CFTR interactome has led to elucidation of the molecular defect of S13FCFTR [41] as relating to defective interaction with filamins, which anchor plasma membrane CFTR to the actin cytoskeleton [41]. Repeated or chronic microbial infection is thought to be a major contributor to the excessive inflammation that precipitates CF lung damage, and a variety of proteomic approaches have been exploited to discover bacterial antigenic biomarkers that could provide potential candi dates for infection diagnosis, prognosis indicators or vaccine development. Pedersen et al. [42] used a novel enrichment technique employing CF patient antibodies as capture ligands prior to proteomic analysis to enhance the identification of Pseudomonas aeruginosa antigens. The antigens detected by this method included stress, immunosuppressive and alginate synthetase pathway proteins. Using proteomic analysis of nonenriched serum samples from CF patients with different stages of infection, Rao et al. [43] identified outer membrane protein OprL as associated with initial P. aeruginosa infection and thus proposed serum reactivity to OprL as an early diagnostic. Montor et al. [44] generated protein microarrays displaying all predicted outer membrane and exported proteins expressed by P. aeruginosa reference strain PAO1 and used these to interrogate serum samples from CF patients infected with P. aeruginosa. They identi fied 48 antigenic proteins, 12 of which were common to approximately 50% of the samples. Alterna tively, whole cell MS has been proposed for the rapid identification of commonly misidentified bacterial species [45]. Proteomics has helped elucidate factors pertinent to virulence, adaptation and in vivo survival of the pathogen P. aeruginosa [4649], which is particularly indicative of a poor prognosis [50]. These may present candidate drug targets for treatment of CF infections. The quorum sensing intercellular communication systems have received particular attention as they largely coordinate bacterial virulence [51,52]. The future potential of proteomics The constant advance of proteomic strategies, instrumen tation and data analysis provides an everincreasing set of tools available for expanding CF research. Two strands for future studies are envisaged: the continuing investi gation of disease pathology aiming to discover prog nostic, diagnostic and therapeutic biomarkers; and the translation of existing knowledge into clinical appli cations of benefit to the CF population. The recent drift from traditional 2DE to gelfree methods for protein and peptide separation is still underrepresented in CF bio marker discovery, although their potential for extending shotgun (that is, global) proteome coverage has already been demonstrated for BALF. By complementing traditional gel separation of proteins with the two dimensional liquid chromatography separation of tryptic digests, Guo et al. [53] were able to improve the number of proteins detected in mouse BALF from 212 to 297, although they noted that their methods did not permit quantitative analyses. More recently, Gharib et al. [20] used quantitative shotgun proteomics to compare human CF and nonCF BALF from 12 subjects, and were able to distinguish the differential expression of hundreds of proteins, including those involved in pathways implicated in CF pathophysiology. 
Wider adoption of these approaches will enable protein detection over a larger dynamic range, thus including in the pool of potential biomarkers proteins of lower abundance that could not be detected using traditional proteomic techniques. Future biomarker discovery will also greatly benefit from the recent generation of pigs with mutated CFTR genes [54,55], which provides a model that more closely resembles human disease than current mouse models, and thus enables further investigation of biomarkers relevant to longterm disease progression and treatment efficacy. The major shortfall in current proteomics research is the gap between the discovery of biomarkers and their clinical application. One hindrance has been the lack of tools for highthroughput validation. Advances in MS selected reaction monitoring have enabled the concur rent measurement of multiple researcherdesignated proteins in a sample and may in future bypass the need for the development of a separate antibodybased assay for each individual proteins to be quantified [56]. This increases the feasibility of assaying large sample numbers with sufficiently high specificity and sensitivity to enable the simultaneous statistical validation of multiple candi date biomarkers [57]. The validation of a panel of CF specific protein biomarkers could precipitate the produc tion of novel biomarker arrays or tests for individual proteins. Such tools would permit the quantification of new outcome measures for assessing disease progression and/or response to treatment in clinical trials, and may be applicable to future routine clinical practice [58]. Conclusions Proteins and their interactions ultimately steer CF disease, making their study invaluable for improving our understanding of pathophysiology and potential treat ment opportunities. Considerable knowledge has been gained so far by the application of proteomics to CF research and rapid advancements in this field are expected to augment its future contribution towards improving the prognosis of CF patients.
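As a generic illustration of the kind of panel-level statistical validation discussed above, the sketch below runs one significance test per candidate protein on synthetic abundance data and applies a Benjamini-Hochberg correction; the protein names, group sizes, and abundance distributions are invented for illustration and are not drawn from any of the cited studies.

```python
# Generic illustration of validating a panel of candidate protein biomarkers from
# quantitative measurements: one test per protein plus false-discovery-rate control.
# Protein names, group sizes and abundance distributions are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
proteins = ["neutrophil elastase", "IL-8", "myeloperoxidase", "SP-A", "annexin A1"]
cf = rng.lognormal(mean=1.2, sigma=0.4, size=(20, len(proteins)))       # 20 CF samples
control = rng.lognormal(mean=1.0, sigma=0.4, size=(20, len(proteins)))  # 20 controls

pvals = np.array([
    stats.mannwhitneyu(cf[:, i], control[:, i]).pvalue for i in range(len(proteins))
])

# Benjamini-Hochberg adjustment across the panel.
order = np.argsort(pvals)
ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
adjusted = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)
for name, p in zip(np.array(proteins)[order], adjusted):
    print(f"{name}: adjusted p = {p:.3f}")
```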
3,844.4
2010-12-16T00:00:00.000
[ "Biology", "Medicine" ]
Thrombospondin1 Deficiency Reduces Obesity-Associated Inflammation and Improves Insulin Sensitivity in a Diet-Induced Obese Mouse Model Background Obesity is prevalent worldwide and is associated with insulin resistance. Advanced studies suggest that obesity-associated low-grade chronic inflammation contributes to the development of insulin resistance and other metabolic complications. Thrombospondin 1 (TSP1) is a multifunctional extracellular matrix protein that is up-regulated in inflamed adipose tissue. A recent study suggests a positive correlation of TSP1 with obesity, adipose inflammation, and insulin resistance. However, the direct effect of TSP1 on obesity and insulin resistance is not known. Therefore, we investigated the role of TSP1 in mediating obesity-associated inflammation and insulin resistance by using TSP1 knockout mice. Methodology/Principal Findings Male TSP1-/- mice and wild type littermate controls were fed a low-fat (LF) or a high-fat (HF) diet for 16 weeks. Throughout the study, body weight and fat mass increased similarly between the TSP1-/- mice and WT mice under HF feeding conditions, suggesting that TSP1 deficiency does not affect the development of obesity. However, obese TSP1-/- mice had improved glucose tolerance and increased insulin sensitivity compared to the obese wild type mice. Macrophage accumulation and inflammatory cytokine expression in adipose tissue were reduced in obese TSP1-/- mice. Consistent with the local decrease in pro-inflammatory cytokine levels, systemic inflammation was also decreased in the obese TSP1-/- mice. Furthermore, in vitro data demonstrated that TSP1 deficient macrophages had decreased mobility and a reduced inflammatory phenotype. Conclusion TSP1 deficiency did not affect the development of high-fat diet induced obesity. However, TSP1 deficiency reduced macrophage accumulation in adipose tissue and protected against obesity related inflammation and insulin resistance. Our data demonstrate that TSP1 may play an important role in regulating macrophage function and mediating obesity-induced inflammation and insulin resistance. These data suggest that TSP1 may serve as a potential therapeutic target to improve the inflammatory and metabolic complications of obesity. Introduction The worldwide obesity epidemic is a major risk factor for type 2 diabetes and cardiovascular disease. Obesity is now recognized as a state of chronic low-grade systemic inflammation which promotes the development of insulin resistance and other metabolic complications [1]. Obesity is associated with macrophage infiltration into adipose tissue and the dysregulated production of adipokines [2]. Adipose tissue macrophages (ATMs) are the primary source of inflammatory cytokine production in adipose tissue and play a key role in obesityinduced chronic low-grade inflammation and insulin resistance [2]. Although there have been some advances in the study of ATMs in obese conditions [2,3,4], the mechanisms underlying inflammatory cell recruitment and activation are not completely understood. Thrombospondin1 (TSP1) is a major component of platelet alpha granules [5,6]. TSP1 acts as an immediate early response gene, exhibiting rapid but transient induction by growth factors and stress in many cell types including adipocytes and macrophages [7,8,9,10]. TSP1 exists as both a component of the extracellular matrix and as a soluble molecule found in various body fluids and in the cell culture conditioned medium. 
TSP1 is a 420-450 kDa homotrimer with individual subunits of approximately 145 kDa. The diverse biological activities of TSP1 have been mapped to specific domains of the molecule by interaction with different cell surface receptors [11,12,13,14,15,16]. TSP1 is a major regulator of latent TGF-b activation, a well-known endogenous angiogenesis inhibitor, and a regulator of cell proliferation and adhesion [9,11,17,18,19,20,21]. TSP1 also plays a role in inflammation and obesity. TSP1 has been shown to be expressed in visceral adipose tissue of rats and humans [22,23]. Its expression is markedly regulated during the differentiation of preadipocytes into mature adipocytes [24,25]. TSP1 is up-regulated in developing adipose tissue of mice with diet or genetically induced obesity [26]. In obese, insulin resistant humans, TSP1 was recently reported to be up-regulated and associated with adipose inflammation and insulin resistance [27]. However, in vivo studies examining the role of TSP1 in regulating macrophage function and obesityassociated inflammation and insulin resistance are lacking. In the current study, we utilized TSP1 knockout and wild type mice to investigate the role of TSP1 in high fat diet induced obesity, inflammation, and insulin resistance. Using this dietinduced obesity paradigm, we demonstrated that TSP1 deletion had no effect on obesity development. However, these obese TSP1 deficient mice had improved glucose tolerance and insulin sensitivity. This improved glucose-insulin homeostasis was found to be associated with significantly decreased macrophage accumulation and inflammation in the adipose tissue. In vitro studies further supported the effect of TSP1 on macrophage mobility and function. Together, these data demonstrate that TSP1 may play an important role in obesity-associated insulin resistance partially through regulating macrophage function and inflammation. Ethics Statement All experiments involving mice conformed to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the University of Kentucky Institutional Animal Care and Use Committee. This protocol was approved under application number 00966M2005. Experimental animals and protocols Eight week old male TSP1-/-mice (on C57BL6/J background, purchased from Jackson Laboratory) and age-matched littermate controls were used in the study. Mice were housed in a temperature controlled room with a 12 hour light/dark cycle. Mice were fed a LF (10% kcal as fat; D12450B; research Diets, Inc, NJ) or a HF diet (60% kcal as fat; D12492, research Diet, Inc, NJ) for 16 weeks. Each group contained 10-15 mice. Metabolic measurements The blood was collected from animals after 6 hr fasting. Plasma glucose levels were measured using a Glucometer. Plasma total cholesterol and triglyceride levels were measured using kits from Wako Chemicals. Plasma insulin, leptin, IL-6, MCP-1, TNF-a, PAI-1, resistin and adiponectin concentrations were measured using a mouse adipokine assay kit (Millipore). Glucose tolerance and Insulin sensitivity tests After 15 weeks of LF and HF-feeding, glucose tolerance was analyzed in animals after 6 h fasting. Following an intraperitoneal injection of glucose (1 g/kg body weight), blood glucose concentrations were measured using a Glucometer at 0, 15, 30, 60, and 120 minutes after injection. For insulin sensitivity assessment, Insulin (0.5 unit/kg body weight) (Novolin R, Novo Nordisk InC.) was injected into mice intraperitonealy. 
Similarly, blood glucose levels were measured at 0, 15, 30, 60, and 120 minutes after injection to assess insulin's effect. Assessments of body composition, food intake and energy expenditure At 15 weeks, mice were placed in TSE LabMaster chambers (TSE Systems) individually for 5 days for measurement of food intake, water intake and indirect calorimetry. In addition, EchoMRI (Echo Medical System) was used to evaluate body fat and lean content in mice after 16 weeks of LF or HF feeding. Real-time PCR Total RNA was isolated from epididymal fat tissue of TSP1-/- and wild type control mice using TRIZOL reagent (Invitrogen, Carlsbad, CA) and treated with DNaseI (Roche, Indianapolis, IN). The treated RNA was cleaned up using an RNeasy kit (Qiagen, Valencia, CA). Total RNA of 2 µg was used for cDNA synthesis using the High Capacity cDNA Reverse Transcription Kit (Invitrogen, Carlsbad, CA). Real-time PCR analyses were performed using a SYBR Green PCR Master Mix kit with a MyiQ Real-time PCR Thermal Cycler (Bio-Rad). All reactions were performed in triplicate in a final volume of 25 µl. Dissociation curves were run to detect nonspecific amplification, and we confirmed that single products were amplified in each reaction. The quantities of each test gene and of the internal control 18S RNA were then determined from the standard curve using the MyiQ system software, and mRNA expression levels of test genes were normalized to 18S RNA levels. The primer sequences are shown in Table 1. Immunohistochemical staining Epididymal adipose tissue was fixed and embedded in paraffin. Paraffin-fixed adipose tissues were cut into 4-5 µm sections and placed onto slides. Sections were deparaffinized in xylene and rehydrated in graded mixtures of ethanol/water. Endogenous peroxidase activity was blocked with 3% H2O2 for 30 min at room temperature (RT). The slides were placed in PBS buffer containing 5% BSA for 30 min. A rat anti-mouse F4/80 antibody (AbD Serotec, Raleigh, NC) was applied for 1 hour at RT. A negative control was included by substituting control IgG for the primary antibody. After washing with PBS, biotinylated secondary antibody was applied for 30 min. After another 15 min of washing, an avidin-biotin-peroxidase complex was applied to the slides for 30 min. The slides were washed once again with PBS before color development with DAB using the Vectastain ABC system (Vector Lab). Macrophage function studies Studies using bone marrow derived cells. Bone marrow derived cells were isolated from femurs and tibias of male WT and TSP1-/- mice as described previously [3]. For seven days, these cells were cultured in RPMI-1640 media containing 20% FBS, 25 ng/ml M-CSF (Sigma), and penicillin/streptomycin to allow proliferation and differentiation into mature macrophages. Macrophages were then plated and treated with or without lipopolysaccharide (LPS: 100 ng/ml) for 3 hr. After treatment, cells were harvested and expression of proinflammatory cytokines was determined by real-time PCR. Macrophage migration and adhesion assay. Mice (male TSP1-/-, CD36-/-, and wild type control mice) were sacrificed and macrophages were harvested by lavage of the peritoneal cavity with sterile PBS [28]. The cells were washed once with serum-free DMEM media, counted and used immediately in a migration or cell adhesion assay.
For the migration assay: peritoneal macrophages (1×10^6) from male WT mice or TSP1-/- mice were loaded into the upper chambers, while the lower chambers were filled with DMEM media containing either purified TSP1 (5 µg/ml, from R&D Systems) or MCP-1 (50 ng/ml). Transwell plates were then incubated at 37°C for 5 hours. Media was removed from the upper chamber, and the cells in the bottom chamber were then fixed in methanol and stained with Giemsa solution (Dade Behring, Marburg, Germany). Cell counts were performed by two different observers who were blinded to the study design. For the adhesion assay: to assess cell spreading, macrophages were plated into four-chambered LAB-TEK slides (Nalge Nunc International; Naperville, IL), uncoated or precoated with purified TSP1 or fibronectin, for 6 hours. The cells were then washed with PBS, fixed with 4% paraformaldehyde, permeabilized with 0.1% Triton X-100, and blocked with 1% BSA for 30 min before staining with Alexa Fluor 568-conjugated or FITC-conjugated phalloidin (Molecular Probes, Eugene, OR). The slides were mounted in ProLong anti-fade reagent (Molecular Probes). Random images of at least 25 cells from three or more independent experiments were digitally captured using a Leica TCS SP confocal microscope (UK imaging center). Individual cells were outlined and total cell area was quantified using Metamorph software. Statistical analysis Data are the mean ± SE. Differences between groups were determined by ANOVA followed by Tukey's post hoc tests or Student's t-test as appropriate. The significance level was p < 0.05. TSP1 deficiency does not affect the development of diet-induced obesity To determine whether TSP1 deficiency affects the development of obesity, male TSP1-/- mice and wild type controls were fed a low fat (LF, 10% fat) or high fat (HF, 60% fat) diet for 16 weeks. Body weight was measured weekly. Prior to the end of the study, body composition was analyzed using EchoMRI. The results showed that body weight and fat mass were similar between the TSP1-/- and wild type control mice under LF or HF feeding conditions (Figure 1). In addition, high fat feeding significantly increased plasma triglyceride levels in WT and TSP1-/- mice. However, plasma triglyceride levels were lower in HF-fed TSP1-/- mice than in HF-fed WT mice (Table 2). Total cholesterol levels were similarly increased in both HF-fed WT and TSP1-/- mice (Table 2). We also measured metabolic parameters such as food intake, oxygen consumption and physical activity, and did not observe a difference between HF-fed WT mice and HF-fed TSP1-/- mice (data not shown). Together, these data suggest that TSP1 deficiency does not affect the development of obesity. TSP1-/- mice exhibit improved glucose tolerance and increased insulin sensitivity as compared to WT controls under HF feeding conditions Recent studies suggest that adipose TSP1 levels are inversely associated with insulin sensitivity in obese subjects [27]. Although our data indicate that TSP1 deficiency does not affect the development of obesity, it was not known whether TSP1 deficiency affects obesity-associated insulin resistance. Therefore, fasting blood glucose and insulin levels were measured. Glucose tolerance tests (GTT) and insulin sensitivity tests (ITT) were performed in LF- and HF-fed mice. The results showed that fasting blood glucose levels were similarly increased in both HF-fed TSP1-/- and wild type control mice. HF feeding also increased the insulin levels in both genotypes.
However, the insulin was increased to a significantly lower extent in TSP1-/-mice ( Table 2). Furthermore, GTT and ITT tests demonstrated that HF-fed TSP1-/mice had improved glucose tolerance ( Figure 2A) and insulin sensitivity ( Figure 2B). Recent studies suggest that adipose tissue macrophages (ATMs) play a critical role in obesity associated chronic inflammation and insulin resistance [29]. Therefore, we determined the effect of TSP1 deficiency on ATM accumulation in adipose tissue using both immunohistochemical staining and real-time PCR. As shown in Figure 3, high fat feeding significantly increased F4/80 positive macrophage accumulation and crown like structure (CLS) in adipose tissue of wild type mice. However, ATM accumulation was increased to a lower extent in the HF-fed TSP1-/-mice. This immunohistochemical staining result was confirmed by real-time PCR showing that F4/80 mRNA levels were increased to a significantly lower extent in HF-fed TSP1-/-mice ( Figure 4). We also observed this relationship in the expression of CD11c in the adipose tissue of two genotypes. Again, obese TSP1-/-mice had significantly lower CD11c levels in adipose tissue as compared to obese WT control (Figure 4). CD11c is a marker for a subset of proinflammatory immune cells that have been shown to play an important role in obesity-induced insulin resistance [30]. Other inflammatory cytokines such as iNOS, IL-6, TNF-a, PAI-1 and TGF-b were also reduced in the adipose tissue from the obese TSP1-/-mice ( Figure 4). Furthermore, in WT mice, plasma PAI-1 levels were significantly increased and IL-6 levels had a trend in increase in HF-fed WT mice as compared to LF-fed WT mice. However, in TSP1-/-mice, neither plasma PAI-1 levels nor IL-6 levels were changed in HF-fed TSP1-/-mice as compared to LFfed TSP1-/-mice. In addition, as compared to HF-Fed WT mice, both PAI-1 and IL-6 levels were significantly decreased in HF-fed TSP1-/-mice ( Figure 5). Together, the data indicate that obese TSP1-/-mice have significantly decreased macrophage accumulation in adipose tissue and reduced systemic and local inflammatory cytokine levels. Macrophages from TSP1-/-mice have a reduced inflammatory phenotype and migratory ability Obesity is associated with increased systemic concentrations of fatty acids and endotoxin (lipopolysaccharide) that are able to induce an inflammatory response [31]. To further determine the role of TSP1 in macrophage function, bone marrow derived cells were isolated from wild type and TSP1-/-mice. These cells were differentiated into macrophages and treated with lipopolysaccharide (LPS) for 3 hr. Inflammatory cytokine gene expression was determined by real-time PCR. There was a significant decrease in the gene expression of IL-6, TNF-a, MCP-1 and PAI-1 in LPS treated macrophages from TSP1-/-mice ( Figure 6). This suggests that the macrophages from TSP1-/-mice had a reduced inflammatory phenotype. We also determined the effect of TSP1 on macrophage migration and adhesion. As shown in Figure 7A and B, addition of purified TSP1 significantly increased migration and adhesion of wild type macrophage cells. Macrophages from TSP1-/-mice showed decreased migration and adhesion ability compared to WT macrophages (Figure 7 C and D). The effect of TSP1 on macrophage migration might be MCP-1 independent since we did not find difference of MCP1 levels either in plasma or in adipose tissue between WT and TSP1-/-mice ( Figure 8). 
In addition, CD36, a receptor of TSP1, may not be involved in TSP1 mediated macrophage migration ( Figure 9). Together, the data suggests that TSP1 is an important regulator of macrophage function. Discussion TSP1 is a multifunctional matricellular protein that is upregulated in inflamed adipose tissue of obese mice and humans [26,27,32]. Previous studies suggest that TSP1 plays a role in obesity and insulin resistance [27]. In the present study, we examined the effect of TSP1 deficiency on the development of obesity and insulin resistance in a high fat diet induced obese mouse model. Using this diet-induced obesity paradigm, we first demonstrate that TSP1 deletion reduces inflammation and improves whole body insulin sensitivity in the obese state. The improved glucose-insulin homeostasis is associated with significantly decreased macrophage accumulation in adipose tissue and decreased adipose inflammation. In vitro studies further support the effect of TSP1 on macrophage mobility and function. Together, these data demonstrate that TSP1 is a key regulator of macrophage function and influences the inflammatory state, contributing to obesity-associated insulin resistance. Our current study demonstrates that TSP1 deficiency does not affect the development of high fat diet induced obesity, which is in agreement with the report from Voros et al [32]. However, in contrast to their study, we found that the obese TSP1-deficient mice have significantly improved glucose tolerance and insulin sensitivity as compared to obese wild type control mice. This discrepancy may be due to several factors including differences in the length of feeding, age of mice, high fat diet composition, and methods to measure glucose tolerance and insulin sensitivity. In our study, we fed eight week old male TSP1deficient mice and wild type littermates with low fat diet (LF, 10% kcal as fat; D12450B; Research Diets, Inc, NJ) and a high fat diet from Research Diet (HF, 60% kcal as fat; D12492, Research Diet, Inc, NJ) for 16 weeks; whereas Voros et al fed five week old male TSP1 deficient mice or wild type mice with a high fat diet from Harlan (TD 88137, containing 42% Kcal as fat) for 15 weeks. In addition, we performed glucose tolerance and insulin sensitivity tests on animals after 6 hr fasting followed by an intraperitoneal injection of glucose (1 mg/g body weight) or insulin (0.5 unit/kg body weight); whereas Voros et al performed these tests in mice after overnight fasting followed by an intraperitoneal injection of glucose at 3 mg/g body weight. A recent report from Andrikopoulos et al indicated that different fasting periods and varying concentrations of glucose injections can dramatically affect the results of glucose tolerance test in mice [33]. Moreover, they demonstrated that blood glucose concentrations after 6 hr fasting are a better representation of blood glucose levels throughout the day. Therefore, varying fasting times and/or glucose concentrations may explain the difference between our study and Voros' report. One important finding of our study is that the obese TSP1 deficient mice have improved glucose tolerance and insulin sensitivity. Importantly, improvement in glucose-insulin homeostasis in obese TSP1 deficient mice was observed even though mice exhibited similar levels of obesity as wild type controls. Moreover, our results suggest that the improved metabolic profile of TSP1 deficient mice is partially due to the effect of TSP1 gene deletion on inflammation. 
We found that systemic and adipose tissue inflammation is significantly reduced in obese TSP1 deficient mice compared to obese wild type controls. This is associated with decreased accumulation of macrophages in fat tissue. Rodent and human studies suggest that adipose tissue macrophages play a critical role in obesity associated chronic inflammation and insulin resistance [29]. Other studies found that obesity is strongly associated with the accumulation of proinflammatory macrophages (F4/80 + cells) that express CD11 c (a dendritic cell marker) in adipose tissue [4]. These F4/80 + CD11c+ cells are bone marrow derived adipose tissue macrophages that selectively localize to the crown like structure surrounding dead adipocytes. These cells play an important role in obesity associated metabolic profiles [30,34]. Our data suggest an important role of TSP1 in regulating CD11c + macrophage infiltration and inflammation based on the following observations: 1) Immunohistochemical results showed that the frequency of the crown like structure and F4/80 + macrophages were dramatically decreased in adipose tissue of obese TSP1 deficient mice; 2) Gene expression of F4/80 and CD11c were significantly decreased in adipose tissue of obese TSP1 deficient mice; 3) mRNA levels of proinflammatory cytokines such as IL-6, TNF-a, and PAI-1 were significantly decreased in adipose tissue of obese TSP1 deficient mice; 4) Bone marrow derived macrophages from TSP1 -/-mice exhibited a reduced inflammatory phenotype. Furthermore, our in vitro data demonstrate that TSP1 stimulates macrophage migration and adhesion. This result is in agreement with previous studies showing that TSP1 can act as a monocyte chemoattractant [20,21]. Previous studies have shown that TSP1 did not stimulate MCP-1 release from differentiated U937 human monocytic cells. However, they found that PAI-1 levels in monocytes or murine macrophages were significantly increased by TSP1 [21], suggesting that the effect of TSP1 on macrophage migration might be MCP-1 independent but PAI-1 dependent. Consistently, we did not find a difference of MCP1 levels in either plasma or adipose tissue between WT and TSP1-/mice (figure 8). By using CD36 deficient macrophages, we demonstrate that CD36 (a receptor of TSP1) may not be involved in TSP1 mediated macrophage migration ( figure 9). In addition to regulating macrophage migration, another study demonstrated that TSP1 deficient murine macrophages exhibit an increased capacity for FccR-mediated phagocytosis [35]. Therefore, in an obese state, it is possible that TSP1-/-deficient macrophages could rapidly clear dead adipocytes contributing to decreased inflammation. TSP1 may also influence other immune cells such as T cells contributing to obesity-induced adipose tissue inflammation. Future studies will explore this possibility. TSP1 is a major regulator for latent TGF-b activation in vitro as well as in vivo [36,37,38,39,40]. Studies have demonstrated that increased TGF-b activity and its downstream target PAI-1 are associated with obesity, inflammation and insulin resistance [41,42,43]. In this study, we found that TGF-b downstream molecular-PAI-1 levels in plasma and adipose tissue were significantly decreased in the obese TSP1-/-mice. This suggests that decreased TSP1 dependent TGF-b activity may contribute to the reduced systemic and local inflammation and improved insulin sensitivity that was observed in the obese TSP1-/-mice. 
An ongoing in vivo study is currently exploring this mechanism using an antagonist of TSP1-dependent TGF-b activation. In summary, results from this study demonstrate an important role for TSP1 in regulation of macrophage function and in obesityinduced inflammation and insulin resistance. TSP1 depletion in an obese state prevents the accumulation of macrophages in adipose tissue and pro-inflammatory cytokine expression in peripheral tissues, resulting in improved insulin sensitivity. A direct effect of TSP1 on macrophage motility and function was also demonstrated in our current studies. The results of this study together with the report of increased TSP1 in human obesity [27] suggest that TSP1 may be a potential target of the inflammatory and metabolic complications of obesity.
5,229
2011-10-24T00:00:00.000
[ "Biology", "Medicine" ]
Astronomical Constants and Universal Code in Holy Book At the beginning of 1995, I was looking to produce a new concept of the Astronomical Period (AP), determined by the shortest period of Lunar years that includes both leap years and common years, in order to obtain a simple formula for calculating the average length of the lunar year. I finally deduced the first formula for this average using simple mathematics (the four elementary arithmetic operations). By this rule, I methodically educed what I considered an acceptable consequence, which encouraged me to do more research on the best resources needed to deal with the concept of the AP. There I found something like hidden signals in the Islamic Holy Book (The Great Qur'an), which led me, by the elicitation method, to what I regard as perfect astronomical constants, besides an evolving conclusion about what I considered a scientific guide to the universal code. What is exciting in this research is that these (perfect astronomical constants) successfully passed the test of three physical laws of motion, which means that the hypothesis of this research (the elicitation method) is not arbitrary, and that the conclusions of this research were deduced on an innovative scientific basis. Introduction If you observe the motion of the Moon relative to Earth's motion around the Sun with respect to a fixed star, you will find a specific period formed by the shortest AP, which equals 19 Lunar years. This period contains two types of years: the leap Lunar year of 355 days and the common Lunar year of 354 days. When I tried to find any resources that confirm this observation, I found that the Great Qur'an (Note 1) mentions the word (year) in a specific arrangement: seven times in the singular form (year) and twelve times in the plural form (years). Then I tried to use these details to figure out the average length of the Lunar year, and by substitution I got the first astronomical constant: average Lunar year = (7 × 355 + 12 × 354) / 19 = 6733 / 19 = 354.368421 days (Eq. 1). When I compared this result with the synodical lunar year (which equals 29.530588 days (Note 2) × 12 months = 354.367056 days) (Note 3), I found that the difference is less than two minutes: 354.368421 − 354.367056 = 0.001365 days ≈ 1.9656 minutes. Purpose of This Research In this research, I try to answer these questions scientifically: a) Do we have an acceptable scientific resource, outside of the usual scientific resources, from which to get scientific data or astronomical constants? b) Can we use the elicitation method to get perfect scientific data? c) Can we refer to holy books to formulate physical equations? Research Hypothesis I will depend on the elicitation method (Note 4) to reach the perfect astronomical constants from The Great Qur'an, and try to test these constants against the basic physical laws of motion. The Difficult Mission My mission in this research is: how can I find out the average length of the Solar year, as I found the average length of the Lunar year before? This constant will become the best key to go further in the hypothesis of this research. 2) Using the astrophysical constant, which does not meet the purpose of this research. 3) Referring to the Holy Book (The Great Qur'an) and trying to find out whether there is an accurate constant or not.
Indeed, when I referred to The Great Qur'an (Note 5), I found that the word (year) or (years) is mentioned in 16 different chapters (Suras) within 19 verses, labeled by specific serial numbers that may be arranged in specific forms (Note 6) so as to keep something like a secret in their mutual relationships. See Table 1, where I tried to use these relations to obtain what I assumed to be the difference in minutes (∆min) between the Lunar day and the Solar day.
Table 1 (excerpt): Year: chapter 29 (Al-`Ankabut), verse 14; Years: chapter 30 (Ar-Rum), verse 4; Year: chapter 32 (As-Sajdah), verse 5; Year: chapter 46 (Al-'Ahqaf), verse 15; Year: chapter 70, verse (…).
By substitution, (∆min) becomes available, as shown in Eq. (3). Note that I used the total minutes of one day (1440 min) to convert those minutes into a day, and I used the total days (6733) of the shortest astronomical period (AP), which equals 19 years, to find (∆min) as shown in Eq. (3), which is more accurate than other predictions (Note 7). Now, I can use this result to find out the average length of the solar year. Then, when you initially compare these results with the astronomical constants, we find differences of 1.9656 and 25.4736, and you will finally find that these data are examinable, as I will show soon. But before that, let me refer you to Table 2, just to check how well the hypothesis of this research is running, whether it is acceptable, and how accurate it is. Hence, I think the best way to check these results is by referring to these laws: Earth's orbit = 2πR, and the velocity law (v = d/t), from which we find Earth's orbital speed. Anyway, if you have any fractions in any result of these transformations, pay attention to the following: a) If the fraction is less than 50%, just remove it from the result. b) If the fraction is more than 50%, make it (+1) and add it to the result. c) The accurate result depends on the fully exact results of Eqs (1-4). Mysterious Code Maybe these transformations seem like a universal symphony, especially when we try to apply them for more than forty thousand years, as shown in Figure 2 or Table 3, where we have some consequences worth focusing on, such as: a) The solar calendar remains larger than the lunar calendar until a specific date (20800 A.C.), which is the end of the first Great Astronomical Period (GAP). b) When the first (GAP) is completed, the solar and the lunar calendars become equal. c) After that date (20800 A.C.), the lunar calendar becomes larger and larger than the solar calendar. d) The difference between the solar calendar and the lunar calendar looks like a harmonious pulse, as shown in column (D) of Table 3. e) The relationship between the solar and the lunar calendars remains harmonious until 41000 A.C. Compare Figure 1 with Figure 2, where the present time is the best test of these transformations; these two calendars have different starting points and different time intervals, but they share the same point, which is the present time (Now). Synchronization is an important scientific measurement in this case. See Figure 3. f) The surprise appears suddenly as a missed period when these calendars cross the second great AP (41600 A.C.). g) But when I tried to apply these transformations over a long period of these calendars, I found some consequences, as shown in Table 3, which led me to what I considered a coding language or mysterious code.
Note; by using transformation (6) I show you how solar calendar and Lunar calendar are going together ( ≡ ) along the time where column (C) shows you the differences between these calendars ( − = ; 1000 − 390 = 610), but if we want to see the average of the changes on column (C) I deduct the initial ( ) from the updated ( ′); (610 − 580 = 031) to get some results, like: (030, 031, …etc) which seem along of column (D) as universal pulses where this way is applied in the last column ( ); ( = − ') also, to discover what I considered as a mysterious code; (000,000,000,000,001,001,000,000,001, 001, 000, 000, 001, 001, 000, 000, 000 …….etc). Eventually, if we can scientifically imagine this real and strong relationship between the Hijri calendar and the Lunar calendar along the Solar calendar, then we are strongly invited to study that curve of the Hijri calendar as a reflection of Lunar calendar along the Solar calendar, to get more knowledge about our universe. See Figure 3 where we have to remember that: a) These three calendars had not started together, but they are together, crossing the present time (now), at the same time. b) The missed period seems like a confusing era, when this era has (1000 = 2294 ) which is impossible, depending on the transformations of this research, because these transformations said that: (1000 = 1031 1030 ) which means that these transformations had predicted that we have something unusual when the second (GAP) will be completed. c) The difficult mission is made by concluding the synchronization of the three calendars, and foresight the outlook of our galaxy, at least. Conclusion In this research the elicitation method had scientifically proved: a) The accurate constants of the length of Lunar day, Lunar year, Solar day, Solar year and the difference between Solar and Lunar year as shown in Eqs (1-4). b) Perfect astronomical transformations as shown in Eqs (5)(6). c) Other consequences as shown in Table 3 like; universal pulses (shown in column D) and mysterious code (shown in column E). d) The missed period as shown in Figures 2, 3 and Table 3.
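The basic arithmetic quoted in the paper (Eq. (1) and the comparison with the synodic lunar year) can be reproduced in a few lines; this sketch only checks the stated numbers and takes no position on the elicitation method itself.

```python
# Check of the numbers quoted in the paper: a 19-year cycle with 7 leap (355-day)
# and 12 common (354-day) lunar years, compared with twelve synodic months.
leap_years, common_years = 7, 12
total_days = leap_years * 355 + common_years * 354          # 6733 days in the 19-year AP
avg_lunar_year = total_days / (leap_years + common_years)   # 354.368421... days

synodic_month = 29.530588                                   # days
synodic_year = 12 * synodic_month                           # 354.367056 days

diff_days = avg_lunar_year - synodic_year                   # about 0.0013651 days
diff_minutes = diff_days * 1440                             # about 1.97 minutes
# The paper quotes 1.9656 min after truncating the difference to 0.001365 days.
print(total_days, round(avg_lunar_year, 6), round(diff_minutes, 4))
```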
2,209.2
2020-07-31T00:00:00.000
[ "Physics", "Mathematics" ]
A Smart Agricultural System Based on PLC and a Cloud Computing Web Application Using LoRa and LoRaWan The increasing challenges of agricultural processes and the growing demand for food globally are driving the industrial agriculture sector to adopt the concept of ‘smart farming’. Smart farming systems, with their real-time management and high level of automation, can greatly improve productivity, food safety, and efficiency in the agri-food supply chain. This paper presents a customized smart farming system that uses a low-cost, low-power, and wide-range wireless sensor network based on Internet of Things (IoT) and Long Range (LoRa) technologies. In this system, LoRa connectivity is integrated with existing Programmable Logic Controllers (PLCs), which are commonly used in industry and farming to control multiple processes, devices, and machinery through the Simatic IOT2040. The system also includes a newly developed web-based monitoring application hosted on a cloud server, which processes data collected from the farm environment and allows for remote visualization and control of all connected devices. A Telegram bot is included for automated communication with users through this mobile messaging app. The proposed network structure has been tested, and the path loss in the wireless LoRa is evaluated. Introduction The fourth industrial revolution, commonly known as "Industry 4.0", has emerged as a hot topic of research and discussion among industry and academia, especially in the fields of management and engineering [1,2]. Industry 4.0 refers to a new industrial paradigm that encompasses various technologies like Artificial Intelligence (AI), Augmented Reality (AR), big data, remote sensing, and the Internet of Things (IoT) that play a crucial role in increasing productivity and reducing costs across different industries including the agricultural sector [3][4][5]. The integration of Industry 4.0 with agriculture has led to the emergence of smart agricultural and farming systems. In agriculture, relying on traditional farming methods to meet the growing demand for food is no longer sufficient. The production process of planting, sowing, reaping, irrigation and cultivation must now respect certain climatic conditions, such as air temperature, humidity, and precipitation. These conditions affect the spread of pests and diseases, which cause significant losses in global food production [6]. In fact, around 40% of global food production is lost due to pests and diseases infecting the plants [3]. In addition, overirrigation and leakages in water channels result in the wastage of around 20% of water reserved for agricultural activities due to many reasons such as leakages and line-losses in the waterway channels and the over-irrigation [7]. To address these issues, smart agricultural and farming systems that integrate IoT and communication models are being introduced. These systems automate farm operations in a collaborative and intelligent manner, improving the reliability of crop production management. They provide better control of planting conditions and natural resource utilization without human interaction. The smart farm environment enables decision making by gathering information from different sensors and analyzing it according to needs. The automation of certain processes using IoT sensors, communicating devices, control units, and computers replaces manual work schedules and improves production. 
As smart farming systems help the interoperability of multiple heterogeneous devices, information sharing and processing is important for controlling the operations of the farm. Multiple sensors are deployed in the smart farm such as humidity and temperature sensors forming a Wireless Sensor Network (WSN). These sensors collect the environment information and send it to the server where data are processed. To deploy a large-scale WSN, Low-Power Wide-Area Network (LPWAN) technologies such as LTE, SigFox and LoRa are better suited due to their wide transmission range, scalability and low power consumption [8,9]. LoRa technology is among the most commonly used technologies in smart farming, owing to its long range and low power consumption, as well as its use of a free-licence band (863 to 870 MHz in Europe) [10] which allows transmitting data over several kilometres in rural areas without expensive infrastructure or cellular connectivity. LoRa devices also have low power requirements, meaning they can operate on battery power for years, making them ideal for remote and hard-to-reach areas of a farm. Additionally, LoRa devices are relatively inexpensive, making them a cost-effective solution for smart farming applications. They are easy to install and configure, and can be integrated with a wide range of sensors and devices. This paper presents a novel approach to integrating LoRaWAN communication with traditional Programmable Logic Controllers (PLCs), which have long been used in agriculture for automating various processes and controlling machinery. The proposed system facilitates the creation of smart farms by enabling the integration of LoRa connectivity with the existing automated processes that have been in use for decades, without the need for replacing the old control systems. To accomplish this integration, the proposed system employs a LoRa shield on a Simatic IOT2040, which communicates with the existing PLC using the Modbus-TCP protocol. The resulting system allows us to control the operation of farming machines like water pumps and collects data from different sensor nodes located throughout the farm. A cloud server is utilized for data processing, offering a secure, flexible, and scalable web-based platform that provides a user-friendly interface for remotely managing all the devices in the smart farm system. This innovative approach has the potential to significantly benefit the agriculture industry. By incorporating LoRa connectivity into the existing PLCs, the system can leverage previously automated processes and thus reduce the need for expensive replacement of old control systems. The remote management capabilities can help reduce labor costs and increase productivity. In addition, the real-time monitoring of climatic conditions, such as temperature and humidity, can assist in detecting and preventing the spread of pests and diseases, thereby contributing to the overall improvement of global food production. Literature Review Certainly, recent studies have highlighted the growing interest in using LoRa technology for smart farming applications. Placidi et al. demonstrated in [11] the use of a LoRa-based soil moisture monitoring system for precision agriculture in smart cities. Another study of Boursianis et al. published in [12] investigated the development of a smart irrigation system for precision agriculture based on LoRaWAN technology. Another study presented by Behjati et al. in [13] explored the use of drones towards large-scale livestock monitoring in rural farms. 
Widianto et al. in [14] presented a Systematic Review of Current Trends in Artificial Intelligence for Smart Farming to Enhance Crop Yield. Jiang et al. [15] proposed a fully customized low-cost and low power smart farming network structure enabled by LoRa and ANT radios. Yoon et al. [16] proposed a smart farm based on LoRa & MQTT. In [17], Escolar et al. proposed a LoRa-based network of energy-harvesting devices for smart farming. Kodali et al. [18] described a smart irrigation system based on LoRa technology. Furthermore Ramli et al. [19] presented an adaptive network mechanism for a smart farm system by using LoRaWAN and IEEE 802.11ac protocols. These studies demonstrate the growing interest in using LoRa technology for smart farming applications and the potential benefits that it can offer. Material, Methods and Experimental Tests A smart farm can be defined as a new type of automated farming system using IoT infrastructure. Our proposed system consists of four main parts: the end-node sensors, an IoT LoRaWan gateway, the control equipment, a cloud server hosting a web-based platform for control and monitoring, and a bot for a Telegram messaging application in mobile devices. In this section, we present a brief overview of LoRa and LoRaWan, and then we describe the proposed smart farm system. LoRa and LoRaWan Overview LoRa and LoRaWAN are global de facto standards of Low-Power Wide Area Networks (LPWANs), with LoRa being the physical layer and LoRaWAN the Media Access Control (MAC) layer. LoRaWAN is an open specification developed by the LoRa Alliance [20]. LoRa technology is based on Chirp Spread Spectrum (CSS) modulation, which offers high sensitivity for the receiver and robustness against data corruption through the use of forward error correction messages [21]. LoRa uses unlicensed radio spectrum in the Industrial, Scientific, and Medical (ISM) band, specifically the 863-870 MHz range in Europe. The communication range and robustness of LoRa signals are affected by various parameters such as transmission power, Spreading Factor (SF), which is the ratio between the data symbol rate and chirp rate, and Code Rate (CR), which is the forward error correction rate that affects packet transmission airtime and bandwidth [22]. LoRa modulation uses six orthogonal Spreading Factors, ranging from 7 to 12, with a trade-off between a higher data rate and a longer range or lower power consumption. Lower SF results in faster chirps, higher data transmission rates, shorter active times for the radio transceivers, and longer battery life. However, like any technology, LoRa has its disadvantages. One of the main drawbacks is its limited bandwidth, which can result in slow data transfer rates. LoRa's data rate is also fixed, which can be a problem when trying to transmit large amounts of data. Additionally, LoRa suffers from interference issues, as it operates in an unlicensed spectrum that is shared with other wireless technologies. This can cause communication problems, particularly in urban environments with high levels of radio frequency activity. Table 1 summarizes the pros and cons of the LoRa technology: The network of LoRa uses encryption, integrity and authentication and it is secured with two security layers [23]: • Network layer: an AES-128 secret key named network session key (NwkSKey) is shared between the end-device and the network server for authentication. 
• Application layer: an AES-128 secret key named application session key (AppSKey) protects the payload transmission between end-devices and the application server. In general, the architecture of a LoRa-based network consists of a hierarchical topology, formed by LoRa nodes, gateways network servers, and application servers. The devices can transmit the data to the gateways which belong to the same LoRaWan network. However, all gateways within the range of a LoRa node can receive messages and the duplications (when existing) will be filtrated in the network server responsible of processing the incoming packets. Smart Farm Infrastructure The proposed smart farm system consists of two main networks as described in Figure 1: The monitoring network (Farm): a wireless network of different LoRa sensors distributed over the farm to collect to the required information such as moisture and airflow sensors. 2. The control network (Warehouse): The reaped vegetables and fruits are stored in the warehouse. Controlling the climatic conditions of this place is a must, thus it is equipped with different environmental sensors (i.e., temperature sensor). The warehouse contains all the control equipment such as air conditioner and irrigation water pump. These devices are controlled with a PLC. LoRa end-node sensors The LoRa sensors scattered in the farm send the sensed information (pH, moisture, air flow, temperature, etc.) to the LoRa gateway. The gateway is connected to the internet and it is responsible for forwarding the sensor data to the cloud server to be further analyzed and stored in the database. The gateway is also responsible of receiving the control commands from the server and forward them to the PLC located in the warehouse. These commands can be manually instructed by the user through the web-based platform or automatically executed if the sensed received data require an action (e.g., the PLC will activate the water pump if the soil needs more water). The automatic commands can be set and configured by the user in the web platform by defining the target device and the threshold values that trigger the actuating device. For better communication coverage, the LoRa gateway should be placed in a high altitude and all the LoRa end-nodes have to be distributed properly. In IoT, a gateway acts as a bridge between devices and the cloud or server. The gateway is responsible for collecting data from sensors and devices, processing it, and sending it to the cloud or server for storage and analysis. Here are some of the gateway protocols used in IoT such as MQTT, CoAP, HTTP and DDS [24]. In the case of LoRaWan technology, the gateway uses the LoRaWAN protocol, which is responsible for managing the communication between the end devices and the LoRa gateway, as well as providing security, data encryption, and authentication. It also provides the network architecture that enables communication over long distances with minimal power consumption, making it ideal for IoT applications. In order to establish the LoRa communication in our system, we used the "WiMOD LoRa Lite Gateway" from IMST. The warehouse contains the PLC to control the devices in the farm such as the water pump. This PLC should be wirelessly connected via LoRa so it can establish a connection with the gateway to receive the commands from the cloud server. To achieve this, we have connected the Simatic IOT2040 from Siemens [25] to a regular S7 Siemens PLC via Modbus TCP. 
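As an illustration of this Modbus-TCP link, the short Python sketch below shows how a supervisory script could switch a pump coil and read a temperature register on the PLC. It is only a sketch: the IP address, register map and scaling are assumptions of mine, and the system described here actually performs these operations from Node-RED flows running on the IOT2040.

```python
# Minimal sketch of a Modbus-TCP exchange with the warehouse PLC (pymodbus >= 3).
# Assumptions (not from the paper): PLC reachable at 192.168.0.10:502,
# coil 0 drives the irrigation pump, holding register 0 holds temperature x10.
from pymodbus.client import ModbusTcpClient

def set_pump(client: ModbusTcpClient, on: bool) -> None:
    # Write a single coil to start/stop the water pump.
    client.write_coil(address=0, value=on)

def read_temperature(client: ModbusTcpClient) -> float:
    # Read one holding register and rescale (value stored as tenths of a degree).
    result = client.read_holding_registers(address=0, count=1)
    return result.registers[0] / 10.0

if __name__ == "__main__":
    client = ModbusTcpClient("192.168.0.10", port=502)
    if client.connect():
        set_pump(client, True)
        print("Warehouse temperature:", read_temperature(client), "degC")
        client.close()
```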
The LoRa Wimod Arduino shield board from IMST is inserted in the Simatic IOT2040 as additional communication board, serving as LoRa end-node in the same way as other LoRa end-node sensors in the farm (e.g., Mote II board or custom-made LoRa end-nodes). This solution makes it possible to connect existing PLCs to the LoRa network. The devices used in our test bench are illustrated in Figure 2. The inputs/outputs of the devices of the farm are simulated with the SIMATIC Step7 software and controlled with the S7 Siemens PLC. The Siemens Simatic IOT2040 operates with Yocto Linux and it can be easily expanded with Arduino shields in a compact industrial design. The applications can be easily programmed using Node-Red visual programming tool as described in Figure 3a. The WiMOD Shield is a expansion board that enables users of Arduino-compatible boards to use WiMOD radio modules based on LoRa. The Shield includes everything that is needed for connecting a WiMOD module to an Arduino board by using the WiMODLoRAWAN library. The shield offers two UART connections, able to communicate with the IOT2040 main board. The LoRa communication shield can be programmed using the Arduino IDE software as described in Figure 3b, and then writing the code directly to the IOT2040 using a USB cable thanks to the Intel Galileo firmware which should be added to the Arduino IDE first. In the NodeRed dashboard, we have added the "node-red-node-arduino" package in order to read the LoRa payloads exchanged between the IOT2040 and the gateway through the LoRa shield board. The "node-red-contrib-s7" package is used to read/write commands to the Siemens S7 PLC via Modbus-TCP (any other PLC can also be connected using standard Modbus-TCP wired communication). A part of the Arduino code and the NodeRed scheme of the smart farm are represented in Figure 3. The Siemens S7 PLC is controlled using the IOT2040 via Modbus-TCP, which also exchanges commands and data with the LoRa gateway. The rest of sensors installed in the farm act as LoRa end-nodes and directly communicate with the LoRa gateway. In this system, a bot was used to integrate Telegram instant messaging using Telegram Bot API in both the IOT2040 and the web application. The Telegram bot was made by registering to @bot f ather (https://telegram.me/BotFather accessed on 2 February 2023). There are steps that must be completed in @bot f ather, such as creating a bot name, bot username, and command with command /newbot. A bot token was used to communicate with our system via Telegram once the bot was created. The first integration of the Telegram bot with the IOT2040 was done by installing the NodeRed package "node-red-contrib-telegrambot". This integration aims to receive direct control commands from users, for example, turning on the lights. The Telegram node was programmed with a JSON code to define several commands and their actions. Thus, the user can send his command via telegram, the IOT2040 will analyse the request, make it, and send a confirmation message to the user. The second integration of the Telegram bot is done with the web application described in Section 5 using the Telegram Bot API-PHP SDK [26]. This integration is done using the same token as the IOT2040 in order to use the same telegram bot for both integrations. 
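To make the message flow concrete, the following Python sketch outlines the same Bot API interactions (long polling with getUpdates and replies with sendMessage). The token, command names, and reply texts are placeholders; the actual integrations use the Node-RED Telegram node and the Telegram Bot API-PHP SDK as described above.

```python
# Minimal long-polling sketch of the Telegram bot command handling.
# BOT_TOKEN, the command set, and the reply texts are hypothetical placeholders.
import requests

BOT_TOKEN = "123456:ABC-REPLACE-ME"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def send_message(chat_id: int, text: str) -> None:
    requests.post(f"{API}/sendMessage", json={"chat_id": chat_id, "text": text})

def poll_commands(offset: int = 0) -> int:
    # Fetch pending updates and answer the supported commands.
    updates = requests.get(f"{API}/getUpdates",
                           params={"offset": offset, "timeout": 30}).json()
    for update in updates.get("result", []):
        offset = update["update_id"] + 1
        message = update.get("message", {})
        chat_id = message.get("chat", {}).get("id")
        text = message.get("text", "")
        if text == "/lights_on":
            # Here the IOT2040 flow would forward the command to the PLC.
            send_message(chat_id, "Lights switched on.")
        elif text == "/last_irrigation":
            # Here the web application would query the database.
            send_message(chat_id, "Last irrigation: 2.3 m3 (example value).")
    return offset

# offset = poll_commands()  # would block for up to 30 s waiting for commands
```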
The aim of this second integration is to analyse the user requests related to the database (e.g., request the last irrigation water usage, request the time that the lights were switched off, etc.), or the request related to the sensor nodes in order to receive the instant measured value of a sensor without waiting the duty cycle time needed to send the last measured value to the database. In this work, The Things Network (TTN) is used for LoRaWan communication. The LoRa end-nodes and gateways based on LoRaWan standards can exchange data using the TTN for free since it is an open source infrastructure [27]. Nowadays, many users around the world have registered thousands of gateways in the TTN platform to broaden the network in a collaborative way. The users can connect their IoT LoRa devices to the already existing gateways in the TTN network. Also, establishing a connection with a private server is also possible if a user wants to use his own specific platform. Experimental Test The experimental test was conducted throughout a farmland in Valencia city, Spain. In this test, we used a LoRa Mote II from IMST as end-node, which incorporates an accelerometer, an altimeter, a temperature sensor and a GPS module. The LoRa gateway was located in the balcony of an apartment, in the 6th floor, with an altitude of 24 m, approximately. The configuration used for the LoRa end-node device was: • The LoRa end-node was placed in several locations with different distances from the gateway. From each transmit location, 10 messages were exchanged between the end-node and the gateway. The location of the LoRa gateway and the different measurement positions (P1 to P11) are represented in Figure 4. The Spreading Factor (SF) was fixed in 7 in order to reach the maximum distance. However, in similar applications it is preferred not to fix the SF and set Adaptive Data Rate (ADR) mechanism for optimizing data rates, airtime and power consumption, and consequently improve the range and capacity of the network. The Received Packets Percentage (RPP) was calculated in each location as described in Equation (1), where NACK denotes the number of packets with 'Acknowledgement' signal received, and NAP denotes the number of all transmitted packets. Studying Path Loss (PL) is important because it helps to understand how the strength of a LoRa signal decreases as it travels through the environment. This information is useful for predicting the range of a LoRa network and for designing efficient communication systems. By understanding the factors that contribute to LoRa path loss, such as the type of environment, the distance between the transmitter and receiver, and the frequency of the signal. Modeling the PL predict the reduction in power of a signal as it propagates through a medium, and it can help for better distribution of the end-nodes and identify potential issues with a LoRa network and suggest ways to improve its performance. The Large-Scale Fading (LSF) is characterized by its Path Loss parameter (PL). The PL has been evaluated in outdoor environments on the basis of the measured RSSI and signal-to-noise ratio (SNR) using Equation (2) [28], where P t is the transmission power G t and G r are the gains of the transmitting and receiving antennas, respectively. 
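Equations (1) and (2) are referenced above without being reproduced; forms consistent with the stated definitions (and therefore an assumption on my part regarding the exact notation used by the authors) are
\[
RPP = \frac{N_{\mathrm{ACK}}}{N_{\mathrm{AP}}} \times 100\%,
\qquad
PL\,[\mathrm{dB}] = P_t + G_t + G_r - RSSI,
\]
where the received power may additionally be corrected with the measured SNR when the signal lies below the receiver noise floor.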
The ratio between the received power P r and the transmitted power P t in a free space environment is given by the Friis law (Equation (3)), where G t and G r are the gains of the transmitter and the receiver, respectively; λ is the wave length, and d is the distance between the receiver and the transmitter. For a non-free space environment, a path loss exponent γ and a reference distance are introduced. Then, Equation (3) becomes Equation (4). Using the first-order fit [29], PL can be estimated by modelling the experimental PL. Then, the Estimated Path Loss (EPL) can be obtained by Equation (5). The PL 0 in microcellular systems is the PL intercept at a reference distance d 0 , typically ranging from 1 m to 100 m. In this paper, we consider d 0 = 1 m. The path loss exponent γ can be estimated by analyzing the measurement results of the propagation environment. This can be done by fitting a model to the measured data and extracting the relevant parameters, including the path loss exponent. The specific method used may vary based on the type of propagation environment and the information available [30]. Typically, in free-space, gamma is 2. In urban environments, it ranges from 2.7 to 3.5, while in building environments, it can vary from 1.6 to 6 based on building structure, materials, and obstacles [23]. There may be a difference between the actual path loss (PL) and estimated path loss (EPL) due to shadow fading deviation, as shown in Equation (6). When presenting PL data, there are several statistical measures that can be used to provide a more complete picture of the data. Some of the most commonly used statistical measures include Mean, Standard deviation and Range. The statistical measures and the EPL parameters are represented in Table 2. Results and Discussion In the previous test, measurement points with RPP below 50% are ignored. The Received Signal Strength Indicator (RSSI), the signal-to-noise ratio (SNR) and the percentage of the received packets (RPP) corresponding to each measurement point are represented in Table 3. The RSSI and SNR variation as a function of the distance are illustrated in Figure 5. As seen, the RSSI decreases as the distance increases. The first packet loss was at position P6 (550 m); from position P5 and above, the RSSI decreased under −90 dBm. Concerning SNR, it followed a similar trend: the SNR value stayed above 0 dB until P9 (720 m) and then, it decreased to −4 dB at P11 (795 m). The lowest rate was registered at P8 (640 m), whereas at P11 (795 m) the received packets reached 60%. This test shows that distances under 500 m can guarantee successful communication between end-nodes and the gateway. Using the experimentally collected data of Table 3, the PL was calculated using the Equation (2). The results are compared with the Estimated Path Loss (EPL) along with the Free-Space Path Loss (FSPL) model and are illustrated in Figure 6. These results shows he proposed model to estimate the path loss appears to be more valid as the distance increases. For distances shorter than 70m, the proposed model shows instability that mainly return to the shadowing effects [28]. Packets with a PL lower than 126 dB can be received. As would be expected, the free space model FSPL shows the lowest attenuation. As a result, the PL model parameters are typically only valid in specific environments, frequency ranges, and antenna configurations. 
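The first-order fit used to extract the path-loss exponent can be reproduced with a few lines of Python; the (distance, path-loss) samples below are illustrative placeholders rather than the measured values of Table 3, and d0 = 1 m as in the text.

```python
# Fit the log-distance model EPL(d) = PL0 + 10*gamma*log10(d/d0) to measured data.
# The (distance, path-loss) samples below are illustrative placeholders only.
import numpy as np

d0 = 1.0  # reference distance in metres, as assumed in the paper
distances = np.array([70, 140, 230, 340, 460, 550, 640, 720, 795], dtype=float)  # m
path_loss = np.array([93, 100, 105, 109, 113, 116, 119, 121, 123], dtype=float)  # dB

x = 10.0 * np.log10(distances / d0)
gamma, pl0 = np.polyfit(x, path_loss, 1)   # slope = path-loss exponent, intercept = PL0

epl = pl0 + 10.0 * gamma * np.log10(distances / d0)
sigma = np.std(path_loss - epl)             # shadow-fading deviation, cf. Eq. (6)

print(f"gamma = {gamma:.2f}, PL0 = {pl0:.1f} dB, shadowing std = {sigma:.1f} dB")
```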
Thus, for similar condition to our experimental test, the proposed PL estimation model can be applied in the design of a LoRa mesh network in a farm to improve the energy efficiency and reliability over existing LoRa systems. LoRa is designed to operate at very low power levels, which is one of the reasons why it is able to achieve long-range communication. However, this also means that the signal is more susceptible to interference and noise as it propagates through the environment, which can cause a sudden and significant increase in PL. The LoRa signal may encounter a variety of environmental factors that can cause a sudden increase in path loss, such as obstacles (e.g., buildings, trees) and interference from other wireless devices. These factors can cause the signal to be absorbed, scattered, or reflected in different directions, resulting in a significant increase in PL. The transmission range of LoRa in agriculture is an important factor to consider when implementing smart farming solutions. The range of LoRa depends on various factors, such as the frequency used, transmission power, terrain, and obstacles present. In general, LoRa has been found to have a transmission range of several kilometers in rural areas and up to several hundred meters in urban environments. This range is suitable for most smart farming applications, which typically cover a large area [31]. However, in some cases, the transmission range of LoRa may need to be extended to cover a larger area. This can be achieved by using multiple LoRa gateways, which act as intermediate nodes between the end devices and the central control system. LoRa gateways can receive and forward messages over long distances, thereby extending the transmission range of the end devices. Recent studies have demonstrated the potential of LoRa for smart farming [31,32]. These studies have highlighted the use of LoRaWAN for remote monitoring of large agricultural areas [32], as well as the development of systems based on LoRa for monitoring the agricultural sector [33]. Additionally, Semtech's LoRa technology has been used to show significant improvements in smart agriculture use cases, such as a 50% water savings [34]. In conclusion, LoRa is a promising technology for smart farming applications. It offers long-range, low-power communication that can enable the collection of data from various sensors and devices over a wide area, making it an ideal solution for precision agriculture. Web-Based Monitoring Platform The presented smart farm system requires a tailored application to customize the control and analysis of data and provide more flexibility in its management. To meet this need, we have developed a personalized web application called 'Mi granja (My Farm)', using the Laravel framework [35] for back-end operations and a MySQL database. We decided to implement a relational database due to its ability to provide easier management and maintenance of data quality and integrity over time. This is due to features such as data consistency, query flexibility, transaction support, and scalability, which can be beneficial for the long-term sustainability and robustness of the application. Alternatively, a non-relational database could also be utilized for this purpose. The front-end of the web application is developed using the Bootstrap framework, which implements a responsive design that is supported by multiple screen sizes. 
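To make the relational layout concrete, a minimal schema sketch is given below. The table and column names are assumptions of mine (the paper does not list the schema), and the deployed application uses MySQL managed through Laravel's Eloquent models rather than SQLite.

```python
# Minimal sketch of the relational layout described above (names are assumptions;
# the deployed application uses MySQL through Laravel's Eloquent models).
import sqlite3

schema = """
CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE);
CREATE TABLE networks (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id), label TEXT);
CREATE TABLE devices  (id INTEGER PRIMARY KEY, network_id INTEGER REFERENCES networks(id),
                       ttn_device_id TEXT, kind TEXT, min_value REAL, max_value REAL);
CREATE TABLE readings (id INTEGER PRIMARY KEY, device_id INTEGER REFERENCES devices(id),
                       value REAL, received_at TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
# A reading arriving from the TTN webhook would simply be appended:
conn.execute("INSERT INTO readings (device_id, value, received_at) "
             "VALUES (?, ?, datetime('now'))", (1, 23.4))
conn.commit()
```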
The web application allows users to easily access and analyze the data collected by the sensor nodes, as well as remotely control the machines connected to the PLC. It also provides a user-friendly interface for configuring the system settings and defining custom alerts and notifications. The web application is hosted on a cloud server, enabling users to access it from anywhere with an internet connection. In Laravel, the Model-View-Controller (MVC) architecture is used to separate the application logic from the presentation layer. The work process of the MVC architecture. Routing is the process of accepting a request and directing it to the appropriate controller is described in Figure 7. Route model binding provides a convenient way to automatically inject the model instances directly into your routes. Controllers are responsible for handling user requests and retrieving or storing data in the database through Models which are used also to pass this data off to a view. Implicit route model binding is a smart feature in Laravel that can resolve Eloquent models defined in routes or controller actions and whose values are passed as parameters to controller methods. In this work, the database is running MySQL to store the data in the different tables and the user interface is based on Bootstrap framework and Angular Js that consist of a set of Cascading Style Sheets (CSS) classes and JavaScript functions, providing a responsive design supporting different screen sizes. The interface includes a navigation system and a graphical content displayed in the form of charts and tables to display historical data. The open-source library CanvaJs was used to create the charts. The platform also allow users to export data in different formats for further analysis by off-line software tools. The LoRa gateway communicates with the TTN server, which will forward the received payloads directly to our cloud server to be stored in the database, so that it can be visualized and monitored from the user interface. The user commands also take the same path: they are transmitted to the TTN server and then forwarded to the LoRa gateway, which will finally communicate with the LoRa end-node. The web application allows different users to create their accounts, each user can register several networks corresponding to his associated farms (multiples farms can be managed from the same web interface), and each network (each farm, in fact) contains a number of connected devices. In order to register a device in the network, the user should register this device in the TTN server first, and then register it in the dashboard page of the web application using its identifier, because all the data go through the TTN LoRaWan network before receiving/sending it to our cloud server. The home page and dashboard of the web application is illustrated in Figure 8. On the dashboard, the user can define multiple parameters related to each LoRa end-node, such as maximum and minimum values for specific devices. If the received value is outside the defined range, the user will receive an alert notification via email and Telegram. These notification and alert messages can be customized in the dashboard. The data collected by the sensor nodes can be viewed in the form of tables and charts and can be downloaded as an Excel or CSV file for manual and advanced analysis if needed. The web application also has the flexibility to easily add additional features thanks to its MVC design based on the Laravel framework. 
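The threshold-based alerting described above can be summarized by a small rule-checking sketch; the rule values and the notification hook are placeholders, and in the real system the notification fans out to email and the Telegram bot.

```python
# Sketch of the dashboard alert rule: notify when a received value leaves the
# user-defined [min, max] range. Thresholds and the notify hook are placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlertRule:
    device: str
    min_value: float
    max_value: float

def check_reading(rule: AlertRule, value: float, notify: Callable[[str], None]) -> None:
    if value < rule.min_value or value > rule.max_value:
        notify(f"{rule.device}: value {value} outside [{rule.min_value}, {rule.max_value}]")

# Example: a warehouse temperature rule; notify() would fan out to email and Telegram.
check_reading(AlertRule("warehouse_temperature", 4.0, 12.0), 15.2, notify=print)
```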
Overall, the web application provides a comprehensive and user-friendly interface for managing and analyzing the data collected by the smart farm system. The web application is hosted in a cloud server and its main features are: Export data for advanced analyses as Excel and CSV files. Conclusions This paper outlines the development of a comprehensive IoT system for a smart farm. The system aims to improve food product storage by monitoring and regulating humidity and temperature levels. It also enables remote monitoring and control of various devices through a web-based application, automating tasks such as irrigation and temperature adjustments. The developed web application is designed to simplify the process of creating and managing IoT networks. It provides users with the ability to add devices and access to data analysis and downloading features. The use of the MVC design pattern makes the application easy to modify and update, ensuring scalability and the ability to add new features as needed. The responsive design of the app makes it accessible on different screen sizes, ensuring a seamless user experience on any device. The low server resource consumption allows the app to support a large number of concurrent users, making it ideal for managing multiple sensor networks. The introduction of new additions to the traditional LoRa IoT network infrastructure, such as the LoRa link for exchanging data with automation PLCs commonly found in farms and the Telegram link for direct messaging with users via popular messaging apps, further enhances the functionality of the system. The general illustration of the presented system is described in Figure 9 [3]. The smart agricultural system utilizes LoRa end-node sensors scattered throughout the farm to collect data about the environment. The sensors transmit this data to a LoRaWAN gateway, which then sends it to a cloud server for analysis. The system also includes a Programmable Logic Controller (PLC) that is connected to various machines in the warehouse, such as water pumps and lights. The IoT2040 is connected to the PLC via Modbus-TCP and programmed with Node-RED. Users can access the system through a web-based monitoring application, which allows for remote control of the warehouse machines through the IoT2040. Additionally, users can send commands and requests via a Telegram bot. The cloud server analyzes the received information and stores useful data. Figure 9. Illustration of the smart farming application. The proposed IoT system integrates different devices and technologies: PLC controllers and LoRa nodes connected via a gateway. The gateway forward and receive data from the cloud server that hosts the web application. To conclude, the proposed infrastructure represents a smart solution for farmers to integrate the IoT in their already existed farming systems which mainly are relying on a regular PLC. Currently, we are working on the development of some Artificial Intelligence (AI) data analysis tools that can be added to the presented systems for better farm management. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
7,557.2
2023-03-01T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science", "Engineering", "Environmental Science" ]
Reconfigurable Optical Wireless Switches for On-Chip Interconnection Optical Wireless Networks-on-Chip have recently been proposed as an alternative paradigm to overcome the communication bottleneck in computing architectures based on electrical networks. In this paper, we propose the design of a $3\times 3$ switching matrix for optical wireless on-chip interconnection. The design exploits integrated optical phased arrays to guarantee the communication among three transmitters and three receivers. In this work, the effect of multipath propagation in the on-chip multi-layer structure is taken into account, and the impact of the cladding layer thickness is evaluated. The proposed device is intended to interconnect multiple nodes, assuring reconfigurability and high bandwidth. Optical interconnection offers the prospect of reaching bandwidth densities on the order of terabits per second and communication power efficiencies that are not achievable with conventional electronics [1], [2]. Optical Networks-on-Chip (ONoCs) are based on the integration of an optical layer, housing signal routing and processing components such as switches and filters [8], [9], [10], [11], [12], into computing architectures. Interconnection in the optical domain promises extremely low latency and bandwidth densities on the order of tens of Tb/s [4]. Different ONoC implementations have recently been proposed for providing optical communication among multiple cores or chiplets stacked on silicon photonic interposers. They mainly exploit Micro Ring Resonator (MRR)-based wired optical interconnections [13], [14], [15], [16]. For example, in [16], for modularity and ease of integration of different technologies, dedicated electro-optical (E/O) chiplets are introduced as network nodes, taking care of buffering, arbitration, serialization, driving, and thermal tuning of both filters and modulators. The basic components of the optical links in these network solutions, i.e., modulators, filters, and wavelength division multiplexing (WDM) routing elements, exploit the resonant behavior of MRRs, which requires fine tuning of the resonant wavelengths (e.g., by thermal tuning). Unfortunately, ring tuning results in a significant increase of the overall power budget and is responsible for a remarkable growth in device complexity, caused by the electrical connections to the ring electrodes. It is also worth mentioning that, in MRR-based networks, an increase in optical parallelism (i.e., the number of WDM channels, each associated with a different wavelength) impairs the overall power budget because of the high number of MRRs required. Another state-of-the-art solution for photonic switching consists of arranging 2 × 2 Mach-Zehnder Interferometers (MZIs) into higher-order switching topologies [17], [18], [19]. Due to their operating principle, MZIs are able to switch multiple wavelengths simultaneously on nanosecond time scales and without being affected by the data rate carried by the individual wavelengths. This bandwidth transparency of MZI-based photonic switching elements can be leveraged to adopt dense wavelength-division multiplexed links, reducing the individual data rate per wavelength and increasing signal quality and energy efficiency, while maintaining high aggregate data rates.
2 × 2 MZI switches are typically organized into larger N × N switching topologies through carefully optimized connectivity patterns such as Benes or dilated-Benes networks. For example, in [18] the switch fabric is composed of 56 2 × 2 silicon MZIs, with average on-chip insertion loss of 6.7 dB and 14 dB for the "allcross" and "all-bar" states, respectively, and useful bandwidth limited to 30 nm. A recently proposed alternative paradigm, that can compete with ONoCs, is based on Wireless Networks-on-Chip (WiNoCs) [7], [20], [21]. Wireless communication can potentially alleviate the intricacy and overhead of a wired network topology. Moreover, the use of very high frequencies (e.g. mm or THz-waves) in principle allows on-chip integrability, while avoiding inter-router hops and guaranteeing lowlatency broadcasting. Miniaturized graphene antennas promise to allow on chip communication in the THz range [22], but the technological challenges to reach such integration are still open. In this paper, we focus on optical wireless interconnect technology which can enable the implementation of Optical Wireless Networks on-Chip (OWiNoCs), to exploit the best of both wireless and optical communications. Despite pioneering research efforts [23], [24], [25], optical wireless interconnection is still in the early stage of development and its potential is not yet completely explored. However, the feasibility of the approach is supported by the consolidated optical fabrication process and by the demonstrated integrability with CMOS (complementary metal oxide semiconductor) technology. In [25], we proposed the concept and a first design of an Optical Wireless Switch (OWS) based on transmitting and receiving Optical Phased Arrays (OPA). In this paper, after an initial focus on the architecture of the proposed device, we discuss in detail the characteristics of the antenna element used in these Phased Arrays and then illustrate the radiation patterns as a function of the applied phase shifts, highlighting the variation of the gain in the different configurations. We then report further results obtained by optimizing the design of a 3 × 3 switching matrix. Differently from [25], in this work we exploit the use of five antennas in each OPAs. Moreover, the effect of multipath propagation in the multi-layer on-chip structure is taken into account, and the impact of the cladding layer thickness on the final performance is evaluated. The proposed device is intended to interconnect multiple nodes assuring reconfigurability and high bandwidth, e.g. chiplets in 2.5D manycore systems. II. ON-CHIP WIRELESS OPTICAL SWITCH Fig. 1 reports the proposed implementation of a 3×3 OWS, in which three input and three output nodes are interconnected through reconfigurable OPAs. The input and output waveguides, together with the OPAs of each communication node, are lying on a dedicated optical layer of the interposer. In the design, Silicon on Insulator (SOI) technology has been considered, but implementations with different approaches are certainly possible. To maximize the performance of the OWS, each antenna should radiate in parallel M different data channels, thus implementing a WDM signal with M wavelengths. Beam steering is obtained by phase-shifting the input signal at each OPA using suitable Optical Phase Shifters (OPSs), therefore allowing the communication with a specific receiver, as schematized by the colored arrows between the transmitters TX i and the receivers RX i . 
To guarantee WDM communications, the needed bandwidth requirements of each component of the OWS (nanoantennas, OPS, couplers and splitters, etc.) should be carefully considered in the design phase. This OWS is intended to be used as a building block for multi-chiplet wireless interconnection networks, an example of which is shown in Fig. 2. In the conceptual scheme of this figure writing and reading chiplets, acting as network nodes, are connected through electro-optical converters (E/O, i.e., optical transmitters), optical and wireless paths, and optoelectrical converters (O/E, i.e. optical receivers). Fig. 2 also highlights the electronic control logic that is necessary to solve OWS contention when multiple transmitters intend to route optical packets to the same receiver(s). In fact, photonic switching fabrics are intrinsically bufferless, and contention should be managed. The control logic should be designed with the goal to optimize active interposer area and power. The network-level design space exploration is, however, beyond the scope of this paper, which focuses on the OWS design and optimization, and will be the object of future work. III. DESIGN AND OPTIMIZATION OF THE ANTENNA ARRAY A. Radiation Characteristics of the Single Antenna The proposed 3 × 3 OWS exploits OPAs made by N = 5 taper antennas, both at the transmitters and at the receivers. This configuration with five elements, different with respect to the one proposed in [25], has been chosen as it allows addressing separately all the three nodes of the switch, as detailed below, thus minimizing the crosstalk and maximizing the device performance. Each node is therefore equipped with 5 taper antennas, schematized in the inset of Fig. 3. The geometry is similar to the one proposed in [23] and obtained by inversely tapering a standard SOI waveguide (cross-section height h = 220 nm and width w = 450 nm), terminated on a small tip (length l = 1 μm, and width w T = 130 nm). Radiation of the optical signal is taking place along the direction of the mode propagation (x axis of the considered reference system: see Fig. 1). Correct tapering of the structure guarantees excellent impedance matching (back-reflection at the input port of the antenna is less than −35 dB) and suitable radiation properties. The radiation pattern of this antenna, and its dependence on the geometry of the taper, have been investigated by three-dimensional Finite Difference Time Domain (3D-FDTD) simulations with standard near-to-far field transformation [26]. In these simulations, as required by the near-to-far field projection approach, embedding of the antenna in a homogeneous medium was the considered scenario. Radiation patterns have been obtained through evaluation of the antenna gain G(θ, ϕ) by computing the radiation intensity I (θ, ϕ) in spherical coordinates and in the far-field region, and then normalizing it with respect to the average radiated power on the overall solid angle 4π, according to the definition: Being P in the input power launched into the silicon (Si) waveguide [27], in this evaluation we take into account also the efficiency of the antenna. The radiated beam can be characterized by considering the maximum gain and the Half Power Beam Width (HPBW), which quantify the capacity of the antenna of focusing the radiated beam in the main radiation direction. A design parameter, that influences the radiation performances of the taper antenna, is the length of the taper. 
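Before examining the dependence on taper length, note that the gain definition invoked above is not written out; the standard form consistent with the description (radiation intensity normalized to the power launched into the feeding waveguide, so that antenna efficiency is included) would be
\[
G(\theta,\varphi) = \frac{4\pi\, I(\theta,\varphi)}{P_{\mathrm{in}}}.
\]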
Figure 3 shows the maximum gain (solid curve) and the HPBW (dashed curve) calculated, for the taper antenna, as a function of the taper length L T . The HPBW reported in Fig. 3 is defined as the angular separation , in which the gain decreases by 3 dB. As it can be seen in this figure, the gain increases with the taper length and, accordingly, the radiated beam becomes narrower, as shown by the corresponding decreasing of the HPBW. Indeed, the radiation characteristics of the single antenna influence the radiation performances of the OPA, as it will be described in the following. B. Radiation Characteristics of the OPA The OPA configuration analyzed in this paper exploits N a = 5 taper antennas, aligned along the y axis (see Fig. 1). The optical signal in input to each antenna in the OPA can be phase-shifted to steer the radiation beam in the xy plane. The phase shift necessary for the OPA operation can be obtained, in Si waveguides, by using Optical Phase Shifters (OPS) based either on thermo-optic or plasma-optic effect [28]. The radiation diagram of an alignment of N a identical antennas can be obtained, when they are uncoupled, through multiplication of the electromagnetic field radiated by the single antenna by the corresponding array factor (AF) [27]. To evaluate the array factor, N a point sources are considered where the antennas of the array are originally positioned, and the total far-field radiated by this array of point sources is analytically calculated. This allows having an easy tool for the design of the array pattern. The overall radiation diagram of the OPA, in fact, can be suitably designed by choosing the distance d between the antennas in the array. In particular, given a fixed operating wavelength, a single main radiation lobe is obtained when d ≤ λ m , being λ m the signal wavelength in the surrounding medium [27]. Conversely, by suitably choosing d > λ m , multiple main radiation lobes (i.e. grating lobes) can be exploited to connect the transmitting OPA with the different receivers. This latter approach was adopted in [25] to design 1 × 5 and 3 × 3 optical wireless switches based on OPAs with N a = 3 antennas. As anticipated, suitable phase shifts of the input signal to each antenna allow obtaining the desired beam steering. Differently from [25], in this paper the OPAs exploit N a = 5 antennas with distance d = λ m . This choice allows to increase the maximum gain of the OPA and to address 5 different receivers. The 1 × 5 interconnection is obtained by varying the phase shift α of the excitations of the N a = 5 antennas in the OPA and, consequently, steering of the main lobe in the xy-plane. As an example, Figs. 4 show the three-dimensional gain radiation diagram of an array of N a = 5 taper antennas, with antenna distance d = λ m and taper length L T = 5 μm, for Different receiving nodes can be addressed by steering the main radiation beam. Moreover, to minimize the crosstalk, the main radiation beam should be steered on the same positions of the nulls of the broadside (α = 0 • ) array. The phase shifts necessary to satisfy this requirement can be calculated as α = ±p360 • /N a , with p = 0, 1, 2. To better describe the array behavior, Figs. 5 show the gain as a function of the angle (measured on the xy plane starting from the x axis -see Fig. 1 As shown in Figs. 5, by changing the phase shift of α = ±p360 • /N a , the main radiation lobe is steered in the position of the nulls of the radiation diagram of the broadside array (α = 0 • ). 
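This steering rule can be verified numerically with a short array-factor sketch. Isotropic, uncoupled elements are assumed (the taper-antenna element pattern is ignored), and the 70 μm link distance quoted later in the paper is used to translate the steering angles into receiver offsets; for p = 1, 2 the broadside array factor evaluated at the steered direction is a null, as stated above.

```python
# Numerical check of the OPA steering rule: an N = 5 uniform linear array with
# spacing d = lambda_m and progressive phase alpha = p*360/N deg (p = 0, 1, 2).
# Isotropic, uncoupled elements are assumed; the taper-antenna pattern is ignored.
import numpy as np

N = 5
d_over_lambda = 1.0          # antenna spacing equal to the wavelength in the medium
d_link = 70.0                # um, link distance quoted later in the paper

def af_db(theta_rad: float, alpha_rad: float) -> float:
    # Normalized array factor (in dB) of the N-element array at angle theta.
    n = np.arange(N)
    psi = 2 * np.pi * d_over_lambda * np.sin(theta_rad) + alpha_rad
    af = abs(np.exp(1j * n * psi).sum()) / N
    return 20 * np.log10(max(af, 1e-9))

for p in (0, 1, 2):
    alpha = np.radians(p * 360.0 / N)                        # 0, 72, 144 degrees
    steer = np.arcsin(-alpha / (2 * np.pi * d_over_lambda))  # main-lobe direction
    y_off = d_link * np.tan(steer)                           # receiver offset along y
    print(f"alpha = {p * 360.0 / N:5.1f} deg: beam at {np.degrees(steer):+6.1f} deg, "
          f"receiver offset y = {y_off:+6.1f} um, "
          f"broadside AF there = {af_db(steer, 0.0):6.1f} dB")

# The offsets (about 14.3 um and 30.5 um in magnitude) are reasonably approximated
# by a uniform receiver spacing of about 15 um, in line with the placement relation
# and with the 15.5 um optimum found by the full 3D-FDTD analysis.
```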
Five different receivers, identified by Rx i with i = 0, ±1, ±2 in Figs. 5, can be efficiently addressed. The reduction of the maximum gain, that occurs when a phase shift is applied with respect to the case of null phase shift, is due to the radiation diagram of the single taper antenna (black curve in Figs. 5). In fact, the magnitude of the main radiation lobes, for the different values of the phase shift α, follows the envelop of the single antenna radiation diagram. Coherently with the radiation characteristics of the taper antenna shown in Fig. 3, exploiting a longer taper gives higher gain for the OPA (Fig. 5 (b)). At same time, the maximum gain of the steered beam varies more significantly with respect to the case of a less directive single antenna ( Fig. 5 (a)). To better quantify this behavior, Fig. 6 shows maximum gain as a function of the taper length L T of an array of N a = 5 taper antennas with antenna distance d= λ m , for the input phase shift values: α = 0 • , α = 72 • , and α =144 • . By increasing the taper length L T , the difference between the maximum gain at α = 0 • and that of the steered beams becomes more pronounced, especially in the case of α = 144 • corresponding to the most lateral receiver. Given the proposed application of the OPA for optical wireless switching, it could be advisable to equalize the power received by the different nodes while maximizing the gain of the steered beams. The value of the taper length that compromises well between the aforesaid conditions is L T = 5 μm, and it will be used in the following to simulate the full device. The design approach proposed, exploits uncoupled antennas to assess simple design criteria while assuring reconfigurability. In fact, when the antennas are uncoupled, a desired radiation pattern can be synthesized by separately feeding the array elements. The reconfigurability of the OWS is simply achieved by tilting the radiated beam in the propagation plane, through the phase shift of the signal in input to the antennas. The three-dimensional FDTD simulations of the next sections confirm the validity of this simple design approach. IV. OPTIMIZATION OF THE 3 × 3 OPTICAL WIRELESS SWITCH The proposed OPA configuration can address up to five different receivers. These receivers should be placed along the y axis at y = 0 and at the best suited positions given by: where d link is the distance between the transmitter TX 0 , and the receiver RX 0 , and i with i = ±1, ±2 is the angular position of the nulls of the radiation diagram obtained with the array in the broadside configuration, i.e. for α = 0 • . Actually, the on-chip wireless communication occurs in a multilayered structure, typical of photonic integrated circuits. The medium discontinuities in on-chip optical wireless scenarios, can lead to multi-path propagation phenomena, as shown by the authors in [29]. Multiple reflections cause fluctuations on the received power (increasing where interference is constructive, fading where destructive interference is taking place), requiring simulations of the complete device to optimize the configuration of the OWS. In particular, here we consider the multilayer structure shown in Fig. 7(a), which corresponds to the sample fabricated and characterized by the authors in [29] for the evaluation of point-to-point wireless links. It consists of a standard SOI sample, where the antennas and the waveguides are patterned, covered by cladding layers that maintain the index contrast with the silica layer limited. 
In this way, the radiation diagrams of the antennas are not influenced by close index discontinuities. Both the bottom bulk Si layer and top air layer are considered as semi-infinite, by using Perfectly Matched Layer (PML) boundary conditions. As shown in Figures 7 (b), (c), and (d), by changing the phase shifts of the input signals applied to the antennas of an OPS, it is possible to address different receivers. When no phase shift is applied (Fig. 7 (b), α = 0 • ) the central transmitter Tx 0 can efficiently address Rx 0 . On the contrary, when α = 72 • Tx 0 addresses Rx −1 (Fig. 7 (c)) whereas the configuration with α = 144 • (Fig. 7 (d)) is best suited to address Rx −2 . These figures represent the 3D-FDTD-calculated electric field patterns in the horizontal (xy) plane located in the middle of the antenna layer previously described (plane located in the middle of the waveguide cross-section). The simulated device exploits reconfigurable OPAs made of 5 taper antennas with taper length L T = 5 μm. The link distance was arbitrarily chosen equal to d link = 70 μm. The field patterns shown in Figs. 7 follow the behavior of the gain radiation diagram in Fig. 5 (a) for the three different phase shift values, but the effect of the propagation in the multilayer is visible since it induces oscillations in the field pattern. However, also in this condition, the direction of the main beam is always maintained. To guarantee the interconnection between the transmitters and the receivers, it is also necessary to virtually steer the beam of the receiving OPAs in the direction of the maximum incoming radiation, by applying suitable phase shift α at the receiving OPAs. Given the operation principle of the 3 × 3 OWS and the symmetry of the device, its behavior can be fully described by considering the link between a lateral transmitter, e. g. TX −1 , and the three receivers RX −1 , RX 0 , and RX 1 , schematized in Fig. 1. The scheme of the 3 × 3 OWS is also recalled in Fig. 8 (d) to ease the reading of the transmittance graphs. As shown in Fig. 8 (a), when the phase shift is α = 0 • at the TX −1 OPA, the receiver RX −1 is connected with an insertion loss about equal to IL −1 ≈ −1.3 dB. The power captured by RX 0 and RX +1 is a spurious signal, representing a possible source of crosstalk for the system. The crosstalk can be quantified as: where T RXj is the transmittance of the addressed port and T RXi is the transmittance of a non-addressed one. The arrows in Figs. 8 (a)-(c) highlight, for each phase-shift, the curves from which the maximum XT is calculated as the difference in dB between the transmittances. The worst-case insertion loss (i.e., IL i = −T RXi ) and crosstalk correspond to the connection between the further nodes TX −1 and RX +1 (Fig. 8 (c)), as expected from the lower gain of the main lobe in the radiation diagram of Fig. 5 for α = 144 • (green curve). In this case, the insertion loss and the crosstalk are, respectively, equal to IL 1 = 6 dB and XT 0,1 = −21 dB at the wavelength λ = 1.55 μm. Coherently with the broadband behavior of the taper antenna [25], the transmittance spectra in Figs 8 do not change significantly with the wavelength. Therefore, the large bandwidth of the device fully covers the C-band. As mentioned before, in order to guarantee the connection between the transmitter and each of the addressed nodes, a suitable phase shift must be applied also at the receivers (i.e. 
α = 0 • , α = −72 • , and α = −144 • for the connections TX −1 → RX −1 , TX −1 → RX 0 , and TX −1 →RX 1 , respectively) to virtually steer the beam of the receiving OPAs in the direction of the maximum radiation. This is feasible by properly phase-shifting the fundamental TE modes in each waveguide at the receiving OPAs. For a fixed link distance, a parameter that can influence the performance of the OWS is the distance y along the y axis between adjacent receivers. Figures 9 show the transmittance in dB, calculated at the receiving OPAs, i.e. RX +1 , RX 0 and RX −1 , as a function of the distance y between adjacent receivers. The transmitting OPA TX −1 is excited with phase shifts: α = 0 • (Fig. 9(a)), α = 72 • (Fig. 9 (b)), and α = 144 • (Fig. 9(c)). Considering Figs. 9 (a) and (b), which correspond to the connections TX −1 → RX −1 , TX −1 → RX 0 , respectively, the performances of the OWS in terms of insertion loss do not change significantly with y. Moreover, in both cases, the crosstalk remains below −20 dB. Considering the link between the transmitter and the furthermost receiver TX −1 →RX 1 (Fig. 9 (c)), the transmittance (green curve) is maximized, i.e., the insertion loss is minimized, when y = 15.5 μm. The y value, obtained by the 3D-FDTD parametric analysis of the full device, is very near to the one y≈15 μm evaluated through Eq. 2. Therefore, Eq. 2 gives a good estimation of the receiver positions. Also from the point of view of the crosstalk, the distance As analyzed in [29] for point-to-point links between single antennas, the behavior of the electromagnetic propagation in a multilayered medium depends on the layer characteristics. Here, we investigate the effect of the variation of the cladding layer thickness (UV26 layer), which is a deposited polymer used to increase the distance between the radiator and the interface with the air. The constructive or destructive interference, caused by the phenomenon of multiple reflections and transmissions at the interfaces, is sensitive to the layer thickness variation. The layer thickness can be, therefore, considered as a degree of freedom available for engineering the propagation channel and for improving the link performances. induces oscillations of the transmittance curves at the three analyzed receivers. In order to verify the performances of the OWS in the whole considered wavelength range, Figs. 11 (a), (b), and (c) report the insertion loss as a function of the wavelength and of the cladding thickness h T calculated at the addressed receivers: (a) Rx −1 for the connection TX −1 → RX −1 , (b) Rx 0 for the connection TX −1 → RX 0 , and (c) Rx +1 for the connection TX −1 →RX 1 . As it can be seen from Figs. 11, for a fixed value of the layer thickness h T , the insertion loss is almost constant, with a maximum variation of less than 3 dB. Similarly For a fixed value of the cladding thickness, the variation of the crosstalk with the wavelength is more pronounced than that of the insertion loss, but the crosstalk remains in general well below −14 dB, with a maximum variation with the wavelength of 7 dB. Considering Figs. 10 and 11, the cladding layer thickness that maximizes the transmittance at the further receiver, which is the most critical one, is h T = 1 μm. In this case the worstcase insertion loss is equal to IL 1 = 3 dB, and the crosstalk is −21 dB. The fabrication of the proposed OWS requires the design of the network feeding the antennas in the OPA and of the phase shifters. 
A possible implementation of the feeding network that brings the signal to the OPA antennas can be made by cascading multiple beam splitters. For example, 1 × 2 Y junctions can be cascaded to increase the number of outputs, starting from a single input waveguide. A 1 × 2 Y junction keeps the two outputs in phase, while equally dividing the input power into the two waveguides. Another possible implementation of a 1 × 2 beam splitter can be made by using multi-mode interference (MMI) devices. Both 1 × 2 Y beam splitter and 1 × 2 MMI exhibit a broadband behavior and are not expected to significantly alter the OWS bandwidth. More elaborated solutions can also be implemented such as 1×N MMIs, following the design criteria reported in reference [30]. In this case, the beam can be split into multiple outputs in a single stage. The phase shift between the outputs of the 1×N MMI, and eventual additional phase shifts coming from different lengths of the optical paths, can be compensated by a calibration of the phase tuning. Phase shifters can be implemented exploiting either plasmaoptic or thermo-optic effect. Thermally controlled waveguide phase shifters could be preferred because they are based on relatively simple and robust structures and their fabrication is less prone to errors. These phase actuators need to be calibrated and thermal crosstalk must be taken into account in the design of the circuit. To overcome this issue, an approach to cancel out the effects of the phase coupling induced by thermal crosstalk in photonic integrated circuits, with thermal phase actuators, can be applied [31]. A further issue related to fabrication is the tolerance to fabrication errors. The most significant error that can affect the behavior of the OWS is a variation d of the antenna distance in the OPAs. A change in the distance between the antennas due to fabrication errors can cause a change of the beam shape of the OPA radiation diagram. In particular, we verified that the zeros of the radiation diagram shift of less than 2 • when 0 < d < 100 nm. In order to verify if the change of the shape of the radiated beam can cause an effect on the insertion loss and on the crosstalk, it is necessary to simulate the overall device, considering the propagation in the multilayer structure and the physical size of the OPAs. For this purpose, we simulated the overall OWS for different values of the distance between the antennas in the OPAs. In all the simulations, the receivers were placed along the y axis in the optimal design positions, i.e. with distance y = 15.5 μm between adjacent receivers. By this parametric analysis, we verified that a change of d = 100 nm of the antenna distance causes a maximum variation of the insertion loss lower than 2 dB. This is due to the variation of the beam shape and of the multipath contribution and it occurs, in particular, when the further receiver R x−1 is addressed. In all the considered cases, the worst-case crosstalk remained below −18 dB. V. CONCLUSION The design of a 3 × 3 optical wireless router allowing on-chip optical wireless interconnections has been proposed and discussed. The OWS exploits reconfigurable OPAs made of five taper antennas with taper length L T = 5 μm, either at the transmitting and at the receiving nodes. The antennas in the arrays are aligned along the y axis with distance equal to the wavelength in the propagation medium (d = λ m ). 
The interconnection among the different nodes is obtained by steering the beams of the transmitting and receiving antennas, through the variation of the phase difference between the elements of the arrays. The proposed configuration improves the connection performance with respect to the one reported in [25] by about 4 dB in terms of crosstalk and about 10 dB in terms of insertion loss, for the same multilayer structure. This improvement is mainly due to the design choice of using N a = 5 antennas with distance d = λ m in the OPAs, which increases the maximum gain of the arrays and avoids the use of grating lobes for communication. A further degree of freedom, investigated to optimize the device, is the thickness of the cladding layer, which influences the multi-path propagation. A minimum value of the worst-case insertion loss IL 1 = 3 dB is achieved, thus improving the device performance by a further 3 dB. An interesting feature of the proposed OWS is its large bandwidth with respect to MRR or MZI switches. For example, if a WDM signal is used for communication with a channel spacing Δλ = 0.8 nm, about 120 channels can virtually be allocated in the simulated 100-nm bandwidth of the OWS. Given the broadband behavior, all the allocated WDM channels can be switched at the same time, thus making the power requirement for signal routing independent of the number of WDM channels. Consequently, the required energy-per-bit, given by the power over the aggregated bit rate (i.e., the bit rate per channel multiplied by the number of WDM channels), decreases with the number of WDM channels. Even though a direct comparison of performances is not straightforward, the proposed OWS can be a promising alternative to MRR- and MZI-based networks. Thanks to its broadband operation, it can allow WDM schemes, as in MZI networks. Moreover, thanks to its non-resonant behavior, it does not require the extremely fine tuning needed in MRR networks.

Loredana Gabriele received the bachelor's degree in electronic and telecommunication engineering from the Polytechnic University of Bari, Bari, Italy, in 2021, where she is currently pursuing the master's degree in telecommunication engineering. Her main research interests include on-chip wireless communication and integrated nanoantennas.

Since 2020, she has been an Associate Professor at Bologna University. Her research interests are propagation models for mobile communication systems, with a focus on wideband channel modeling for 5G systems, investigation of planning strategies for mobile systems, broadcast systems and broadband wireless access systems, analysis of exposure levels generated by wireless systems, and techniques for increasing spectrum efficiency. Her research activity includes participation in European research and cooperation programs (COST 259, COST 273, COST 2100, COST IC1004, and COST IRACON) and in the European Networks of Excellence FP6-NEWCOM and FP7-NEWCOM++.

From 1994 to 1999, he was a Researcher with the National Research Council, CSITE, University of Bologna. In 1999, he joined the Department of Engineering, University of Ferrara, Ferrara, Italy, where he is currently an Associate Professor. He has authored or coauthored more than 150 articles in refereed journals, including IEEE TRANSACTIONS, and in international conferences.
He has participated in several national and European research projects addressing short-range communication systems, 3G/4G/5G wireless networks, wireless video communications, and on-chip optical wireless networks. His research interests include digital transmission and coding and wireless communications, with emphasis on radio resource optimization and cross-layer design. He served as the Co-Chair for the Wireless Communication.

He is the coauthor of more than 30 articles in international journals and more than 70 papers in conference proceedings, two book chapters, and five patents. His past and current research interests include all-optical signal processing, fiber-optic transmission systems, reconfigurable nodes for optical networks, and applications of microwave photonics techniques to radar systems and wireless communications, including optical beamforming for 5G and photonics-assisted coherent MIMO radars.

Gaetano Vincenzo Petruzzelli was born in Bari, Italy, in 1955. He graduated in electrical engineering from the University of Bari in 1986. He is currently an Associate Professor of electromagnetics at the Department of Electrical and Electronic Engineering, Polytechnic University of Bari, and a member of the Electronic Engineering Doctorate Course. Over the years, he has dealt with various research topics, such as integrated plasmonic nanoantennas for wireless on-chip optical communications, innovative optical devices for on-chip optical interconnects, periodic structures for laser cavities based on the optical self-collimation property of mesoscopic structures, and plasmonic periodic nanostructures for the realization of plasmonic sensors. He has coauthored over 330 publications, 132 of which were published in international journals and 155 presented at international conferences. He was a member of the Management Committee of the MP0805 COST action "Novel Gain Materials and Devices Based on III-V-N Compounds." He acts as a reviewer of European and national projects.

Open Access funding provided by 'Università degli Studi di Ferrara' within the CRUI CARE Agreement.
7,684.6
2023-06-01T00:00:00.000
[ "Computer Science" ]
Prevalence of prediabetes and diabetes among economically backward tribes, Tamilnadu, India. Geetha K*1, Kanniammal C2, Kanmani S3. 1 Research Scholar, Department of Community Health Nursing, SRM College of Nursing, SRM Institute of Science and Technology, Kattankulathur - 603203, Chengalpattu District, Tamilnadu, India. 2 Department of Medical Surgical Nursing, SRM College of Nursing, SRM Institute of Science and Technology, Kattankulathur - 603203, Chengalpattu District, Tamilnadu, India. 3 Department of Community Medicine, SRM Medical College, SRM Institute of Science and Technology, Kattankulathur - 603203, Chengalpattu District, Tamilnadu, India. Keywords: Blood Sugar, Glucometer, Finger Prick Method, Substance Abuse.

India has the second largest concentration of tribal population in the world; Indian tribes constitute around 8.3% of the nation's total population. The aim was to assess the prevalence of prediabetes and diabetes mellitus among the tribal population of Kancheepuram district. A cross-sectional study design with a multi-stage cluster sampling technique was used, and house-to-house data collection was carried out for 85 Irula tribal people. The Irula are a Scheduled Tribe living in northern Tamil Nadu and the Nilgiri Hills. A structured questionnaire was used to assess demographic variables (gender, age, educational qualification, marital status, family status, occupation, monthly salary and religion). Measurements taken were height, weight, and blood sugar by the finger-prick method with a glucometer; values from 140 to 199 mg/dl were considered prediabetes and values of 200 mg/dl and above were considered diabetes. The prevalence of prediabetes and diabetes mellitus among the tribes was 49.4% and 25.9%, respectively; poor literacy, poverty and substance abuse make the tribes more prone to prediabetes and diabetes.

INTRODUCTION
An emerging trend of diabetes mellitus (DM) is observed worldwide: by 2025, its prevalence is projected to be 6.3%, a 24.0% increase compared with 2003. There will be 333 million diabetics (a 72.0% increase) by 2030 among individuals 20 to 79 years of age. The developing world (mainly central Asia and Sub-Saharan Africa) accounted for 141 million people with diabetes (72.5% of the world total) in 2003 (Narayan et al., 2006). Environmental factors such as obesity (central or general), physical inactivity, diet (saturated fats and trans-fatty acids) and socioeconomic factors are responsible for the development of DM (Qiao et al., 2007; Hu et al., 2003). A diet rich in polyunsaturated fats and long-chain omega-3 fatty acids reduces the risk of DM (Adler et al., 1994). The global prevalence of diabetes in 2014 was estimated to be 9% in adults aged 18+ years (World Health Organization, 2007). According to a study by Mohan et al., the overall prevalence of diabetes in India is 12% (Mohan et al., 2007).

MATERIALS AND METHODS
The Irula are a Scheduled Tribe that lives in northern Tamil Nadu and the Nilgiri Hills; they share characteristics of both tribal and other southern Indian communities. They have many animist beliefs but have had enough contact with Hindus to embrace many orthodox Hindu beliefs. The Irula live in villages with a special "pollution hut" for menstruating women, many mango and jackfruit trees, and ancestral temples with stones in them that represent the dead. Many live in two-room houses with a separate room with a sacred fire. They are known as collectors of honey and hunt with nets and spears (Tribals food society, 2020).
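As a small illustration of the classification rule and prevalence calculation described above, the following Python sketch applies the stated random-blood-sugar cutoffs (140-199 mg/dl for prediabetes, 200 mg/dl and above for diabetes) to a set of hypothetical readings; the numbers are invented for illustration only and are not the study data.

```python
def classify_rbs(rbs_mg_dl):
    """Classify a random blood sugar reading (mg/dl) using the cutoffs
    stated in the text: 140-199 mg/dl -> prediabetes, >=200 mg/dl -> diabetes."""
    if rbs_mg_dl >= 200:
        return "diabetes"
    if rbs_mg_dl >= 140:
        return "prediabetes"
    return "normal"

# Hypothetical readings, for illustration only (not the study data).
readings = [112, 145, 210, 158, 96, 181, 240, 133, 150, 205]
counts = {"normal": 0, "prediabetes": 0, "diabetes": 0}
for r in readings:
    counts[classify_rbs(r)] += 1

n = len(readings)
for label, c in counts.items():
    print(f"{label:12s}: {c}/{n} = {100 * c / n:.1f}%")
```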
Now the scenario has changed considerably; we conducted this study in Kancheepuram district, where the people have settled in plain areas but away from other villages. The study was conducted in Kollam village, Anjur, Karanai Puducherry and Nallambakkam. After obtaining consent from the participants, 85 subjects who fulfilled the inclusion criteria were selected by a non-probability convenience sampling technique. A cross-sectional study design was adopted for this study. A structured questionnaire was used to assess demographic variables such as age, marital status, religion, educational status, occupation, monthly salary and family status. Measurements taken were height, weight, and blood sugar by the finger-prick method with a glucometer; values from 140 to 199 mg/dl were considered prediabetes and values of 200 mg/dl and above were considered diabetes. Weight of the subjects was measured with a standard weighing scale and height was measured with an inch tape. Two blood pressure measurements were taken with a standard digital sphygmomanometer at an interval of 10 minutes. Random blood sugar was measured with a glucometer.

Major Findings of the Study
According to Table 1, the findings showed that among the 85 subjects most were women (70.6%); 29.4% belonged to the age group of 30-34 years, and most were between 35 and 49 years. Nearly 95.3% had no formal education, and 91.8% were married. None worked in the government sector. Most were of poor socioeconomic status (94.1%), with salaries between Rs. 1500 and 4500. According to Table 2, the prevalence of prediabetes was 49.4% and that of diabetes 25.9%. According to Table 3, the p values corresponding to the demographic variables were not significant, since none was less than 0.05; hence there is no significant association between the demographic variables and blood sugar level.

DISCUSSION
A study was carried out among the tribal population of the Athanavoor Primary Health Centre in the Yelagiri hill station of Vellore District, Tamilnadu. Individuals aged 25 to 65 years were selected, and blood samples were collected to estimate fasting blood sugar and serum cholesterol levels. Out of 104 participants, 29 (27.9%) were males and 75 (72.1%) were females, and the proportion of participants who had diabetes mellitus was 3.8% (Nikkin and Stanly, 2016). Another study was conducted in 410 adult Katkaris (219 women) of both sexes aged ≥18 years in three adjoining tehsils of the district. Information was obtained on sociodemographic parameters, educational level, dietary pattern, and substance abuse. Prevalence of overweight, hypertension, and diabetes was measured using standard field-based procedures and techniques. Katkaris, who are mostly landless manual laborers, subsist on a protein-poor, imbalanced diet. About half of the women and one-third of the men have a body mass index (BMI) <18.5 kg/m2, an indication of undernutrition; on the other hand, about 2% of participants were obese (BMI ≥30 kg/m2). The overall prevalence of hypertension and diabetes was 16.8% and 7.3%, respectively. In another study, subjects were recruited from five districts of the Kashmir valley using multistage cluster sampling by the probability proportional to size (PPS) technique.
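The association between demographic variables and blood sugar level discussed above is judged against the 0.05 significance threshold. A chi-square test of independence on a contingency table is one standard way of obtaining such p values; the Python sketch below shows the idea using SciPy with a purely hypothetical table (the specific test used in the study is not stated here).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender (female, male),
# columns = blood sugar category (normal, prediabetes, diabetes).
table = np.array([[15, 30, 15],
                  [ 6, 12,  7]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")
```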
A total of 6808 subjects were recruited in that study, including 2872 (42%) men and 3936 (58%) women with mean ages of 39.60 ± 20.19 years and 35.17 ± 16.70 years, respectively. About 1.26% (0.5% males and 0.9% females) had DM and 11.64% had prediabetes based on HbA1c cutoffs. Increasing age, body mass index and family history were significant risk factors, while smoking and a sedentary lifestyle increased the risk marginally. Although the prevalence of DM among the tribals of the Kashmir valley is lower than in the general population, the higher prediabetes-to-DM ratio may indicate a future trend of increasing DM prevalence in this disadvantaged subpopulation (Ganie et al., 2020). The prevalence of diabetes among indigenous groups varies: it is high in some groups, such as the New Zealand Maori and the Greenland Inuit, while it is low in some traditional populations, such as the Orang Asli of Malaysia (Roglic, 2016). The highest incidence of diabetes was reported among Pima Indians living in Arizona in the United States of America (Wild et al., 2004). The prevalence of diabetes and prediabetes among the tribal population of India is low compared with the general population. One study reported that 4.6 percent of the Raica community in Rajasthan had diabetes in 2002 and that it was absent among camel-milk-consuming people from the same community (Agrawal et al., 2007). A study done in the tribal population of Arunachal Pradesh in 2012 showed that the prevalence of diabetes was 8.3 percent and that of impaired glucose tolerance was 21.8 percent (Yajnik, 2009). A cross-sectional study done in Himachal Pradesh demonstrated that migration of traditional tribes into an urban community increases their cardiovascular risk factors: the prevalence of diabetes among urban tribals was 9.2 percent, whereas among traditional tribes it was 6.7 percent (Kapoor et al., 2014). A high prevalence of diabetes was reported among tribal people of northeast India, where 19.8 percent of the people had diabetes and another 12 percent had prediabetes (Zaman and Borang, 2014). A study documented that around 5 (Radhakrishnan and Ekambaram, 2015). According to a systematic review, the prevalence of diabetes in tribal India was 5.9 percent, ranging from 0.7 percent to 10.1 percent; the prevalence of impaired fasting glucose was 5.1-13.5 percent and that of impaired glucose tolerance was 6.6-12.9 percent (Oommen et al., 2016). In one study, the prevalence of diabetes mellitus among the study population was 3.3% and the prevalence of prediabetes was 7.6%; this low prevalence of diabetes mellitus among the tribal population compared with the general population may be due to the lower prevalence of insufficient physical activity reported among this population (Ford et al., 1997; Upadhyay et al., 2013). According to the present study, the prevalence of prediabetes and diabetes mellitus among the tribes was 49.4% and 25.9%, respectively, which is an alarming finding for society. The prevalence of diabetes was almost equal between the two sexes, which is comparable to the study by Kandpal et al. (2016). In this study, overweight (≥23 kg/m2) and hypertension were found to be significantly associated with diabetes mellitus after adjusting for other factors, which is similar to other studies (Zaman and Borang, 2014; Upadhyay et al., 2013). In another report, the prevalence of diabetes mellitus and prediabetes was 3.3% and 7.6%, respectively, and overweight and hypertension were found to be significantly associated with diabetes mellitus (Ruban, 2017). A two-stage cluster sampling method was used.
A modified WHO STEPS instrument/questionnaire was administered by the principal investigator, and the following variables were collected.

CONCLUSION
Poor literacy and poverty play a major role in the development of prediabetes and diabetes among tribals, and awareness is very poor among tribal people. Repeated awareness programmes are needed to control prediabetes and diabetes. Many people have no formal education, which makes updating health information among them very difficult. Effective information, education and communication packages should be developed to improve awareness among the Irula tribes, and measures should be taken to improve their educational status.

ACKNOWLEDGEMENT
We thank all the participants for their cooperation. Competing Interest: There is no competing interest. Funding Support: No funding support was received from any funding agency. Authors' Contribution: Mr Kanmani helped in data collection; Mrs Geetha prepared the manuscript with the suggestions given by Dr C. Kanniammal.
2,454.2
2021-02-23T00:00:00.000
[ "Medicine", "Materials Science" ]
Symmetry relations in wurtzite nitrides and oxide nitrides and the curious case of Pmc21 Binary and multinary nitrides in a wurtzitic arrangement are very interesting semiconductor materials. The group–subgroup relationship between the different structural types is established. Introduction GaN and InN in particular are probably some of the most prominent and most important semiconductor materials; indeed, the 2014 physics Nobel Prize was awarded for the invention of blue-emitting GaN LEDs (Nanishi, 2014). As well as LEDs, other optoelectronic semiconductor devices, such as solar cells, have been realized using alloys of InN and GaN (e.g. Aliberti et al., 2010). However, In, which is needed for bandgap values suitable for visible-light absorption, is a very scarce element, accounting only for 0.16 p.p.m. of the earth's crust (Webelements, 2020). While the binary system is limited to trivalent cations to account for the triple negative nitride anion, variations on the cation charges can be realized through more complex substitutions of the cations, such that the overall charge is maintained. In the simplest case, two trivalent cations can, for instance, be replaced by one divalent and one tetravalent cation. This is, for instance, realized in the Zn-IV-V 2 nitride materials ZnSiN 2 , ZnGeN 2 and ZnSnN 2 (Punya et al., 2011). However, more complex substitutions are also observed in compounds such as Li (+1) Al (+3) Si 2 (+4) N 4 (À3) (Ischenko et al., 2002) or Zn 3 (+2) Mo (+6) N 4 (À3) (Arca et al., 2018) to name just two. The situation can get even more complex when introducing O 2À anions, and can lead to complex structure-composition relationships. The crystal structures of many of these materials, however, can be clearly linked to the wurtzite-type structure (Baur & McLarnan, 1982) and this symmetry relationship can, in turn, be used to rationalize some of the electronic properties of these materials, as the electronic structure is linked to the atomic structure for obvious reasons. Relationships between crystal structure types are based on the relationships of the underlying symmetries through the use of crystallographic group theory (Mü ller, 2013). With the wurtzite type being the aristotype, i.e. the crystal structure with the highest symmetry in the system, the lower-symmetry variants, the hettotypes, can be accessed in cascades of group-subgroup descents. While it is beyond the scope of this article to give a conclusive overview of crystallographic group theory, one point is eminently important: when lowering the symmetry from a group to a subgroup, symmetry operations are only lost, and no other symmetry operations are added. This means that a crystal structure in the aristotype can also be expressed in a subgroup, but a crystal structure that genuinely crystallizes in the subgroup type cannot be expressed in the group. The subgroup has higher degrees of freedom which permits shifting of atoms (for instance, out of the centre of tetrahedra), or splitting of crystallographic sites, allowing occupation of different atom types with discrete ordering. One can also make the distinction between subgroups within the same point group, i.e. without a change in point-symmetry operations, known as klassengleiche subgroups (abbreviated by k), or isomorphic subgroups (abbreviated by i) if group and subgroup belong to the same space-group type. These groupsubgroup transitions are accompanied by an enlargement of the unit cell or a loss of unit-cell centring. 
If the translational symmetry is kept and only point-symmetry operations are lost, however, the subgroups are called translationengleich (abbreviated by t). The International Tables for Crystallography Volumes A1 and A (Wondratschek & Müller, 2004; Hahn, 2005) are a comprehensive tool for the establishment of relationships between space groups. A wide-ranging discussion of the group-subgroup relationships in wurtzite variants was published some time ago by Baur & McLarnan (1982), but it does, unfortunately, bear a few inaccuracies at some crucial points. Therefore, we set out to redevelop the symmetry relationships in the system specific to nitrides and oxide nitrides together with barely complete tables of nitrides and oxide nitrides in the different structural types as they appear in the Inorganic Crystal Structure Database (ICSD); we will outline some of the difficulties and pitfalls that can arise in the analysis of the symmetry relations in this system.

An overview of the wurtzite-related structure types
The most powerful tool for the graphical representation of group-subgroup relationships was developed by Bärnighausen (1980), where the relationships are represented in the form of a tree diagram with the highest-symmetry structure standing at the top. Essentially, four subgroups of the wurtzite type are found amongst the wurtzite-derived nitrides and oxide nitrides besides the aristotype (Fig. 1). For the sake of completeness, Fig. 1 also contains the relationship between the wurtzite type and the Lonsdaleite type (hexagonal diamond), which can be understood as the prototype for the atom stacking in these materials.

Figure 1. Bärnighausen tree for the wurtzite-derived nitrides and oxide nitrides. The transformation of the basis vectors is given below the maximal subgroup-type symbol and index, only if it changes. The respective structure types are given in green. The tables on the left depict the site splitting in the relationship between the Lonsdaleite type and the Na2SiO3 type. The atomic coordinates for AlN (Schulz & Thiemann, 1977) and Ge2LiN3 (Häusler, Niklaus et al., 2018) are taken from the literature as example cases for the wurtzite type and the Na2SiO3 type, respectively.

The first complication of this system arises from the transition of the hexagonal wurtzite type in space group P63mc to its maximal translationengleiche orthorhombic subgroup Cmc21. This is because one needs to transition from the hexagonal coordinate system with angles α = β = 90°, γ = 120° to an orthogonal coordinate system with α = β = γ = 90°. This is symbolized as a, a + 2b, c in the Bärnighausen tree (Fig. 1), which relates the basis vectors of the subgroup to those of the group. It also corresponds to a complex transformation of the atom coordinates between the two space groups which becomes necessary. Although no structure is observed in the maximal subgroup Cmc21, it forms an important link, as it is an intermediate space group to all lower-symmetry variants. If the atomic positions are given in decimals rather than fractions, they are no longer bound by symmetry to specific positions. The second complication arises because all subgroups are in orthorhombic space-group types, where one particular axis setting has been defined as the standard setting based on the symmetry operations.
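The a, a + 2b, c basis change from the hexagonal to the orthorhombic setting can be checked numerically. The short Python sketch below applies this transformation to a hexagonal cell, using approximate AlN lattice parameters purely for illustration, and verifies that the resulting cell is orthorhombic; it assumes the common convention in which the columns of the transformation matrix give the new basis vectors in terms of the old ones.

```python
import numpy as np

# Hexagonal lattice vectors (rows), using approximate AlN cell parameters.
a_hex, c_hex = 3.11, 4.98  # angstroms, approximate values for illustration
a1 = np.array([a_hex, 0.0, 0.0])
a2 = np.array([-a_hex / 2, a_hex * np.sqrt(3) / 2, 0.0])
a3 = np.array([0.0, 0.0, c_hex])

# Basis transformation a' = a, b' = a + 2b, c' = c (P6_3mc -> Cmc2_1);
# columns of P give the new basis vectors in terms of the old ones.
P = np.array([[1, 1, 0],
              [0, 2, 0],
              [0, 0, 1]])
new_basis = P.T @ np.vstack([a1, a2, a3])   # rows: a', b', c'

lengths = np.linalg.norm(new_basis, axis=1)
cosines = [new_basis[i] @ new_basis[j] / (lengths[i] * lengths[j])
           for i, j in ((0, 1), (0, 2), (1, 2))]
print("orthorhombic cell lengths (A):", np.round(lengths, 3))
print("cosines of cell angles (should be ~0):", np.round(cosines, 6))
```

The printed cell has b' = a*sqrt(3) and all angles of 90°, as expected for the C-centred orthorhombic description of the hexagonal lattice.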
Müller (2013) advocates the use of non-standard settings of space groups to avoid unit-cell transformations, but we use the standard settings of the space groups herein to facilitate the use of the work for the wider community. An example of this is the transition from Cmc21 to Pca21, where a and b are swapped and c is inverted. While the intermediate space group Cmc21 allows for higher degrees of freedom, it still contains only two independent crystallographic sites, one for the anions and one for the cations. To accommodate different cations or anions on distinct crystallographic sites, the symmetry needs to be lowered even further. On lowering the symmetry, the cation and anion sites split in different ways to accommodate different ratios of cations and anions, but also to form different ordering patterns. It has been outlined before that Pauling's rules are a decisive factor in the way these materials are built (Baur & McLarnan, 1982; Quayle et al., 2015). It is interesting to note that the nominally highest-symmetry subgroup for such an octet-rule-obeying arrangement, Pmc21, is not found in any existing material.

Wurtzite type
The binary III-V nitrides of main group 3 (apart from BN), namely AlN, GaN, InN and TlN, crystallize in the wurtzite-type crystal structure (Fig. 2, Table 1). It is worth noting that the wurtzite-type structure is non-centrosymmetric, i.e. it lacks a centre of inversion as a symmetry operation. Consequently, all its hettotypes are non-centrosymmetric too. Besides the pure binaries, a vast number of binary alloy compounds exist, where the cations are disordered on the single crystallographic cation site. This feature is used for effective bandgap tuning in these compounds, for instance in the system Ga(1−x)InxN (Jakkala & Kordesch, 2017). Not only do III-V nitrides form wurtzite-type crystal structures, but nominally ternary systems, such as BeSiN2 (Schneider et al., 1979) and ZnGeN2 (Larson et al., 1974), have been reported to adopt the disordered wurtzite type. In fact, we recently showed that introduction of oxygen on the anion sites leads to the formation of Zn1+xGe1−xN1−yOy compounds in the wurtzite type with disordered cations and anions (Breternitz et al., 2019). This is in line with multi-cation oxide nitrides that still adopt the wurtzite aristotype, such as Cd0.25Zn1.13Ge0.62ON (Capitán et al., 2000).

Figure 2. Structure of GaN (Paszkowicz et al., 2004) in the wurtzite type. All atoms are drawn as generic spheres.

The Na2SiO3 type
When the cation site is occupied by more than one atom type, an ordered case is normally energetically and geometrically favourable. The latter is easy to quantify through the ionic size of the different cations. If the cations are disordered on one crystallographic site, the difference in coordination environment between the different atom types is only possible within rather strict limits. Therefore, a tendency of the cations to order reduces the strain on the crystal structure, combined with an energetic advantage. To allow for an ordering of different cation types on different crystallographic sites, the positions need to be split in agreement with the ratio of the cations in the material. Take Zn2PN3, for instance (Fig. 3). The Zn:P ratio is 2:1 and hence a splitting of the crystallographic sites into a different ratio would unavoidably cause disorder on some positions.
The simplest 2:1 splitting of the cation 4a site in Cmc2 1 , as the maximal subgroup of the wurtzite type, is through an isomorphic symmetry descent of index three (Fig. 1). Thereby, the 4a site is split into one special 4a position with x = 0, i.e. lying in the bc plane, and one 8b general position (x, y, z) that imposes no restrictions on the atom positions. Given the multiplicity of the two different sites, there are only half as many atoms on the 4a positions as on the 8b positions in this arrangement, which are being filled 2:1 in the Na 2 SiO 3 type ( Table 2). The Zn and P atoms in Zn 2 PN 3 occupy the Wyckoff positions 8b and 4a, respectively. It is worth mentioning that the cation site splitting goes along with an anion site splitting. In pure nitrides, these two anion positions are both occupied by nitrogen, but the site splitting allows for a deviation of the coordination environment for both cation sites. Taking Zn 2 PN 3 as an example case for the Na 2 SiO 3 type ( Fig. 3), it can be easily depicted that the PN 4 tetrahedra form strands along the b directions, which is an effect of the special 4a positions on which phosphorus resides, whereas the ZnN 4 tetrahedra are interconnecting the strands in three dimensions. It is interesting to note that all nitrogen atoms are connected to phosphorus atoms as well as to zinc atoms, with the nitrogen on 4a being connected to two P 5+ and two Zn 2+ and the one on 8b to one P 5+ and three Zn 2+ . Therefore, both positions do not strictly fulfil the octet rule as discussed previously, but only approximate Pauling's rules (George et al., 2020) with bond strengths of +3.5 and +2.75, respectively. 3.2.1. The ordered defect variant Si 2 N 2 O. The Si 2 N 2 O type ( Fig. 4 and Table 3) can be seen as a special case of the Na 2 SiO 3 type, since both crystallize in the same space groupand would be located at the same position in the Bä rnighausen tree -but are different in the occupation of the crystallographic sites. The 4a position remains unoccupied, as compared with the Na 2 SiO 3 type. However, since this class of compounds bears two distinct anions, the anion site splitting mentioned above plays an important role here in that the two crystallographically independent anion sites are occupied by (1993) nitrogen (8b) and oxygen (4a). In this particular arrangement, every oxygen atom is neighbouring two silicon atoms, whereas every nitrogen atom is neighbouring three silicon atoms and thereby obeying Pauling's rules with formal bond strengths of 2 and 3, respectively. Finally, SiPN 3 , which should be more correctly written as (Si 0.5 P 0.5 ) 2 N 3 since Si and P are sharing sites, can be viewed as an intermediate between the Na 2 SiO 3 type and the Si 2 N 2 O type, since it only contains one sort of anion, which it shares with the Na 2 SiO 3 type, but exhibits an unoccupied 4a cation position like the Si 2 N 2 O type. In fact, both cations share the general 8b position and show no particular order as observed by Baldus et al. (1993). However, this result was determined on the basis of X-ray and electron diffraction only, but P 5+ and Si 4+ are isoelectronic ions and would hence scatter in much the same way and therefore not allow a truly reliable determination. The b-NaFeO 2 type When it comes to a 1:1 ratio of cations, there has been much discussion of the formally highest-symmetry crystal structure that complies with Pauling's rules in Pmc2 1 . 
This space group is a maximal klassengleiche subgroup of Cmc2 1 and is a space group in the transition to the enargite-type structure (see also Figs. 1 and 6). However, this space group has not been observed and nitrides with a cation ratio of 1:1 are reported to crystallize in the -NaFeO 2 type in the space group Pna2 1 , which has a unit cell twice as large as that of the hypothetical Pmc2 1 structure. Since this phenomenon has led to some confusion, we will discuss it in more detail in Section 4. One complication that makes the direct comparison of this structure sometimes difficult is that all atoms in the -NaFeO 2type structure lie on general 4a positions. Therefore, the choice of the unit-cell origin is arbitrary in this system and may necessitate a shift of the experimentally determined coordinates to reveal the group-subgroup relationship derived ones, as is illustrated in the table in Fig. 5. This can either be performed manually or through automatic tools, such as the program COMPSTRU (de la Flor et al., 2016). Since this structure obeys Pauling's rules, every anion is surrounded by two cations of every sort to equalize the charges on every anion position [ Fig. 5(b)]. The fact that all atoms lie on general positions further allows a high structural flexibility accommodating cations of distinctly different sizes, for instance (Table 4). One particular case of the -NaFeO 2 type is the zinc nitride halides. In principle, there are two possible ways to view them in terms of the structure type: one either regards them as a special case, where the two cation types are the same and the anion sites are occupied by different atoms, or as anti--NaFeO 2 type, where cations and anions switch sites. Since all atoms lie on a general position and have the same coordination, both ways lead to the structure and are hence Group-subgroup relationship between the wurtzite subgroup Cmc2 1 and the -NaFeO 2 -type as well as the -LiSiON-type structures. Structure representations of the -LiSiON type (a) (Laurent et al., 1981) and the -NaFeO 2 type (b) (Hä usler, Neudert et al., 2017) are drawn as general views with atoms as generic spheres. An origin shift (0, 0, À0.5) was applied to the coordinates in the documented structure of LiSiON (Laurent et al., 1981) to highlight the group-subgroup relationship. We note that the orientation of the polar axis c is inverted in the documented structures with respect to the group-subgroup derived one. interchangeable from a structural point of view. However, the authors would argue for the latter case as anti--NaFeO 2 type, since one further point needs to be considered: the occupation of the anion positions by two different types with different charges can only obey Pauling's rule if the cation positions are filled by cations of the same charge, in the simplest case the same cation sort. This mutual dependency is the exact opposite for the pure nitrides in the -NaFeO 2 type and the nitride halides should hence be regarded as anti--NaFeO 2 type. a-LiSiON type As the nitride halides demonstrate, the -NaFeO 2 type does not allow for an occupation of the cation sites and the anion sites with differently charged ions, while obeying Pauling's rules at the same time; instead, a different arrangement of the tetrahedra needs to be achieved. The number of next neighbours of the distinct crystallographic sites needs to be different for the different sites. 
This is achieved through a different symmetry descent from the common intermediate subgroup into the space group Pca21 in the α-LiSiON-type structure (Fig. 5). The cations form planes in the ac plane and so do the anions. The effect of this is that every oxygen atom has three lithium and one silicon neighbour, while the nitrogen atoms have three silicon and one lithium neighbour. While the formal bond strengths of 1.75 and 3.25, respectively, do not perfectly obey Pauling's rule, they are considerably closer to the expected values than the 2.5 throughout the β-NaFeO2 type. Given the rather special arrangement in this class, only two compounds, namely LiSiON (Laurent et al., 1981) and KGeON (Guyader et al., 1983), have been experimentally observed in this structure type.

The enargite type
Finally, one further structure type is observed in the nitride wurtzite system. Instead of a simple 1:1 or a 2:1 splitting of the crystallographic sites as observed in the β-NaFeO2 type and the Na2SiO3 type, cation and anion positions are split into three crystallographic sites (Fig. 6) with different multiplicities, thereby allowing a 3:1 occupation of the cations or anions. Taking Na3MoO3N as an example, sodium occupies a 2a and a 4b position, while Mo occupies a 2a position, with the situation for the anions being analogous. Therefore, the MoN4 tetrahedra are isolated within the crystal structure and completely surrounded by NaN4 tetrahedra (Fig. 7). Taking a simple view with Pauling's rules fails in this structure type, since all anions are surrounded by three sodium cations and one molybdenum cation. This is probably due to the different nature of the Mo-N/O bonding versus the Na-N/O bonding, with the former being distinctly more covalent. Therefore, a careful consideration of the bond lengths in addition to the simple counting of nearest neighbours is necessary to rationalize this ordering. This situation is even more obvious in the compound Li3SO3N, where the Li-O/N bonding is expected to be mostly ionic. Although this structure type shows a considerable degree of complexity, four compounds, Li3SO3N (Kurzman et al., 2013), Na3MoO3N (Arumugam et al., 2003), Na3WO3N (Elder et al., 1994) and Zn3MoN4 (Arca et al., 2018), span the range for this class from oxide nitrides to pure nitrides.

Table 4. Some of the nitrides, oxide nitrides and nitride halides in the (anti-)β-NaFeO2 type.
Figure 6. Group-subgroup relationship from the common intermediate subgroup.
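The formal bond-strength sums quoted for the different structure types (3.5 and 2.75 for Zn2PN3, 2 and 3 for Si2N2O, 1.75 and 3.25 for the α-LiSiON type) follow directly from Pauling's second rule: each anion receives, from every neighbouring cation, a bond strength equal to the cation charge divided by its coordination number. The minimal Python sketch below reproduces those sums from the coordination environments described in the text; it is an illustration of the counting argument only, not a crystal-chemical calculation.

```python
def bond_strength_sum(neighbours):
    """Sum of Pauling bond strengths (cation charge / coordination number)
    received by an anion from its surrounding cations."""
    return sum(charge / cn for charge, cn in neighbours)

# Coordination environments as described in the text: (cation charge, coordination number).
environments = {
    "Zn2PN3, N on 4a (2 P5+ + 2 Zn2+)": [(5, 4)] * 2 + [(2, 4)] * 2,
    "Zn2PN3, N on 8b (1 P5+ + 3 Zn2+)": [(5, 4)] + [(2, 4)] * 3,
    "Si2N2O, O (2 Si4+)":               [(4, 4)] * 2,
    "Si2N2O, N (3 Si4+)":               [(4, 4)] * 3,
    "LiSiON, O (3 Li+ + 1 Si4+)":       [(1, 4)] * 3 + [(4, 4)],
    "LiSiON, N (3 Si4+ + 1 Li+)":       [(4, 4)] * 3 + [(1, 4)],
}
for label, env in environments.items():
    print(f"{label}: {bond_strength_sum(env):.2f}")
```

Running the sketch returns 3.50/2.75, 2.00/3.00 and 1.75/3.25, matching the values discussed above and showing how far each arrangement deviates from the ideal value of 3 expected for a nitride anion.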
The difference between the hypothetical crystal structure in Pmc2 1 and the -NaFeO 2 type is, indeed, only found in the relative arrangement of the different cations to each other, which led Baur and McLarnan to speculate in 1982 that the energy difference between the two conformations may be small. Quayle et al. (2015) calculated the energy difference between the observed -NaFeO 2type structure and the hypothetical Pmc2 1 structure to be only 13 meV per formula unit in the case of ZnSnN 2 . From this point of view, it is considered interesting why the Pmc2 1 structure has not been observed (and in fact has not been observed for any of the wurtzite series of materials). From a crystallographic and crystal chemistry point of view, it is not quite as surprising to find that the structure in Pmc2 1 is not observed. Regarding the group-subgroup relationship as outlined in Figs. 1 and 6, it is evident that the crystallographic sites of the cations and anions split from 2b Wyckoff sites in the hexagonal wurtzite aristotype into 2a and 2b sites in Pmc2 1 . It is important to remember that Wyckoff sites not only show the multiplicity, but also indicate the site symmetry (which is m for both Wyckoff sites). In fact, the 2a and 2b sites in Pmc2 1 are special positions with the coordinates (0, y, z) and ( 1 2 , y, z), respectively. This essentially means that the two sites can accommodate different atom types, but that they are bound to lie in the bc plane (Fig. 8). This restriction does not only apply to the cations, but is true for the anions too. The restriction to the bc plane means that the M-N distances (M, cations) outside the bc plane are critically dependent on the a-axis length and this is the same for both types of cations. In essence, the tetrahedral coordination can either strongly distort into a disphenoid, 2 or will have to remain very similar for both ions. This arrangement cannot be favourable, since a distortion would create a strongly anisotropic bonding situation, which is hardly energetically favourable, and identical tetrahedron sizes for cations of different size would be peculiar, as this is one of the main drivers for ordering. This is in line with the observation of Quayle et al. (2015) that the energy difference between the -NaFeO 2 -type structure and the hypothetical Pmc2 1 structure is larger for ZnGeN 2 than for ZnSnN 2 , as the Shannon radii of Zn 2+ (0.6 Å ) and Ge 4+ (0.39 Å ) are more different than those of Zn 2+ and Sn 4+ (0.55 Å ) (Shannon, 1976). One further thing should be considered. It is not the atoms that follow the symmetry of the space group, but the space group that reflects the symmetry of the atomic arrangement. While the space group Pmc2 1 is not a likely choice, as outlined above, the particular arrangement of tetrahedra in this hypothetical structure could well exist. To rectify the symmetry restriction, one needs to lose the mirror planes perpendicular to the a axis. This could be done in a klassengleiche descent of index two into Pca2 1 with a doubling of the b axis, or through translationengleiche descents into one of the monoclinic space groups Pc or P2 1 . However, no crystal structure that would correspond to one of these subgroups has been reported in the ICSD. Conclusions The structural variability of nitrides and oxide nitrides in the wurtzite type and its subgroups is rich and gives rise to many different properties that can be attained in particular arrangements. 
Combining and comparing these different arrangements and putting them into their group-subgroup relationship can greatly aid the interpretation, but it needs to be performed with great care. In particular, group-subgroup relationships, as they exist in this family, between the hexagonal and the orthorhombic crystal system can be difficult due to the change of basis vectors, as well as those within the orthorhombic crystal system, due to different unit-cell settings. We have developed the relationship between the most important structural types that are found for wurtzite and wurtzite-derived nitrides and explained the differences between them, which mostly reside in the cation arrangements relative to each other. Finally, we showcased why ternary nitrides with a 1:1:2 stoichiometry are unlikely to adopt a structure in the space group Pmc2 1 , although much speculation has been devoted to this point. A thorough understanding of the relationship between electronic and atomic structures must, from our perspective, be preceded by a thorough understanding of the atomic structures themselves.
5,913.6
2021-03-23T00:00:00.000
[ "Physics" ]
Preliminary investigation of a luminescent colloidal quantum dots-based liquid scintillator Nanoparticles are appealing materials because of their versatility in addition to the uniqueness of their properties enabling one to use them in different applications. Radiation detection with colloidal quantum dots (cQDs) is one of the domains where the nanoscience entered lately. The luminescent nanocrystals that are cQDs are of particular interest in scintillation dosimetry, where they could play the role of the fluorophore in a liquid scintillator. The study presented in this paper investigates the response of a cQD-based liquid scintillator to X-ray radiation in order to characterize the dose and cQDs concentration dependence of the radioluminescence (RL) signal. For a beam energy of 180 kVp, the latter was found to be linear as a function of dose, with a majority of the signal (∼80%) coming from the cQDs and not the solvent, in this case hexane. Even with an ultra low concentration (μM), cQDs emit sufficient light to be detected. The RL intensity followed also a linear trend as a function of cQDs concentration, independently of the exposure time. Introduction The interest for nanotechnology has been constantly growing over the last decades because of the opportunities offered by nanoscale physics. This young science has first reached the medical sciences when nanoparticles (NPs) were used for drug delivery and optical imaging. It gained popularity in medical physics too where, nowadays, gold NPs are used as dose enhancers during radiotherapy treatments. Different declinations of NPs are available and allow one to choose the right type for the intended application. One of those types is particularly interesting for applications in scintillation dosimetry: colloidal quantum dots (cQDs), luminescent nanocrystals (NCs) of semiconductors. cQDs benefit from the quantum confinement of their charge carriers to have discrete energy levels. Their properties are then proper to the NC size and shape due to a modulation of the bulk medium properties of the same composition. One example of these properties is that they are brighter light emitters than their bulk counterpart. They also have a size-dependent broad absorption and narrow emission spectra [1]. The latter is of interest when considering the match of the scintillation wavelength to the photodetector sensitivity range. Finally, the surface chemistry of the NCs, and their small size, allows for the dispersion of cQDs in many physical supports (matrices), including water, in an ultra low concentration, of the order of the ppb. Consequently, the motivation of using a cQD-based liquid scintillator for 3D dosimetry is to get a liquid scintillator with easy control on the scintillation peak wavelength, on the type of solvent and to get a scintillator with low energy dependence. Even if the scintillating cQDs are composed of high-Z elements, the really low concentration of NCs in the liquid could allow one to get rid of this dependence. Lecavalier et al [2] have already investigated preliminarily the response of cQDs to cobalt irradiation and obtained promising results concerning the use of cQDs dispersed in hexane and in water for liquid scintillation applications. The study presented in this paper describes further investigation of the response to ionizing irradiation of a cQD-based liquid scintillator. The objective of this particular work is to characterize the scintillation of cQDs dispersed in hexane as a function of dose and concentration. 
cQDs synthesis and composition The NCs are synthesized in a three-neck flask: the cores are first grown with precursors heated at 250°C to get CdSe, than the shells are synthesized using the successive ion layer adsorption and reaction (SILAR) method [2]. Multiple shells surround the cQDs' core in order to passivate the dangling bonds responsible for the reactivity of the cQDs with their environment. They are successively composed of CdS, Cd 0.5Zn0.5S and ZnS as depicted in figure 1. The cQD diameter and elementary composition dictates the peak emission wavelength. Figure 2 presents different colors available for CdSe cQDs. The cQDs were dispersed in hexane and four dilutions were prepared at 1/5, 1/10, 1/20 and 1/30 of the initial concentration, which was 22.5 micro molar (μM). These dilutions fractions correspond respectively to 588, 294, 147 et 98 parts per billion (ppb). Irradiation conditions The cQDs liquid preparations were irradiated with an orthovoltage device Xstrahl 200 at 180 kVp with a dose rate of 369 MU/min. The field had a diameter of 5 cm and a SSD to Dmax of 20 cm. The cQDs were also irradiated at 6 MV (Varian Clinac iX) with a 10 x 10 cm 2 field. The 6 MV measurements were also achieved with a commercial liquid scintillator (Ultima Gold, Perkin Elmer) to get a basis of comparison of the cQDs' intensity. Detection set-up A CCD camera was used to image the vial/cuvette with the cQDs in hexane placed 1 cm away from the field applicator. The set-up was covered with black blankets to cut the residual ambient light contamination. A vial containing only hexane was also irradiated in the same conditions to account for the Cherenkov light production in the solvent. Dose and energy dependence The radioluminescence (RL) signal collected at 180 kVp was found to be linear with exposure time, hence with dose deposited in the cQDs. Figure 3 presents this linear dependence of the RL intensity over a dose range up to 2.5 Gy. It also shows that the majority (~80%) of the RL intensity is due to cQD scintillation with the remaining 20% from Cherenkov (or fluorescence) from hexane. This implies that the energy transfer between the solvent and the cQDs is good, which is promising since the solvent could be eventually changed to optimize this energy transfer. There is no need for a third component acting in the energy transfer because the cQDs have a broad absorption spectrum, which can be tuned by their size. Even if the NCs have a concentration in the μM range, the signal is sufficient for scintillation measurement. This ultra low concentration lets us believe that cQDs will not perturb the beam for dose measurement purposes. The collected signal at 6 MV for the cQDs in hexane represents about 3% of the total signal of the Ultima Gold scintillator. At first, this proportion may seem low, but one has to keep in mind that the concentration of fluorophore in the Ultima Gold is far more important than that of the cQDs in hexane. When comparing the proportion of Cherenkov produced in hexane only to the total scintillation signal of cQDs in hexane, we get that the scintillation signal is 120% that of the Cherenkov's. Since the cQDs emission wavelength is at 615 nm, we looked up only at the red channel of the CCD to see the change in that proportion: the percentage reaches up to 500%. RL intensity as a function of cQDs concentration Each dilution was irradiated in the same conditions to characterize the dependence of the RL intensity as a function of the cQD concentration. 
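The dose linearity described above can be quantified with a simple least-squares fit of the RL intensity against dose. The Python sketch below illustrates this with hypothetical intensity values (not the measured data), together with the roughly 80% cQD / 20% solvent split mentioned earlier; numpy.polyfit is used for the fit.

```python
import numpy as np

# Hypothetical RL readings (arbitrary units) versus dose (Gy); illustrative only.
dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
rl_total = np.array([105.0, 208.0, 312.0, 401.0, 515.0])   # cQDs in hexane
rl_solvent = 0.2 * rl_total                                 # ~20% Cherenkov/solvent share

slope, intercept = np.polyfit(dose, rl_total, 1)
residuals = rl_total - (slope * dose + intercept)
r_squared = 1 - np.sum(residuals**2) / np.sum((rl_total - rl_total.mean())**2)

print(f"slope = {slope:.1f} a.u./Gy, intercept = {intercept:.1f} a.u., R^2 = {r_squared:.4f}")
print(f"cQD contribution ~ {100 * (1 - rl_solvent / rl_total).mean():.0f}% of the total signal")
```

The same fitting procedure applies to the concentration dependence discussed in the following subsection, where the RL intensity is regressed against cQD concentration instead of dose.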
At the same time, we also collected the RL signal as a function of dose, which gave similar results to figure 3 for all concentrations. As for the concentration dependence, we found a linear trend of the signal as shown in figure 4, which is valid for the 4 exposure times tested. No saturation point, where the RL intensity starts to decrease, was reached so far as it could be observed in photoluminescence [3]. Beyond differences in sample concentration, the excitation sources, ionizing vs. visible, do not have the same energy fluence and energy values (eV vs keV). This could affect the energy transfer mechanism that stays optimal for ionizing radiation. Overall, the linearity observed offers a great way of normalizing the scintillation signal and a possibility to optimize the efficiency by increasing the concentration while remaining in the ppb range. Conclusion The first preliminary results presented here for cQD-based liquid scintillators are promising. The scintillator fulfills an important requirement for dosimeters to be suitable for applications, that is the linearity of its signal as a function of the dose. Also, it was shown that the concentration of the liquid samples can be taken as a normalization tool and could be chosen to be higher to get a better light production. Further investigation will look into the energy transfer dependence as a function of the solvent, in particular in toluene, alkyl benzene and water, the last two making it easier to manipulate as 3D dosimeters.
1,947.4
2017-05-01T00:00:00.000
[ "Physics" ]
Internet of Things: Visions, Technologies, and Areas of Application The internet of things (IoT), also called internet of all, is a new paradigm that combines several technologies such as computers, the internet, sensors network, radio frequency identification (RFID), communication technology and embedded systems to form a system that links the real worlds with digital worlds. IoT is recognized as one of the most important areas of the technology of the future and wins attention from a wide range of industries. Currently, a large number of smart objects and different types of devices are interconnected and communicate using the internet protocol. With an increase in the deployment of smart objects, the internet of things should have a significant impact on human life in the near future. To understand the development of the IoT, this paper reviews the current research of the IoT, key technologies, the main applications of the IoT in various field, and identifies research challenges. In this article, we present the main visions and the scope of the internet of things and the futuristic research and its fields of application. the main contribution of this review article is that it summarizes the current state of the IoT technology in several areas, and also the applications of IoT that cause side effects on our environment for monitoring and evaluation of the impact of human activity on the environment around us, and also provided an overview of some of the main challenges of IoT, and also shows that application of the IoT. This article presents not only the problems and challenges of IoT, but also solutions that help overcome some of the problems and challenges. Introduction This paper presents a review of literature on the subject of the internet of things technologies and their applications domains and the futuristic research areas. Several research studies have addressed and developed this topic with detailed studies synthesis about the fields of application of internet of things, and general visions [1,2]. Throughout this paper, we propose a state of the art on the technologies of the internet of things and their areas of application and current research in the areas of health, industry, robotics, transport and logistics (production, distribution, transportation, maintenance, marketing, management). The Internet of Things (IoT) is a computing concept that describes a future where every day physical objects will be connected to the Internet and be able to identify themselves to other devices. The term "internet of things" has emerged 20 years ago in downtown MIT auto-ID1 marked the beginning of a new era for trade and industry. First, the internet of things was regarded as a simple extension of the identification by radio frequency (RFID). But if you consider the possibilities of evolutions and the number of applications attached to the interconnection of objects, the internet of objects appears more as a revolution: during the 19 th century, machines have learned to execute commands during the 20 th century, they have learned to think, and in the 21 st century, they will learn to anticipate and perceive [3,4]. Over the past three decades, a fantastic job on the internet has led to the growth of the internet of things where are created intelligent interconnections between various objects. The main vision behind the internet of things is that devices shipped, also known as smart objects, are increasingly organizing connected between them [5]. 
The technology of the internet of things (IoT) establishes a connection between all things and the Internet. The IoT is widely applied in intelligent transportation, environmental protection, public safety, positioning, tracking, and intelligent monitoring and management. Intelligent buildings and residential areas based on information technology are, in turn, becoming increasingly important. In this article, we aim to provide a global perspective on the concept and development of the internet of things, including a critical review of application domains, enabling technologies, and research challenges. In fact, the community actively researching topics related to the IoT is still very fragmented and largely focused on single applications or single technologies. In addition, the participation of the networking and scientific communication communities is still limited, despite the potential impact of their contributions on the development of the field [6,7]. IoT technology establishes a connection between all things and the internet via sensing devices and smart tools for identification and management. The means of remote sensing include RFID, infrared sensors, GPS, and laser scanner devices. They are all connected to the internet to implement remote control and perception [8]. The IoT is widely applied in intelligent transportation, environmental protection, government services, public safety, smart homes, fire control, industrial monitoring, care for older persons, support for healthcare staff, etc. Several industrial, research, and standardization bodies are currently involved in developing solutions to meet the technological needs highlighted above. This survey gives a picture of the current state of the art of the IoT. More specifically, it provides readers with a description of the different visions of the internet of things paradigm from different scientific communities. The main objective is to give the reader the opportunity to understand what has been done and what remains to be done in this area. The rest of this article is organized as follows. In Section 2, we present and discuss other surveys on the Internet of Things. In Section 3, we introduce the essential IoT technologies. In Section 4, we give a general overview of the challenges of the Internet of Things. Application fields of the Internet of Things are described in Section 5. Section 6 is devoted to describing futuristic IoT applications. Section 7 concludes the survey with a number of remarks on possible approaches. Other Surveys on IoT In this part, we outline the main research areas considered by most surveys published in the field of the Internet of Things. Several good surveys have recently been published, each presenting the IoT from a different perspective: challenges [3], applications [8], and standards [6]. Among these, a complete overview of the IoT from three different angles (things, Internet, and semantics) was presented by Atzori and colleagues [7]. Several published surveys cover different aspects of IoT technology. For example, the survey by Eleonora Borgia [9] covers the main enabling communication technologies and the elements of wireless sensor networks (WSN). In addition, [12] discusses the IoT in terms of enabling technologies, focusing on RFID and its potential applications. IoT challenges are presented in [13] to bridge the gap between research and practical aspects.
An overview of the standards and challenges for the current IoT is presented in [14]. Beyond IoT technologies and research challenges, many surveys address applications of the Internet of Things. For example, in [15] the authors propose an IoT solution for managing bicycle parking very effectively. A project for smart water monitoring and management of the water cycle was presented in [16]. An application for tracking people and inventory in logistics was presented in [17]. Another important IoT application is in the mining industry: [18] developed a system for safety in mines, to prevent and reduce accidents in the mining sector, using RFID, Wi-Fi, and other wireless communication technologies. Another useful application is the use of chemical and biological sensors for early detection and diagnosis of disease in miners. The use of the IoT in transport and logistics is described in [19]. Zhang et al. [20] developed an intelligent control system to monitor the temperature and humidity inside refrigerated trucks using RFID tags, sensors, and wireless communication technology. To the best of our knowledge, however, no survey has focused on industrial IoT solutions. All the above surveys review the solutions proposed by academic and research communities and refer to the scholarly publications produced by the respective researchers. In this article, we review the problems, challenges, technologies, and IoT applications that have been proposed, designed, developed, and marketed, and that are useful for researchers and industrial organizations. This paper begins by providing a horizontal overview of the IoT. Next, we give an overview of some technical details relevant to the IoT enabling technologies. Compared to other survey papers in the field, our goal is to provide a more detailed summary of IoT technologies, research challenges, problems, and existing applications, to enable researchers and application developers to see which areas are covered by the Internet of Things. The contributions of this paper compared to the recent literature in the field can be summarized as follows: i. Compared to other survey papers in the field, this survey provides a deeper summary of the Internet of Things, which allows the reader to understand in detail what the Internet of Things is. ii. We provide an overview of some of the main challenges of the IoT presented in the recent literature and summarize related research. In addition, we explore the relationship between the IoT and other emerging technologies: sensor networks, RFID technology, and cloud computing. iii. We highlight the need for better horizontal integration between IoT services. iv. We also present the different fields of application of the Internet of Things in human life, along with detailed futuristic applications that illustrate further work in this area. Internet of Things Technologies A novel paradigm called the Internet of Things (IoT) has rapidly gained ground in recent years. IoT refers to "a global network of interconnected objects that are uniquely addressable based on standard communication protocols", whose point of convergence is the Internet.
The IoT is powered by the latest advances in a variety of communication devices and technologies, but the things included in the IoT are not just complex devices such as mobile phones; they also include everyday objects such as food, clothing, furniture, paper, landmarks, monuments, and works of art. These objects, acting as sensors or actuators, are capable of interacting with each other to achieve a common goal. In the following, we describe some very important technological aspects related to the IoT. RFID Technology (Radio Frequency Identification) RFID is an automatic and contactless technology that provides a communication interface with tagged objects through wireless data transmission to retrieve relevant information [21]. Radio frequency identification (RFID) allows automatic identification and data capture using radio waves, a tag, and a reader. The tag can store more data than traditional barcodes. Three types of tags are used. Passive RFID tags rely on radio frequency energy transferred from the reader to the tag to power the tag; they are not battery-powered. Passive RFID technologies present many advantages. Indeed, RFID tags can be seen as "electronic bar codes" that do not require objects to be handled one by one. In addition, no direct line of sight is necessary, more and more information can be stored in tags, and tag reading is quick (a reader can read up to 250 tags per second) [22]. Applications can be found in supply chains, passports, and electronic tolls. Active RFID tags can contain external sensors to monitor temperature, pressure, chemicals, and other conditions. Active RFID tags are used in manufacturing, hospital laboratories, and remote-sensing IT asset management. Semi-passive RFID tags use batteries to power the microchip while communicating by drawing power from the reader. Active and semi-passive RFID tags cost more than passive tags. In the IoT scenario, a key role is played by RFID systems, composed of one or more readers and several tags. These technologies help in the automatic identification of anything they are attached to and allow objects to be assigned unique digital identities, integrated into a network, and associated with digital information and services. Wireless Sensor Networks (WSN) A wireless sensor network (WSN) can be defined as a network of small embedded devices, called sensors, which communicate wirelessly following an ad hoc configuration [23]. Wireless sensor networks consist of spatially distributed autonomous sensor-equipped devices that monitor physical or environmental conditions and can cooperate with RFID systems to better track the status of things, such as movement, pressure, temperature, and location. WSNs may provide various useful data and are being utilized in several areas such as healthcare, government and environmental services (natural disaster relief), defense (military target tracking and surveillance), hazardous environment exploration, and seismic sensing. WSNs are also used for maintenance and tracking systems. For example, General Electric deployed sensors in its jet engines, turbines, and wind farms; by analyzing data in real time, General Electric saves time and money associated with preventive maintenance. Likewise, American Airlines uses sensors capable of capturing 30 terabytes of data per flight for services such as preventive maintenance.
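To make the WSN-to-internet link described above concrete, the following minimal Python sketch shows one common (but by no means the only) way a sensor node could push a reading to an IoT back end over plain HTTP; MQTT or CoAP would be typical alternatives. The gateway URL, sensor identifier, and reading are placeholders chosen for illustration, not values prescribed by the surveyed works.

```python
import json
import time
import urllib.request

ENDPOINT = "http://gateway.example.org/api/readings"  # hypothetical gateway URL

def read_temperature_celsius() -> float:
    # Placeholder for a real sensor driver on the WSN node.
    return 21.7

def publish_reading() -> None:
    payload = json.dumps({
        "sensor_id": "node-42",
        "temperature_c": read_temperature_celsius(),
        "timestamp": time.time(),
    }).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        # A real node would check the status and retry or buffer on failure.
        print("gateway answered with status", response.status)

if __name__ == "__main__":
    publish_reading()
```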
Middleware Middleware is a software layer interposed between software applications to make it easier for software developers to handle communication and input/output. Middleware gained popularity in the 1980s due to its major role in simplifying the integration of legacy technologies into new ones; it also facilitated the development of new services in distributed computing environments. The complex distributed infrastructure of the IoT, with numerous heterogeneous devices, requires simplifying the development of new applications and services, so middleware is an ideal fit for IoT application development. For example, Global Sensor Network (GSN) is an open-source sensor middleware platform enabling the development and deployment of sensor services with almost zero programming effort. Due to the heterogeneity of the participating objects, their limited storage and processing capabilities, and the huge variety of applications involved, a key role is played by the middleware between the things and the application layer, whose main goal is the abstraction of the functionalities and communication capabilities of the devices. The middleware can be divided into a set of layers: Object Abstraction, Service Management, Service Composition, and Application [24] (a toy sketch of this layering is given below, at the end of this section). Cloud Computing The essential aspects of cloud computing are captured in the definition provided by the National Institute of Standards and Technology (NIST): cloud computing is a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released, and that can be delivered as infrastructure, platform, or software as a service. The IoT generates an enormous amount of data from devices connected to the internet. Many IoT applications require massive data storage, huge processing speed to enable real-time decision making, and high-speed broadband networks to stream data, audio, or video. Cloud computing provides an ideal back-end solution for handling the huge data streams generated by an unprecedented number of IoT devices and humans and for processing them in real time [25]. Challenges of Internet of Things Security challenges can be addressed by training developers to integrate security solutions (such as firewalls) into products and by encouraging users to make use of the IoT security features built into their devices. Technology Challenges The Internet of Things, as a new global Internet-based information architecture facilitating the exchange of goods and services, is gradually gaining in importance. The best-known use of the IoT is based on RFID (radio frequency identification). In practice, the level of sophistication and the price of RFID can vary greatly, ranging from cheap passive devices without a power supply and with limited storage to active RFID tags with advanced storage and communication capabilities. Some of the data collected may seem insignificant, but data relating, for example, to a production process could be very valuable and thus require appropriate protection. In addition, all smartphones today carry position sensors, allowing them to continuously monitor their users. All these IoT devices in some form add value for individuals as well as businesses; however, they also introduce risks. Security Challenges IoT devices collect a large amount of information and therefore carry a great potential for privacy risks related to how the data are used and accessed.
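Returning to the layered middleware organization described at the start of this section (Object Abstraction, Service Management, Service Composition, Application), the snippet below sketches the idea in a few lines of Python. It is only an illustrative toy model of the layering; the class and method names are invented and are not the API of GSN or of any real middleware platform.

```python
class ObjectAbstraction:
    """Hides device-specific protocols behind a uniform read interface."""
    def __init__(self, drivers):
        self._drivers = drivers          # e.g. {"temp-1": callable, ...}

    def read(self, device_id):
        return self._drivers[device_id]()

class ServiceManagement:
    """Keeps track of which devices back which named services."""
    def __init__(self, abstraction):
        self._abstraction = abstraction
        self._services = {}              # service name -> list of device ids

    def register(self, service, device_id):
        self._services.setdefault(service, []).append(device_id)

    def query(self, service):
        return [self._abstraction.read(d) for d in self._services.get(service, [])]

class ServiceComposition:
    """Combines basic services into the higher-level view an application sees."""
    def __init__(self, management):
        self._management = management

    def average(self, service):
        values = self._management.query(service)
        return sum(values) / len(values) if values else None

# Application layer: ask for a composed value without knowing any device detail.
abstraction = ObjectAbstraction({"temp-1": lambda: 20.5, "temp-2": lambda: 22.1})
management = ServiceManagement(abstraction)
management.register("room-temperature", "temp-1")
management.register("room-temperature", "temp-2")
composition = ServiceComposition(management)
print("average room temperature:", composition.average("room-temperature"))
```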
As more and more IoT devices are used in all areas of everyday life, such as the health care sector, a large amount of private information is collected and stored [26]. As a growing number and variety of connected devices are introduced into IoT networks, the potential security threat grows. While the IoT improves business productivity and the quality of people's lives, it also increases the potential attack surface for hackers and other cybercriminals. A recent study found that 70% of the most commonly used IoT devices contain severe vulnerabilities. Domain of Applications Applications can be grouped into six main areas: transportation and logistics, the industrial domain, smart cities, intelligent environments, the personal and social domain, and the health domain. The domains are not isolated from one another but partially overlap, because some applications are shared. An example is product tracking, which is common to the industrial and health domains, since it can be used for monitoring food as well as the delivery of pharmaceuticals. Figure 1 shows the subdivision of the fields of application of the internet of things. Transportation and Logistics The transport and logistics sector is a promising sector for the economy; its importance is also measured by its direct impact on the competitiveness of the economic fabric, in terms of both exports and imports. This is why several applications are found in this field, such as logistics, assisted driving, and mobile ticketing. Logistics Applications of the IoT in the logistics field include an industrial application, the management of the supply chain and procurement. RFID tags can be attached to objects and allow the identification of materials and products such as clothing, food, and liquids. Their use helps to manage warehouses effectively and simplify inventory by providing precise knowledge of the current stock while reducing inventory inaccuracies. Assisted Driving Cars, trains, and buses, as well as roads, are equipped with sensors and actuators that provide important information to the driver or passengers to allow better navigation and improved safety. Hazardous materials transportation systems can also be monitored. In addition, government authorities would benefit from these applications by having more accurate information on road traffic patterns. Mobile Ticketing In this area there are posters in railway stations or airports that provide information regarding transportation, such as prices and timetables. These posters are equipped with NFC tags (Near Field Communication: a standards-based short-range wireless connection technology that enables simple and safe peer-to-peer interconnection between electronic devices). The user can then obtain information on several categories of options on the web, either by hovering the mobile phone over the NFC tag or by pointing the mobile phone at visual markers. The mobile phone automatically obtains information from the associated web services (stations, number of passengers, costs, availability, and type of services); this application also allows the user to buy tickets online [27]. The Monitoring of Products Transported Among the goods transported are fresh products such as fruit, meat, and dairy, which are vital parts of our food supply. Because they are transported over thousands of kilometers, they should be monitored to avoid uncertainty about the quality of the goods.
Building on ubiquitous computing and sensor technologies, such systems are able to keep the temperature and humidity within normal ranges. Health Care Domain The internet of things will play an essential role in developing intelligent services and in supporting and enhancing the activities of society and of people. These services enable people to live independently and to improve their health. The many benefits offered by internet of things technologies include tracking of objects and people (staff and patients), monitoring of medical parameters and drug administration, identification and authentication of people, automatic data collection, and remote sensing [28]. Tracking and Monitoring of Objects and Persons Tracking aims at the identification of a person or an object in motion. In the field of health, this includes tracking and monitoring of patient flow to improve workflow in hospitals, and motion tracking through choke points such as access to designated areas. Tracking is applied more frequently than continuous inventory monitoring (e.g., for maintenance, availability in case of need, and usage monitoring), and to the tracking of materials in order to avoid problems during surgery. Identification and Authentication Identification of patients aims to reduce incidents harmful to patients (e.g., wrong drug/dose/time/procedure) and to ensure a complete and current medical record. With regard to staff, identification and authentication are most often used to grant access and to improve employee morale by supporting patient safety; they are also used to meet the requirements of safety procedures and to prevent the theft or loss of products and instruments. Transport and Data Collection Nowadays, personal health devices can transmit data using short-range wireless technologies such as Bluetooth, Near Field Communication (NFC), ZigBee, or Bluetooth Low Energy (BLE), to mention a few [29]. Automatic data transfer and collection reduces the time needed to process forms and supports automated care, auditing, and the management of medical procedures. This function also relies on the integration of RFID technology. One application is a blind-navigation system that helps people with visual impairments find their way in a store while shopping. An RFID-based store system can use software to guide the visually impaired during shopping, as indicated in Figure 2: the supermarket is divided into shelf cells and passage cells, and RFID tags are distributed across the floor. A monitoring station (smartphone) maintains a Bluetooth connection with the user's RFID reader (a smart cane) [30]. Application for Persons with Disabilities The internet of things will play a very important role in the field of health in the near future, because the applications in this area are numerous and bring important benefits to people with disabilities in the social, economic, political, and cultural spheres. These applications make everyday activities easier for people with disabilities and increase their autonomy and self-confidence. Smart Environments Domain The smart environments domain is concerned with easy and comfortable use of intelligent spaces, whether comfortable workplaces and offices, a house, an industrial plant, or a leisure environment.
Comfortable Offices Sensors and actuators in offices can make our lives more comfortable in several respects: room heating and office lighting can change according to the hour of the day, appropriate monitoring and alarm systems can be installed, and energy can be saved by automatically switching off electrical equipment when it is not required. Leisure Environments The museum and the gym are two examples of intelligent leisure environments where internet of things technologies are exploited. For example, a museum can present different historical periods with widely differing climatic conditions; the museum premises automatically adjust to the external conditions (temperature, humidity, etc.). Another application is in the gym, where trainers can upload an exercise profile to each training machine for each athlete; when the athlete uses the machine, he or she is automatically recognized through the RFID tag connected to the machine, and health parameters are monitored during the workout. Personal and Social Domain By analogy with social networking services for human beings, the internet of things introduces the concept of social relationships between objects. The advantage is that the resulting network of objects can be shaped as needed to ensure its navigability [31]. In this area there are applications that enable the user to interact with others to maintain and build social relationships. Indeed, things can automatically trigger the transmission of messages to friends so that they know what we are doing or what we have done in the past, sharing a few things in common. Social Networking We can think of RFID tags that generate events about people and places to give real-time updates to users of their social networks, such as Twitter; these events are then collected and uploaded to social networking sites. Application user interfaces display a stream of events that friends have previously configured, and users can control their friend lists to decide which events are disclosed to which friends. The internet of things does not escape the trend of the social internet of things: in the future internet, many connections will be established not only among humans but also among devices (things that are more or less "intelligent"). Historical Queries Several applications are found in the field of historical queries, among them: loss (the creation of a search engine for things, which helps find items we do not remember or have left behind) and theft (an application that can send an SMS to users when stolen objects leave their place without permission). Industrial Domain A number of industrial internet of things projects have been conducted in areas such as agriculture, industrial food processing, environmental monitoring, and security surveillance. The internet of things will play an increasingly important role in the transport and logistics industries: as physical objects are equipped with barcodes, RFID tags, or sensors, transport and logistics companies can conduct real-time surveillance of physical objects moving from an origin to a destination through the supply chain, including manufacturing, shipping, distribution, and so on [32]. Intelligent environments also help improve automation in industrial installations through a massive deployment of RFID tags associated with production parts.
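To make the kind of real-time tracking just described more concrete, the following small Python sketch reconstructs the last known supply-chain stage of each tagged production part from a stream of RFID read events; the tag identifiers, checkpoint names, and event format are invented for the illustration and do not come from the surveyed systems.

```python
from dataclasses import dataclass

@dataclass
class ReadEvent:
    tag_id: str        # unique identifier stored on the RFID tag
    checkpoint: str    # reader location: manufacturing, shipping, distribution...
    timestamp: float   # seconds since epoch

# Hypothetical stream of reads collected by fixed RFID gates along the chain.
events = [
    ReadEvent("PART-0001", "manufacturing", 1_700_000_000.0),
    ReadEvent("PART-0001", "shipping", 1_700_050_000.0),
    ReadEvent("PART-0002", "manufacturing", 1_700_020_000.0),
    ReadEvent("PART-0001", "distribution", 1_700_100_000.0),
]

def current_stage(events):
    """Return, for each tag, the checkpoint of its most recent read."""
    latest = {}
    for event in sorted(events, key=lambda e: e.timestamp):
        latest[event.tag_id] = event.checkpoint
    return latest

for tag, stage in current_stage(events).items():
    print(f"{tag}: last seen at {stage}")
```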
Areas of Smart Cities The "smart city" is designed to make better use of public resources, increasing the quality of services offered to citizens while reducing the operational costs of public administration. Applications are found in many different areas, such as home automation, industrial automation, medical aid, mobile health care, elderly assistance, intelligent energy management and smart grids, automobile traffic management, and others [33]. There are also other applications in this field. Futuristic Research Areas The internet of things is a vision that embraces and cuts across several technologies at the confluence of nanotechnology, biotechnology, information technology, and cognitive science. Over the next 7 to 10 years, the internet of things is likely to develop quickly and shape a new "information society" and "knowledge economy". The Aerospace and Aviation The internet of things will strengthen the security of products and services by protecting them from counterfeiting. This is a problem facing aviation: 28 incidents in the United States have been caused by counterfeit components not conforming to safety standards. Such incidents could be avoided by introducing an "electronic pedigree" that records the life cycle of the critical elements of the aircraft, from their manufacture to their use. This is done by linking RFID technology to a dynamic database. This database can be coupled with other elements of the aircraft, such as different sensors (pressure, temperature, etc.) and safety systems. Telecommunications Based on multiple existing and future technologies (GSM, UMTS, LTE, NFC, Bluetooth, Wi-Fi, GPS, and sensors), the internet of things will promote the development of new applications and new services. For example, in the case of NFC, we can communicate simply and securely with different objects by "scanning" them with a mobile phone that transmits the data to a server. The interconnection of objects creates a vast data exchange network that even makes it possible to keep a means of communication in the event of a failure of the current telecommunications infrastructure [34]. In addition, the management of personal data by the SIM card offers increased security for authentication, the exchange of confidential data, and even mobile payment. The Intelligent Building and Instrumentation of the Buildings Among the economic sectors, the building sector is the largest energy consumer. Advanced internet of things technology in this sector can help to reduce the consumption of resources related to buildings (electricity, water) as well as to improve the level of satisfaction of the humans who inhabit them, whether workers in office buildings or tenants of private houses [35]. Solutions such as smart meters are becoming increasingly popular for measuring energy consumption and transmitting it, by phone or over power-line communication, to the metering data manager. Still following the logic of interconnected objects, this type of solution can be combined with other sensors (temperature, humidity) in order to provide general and specific information on the buildings, forming an intelligent and economical environment.
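To illustrate how smart-meter readings combined with ambient sensors could be turned into the kind of building-level information mentioned above, here is a small Python sketch that aggregates hourly consumption per building and flags unusually high usage; the data layout, building names, and alert threshold are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical hourly readings: (building id, kWh consumed, indoor temperature in C)
readings = [
    ("block-A", 12.4, 21.0),
    ("block-A", 13.1, 21.5),
    ("block-B", 30.2, 19.0),
    ("block-B", 29.8, 18.5),
]

ALERT_KWH_PER_HOUR = 25.0  # assumed threshold for flagging a building

totals = defaultdict(lambda: {"kwh": 0.0, "hours": 0, "temp_sum": 0.0})
for building, kwh, temp in readings:
    totals[building]["kwh"] += kwh
    totals[building]["hours"] += 1
    totals[building]["temp_sum"] += temp

for building, agg in totals.items():
    avg_kwh = agg["kwh"] / agg["hours"]
    avg_temp = agg["temp_sum"] / agg["hours"]
    flag = "HIGH USAGE" if avg_kwh > ALERT_KWH_PER_HOUR else "ok"
    print(f"{building}: {avg_kwh:.1f} kWh/h at {avg_temp:.1f} C -> {flag}")
```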
Logistics and Supply Chain Management Implementation of the internet of things in logistics and supply chain management presents many benefits: items equipped with RFID and smart shelves that follow the items in real time [36] can optimize many applications, such as automatic verification of the receipt of goods, stock tracking, tracking of out-of-stock items, real-time detection of shoplifting, real-time traceability using equipment fitted with RFID chips, exchange of product data, intelligent stock management, and automatic checking of product inputs/outputs. These applications will generate significant time and cost savings in logistics. Below is a summary table of the possible application areas of the internet of things. The Automotive Field In the next few years, the communication of vehicle data will gradually shift from embedded electronic systems to wireless sensor network software. Nissan Motor Company (Japan) runs the in-car network on web 2.0 plus telematics to achieve its automotive architecture. VIP automobiles are fitted with vehicle-axis acceleration sensors, alcohol sensors, and smoke sensors, whose data can be sent to the monitoring office over the general packet radio service (GPRS) in real time. There are also other applications, such as real-time vehicle diagnosis controlled by specific sensors: tire pressure, fuel consumption, and distance between vehicles. All detected data are then returned to the central system. Field of Robotics In future cities, robot taxis will swarm together, moving in flocks and providing service where it is needed in a timely and efficient way. Robot taxis can be calibrated to reduce congestion at bottlenecks in the city and at pickup service areas that are used more frequently. A robot is a perfect example of an intelligent physical device: it is usually a system which, by its appearance, gives the impression that it has an intent or a body of its own [37]. In closing, we anticipate that research on IoT technology will continue to evolve over the next decade. Several applications of internet of things technology, such as new communication and information processing technologies and applications in robotics and aviation, may become available. New approaches and models in the field of logistics, for example automatic verification of receipt of goods, stock tracking, monitoring of out-of-stock items, or theft detection, may become available through the use of IoT technology. Conclusion The internet of things offers the possibility of seamlessly merging the real and the virtual worlds, thanks to the huge deployment of embedded systems. In fact, the range of design options for IoT systems is quite wide, whereas the set of open and standardized protocols is much smaller. The internet of things has become a focus for research and development over the last 15 years. A large amount of investment in the internet of things has been, and is still being, made by government agencies and industry worldwide. As described in this document, the internet of things gives an idea of the possibilities offered by a number of existing and future technologies which, together, could in the next 5-10 years profoundly change the way our societies function.
It is an evolution of our information and communication systems that will result in the internet of things, but the acceptance of the IoT by society will be strongly linked to respect for privacy and the protection of personal data. We hope that this survey will be useful for researchers and practitioners in the field, helping them to understand the huge potential of the IoT and its main fields of application, and contributing to turning the IoT from a research vision into reality. Future Directions According to our survey of internet of things technologies and applications, much research is still needed to make the IoT paradigm come true. In this section, future research directions are suggested: development of many applications closely or directly applicable to our present life, such as the personal and social domains, mobility and transportation, and enterprise and industrial areas. Security and privacy issues should be taken very seriously, since the IoT not only handles huge amounts of sensitive data (personal data, business data, etc.) but also has the power to influence the physical environment through its control capabilities. Cyber-physical environments therefore need to be protected from any form of malicious attack. Identify, categorize, and classify the IoT technologies, devices, and services that will drive IoT development and support the IoT vision. Design architecture standards with well-defined abstract data models, interfaces, and protocols, as well as concrete technology-neutral bindings, in order to support the widest possible range of human beings, software, objects, and intelligent devices. Develop new frameworks for global identification schemes, identity management, identity encryption, authentication, and the creation of global directory lookup and discovery services for the IoT with its various identification schemes.
7,754.6
2017-11-29T00:00:00.000
[ "Computer Science" ]
Nonlinear Resonance of Cavities Filled with Bubbly Liquids: A Numerical Study with Application to the Enhancement of the Frequency Mixing Effect This paper studies the nonlinear resonance of a cavity filled with a nonlinear biphasic medium made of a liquid and gas bubbles at a frequency generated by nonlinear frequency mixing. The analysis is performed through numerical simulations by mixing two source signals of frequencies well below the bubble resonance. The finite-volume and finite-difference based model, developed in the time domain, simulates the nonlinear interaction of ultrasound and bubble dynamics via the resolution of a differential system formed by the wave and Rayleigh-Plesset equations. Some numerical results, consistent with the literature, validate our procedure. Other results reveal the existence of a frequency shift of the cavity resonance at the difference-frequency component, which rises with pressure amplitude and evidences the global changes undergone by the bubbly medium under finite amplitudes. Finally, this work shows the enhancement of the amplitude of the difference-frequency component generated by parametric excitation using the nonlinear resonance shift, which is more pronounced when the second primary frequency is constant, the first one is varied to match the nonlinear resonance, and both have the same amplitude. The sound speed, attenuation coefficient, compressibility, and nonlinear parameter acquire a dispersive dependence around the bubble resonance. The nonlinear interaction of ultrasound and bubble oscillations must be understood to take advantage of these properties in different applied frameworks such as sonochemistry [6], medicine [7], and others [8,9]. Lauterborn, in [10], studied the nonlinear behavior of a single bubble in an acoustic field to analyze the effect of the pressure amplitude on the bubble resonance and concluded that a shift of the bubble resonance exists and depends on pressure amplitude. The nonlinearity of the medium is responsible for the generation of harmonics from the fundamental frequency and generates combinations of frequencies by nonlinear frequency mixing (sum frequency and difference frequency) when several ultrasonic signals travel through the medium [11]. These effects have multiple applications. Medical imaging can be generated from higher harmonic components [12]. Underwater exploration or transmission and nondestructive testing are fields where the difference-frequency signal is of great interest because of its low attenuation, good directivity, and high penetration [13,14]. Characterization and detection of bubbles are also attractive applications of the frequency mixing phenomenon [15][16][17][18]. Several studies based on linear models have been performed to understand the behavior of ultrasonic waves in bubbly liquids inside a cavity [19][20][21]. Omta studied the behavior of a bubbly liquid cloud in [22], showing that the nonlinear response emitted from the cloud, at frequencies much lower than the bubble resonance, is determined mainly by its total gas content. Other studies that analyze the behavior of standing ultrasonic waves are based on nonlinear models [23,24]. In those papers, both the sound speed and the resonance frequencies are calculated without taking into account the amplitude of the waves [2,3]. In this paper, we aim to show that the pressure amplitude of the signal changes the resonance of the cavity (and the sound speed).
The dependence of the resonance frequency on drive amplitude has been observed in solids, for which the nonlinear features of ultrasound are used in areas such as damage diagnostics in materials [25], granular media and dynamic earthquake triggering [26], and fluids in closed tubes of variable cross section [27]. Omta also analyzed in [22] the signal emitted from a bubbly liquid cloud as a function of the amplitude of the acoustic perturbation, concluding that the frequency of this signal undergoes a variation that is amplitude dependent. To our knowledge, that paper, and more specifically its Figures 4-6, was the very first demonstration of the shift of the resonance of a bubbly cloud with pressure amplitude. Matsumoto and Yoshizawa, in [28], also detected the shift with pressure amplitude of the resonance of a cluster containing a bubbly liquid. This effect has also been studied in bubbly liquids for a resonance frequency associated with the multiple scattering of bubbles, which changes as a function of the amplitude of an incident Gaussian pulse [29]. The objective of this work is to study the variation with pressure amplitude of the resonance of a one-dimensional resonator filled with a fluid made of a liquid and gas bubbles when working in the nonlinear regime by mixing two finite-amplitude continuous excitation signals. Frequencies well below the bubble resonance are used to take advantage of the nonlinearity of the dispersive medium with relatively low attenuation. In Section 2, we present the physical problem and the corresponding mathematical model used in this work. Several numerical experiments performed by varying the amplitude at the source are shown in Section 3. They allow us to observe the nonlinear resonance phenomenon of the cavity at the difference-frequency component generated by nonlinear frequency mixing. This resonance frequency shift is used to maximize its amplitude. Similarities with classic results are also commented on. Section 4 gives the conclusions of this work. Materials and Methods We consider a one-dimensional cavity of length L filled with a mixture of water and air bubbles. Under the Rayleigh-Plesset approximation, we suppose that, among other things, the bubbles are spherical and have the same size. We also assume that they are evenly distributed in the liquid. The model assumes that bubbles are the only source of attenuation, dispersion, and nonlinearity. Buoyancy, Bjerknes forces, and viscous drag forces are not considered in this work. The interaction between the acoustic pressure p(x, t) and the volume variation of the bubbles v(x, t) = V(x, t) − v_0g is modeled by the wave equation, Equation (1), and a Rayleigh-Plesset equation, Equation (2) [3,30], where x is the one-dimensional space coordinate, t is the time, V is the current volume of the bubble, and v_0g = (4/3)πR_0g^3 is the initial bubble volume, with R_0g the initial radius. In Equation (1), c_0l and ρ_0l are the sound speed and the density of the liquid at equilibrium, and N_g is the density of bubbles, i.e., the number of bubbles per m^3. In Equation (2), δ = 4ν_l/(ω_0g R_0g^2) is the viscous damping coefficient of the bubbly fluid, in which ν_l is the kinematic viscosity of the liquid and ω_0g is the resonance frequency of the bubbles, where γ_g is the specific heat ratio of the gas, p_0g = ρ_0g c_0g^2/γ_g is its atmospheric pressure, and ρ_0g and c_0g are the density and sound speed of the gas at equilibrium.
The parameter η = 4πR_0g/ρ_0l and the nonlinear coefficients a = (γ_g + 1)ω_0g^2/(2v_0g) and b = 1/(6v_0g) are constant. The numerical experiments last a total time T_t. In the following studies (Section 3), the value of T_t is high enough to ensure that the steady state of the waves is reached. The system is closed by supposing that the liquid and the bubbles are unperturbed at the onset of the studies, that the resonator is excited by a time-dependent pressure source s(t) placed at x = 0, and that a free-wall condition is imposed at the reflector. To solve this differential system, we use a numerical model developed in [24], based on a finite-volume method in the space dimension and a finite-difference method in the time domain. The frequency components of the time-dependent solution used in the next section are obtained by applying a fast Fourier transform (a minimal illustration of this step is sketched below, after the list of numerical experiments). Results The objective of this section is, using the phenomenon known as the nonlinear frequency shift [22,28,29], to show the enhancement of the difference-frequency component generated in a cavity that contains a bubbly liquid by nonlinearly mixing two signals of different frequencies. Although very few studies exist in the literature, the dependence of the resonance of a bubble cloud on pressure amplitude is a phenomenon, known as the nonlinear frequency shift, that has been observed previously in seminal papers by Omta [22], Matsumoto and Yoshizawa [28], and Doc et al. [29]. In the Appendix, we show some results obtained with the model described in the above section that corroborate the conclusions of these papers: (i) the increase of pressure amplitude induces the nonlinear frequency shift of the cavity resonance; (ii) this effect relies on the softening of the bubbly liquid, which is due to the variation of the average volume of the bubbles; and (iii) this nonlinear effect is more pronounced at higher void fraction in the cavity. We now focus on the application of the softening behavior of the bubbly liquid, taking advantage of the nonlinear frequency shift to strengthen the amplitude of the difference-frequency component generated in the context of the nonlinear frequency mixing of two signals of different frequencies [1,31]. To this purpose, the analysis is performed by means of a comparison of several numerical experiments in which we search for the highest response by parametric emission, i.e., the maximal response of the system at the difference frequency f_d = f_2 − f_1: (1) by setting the first primary source frequency f_1 at a constant value and moving the second primary source frequency f_2, with the same constant source amplitude for both primary signals, p_01 = p_02; (2) by setting f_1 at a constant value and moving f_2, considering two subcases, (a) a constant p_02 and a varied p_01 and (b) a constant p_01 and a varied p_02; (3) by setting f_2 at a constant value and moving f_1, with the same constant value p_01 = p_02.
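As a minimal illustration of that post-processing step, the following Python sketch builds a synthetic steady-state pressure trace containing two primary tones and a difference-frequency tone, and reads off the amplitude of each component from its FFT; the signal is invented for the example and is not output of the finite-volume/finite-difference model.

```python
import numpy as np

fs = 20e6                       # sampling frequency (Hz), assumed
t = np.arange(0, 2e-3, 1 / fs)  # 2 ms of steady-state signal

f1, f2 = 700e3, 900e3           # primary frequencies (Hz), as in Case 1
fd = f2 - f1                    # difference frequency (Hz)

# Synthetic trace: two primaries plus a weaker difference-frequency component.
p = 6.5e3 * np.sin(2 * np.pi * f1 * t) \
  + 6.5e3 * np.sin(2 * np.pi * f2 * t) \
  + 2.0e3 * np.sin(2 * np.pi * fd * t)

spectrum = np.fft.rfft(p)
freqs = np.fft.rfftfreq(p.size, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / p.size   # single-sided amplitude spectrum

for label, f in (("f1", f1), ("f2", f2), ("fd", fd)):
    k = np.argmin(np.abs(freqs - f))        # nearest FFT bin
    print(f"{label} = {f/1e3:.0f} kHz -> amplitude of about {amplitude[k]/1e3:.2f} kPa")
```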
To this end, in this section the pressure source we use is s(t) = p_01 sin(ω_1 t) + p_02 sin(ω_2 t), where ω_1 = 2πf_1 and ω_2 = 2πf_2. The cavity length is set to fit the linear resonance at the difference frequency f_dL = 200 kHz, L = λ_dL/2 = 0.0031 m, where λ_dL = c_dL/f_dL is the wavelength and c_dL = 1222.8 m/s is the sound speed in this biphasic and dispersive medium at this frequency [3]. We study the nonlinear resonance shift of the difference-frequency component of the pressure signal in the cavity. For each numerical experiment 1 to 3, simulations are performed varying the source amplitude. For each amplitude, we apply a frequency sweep, such that the difference frequency f_d stays around the linear resonance f_dL, in increments δf = 10 Hz, to evaluate the highest difference-frequency pressure amplitude reached in the cavity, p_dm, at each frequency; the maximal value p_dmax over the frequency range is then localized. Note that we work at primary frequencies chosen to be close to half the resonance frequency of the bubbles, since the nonlinearity of the dispersive medium is high at these frequencies [3]. Case 1. The first primary frequency is constant, f_1 = 700 kHz, whereas the second primary source frequency f_2 is moved from 896 kHz up to 902 kHz. The source amplitude is varied from p_01 = p_02 = 1 kPa up to p_01 = p_02 = 6.5 kPa. Figure 1 shows the result, i.e., p_dm as a function of frequency (around f_dL) over the amplitude range. It is seen here that the behavior of the difference-frequency component is similar to what has previously been observed at other frequencies, i.e., it shows the same main properties when the amplitude is raised as the ones described in the literature and in the Appendix [22,28,29], which means that the amplitude-dependent behavior of the medium can also be characterized through the behavior of f_d. The resonance of the cavity at the difference frequency clearly depends on pressure amplitude, i.e., a nonlinear frequency shift exists. This means that the softening of the bubbly liquid in the cavity with pressure amplitude also affects the difference-frequency component. In this case, at p_01 = p_02 = 6.5 kPa, the resonance is at f_d = 197.6 kHz, denoted by f_dNL, the frequency shift is Δf_dNL = 2.4 kHz, and the highest value is p_dNL = 19.794 kPa, which is 304.5% of the source amplitude. Since L is constant, this frequency shift means that the sound speed in the medium is c_dNL = 2Lf_dNL = 1208.12 m/s. Also, the symmetry of the curves around the linear resonance observed at the lowest amplitudes is lost when the amplitudes rise, and the nonlinear attenuation reduces the ratio of p_dmax to the source amplitude. Figure 2 presents the frequency shift Δf obtained as a function of pressure amplitude at the source p_01 = p_02 (a), as a function of the average bubble volume increase Δv (b), and the maximal difference-frequency pressure amplitude reached in the cavity p_dmax over the frequency range as a function of pressure amplitude at the source p_01 = p_02 (c).
The fitting curves are also displayed (green color). A 4th-degree polynomial fit is obtained for Figure 2(a). This means that the frequency shift increases strongly as the pressure amplitude rises. The linear dependence of the frequency shift observed in Figure 2(b) proves that the softening of the medium is due to the increase of the mean bubble volume, which raises the compressibility, and thus the nonlinearity, of the bubbly medium. Figure 2(c) is a consequence of the two other diagrams, and a 3rd-degree polynomial fit is obtained, which means that the nonlinearity of the medium due to the increase of the pressure amplitude generates a huge difference-frequency amplitude. The source amplitude for this study is p_01 = p_02 = 6.5 kPa, and we analyze the difference-frequency generation by comparing the results when we take the nonlinear resonance frequency shift into account and when we do not. The study is thus performed at two difference frequencies, f_dL = 200 kHz, which is the linear resonance, and f_dNL = 197.6 kHz, which is the nonlinear resonance found above at this source amplitude (Figure 1), i.e., the frequency that produces the highest response by parametric emission in this case. The primary frequencies at the source are set at f_1L = 700 kHz and f_2L = f_1L + f_dL = 900 kHz for the linear resonance case and at f_1NL = 700 kHz and f_2NL = f_1NL + f_dNL = 897.6 kHz for the nonlinear resonance case. Figure 3 shows the pressure amplitude distribution along the cavity of the primary frequencies f_1L, f_2L, and the difference frequency f_dL, together with the corresponding nonlinear case. Case 2. The first primary frequency is constant, f_1 = 700 kHz, whereas the second primary source frequency f_2 is moved from 896 kHz up to 902 kHz. Two configurations are considered here. For the first one, the source amplitude of the second primary component is constant, p_02 = 6.5 kPa, whereas the source amplitude of the first primary component is varied from p_01 = 6.5 kPa up to p_01 = 8.125 kPa. Figure 4(a) shows p_dm as a function of frequency (around f_dL) for three amplitude values. At p_01 = 8.125 kPa, the resonance is at f_dNL = 196.54 kHz, the frequency shift is Δf_dNL = 3.46 kHz, and the highest value is p_dNL = 22.276 kPa, which is 274.2% of the source amplitude. Since L is constant, this frequency shift means that the sound speed in the medium is c_dNL = 2Lf_dNL = 1201.65 m/s. For the second configuration, the source amplitude of the first primary component is constant, p_01 = 6.5 kPa, whereas the source amplitude of the second primary component is varied from p_02 = 6.5 kPa up to p_02 = 8.125 kPa. Figure 4(b) shows p_dm as a function of frequency (around f_dL) for three amplitude values. At p_02 = 8.125 kPa, the resonance is at f_dNL = 196.24 kHz, the frequency shift is Δf_dNL = 3.76 kHz, and the highest value is p_dNL = 23.674 kPa, which is 291.4% of the source amplitude. Since L is constant, this frequency shift means that the sound speed in the medium is c_dNL = 2Lf_dNL = 1199.81 m/s. It must be noted that the nonlinear curves are not symmetric. Also, the maximal values obtained in Case 2 are lower, relative to the source amplitude (even with a higher source amplitude), than the one obtained in Case 1, for which the same amplitude at the source is applied to both primary frequencies. Case 3. The second primary frequency is constant, f_2 = 900 kHz, whereas the first primary source frequency f_1 is moved from 698 kHz up to 704 kHz. The source amplitude is set at p_01 = p_02 = 6.5 kPa.
Figure 5 shows the comparison of p_dm as a function of frequency (around f_dL) over the amplitude range in this case and in Case 1. The general behavior observed in Case 1 is also apparent here, especially a clear nonlinear frequency shift of the resonance at the difference frequency compared to the linear resonance. However, the efficiency of the frequency-mixing process is higher in Case 3, giving a higher amplitude of the difference-frequency component, p_dNL = 20.825 kPa (320.4% of the source amplitude, at f_dNL = 197.38 kHz with Δf_dNL = 2.62 kHz) instead of p_dNL = 19.794 kPa (304.5% of the source amplitude, at f_dNL = 197.6 kHz with Δf_dNL = 2.4 kHz) in Case 1. In Case 3, the sound speed in the medium is c_dNL = 2Lf_dNL = 1206.83 m/s instead of c_dNL = 2Lf_dNL = 1208.12 m/s in Case 1. This effect is most likely due to the following. At finite amplitude, the bubble resonance also undergoes a variation from its linear value, f_0g = ω_0g/(2π) = 1.34 MHz, toward lower frequencies, and this frequency shift increases with amplitude [10]. Thus, when f_2 is constant, at high amplitudes the difference between the nonlinear bubble resonance and the primary frequency f_2 is significantly reduced, whereas when f_1 is constant that difference depends on f_2, which is moving (toward lower values when fitting the nonlinear resonance). Since the primary frequency f_2 is the one that gives most energy to the difference-frequency component [24], the f_2 component of the source then tends to excite the bubbles with more intensity, in such a way that the bubbles oscillate around a mean volume that is higher in Case 3 than in Case 1. The softening of the bubbly medium that takes place in the cavity is thus more pronounced in Case 3 than in Case 1 (see the Appendix and Refs. [22,28,29]), and this gives way to a frequency shift of the difference frequency in the cavity that is higher and produces more intensity in Case 3 than in Case 1. The results obtained in this section suggest the following new points, which are of great interest in the framework of nonlinear ultrasound in bubbly liquids: the nonlinear frequency shift of the cavity resonance (decrease of sound speed, softening of the medium by increase of the effective bubble volume with pressure amplitude) can be applied to the mixing of two signals of different frequencies to strengthen the nonlinear generation of the difference-frequency component; the comparisons of Cases 1 and 2 and of Cases 1 and 3 performed above, with primary frequencies well below the bubble resonance, show that the enhancement of the difference-frequency component at the nonlinear resonance is most effective (in relation to the source amplitude) when the second primary frequency is constant, the first one is varied to match the nonlinear resonance, and both primary component amplitudes are set at the same value; an unbalanced contribution, in terms of amplitudes, of the primary signals limits the equilibrium necessary to maximize the difference frequency but promotes the generation of harmonics of the strongest primary signal; and a variation of the second primary frequency instead of the first one limits the efficiency of the nonlinear frequency mixing.
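The sweep-and-localize procedure used throughout these cases (stepping the difference frequency in 10 Hz increments around f_dL and recording the largest difference-frequency amplitude reached in the cavity) can be summarized by the short Python sketch below. The function simulate_p_dm standing in for the finite-volume/finite-difference model is a placeholder with an invented response curve, so the routine only shows the bookkeeping of the sweep, not the physics.

```python
import numpy as np

def simulate_p_dm(f_d_hz: float, p0_pa: float) -> float:
    # Placeholder for the time-domain model: returns the steady-state
    # difference-frequency pressure amplitude in the cavity (Pa).
    # Here we fake a resonance curve whose peak moves down with amplitude.
    f_res = 200e3 - 0.4 * p0_pa            # toy amplitude-dependent resonance
    width = 800.0                           # toy half-width (Hz)
    return 3.0 * p0_pa / (1.0 + ((f_d_hz - f_res) / width) ** 2)

def locate_resonance(p0_pa: float, f_dl=200e3, span=4e3, df=10.0):
    """Sweep f_d around the linear resonance and return (f_dNL, p_dmax)."""
    freqs = np.arange(f_dl - span, f_dl + span + df, df)
    amplitudes = np.array([simulate_p_dm(f, p0_pa) for f in freqs])
    k = int(np.argmax(amplitudes))
    return freqs[k], amplitudes[k]

for p0 in (1e3, 3e3, 6.5e3):               # source amplitudes in Pa
    f_nl, p_max = locate_resonance(p0)
    print(f"p0 = {p0/1e3:.1f} kPa -> f_dNL = {f_nl/1e3:.2f} kHz, "
          f"p_dmax = {p_max/1e3:.2f} kPa, shift = {(200e3 - f_nl)/1e3:.2f} kHz")
```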
Conclusions This work shows that a frequency shift, which grows with pressure amplitude (a nonlinear resonance effect), of a system composed of a bubbly liquid in a cavity exists at the difference-frequency component generated by nonlinear frequency mixing of two primary signals at frequencies well below the bubble resonance. This numerical study also analyzes different ways to enhance the intensity of the difference-frequency signal using this nonlinear resonance effect and suggests the use at the source of a constant second primary frequency combined with a varied first primary frequency to adjust the difference frequency to the nonlinear resonance, both at the same amplitude. Appendix In this appendix, we study the nonlinear resonance shift in the cavity for a single-frequency excitation around f = 200 kHz with the bubble density N_g2 = 5 × 10^11 m^-3. The pressure source used is s(t) = p_0 sin(ω_f t), where p_0 is the amplitude and ω_f = 2πf. The length of the cavity is set to be resonant, L = λ/2, where λ = c_f/f is the wavelength and c_f is the sound speed in this biphasic and dispersive medium at this frequency [3]. At f = 200 kHz, c_f = 1222.8 m/s and L = c_f/2f = 0.0031 m. We perform simulations varying p_0 from 1 Pa up to 250 Pa, and at each amplitude a frequency sweep around f is done (with increment δf = 10 Hz) to localize the frequency at which the maximum pressure amplitude p_m is reached in the cavity. Figure 6(a) shows the result. For the lowest amplitudes (linear case), the curve is perfectly symmetric, and p_max = 113 Pa = p_L corresponds to the linear resonance f_L = 200 kHz for p_0 = 1 Pa. By increasing p_0, p_max corresponds to lower frequencies (the nonlinear resonance and the frequency shift are amplitude dependent); it is also reduced in relation to p_0 due to the nonlinear attenuation, and the symmetry around the resonance is lost. There is a similarity between the behavior of this frequency shift from the linear resonance and the frequency variation undergone by the single-bubble resonance shown by Lauterborn [10]. For p_0 = 250 Pa, the resonance is at f_NL = 198.74 kHz (the frequency shift is Δf_NL = 1.26 kHz). Since L remains the same at all amplitudes, the resonance shift means a change of sound speed in the medium with amplitude; its value here is c_NL = 2Lf_NL = 1215.1 m/s. Thus, the medium experiences a modification of its acoustic properties when the amplitude changes, not only on a local basis (velocity of particles) when nonlinear distortion occurs (e.g., as for a shock wave), but on a global basis. It undergoes a softening process when pressure amplitudes are raised, due to the increase of the effective bubble volume. In the nonlinear regime, the positive volume variations prevail over the negative values, and the bubble oscillations are then produced around a mean volume that is bigger than the initial one, v+_0g > v_0g, which lowers the sound speed in the effective medium [3]. The effect of the variation of the bubble density in the cavity on its nonlinear resonance shift is briefly described in the following.
The sound speed and resonator length are c_f = 1314.1 m/s and L = c_f/2f = 0.0033 m for N_g1 = 3 × 10^11 m^-3, c_f = 1222.8 m/s and L = c_f/2f = 0.0031 m for N_g2, and c_f = 1148.2 m/s and L = c_f/2f = 0.0029 m for N_g3 = 7 × 10^11 m^-3. We keep the same amplitude sweeping range as above. Figure 6(b) represents the resonance frequency variation from the linear resonance at f = 200 kHz as a function of p_max for each bubble density, including the 3rd-degree polynomial fitting of the resonance frequency variation, where Δf is expressed in Hz and p_max in Pa. For the same amplitude, the frequency shift is more pronounced at higher bubble density, since the nonlinear acoustic parameter is higher. Moreover, at constant amplitude, Δf seems to behave pseudolinearly with bubble density, which is qualitatively coherent with the results given by Brennen [5]. Nevertheless, note that for the same source amplitude, the maximum pressure reached is higher when the bubble density is lower, since there is less attenuation in the medium. Figure 6(c) shows that Δf grows with Δv, i.e., when the effective bubble volume is higher, following a linear fit, where Δf is expressed in Hz and Δv in m^3. These conclusions are in concordance with other results published in the literature [22,28], which amounts to a qualitative validation of our model and procedure. In Figure 3 (linear case: continuous lines; nonlinear case: dashed lines, for the primary frequencies f_1NL, f_2NL and the difference frequency f_dNL), different amplitudes are observed for f_dL and f_dNL, and the response at the difference frequency is much better when the nonlinear frequency shift is taken into account. Whereas the maximum pressure for f_dL is p_dL = 9.217 kPa (141.8% of the source amplitude), the corresponding value for f_dNL is p_dNL = 19.794 kPa (304.5% of the source amplitude, which is a very high value for parametric emission). The benefit drawn in terms of difference-frequency amplitude is 162.7%. It is also interesting to note that, by moving one of the primary source frequencies, f_2 from f_2L to f_2NL, its amplitude decreases, p_f2NL < p_f2L, whereas the amplitude of the other primary signal increases, p_f1NL > p_f1L. This result clearly shows that, by taking the resonance frequency shift into account, the second source frequency is the one that undergoes a strong loss of energy to feed the difference-frequency component, which acquires intensity and becomes much stronger, p_dNL > p_dL. This behavior is most likely due to the fact that f_2 is closer to the bubble resonance (see Case 3). Figure captions. Figure 1: Maximum difference-frequency pressure amplitude in the cavity p_dm vs. frequency (around f_dL) for different source amplitudes in Case 1. Figure 2: Fitting curves in Case 1 of the frequency shift Δf vs. pressure amplitude at the source p_01 = p_02, denoted by p_0, in (a), and vs. average volume increase Δv in (b); maximal difference-frequency pressure amplitude p_dmax over the frequency range vs. pressure amplitude at the source p_01 = p_02, denoted by p_0, in (c). Figure 6: Maximum pressure in the cavity p_m vs. source frequency f for different source amplitudes p_0 (a); fitting curves of the resonance frequency shift Δf vs. pressure amplitude p_max (b) and vs. average volume increase Δv (c), for different bubble densities N_g.
frequency (around f dL ) in Case 2 (constant f 1 , varied f 2 ) for (a) p 02 � 6.5 kPa and three values of p 01 and (b) p 01 � 6.5 kPa and three values of p 02 . d (constant f 2 )Figure5: Maximum difference-frequency pressure amplitude in the cavity p dm vs. frequency (around f dL ) for p 01 � p 02 � 6.5 kPa in Case 1 (constant f 1 , varied f 2 , green line) and Case 3 (varied f 1 , constant f 2 , red line).
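The fitting procedures mentioned above, a third-degree polynomial for Δf vs. p_max and a linear fit for Δf vs. Δv, can be sketched as follows. The sample points are made up purely for illustration, and numpy is assumed to be available:

```python
import numpy as np

# Illustrative (made-up) sweep results: peak cavity pressure and the magnitude
# of the downward resonance-frequency shift, mimicking Figure 6(b)/(c).
p_max = np.array([113.0, 5e3, 12e3, 20e3, 28e3])        # Pa
delta_f = np.array([0.0, 120.0, 420.0, 820.0, 1260.0])  # Hz (downward shift)

# Third-degree polynomial fit of the shift vs. peak pressure (as in Figure 6(b)).
coeffs = np.polyfit(p_max, delta_f, deg=3)
shift_model = np.poly1d(coeffs)
print("Fitted shift at p_max = 15 kPa:", shift_model(15e3), "Hz")

# Linear fit of the shift vs. average bubble-volume increase (as in Figure 6(c)).
delta_v = np.array([0.0, 1e-16, 3e-16, 6e-16, 9e-16])   # m^3, illustrative
slope, intercept = np.polyfit(delta_v, delta_f, deg=1)
print(f"Linear fit: delta_f ~ {slope:.3e} * delta_v + {intercept:.1f}")
```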
6,511.4
2018-12-06T00:00:00.000
[ "Physics" ]
Inhibition of hypoxia-inducible factor 1α accumulation by glyceryl trinitrate and cyclic guanosine monophosphate Abstract A key mechanism mediating cellular adaptive responses to hypoxia involves the activity of hypoxia-inducible factor 1 (HIF-1), a transcription factor composed of HIF-1α, and HIF-1β subunits. The classical mechanism of regulation of HIF-1 activity involves destabilisation of HIF-1α via oxygen-dependent hydroxylation of proline residues and subsequent proteasomal degradation. Studies from our laboratory revealed that nitric oxide (NO)-mediated activation of cyclic guanosine monophosphate (cGMP) signalling inhibits the acquisition of hypoxia-induced malignant phenotypes in tumour cells. The present study aimed to elucidate a mechanism of HIF-1 regulation involving NO/cGMP signalling. Using human DU145 prostate cancer cells, we assessed the effect of the NO mimetic glyceryl trinitrate (GTN) and the cGMP analogue 8-Bromo-cGMP on hypoxic accumulation of HIF-1α. Concentrations of GTN known to primarily activate the NO/cGMP pathway (100 nM–1 µM) inhibited hypoxia-induced HIF-1α protein accumulation in a time-dependent manner. Incubation with 8-Bromo-cGMP (1 nM–10 µM) also attenuated HIF-1α accumulation, while levels of HIF-1α mRNA remained unaltered by exposure to GTN or 8-Bromo-cGMP. Furthermore, treatment of cells with the calpain (Ca2+-activated proteinase) inhibitor calpastatin attenuated the effects of GTN and 8-Bromo-cGMP on HIF-1α accumulation. However, since calpain activity was not affected by incubation of DU145 cells with various concentrations of GTN or 8-Bromo-cGMP (10 nM or 1 µM) under hypoxic or well-oxygenated conditions, it is unlikely that NO/cGMP signalling inhibits HIF-1α accumulation via regulation of calpain activity. These findings provide evidence for a role of NO/cGMP signalling in the regulation of HIF-1α, and hence HIF-1-mediated hypoxic responses, via a mechanism dependent on calpain. While HIF-1β is constitutively expressed, HIF-1α is regulated such that its protein levels determine HIF-1 transcriptional activity [2]. Under well-oxygenated conditions, prolyl residues 402 and 564 are hydroxylated by the prolyl-hydroxylase domain (PHD)-containing enzymes (PHD1, PHD2, and PHD3) [6]. This hydroxylation is required for the binding of the von Hippel-Lindau tumour suppressor protein (pVHL; the substrate recognition component of the E3 ubiquitin ligase complex), which leads to ubiquitylation and proteasomal degradation of HIF-1α [7]. Since oxygen is an absolute requirement for PHD enzyme activity, prolyl hydroxylation of HIF-1α is inhibited under hypoxic conditions, allowing HIF-1α to accumulate. While this is a well-characterised mechanism of regulation of HIF-1 activity, there is evidence that alternative pathways involving nitric oxide (NO) signalling regulate HIF-1 and, consequently, adaptations to hypoxia. NO is produced endogenously as a product of the oxidation of l-arginine into l-citrulline in an oxygen-dependent reaction catalysed by the enzymes NO synthases (NOSs) [8]. NO plays key roles in the regulation of a vast array of biological functions as well as pathological states such as cancer. It has been shown that NO affects various aspects of cancer biology including cell proliferation, metastasis, angiogenesis, and resistance to therapy; however, the precise role of NO in tumour progression has been controversial, with studies suggesting either tumour-promoting effects [9][10][11][12] or tumour-inhibitory effects [13][14][15][16]. 
The apparent dichotomy of NO-mediated effects may be explained by the fact that NO can regulate phenotypes through a variety of mechanisms depending on the local concentration of NO and the molecular environment (e.g. redox status) [17]. At high concentrations (>1 μM), NO can undergo reactions with oxygen or superoxide radicals to produce reactive oxygen species that alter protein function via nitrosylation and nitration [8,18]. At low concentrations (<1 μM), NO can interact with transition metals, such as iron in haem proteins [8,18]. The haem-containing enzyme soluble guanylyl cyclase (sGC) is the main target of NO that mediates most of its downstream effects by catalysing the conversion of guanosine triphosphate (GTP) into cyclic guanosine monophosphate (cGMP) [8]. cGMP subsequently activates various downstream effectors, of which cGMP-dependent protein kinase (PKG) is responsible for many of the effects of cGMP via phosphorylation of molecules that regulate gene expression and cell function [8,17]. Studies from our laboratory demonstrated that the acquisition of hypoxia-induced malignant properties in tumour cells, such as invasiveness, metastatic ability, and drug resistance, is inhibited by activation of the low concentration NO/cGMP signalling pathway, whereas inhibition of this pathway in well-oxygenated cells results in phenotypes similar to those induced by hypoxia [4,5,17,[29][30][31][32][33][34]. Given the central role of HIF-1 in mediating these hypoxic responses, in the present study, we tested the hypothesis that HIF-1α is a downstream target of the NO/cGMP signalling pathway and, therefore, an important component of the mechanism by which NO modulates hypoxia-induced phenotypes. Cells and culture conditions The human prostate carcinoma cell line DU145 was obtained from the American Type Culture Collection (ATCC; Manassas, VA, U.S.A.). Cells were maintained in RPMI 1640 medium (Life Technologies Invitrogen Corporation, Burlington, ON, Canada) supplemented with 5% fetal bovine serum (Sigma-Aldrich Canada Ltd., Oakville, ON, Canada) and plated in six-well plates at 60-70% confluence (to avoid pericellular hypoxia resulting from high-density cultures [35]) at the start of all experiments. Following a 24-h incubation in standard culture conditions (20% O 2 ), the culture medium was changed and the cultures were incubated in 20% O 2 or hypoxia (0.2% O 2 ). For incubations in standard conditions (20% O 2 ), cells were placed in a Thermo Forma CO 2 incubator (5% CO 2 in air at 37 • C) whereas for incubations in hypoxia, cells were placed in airtight chambers that were flushed with a gas mixture of 5% CO 2 /95% N 2 (BOC, Kingston, ON, Canada) and maintained at 37 • C. Oxygen concentrations within these chambers were kept at 0.2% using Pro-Ox model 110 O 2 regulators (Biospherix, Redfield, NY, U.S.A.). The involvement of calpain in the NO/cGMP-mediated regulation of HIF-1α was analysed by incubating cells with GTN (1 μM for 4 h) or 8-Bromo-cGMP (1 μM for 4 h), or in combination with the selective calpain inhibitor calpastatin (CS) peptide (2 μM for 4 h; Calbiochem/EMD Biosciences, San Diego, CA, U.S.A.) in 20% O 2 or 0.2% O 2 . The concentration of CS used in the present study was previously shown to effectively inhibit calpain-mediated degradation of HIF-1α [26]. Calpain activity assay Calpain activity was determined using a calpain activity assay kit (catalogue# MAK228, Sigma-Aldrich) in which fluorescence emitted by the calpain substrate Ac-LLY-AFC was assessed. 
Cells incubated in 20% O 2 or 0.2% O 2 for 24 h were flash-frozen and whole lysates were extracted using the buffer provided in the kit. Eighty-five micrograms of extracted proteins were incubated with calpain substrate at 37 • C for 1 h. Fluorescence was measured using a Spectramax iD3 (Molecular Devices, San Jose, CA, U.S.A.) plate reader with the excitation filter set at 400 nm and the emission filter set at 505 nm. Cell extracts incubated with the calpain inhibitor Z-LLY-FMK included with the kit were used as negative controls to establish basal fluorescence. Calculations and statistical analysis To quantify HIF-1α protein levels from Western blot experiments, x-ray films were scanned and densitometric analysis was performed using Image Processing and Analysis in Java (ImageJ, National Institute of Mental Health, Bethesda, Maryland, U.S.A.). The relative levels of HIF-1α protein and mRNA were determined by densitometry after calculating the HIF-1α/β-actin ratio to account for differences in sample loading. Results are presented as means + − the standard error of the mean (SEM). All statistical analyses were performed using Prism 6.0 Software (GraphPad Software Inc., La Jolla, CA, U.S.A.). Based on the experimental design, statistical significance was determined using one-way analysis of variance (ANOVA) or two-way repeated-measures ANOVA followed by Bonferroni's post hoc test when comparisons involved three or more groups. A two-tailed paired Student's t test was used to statistically analyse the effect of hypoxia on calpain activity. Differences in mean values were considered statistically significant at P<0.05. Effect of GTN on HIF-1α expression Immunoblot analysis revealed that incubation of cells in 0.2% O 2 for 4 h resulted in a significant increase in HIF-1α protein levels compared with cells incubated in 20% O 2 , while administration of GTN (1 nM-10 μM) at the onset of the 4-h hypoxic exposure prevented the accumulation of HIF-1α in a concentration-dependent manner ( Figure 1A). A significant inhibitory effect of GTN on HIF-1α accumulation was observed at concentrations of 100 nM, 1 μM, and 10 μM. Further analysis revealed that the inhibitory effect of GTN (1 μM) on hypoxia-induced HIF-1α accumulation was most robust at the 4-h time point ( Figure 1B). GTN did not significantly alter HIF-1α protein levels in cells incubated in 20% O 2 at all time points examined (Supplementary Figure S1). In contrast with the hypoxic induction of HIF-1α protein accumulation, exposure of DU145 cells to 0.2% O 2 for 24 h resulted in a significant decrease in HIF-1α mRNA levels compared with cells incubated in 20% O 2 ( Figure 1C). Administration of GTN did not affect HIF-1α mRNA expression at all concentrations (10 nM-10 μM) tested ( Figure 1C). Similarly, time-course analysis revealed that various durations of hypoxic exposure (4-24 h) significantly decreased HIF-1α mRNA levels and addition of GTN (1 μM) did not alter HIF-1α mRNA expression in either 20% O 2 or 0.2% O 2 at all time points examined ( Figure 1D). Effect of 8-Bromo-cGMP on HIF-1α accumulation Incubation of cells with the non-hydrolysable cGMP analogue 8-Bromo-cGMP (1 nM, 100 nM, and 10 μM) for 4 h significantly inhibited the hypoxic accumulation of HIF-1α protein at all concentrations of the cGMP analogue used (Figure 2A,B). The significant effect of 8-Bromo-cGMP on HIF-1α accumulation was only observed at 4-h incubation in hypoxia ( Figure 2B). 
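Referring back to the "Calculations and statistical analysis" procedure above, a minimal sketch of the HIF-1α/β-actin normalisation followed by a one-way ANOVA might look like the following (the densitometry values are hypothetical, numpy and scipy are assumed to be available, and this is not the authors' actual analysis script):

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry readings (arbitrary units) for three treatment groups,
# each normalised to the beta-actin signal of the same lane, as described above.
hif1a = {
    "20% O2":        np.array([0.9, 1.1, 1.0]),
    "0.2% O2":       np.array([3.2, 2.8, 3.5]),
    "0.2% O2 + GTN": np.array([1.6, 1.4, 1.8]),
}
actin = {
    "20% O2":        np.array([1.0, 1.0, 1.1]),
    "0.2% O2":       np.array([1.1, 0.9, 1.0]),
    "0.2% O2 + GTN": np.array([1.0, 1.1, 0.9]),
}
ratios = {group: hif1a[group] / actin[group] for group in hif1a}

# One-way ANOVA across the three groups; pairwise post hoc tests with a
# Bonferroni correction would follow when the ANOVA is significant.
f_stat, p_value = stats.f_oneway(*ratios.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")  # considered significant if P < 0.05
```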
HIF-1α protein levels in cells incubated in 20% O 2 were unaffected by exposure to 8-Bromo-cGMP at all time points examined (Supplementary Figure S2). Exposure of DU145 cells to 8-Bromo-cGMP (1 μM) in 20% O 2 or 0.2% O 2 for various time periods (4-24 h) did not result in a significant change in HIF-1α mRNA levels compared with controls (cells incubated without 8-Bromo-cGMP) at all time points tested, as determined by qRT-PCR ( Figure 2C). Calpain and NO/cGMP-mediated inhibition of HIF-1α protein accumulation To determine the role of calpain in the NO/cGMP-mediated attenuation of HIF-1α accumulation, DU145 cells were treated with GTN (1 μM) or 8-Bromo-cGMP (1 μM) alone, or in combination with the specific calpain inhibitor CS (2 μM) at the onset of a 4-h incubation in 0.2% O 2 . While GTN alone reduced HIF-1α protein accumulation in cells exposed to hypoxia, co-incubation with CS significantly attenuated the inhibitory effect of GTN on HIF-1α accumulation ( Figure 3A). Similarly, 8-Bromo-cGMP alone significantly reduced hypoxic accumulation of HIF-1α protein while addition of CS significantly blocked the effect of 8-Bromo-cGMP in DU145 cells ( Figure 3B). Finally, results of the calpain activity assay revealed that incubation of DU145 cells with various concentrations of GTN or 8-Bromo-cGMP (10 nM or 1 μM) in 20% O 2 or 0.2% O 2 did not affect calpain activity ( Figure 4A,B), and there were no significant differences in mean calpain activity levels in cells incubated in 20% O 2 versus 0.2% O 2 (P=0.24). Discussion The main finding of the present study was that GTN and 8-Bromo-cGMP inhibited the hypoxia-mediated accumulation of HIF-1α via a mechanism that likely involves calpain activity. While the role of calpain in the regulation of HIF-1α accumulation by NO was previously reported [26], our study reveals, for the first time, that cGMP likely mediates this effect of NO. This finding is important because most studies on the regulation of HIF-1α by NO have primarily centred on cGMP-independent, high concentration effects of NO. A well-known mechanism of HIF-1α regulation involves PHD-mediated hydroxylation of prolyl residues (Pro 402 and Pro 564 in human HIF-1α) in the oxygen degradation domain of HIF-1α in an oxygen-dependent manner. Our study provides evidence that NO signalling via cGMP generation and inhibition of calpain activity is an additional mechanism of HIF-1 regulation. Both concentration and duration of exposure to NO are critical determinants of the quality and magnitude of the biological response to exogenously administered NO mimetics [36]. Analysis of the effect of GTN on HIF-1α accumulation at various concentrations and exposure times revealed that relatively low concentrations (100 nM and 1 μM) of GTN were able to significantly attenuate hypoxic accumulation of HIF-1α protein and that the inhibitory effect of GTN was rapid and possibly transient, peaking at 4-h treatment. Most of the low concentration effects of NO are attributable to the activation of the NO/cGMP signalling pathway in which NO binds to sGC and subsequently induces cGMP production and activation of downstream effectors [8]. Thus, the observed inhibitory effect of GTN, at such low concentrations as 100 nM and 1 μM, on hypoxia-induced HIF-1α accumulation suggests that this effect occurs via activation of the cGMP-dependent signalling pathway. 
This is further supported by results of previous studies showing that levels of NO in cells treated with ≤1 μM of NO donors, including GTN, were undetectable using standard assays that measure nitrate and nitrite formation as an index for NO production [30]; this indicated that NO levels were lower than those required to produce the reactive nitrogen species nitrate/nitrite and, as such, exerted their effects predominantly through the low-concentration NO/cGMP pathway [30]. Based on the observed effect of GTN on HIF-1α accumulation and given the central role of HIF-1α in mediating hypoxic responses, it is possible that the inhibitory effects of low concentrations of GTN on hypoxia-induced phenotypes observed in previous studies [5, [29][30][31][32][33][34] are, in part, a result of interfering with HIF-1α accumulation. Interestingly, while induction and attenuation of these previously reported phenotypes (i.e. increase in invasion and metastasis, drug resistance, and immune escape) by hypoxia and by low concentrations of GTN, respectively, were evident following 24-h exposures, a similar pattern of effect of hypoxia and GTN on HIF-1α protein accumulation was observed as early as, and most prominently, 4 h. This may perhaps reflect the time it takes to effect a change in transcription and protein expression to finally manifest phenotypic alterations, once HIF-1α accumulation and, consequently, HIF-1 activity are modified. In addition to the results showing that GTN, at a concentration known to primarily activate the cGMP-dependent pathway, inhibited hypoxic accumulation of HIF-1α, evidence in support of the participation of the cGMP signalling pathway in HIF-1α regulation was provided by the finding that 8-Bromo-cGMP (i.e. a cGMP analogue) similarly attenuated hypoxia-induced HIF-1α accumulation. These results are in agreement with previous studies [30][31][32] showing that NO via cGMP production prevents hypoxia-mediated acquisition of malignant phenotypes in tumour cells and suggest that modulation of HIF-1α may be an important aspect of the mechanism by which NO/cGMP signalling regulates hypoxic responses. Although many of the studies examining the regulatory effects of NO on HIF-1α accumulation and/or HIF-1 activity have proposed cGMP-independent mechanisms of HIF-1 regulation, Tsuruda et al. [37] have found that activation of sGC/cGMP signalling in cultured cardiomyocytes decreased hypoxia-induced HIF-1α protein accumulation; this further supports the notion that the cGMPdependent signalling interferes with hypoxic induction of HIF-1α accumulation and suggests that such mechanism of HIF-1α modulation may apply to a variety of normal and transformed cells. Interestingly, in contrast with its effects on HIF-1α accumulation in hypoxia, neither GTN nor 8-Bromo-cGMP altered HIF-1α protein levels in DU145 cells under well-oxygenated conditions (20% O 2 ) (Supplementary Figures S1 and S2). This selective action of GTN and 8-Br-cGMP is in line with previous studies showing that such activation of NO/cGMP signalling inhibited malignant phenotypes of hypoxic tumour cells without affecting well-oxygenated cells [29,30,32,33], highlighting its potential to selectively target the more malignant hypoxic cells. This phenomenon may be explained by the fact that endogenous NO production requires O 2 [8] and that exposure to low O 2 conditions limits cellular NO synthesis [38,39] as well as cGMP production [31,40,41]. Consequently, activation of the NO/cGMP signalling likely targets hypoxic cells (i.e. 
cells with impaired NO production and signalling) with little or no effect on oxygenated cells (i.e. cells with normal NO production and signalling). The present study revealed that GTN or 8-Bromo-cGMP did not alter the levels of HIF-1α mRNA in either hypoxic (0.2% O 2 ) or oxygenated (20% O 2 ) conditions, suggesting that the NO/cGMP-mediated attenuation of hypoxia-induced HIF-1α accumulation occurs via translational or post-translational mechanisms. It has been reported that NO decreases HIF-1α protein abundance and hence HIF-1 activity via a PHD/pVHL/proteasome-independent mechanism that involves calpain (Ca 2+ -activated protease)-mediated degradation of HIF-1α [26]; however, that report did not address the role of cGMP signalling in the proposed mechanism of HIF-1α regulation. In accordance with and extending those findings, the results of the present study indicate that the NO/cGMP-induced inhibition of HIF-1α accumulation likely requires calpain activity. Cyclic GMP has been shown to increase intracellular levels of Ca 2+ [42][43][44], the primary activator of calpain, and this has been linked to increases in calpain activity [45]. Furthermore, it has been found that the downstream effector of cGMP, PKG, is required for NO/cGMP-mediated generation of the Ca 2+ signal and activation of μ-calpain [46]. In our study, regardless of oxygenation levels, we did not observe changes in calpain activity in DU145 cells incubated in the presence of GTN or 8-Bromo-cGMP. These findings indicate that, while the presence of calpain may be critical for the observed effect of NO/cGMP signalling, the decreased accumulation of HIF-1α in hypoxic cells incubated with NO mimetics is not likely due to increased calpain activity. Thus, the precise mechanism of NO-mediated inhibition of HIF-1α accumulation requires further investigation. PHDs are also subject to regulation by Ca 2+ and it has been shown that chelation of intracellular Ca 2+ induces HIF-1α accumulation and HIF-1 transactivation by inhibiting PHD activity under oxygenated conditions [47]. Based on the results presented in this study, the potential role of PHD/pVHL/proteasome pathway in the NO/cGMP-induced attenuation of HIF-1α accumulation cannot be excluded and further studies are needed to determine whether PHDs and calpains synergistically mediate HIF-1α degradation. In addition to HIF-1α, calpain has been found to mediate the degradation of HIF-2α [48], suggesting that HIFs are among the many calpain substrates and that calpain plays an important role in the regulation of HIFs and hence hypoxic responses. Overall, the findings presented in the present study provide new mechanistic insights into the NO-mediated regulation of HIF-1α accumulation, and support the concept that tumour hypoxic responses involving HIF-1 activity may be prevented by activation of the NO/cGMP signalling pathway. The proposed pathway of HIF-1α modulation may have physiological relevance not only in cancer but also in other pathological conditions as well as normal biological processes characterized by low oxygen levels.
4,307.8
2020-01-01T00:00:00.000
[ "Biology" ]
Synthesis and expression of a gene for the rat glucagon receptor. Replacement of an aspartic acid in the extracellular domain prevents glucagon binding. In order to facilitate structure-function studies of the glucagon receptor by site-directed mutagenesis, we have designed and synthesized a gene for the rat glucagon receptor. The gene codes for the native 485-amino-acid protein but contains 91 unique restriction sites. To characterize gene expression, a highly specific, high affinity antipeptide antibody was prepared against the receptor. The synthetic gene was expressed in transiently transfected monkey kidney (COS-1) cells. COS cells expressing the synthetic receptor gene bound glucagon with affinity and specificity similar to that of hepatocytes containing native receptor. The transfected COS cells also showed increased intracellular cAMP levels in response to glucagon. The functional role of an aspartic acid residue in the NH2-terminal tail of the receptor was tested by site-directed mutagenesis. This site in the related growth hormone releasing factor receptor was shown to be responsible for the little mouse (lit) genetic defect that results in mice of small size with hypoplastic pituitary glands. Mutant glucagon receptors with amino acid replacements of Asp64 were expressed at normal levels in COS cells but failed to bind glucagon. These results indicate that amino acid Asp64 may play a key role in glucagon binding to receptor. * This work was supported by United States Public Health Service Grant DK24039. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. The nucleotide sequence(s1 reported in this paper has been submitted to the GenBankmIEMBL Data Bank with accession number(s) U14012. ll Assistant Investigator of the Howard Hughes Medical Institute. To whom correspondence should be addressed: Box 284, Rockefeller University, 1230 York Ave., New York, NY 10021. Tel.: 212-327-8288;Fax: 212-327-8370. efforts have identified specific residues in glucagon that are responsible for either receptor binding or signaling. Sera and Asp15 were shown to be important determinants of receptor binding (Unson and Merrifield, 1994;Unson et al., 199413). His', Aspg, and S e P constitute a putative triad responsible for activation of the receptor and subsequent biological effect (Unson and Merrifield, 1994). The recent isolation of glucagon receptor cDNA clones from rat and human liver has confirmed that the receptor is a member of the superfamily of seven-transmembrane domain G protein-coupled receptors (Jelinek et al., 1993). According to tentative structural models of related receptors, the hormonebinding site probably consists of a contribution from the large extracellular domain of the receptor (Fig. 11, which includes the NH,-terminal tail and loops connecting transmembrane helices. However, transmembrane signaling must involve ligandmediated communication between the extracellular domain and the intracellular domain where heterotrimeric G proteins are activated by the receptor. To investigate the molecular mechanism of hormone-receptor interaction and of receptor activation, we have designed and synthesized a gene for the rat glucagon receptor. COS cells expressing the synthetic receptor gene and purified COS cell membranes bound glucagon with high affinity and displayed the appropriate peptide hormone specificity. 
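The gene-design idea introduced above, reverse-translating the protein sequence while choosing codons that create convenient restriction sites, can be illustrated with a toy sketch. The codon table is deliberately truncated, the peptide is hypothetical, and this is not the software-based design procedure actually used for the receptor gene:

```python
# Toy illustration of reverse translation: pick one codon per residue from a
# small, arbitrary table. Real synthetic-gene design additionally juggles codon
# choices to engineer unique restriction sites, which is not attempted here.
PREFERRED_CODONS = {
    "M": "ATG", "K": "AAA", "E": "GAA", "F": "TTT", "L": "CTG",
    "D": "GAT", "S": "AGC", "A": "GCG", "T": "ACC", "P": "CCG",
}

def reverse_translate(peptide: str) -> str:
    """Return one possible coding sequence for a peptide (toy example)."""
    return "".join(PREFERRED_CODONS[aa] for aa in peptide)

if __name__ == "__main__":
    peptide = "MKDELF"  # hypothetical test peptide, not a receptor fragment
    print(reverse_translate(peptide))  # ATGAAAGATGAACTGTTT
```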
The transfected COS cells also showed increased intracellular cAMP levels in response to glucagon. Site-directed mutant glucagon receptors with amino acid replacements of Asp64 were expressed at normal levels in COS cells but failed to bind glucagon. These results indicate that Asp64 may play a key role in glucagon binding to the receptor and provide a framework for future pharmacological and biochemical characterization of the glucagon receptor.

EXPERIMENTAL PROCEDURES

Design of the Synthetic Rat Glucagon Receptor Gene
Synthetic gene design was carried out using strategies that have been described elsewhere (Ferretti et al., 1986; Sakmar and Khorana, 1988; Carruthers and Sakmar, 1995). The nucleotide sequence was determined with the aid of sequence analysis software (MacVector) with a reverse translation algorithm. The Khorana (1978, 1979) method was employed for gene synthesis. Thus, both upper and lower DNA strands were totally chemically synthesized. The synthetic gene was flanked by EcoRI and NotI restriction sites and was assembled from three fragments, A, B, and C. Fragment A (EcoRI to MluI) was 362 bp, fragment B (MluI to BamHI) was 564 bp, and fragment C (BamHI to NotI) was 546 bp. Duplexes within a fragment were designed to have four- or five-base unique nonpalindromic 5'-overhangs. Fragments A-C were constructed from 8, 14, and 14 oligonucleotides, respectively. The oligonucleotides ranged in size from 72 to 92 bases. The abbreviations used are: bp, base pairs; G protein, guanine nucleotide-binding regulatory protein; GLP-1, glucagon-like peptide 1; GRF, growth hormone releasing factor; VIP, vasoactive intestinal peptide; PAGE, polyacrylamide gel electrophoresis.

Oligonucleotide Preparation
Oligonucleotide synthesis was carried out on an Applied Biosystems model 392 synthesizer. Purification and characterization of synthetic DNA was carried out essentially as described (Ferretti et al., 1986; Sakmar and Khorana, 1988).

Oligonucleotide 5'-End Phosphorylation
Synthetic oligonucleotides involved in joining reactions for a particular fragment were 5'-end phosphorylated batchwise. The 5'-terminal oligonucleotide on the upper strand and the 5'-terminal oligonucleotide on the lower strand of each fragment were not phosphorylated. A mixture of 100 pmol of each oligonucleotide in 50 mM Tris-HCl, pH 8.0, was heated for 3 min at 90 °C and quick-chilled on ice. Solutions were added to give final concentrations of 10 mM MgCl2, 2 mM spermidine, 10 mM dithiothreitol, 1 mM ATP, and 4 units of T4 polynucleotide kinase (New England Biolabs). The mixture was incubated at 37 °C for 45 min, heated at 90 °C for 3 min, then quick-chilled on ice. The phosphorylation reaction was repeated a second time after readdition of dithiothreitol and kinase.

Annealing and T4 DNA Ligase-catalyzed Duplex Joining Reactions
Complementary oligonucleotides for an entire gene fragment were annealed by adding 100 pmol each of the 5'-terminal upper-strand and 5'-terminal lower-strand oligonucleotides to the phosphorylated oligonucleotide mix in a final volume of 100 μl in 50 mM Tris-HCl, pH 8.0, 10 mM MgCl2. The annealed oligonucleotides were joined by incubation for 16 h at 14 °C in 66 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 1 mM dithiothreitol, 1 mM ATP, and 25 units of T4 DNA ligase (Boehringer Mannheim) in a final volume of 100 μl.

Purification and Cloning of Synthetic Gene Fragments
The products of the ligation reaction were separated on a preparative 1.5% agarose gel.
The full-length synthetic DNA fragment bands were excised from the gel and purified using the Qiaex Gel Extraction Kit (Qiagen). Each fragment was cloned into a modified pGEM-2 vector. XL1-Blue Electrocompetent Escherichia coli cells (Stratagene) were transformed by electroporation. DNA minipreps were prepared for restriction analysis and subsequent dideoxy sequencing on doublestranded plasmid DNA (Sequenase, United States Biochemicals Corp., Cleveland, OH) (Sanger et al., 1977). A clone for each fragment with the correct DNA sequence was selected for assembly of the full-length synthetic gene into an expression vector. Assembly and Characterization of Full-length Synthetic Gene-The appropriate restriction fragments were purified, and the full-length gene was assembled into the cloning vector. The cloned full-length gene was resequenced in both directions using synthetic internal primers. The gene was transferred into the eukaryotic expression vector pMT3 as an EcoRI-Not1 restriction fragment (Franke et al., 1988). The pMT3 vector containing the synthetic rat glucagon receptor gene is referred to as pMT5. Construction of Glucagon Receptor Mutant Genes-The glucagon receptor mutant genes D64E, D64G, D64K, and D64N were prepared by restriction fragment replacement of the synthetic gene in the pGEM cloning vector (Lo et al., 1984). Each mutant was prepared by replacement of a 74-bp BsiWI-KpnI restriction fragment with a synthetic duplex containing the desired codon alteration. The four mutant genes were obtained by cloning two synthetic duplexes. The GAC codon for Asp was replaced by G(A/G)A to obtain the Glu and Gly substitutions and byAA(G/T) to obtain the Lys and Asn substitutions. Molar ratios of synthetic duplex to linearized vector were 3:l. Each mutant gene was sequenced entirely to confirm the correct nucleotide sequence. Construction of the Glucagon Receptor Gene with the 104 Antibody Epitope-The ID4 epitope refers to the 8-amino-acid sequence at the C terminus of bovine rhodopsin. A monoclonal antibody that recognizes this epitope was kindly provided by Dr. R. Molday (Molday and Mackenzie, 1983). A mutant receptor gene containing the ID4 epitope at its 3'-end was prepared by replacement of a 55-bp AurII-Not1 restriction fragment with a synthetic duplex containing the desired codon additions. Peptide Synthesis-The octadecapeptide ST-18 (SAKTSLASSL-PRLADSPT) corresponding to the extreme C terminus of the glucagon receptor residues (positions 467-485) was assembled stepwise by the Merrifield solid-phase method (Barany and Merrifield, 1979) using tertbutyloxycarbonyl (t-BOO-based chemistry on an Applied Biosystems 430A automated peptide synthesizer. Peptide Conjugation, Immunization, and Affinity Purification of Anti-glucagon Receptor Antibody-Conjugation of peptide ST-18 to keyhole limpet hemocyanin with glutaraldehyde followed by intradermal injection of two New Zealand White rabbits in complete Freund's adjuvant was performed as described (Goldsmith et al., 1987). Crude ST-18 antisera were affinity purified (Goldsmith et al., 1987;Shenker, et al., 1991) on a column of ST-18 peptide covalently linked to agarose (MI-Gel 15, Bio-Rad). Expression of Glucagon Receptor and Mutant Receptor Genes in COS-1 Cells-Receptor genes were expressed transiently in COS-1 cells according to the DEAE-dextran procedure previously reported for the expression of rhodopsin (Oprian et al., 1987;Sakmar et al., 1989). 
Preparation of Plasma Membranes from Dansfected COS-1 Cells-Cells from six 100-mm culture plates were collected in a 15-ml Falcon tube and spun for 30 s in a clinical centrifuge. Cells were resuspended in 15 ml of cold hypotonic buf€er (1 mM Tris-HC1, pH 6.8, 0.1 mM phenylmethylsulfonyl fluoride, 5 pg/ml leupeptin, 10 pg/ml aprotinin, 0.7 pg/ml pepstatin, 10 m~ EDTA) and forced through a 26-gauge needle three times. The total volume was increased to 20 ml with hypotonic buffer to give a hypotonic cell lysate fraction. The hypotonic cell lysate (10 ml) was layered onto 15 ml of a 38% (w/v) sucrose solution in buffer A(150 m~ NaC1,l m~ MgCl,, 10 mM EDTA, 20 mM Tris-HC1, pH 6.8,O.l mM phenymethysulfonyl fluoride, 5 pg/ml leupeptin, 10 pg/ml aprotinin, 0.7 pg/ml pepstatin) in a 1 x 3.5-inch SW-28 ultracentrifuge tube. After centrifugation at 15,000 revolutions/min at 4 "C, the interface band was collected into a 10-ml syringe with an 18-gauge needle. The volume was brought to 50 ml with buffer A, transferred to a Ti-45 tube, and spun at 40,000 revolutions/min for 30 min at 4 "C. The membrane pellet was resuspended in 20 ml of buffer A and spun a second time. The washed pellet was resuspended in 0.6 ml of buffer A, frozen on dry ice, and stored in 0.1-ml aliquots at -80 "C. Deatment of Expressed Glucagon Receptor and Mutants with N- Glycosidase F-Detergent cell lysate (30 pl, or approximately one-tenth of a plate of transfected COS cells) was treated with 0.3 units of Nglycosidase F (Boehringer Mannheim). The digest was shown to be complete after 2 h of incubation a t 37 "C. Plasma membrane preparation (10 pl, approximately one-tenth of a plate of transfected COS cells) was mixed with 10 pl of buffer A and treated with 0.3 units of Nglycosidase F. The digest was shown to be complete after 3 h of incubation at 37 "C. Zmmunoblot Analysis ofExpressed Glucagon Receptor and Mutants-Cell lysate samples were loaded without boiling onto a 1-mm thick 4% stacking, 10% separating SDS-polyacrylamide gel, and electrophoretically separated. Rainbow prestained molecular weight standards (Amersham Corp.) or Kaleidoscope prestained standards (Bio-Rad) were included on some gels. After electrophoresis, proteins were electrophoretically transferred to Immobilon-P transfer membrane (Millipore) with a Trans-Blot S.D. Semi-Dry Transfer Cell (Bio-Rad) set at 15 V for 30 min a t 4 "C. The membrane was blocked in 6% powdered milk (Carnation) in TBS (10 mM Tris-HC1, pH 8.0, 150 m~ NaC1) for 14 h at 4 "C in a glass bottle rotating in a hybridization chamber (Robbins Scientific). A 1:20,000 dilution of the ST-18 anti-peptide glucagon receptor antibody in 10 ml of TTBS (TBS, 0.05% (v/v) Tween 20) was incubated with the membrane for 20 min at room temperature in a hybridization chamber. After three washes of 5 min in 10 ml of TTBS, the membrane was incubated with a 1:10,000 dilution of goat anti-rabbit IgG peroxidase in 10 ml of TTBS for 20 min. After an additional three washes in 10 ml of TTBS, immunoreactive bands were visualized by enhanced chemiluminescence treatment (ECL, Amersham Corp.) and exposure to X-OMAT AR film (Kodak). Binding of *25Z-Glucagon to Dansfected Cells-Monoiodinated lZ5Iglucagon was obtained from NEN Dupont. Secretin, vasoactive intestinal peptide (VIP), and glucagon were from Sigma and glucagon-like peptide 1 (GLP-1, residues 7-37) was kindly provided by Dr. S. Mojsov. 
On day 4 after transfection, COS cells transfected with pMT5 or pMT3 and the Asp64 mutant gene constructs were washed once with sodium phosphate-buffered saline, pH 7.4, detached with phosphate-buffered saline containing 1 mM EDTA, centrifuged (400 × g), and resuspended at 4 °C in RPMI 1640 buffer containing 25 mM HEPES, pH 7.4, 1 mg/ml bovine serum albumin, and 1 mg/ml bacitracin, for use in radioligand binding and cAMP assays. Aliquots (100 μl) of cell suspension containing 10^6 cells were incubated for 60 min at 30 °C with 125I-glucagon (0.5 nM) in the absence or presence of increasing concentrations of unlabeled glucagon in a final assay volume of 200 μl. Cells were subsequently washed three times with 1 ml of cold buffer by filtration on Durapore membrane filters (0.45 μm) using a vacuum filtration manifold (Millipore). Radioactivity retained on the filters was counted on a Wizard 1470 gamma-counter (Wallac, Inc.). Nonspecific binding, measured in the presence of 10 μM glucagon, amounted to less than 10% of total counts bound. To test for specificity, increasing concentrations of the peptide hormones secretin, VIP, and GLP-1 were allowed to compete with 125I-labeled glucagon for binding to transfected cells.

FIG. 1. Schematic representation of the rat glucagon receptor primary and secondary structure. Seven putative transmembrane helices (helix A through helix G) based on previous models of G protein-coupled receptors are shown. The amino terminus and extracellular surface is toward the top, and the carboxyl terminus and cytoplasmic surface is toward the bottom of the figure. The four sites of potential N-linked glycosylation on the amino terminus are labeled with asterisks (*). Asp64, which was replaced by site-specific mutagenesis, is numbered and labeled with an arrow. The carboxyl-terminal 18 amino acids that were used to design a peptide ST-18 for antibody production are boxed.

Binding of 125I-Glucagon to Transfected COS Cell Membranes
Plasma membranes were prepared from transfected COS-1 cells as described above. Membrane protein content was determined by a modified Lowry procedure (Markwell et al., 1978). Competitive binding with radiolabeled glucagon on COS cell membranes was performed as described above in 25 mM Tris-HCl, pH 7.4, 1 mg/ml bovine serum albumin, and 1 mg/ml bacitracin. Binding was initiated in a final volume of 200 μl by the addition of membrane suspension containing 40 μg of protein.

Intracellular cAMP Assay
cAMP levels in 100-μl aliquots of transfected COS cell suspensions were determined in triplicate in final assay volumes of 200 μl in RPMI 1640 containing 25 mM HEPES, pH 7.4, 1 mg/ml bovine serum albumin, 1 mg/ml bacitracin. In addition, samples contained 5 mM theophylline (Sigma) either alone or with the indicated concentration of glucagon. The mixtures were incubated at 37 °C for 45 min. The reaction was terminated by the addition of 300 μl of cold ethanol and evaporated to dryness. The residue was resuspended in 100 μl of distilled water and centrifuged. Aliquots (50 μl) of the supernatant fraction were assayed for the presence of cAMP using a method (Amersham Corp.) that measures the ability of cAMP in each sample to compete with [8-3H]cAMP for a high affinity cAMP-binding protein.
Data Analysis for Competition Binding Assays and cAMP Assay
Symbols in all figures represent data plotted as the mean of triplicate samples from single experiments. Experiments were repeated on identical frozen samples to verify reproducibility. When appropriate, curve fitting was carried out to a four-parameter logistic (sigmoid) function, and values for IC50, EC50, and Kd were determined from the inflection point (c) of the best-fit curve.

RESULTS

Design and Synthesis of a Gene Encoding the Rat Glucagon Receptor
The synthetic rat glucagon receptor gene was designed to encode the reported amino acid sequence (Fig. 1). However, because of codon degeneracy it was possible to introduce a total of 91 unique restriction endonuclease cleavage sites that would facilitate site-directed mutagenesis by restriction fragment replacement. The reported cDNA sequence contained 30 potentially useful unique restriction sites (Jelinek et al., 1993). The gene was synthesized according to a strategy devised by Khorana and co-workers (1978, 1979), which involves the total chemical synthesis of both upper and lower DNA strands. The completed gene was 1472 bp in length and was constructed from three fragments which were cloned independently. The correct nucleotide sequences were confirmed by dideoxy DNA sequencing, and the three fragments were subcloned to assemble the complete gene.

Preparation of Anti-peptide Antibody Directed against the COOH-terminal Tail of the Rat Glucagon Receptor
A rabbit antibody (ST-18) against a synthetic peptide was prepared and purified. The peptide sequence was derived from the COOH-terminal 18 amino acid residues of the rat glucagon receptor. The antibody detected the presence of an immunoreactive band of the appropriate molecular weight in rat hepatocyte preparations (not shown). The specificity of the antibody is demonstrated by immunoblot analysis (Fig. 2) as described below.

FIG. 2 (legend, in part). The immunoreactive band at 25 kDa is likely to be due to a cross-reacting COS cell protein. Plasma membrane preparations also contain a broad 55-75 kDa glucagon receptor band, an apparent dimer band at 110 kDa, and the 35 kDa band. However, the 35 kDa band appears relatively less intense. The lanes labeled pMT5 contain plasma membrane from one-tenth and three-tenths of a 100-mm culture plate, respectively. The 25 kDa band is not visualized in the plasma membrane preparations. The molecular weight estimates of the immunoreactive bands were determined relative to the mobility of prestained Rainbow protein molecular weight markers.

... also apparent. The presence of protease inhibitors in the detergent lysis buffer (lanes labeled +I) greatly increased the yield of immunoreactive material relative to the yield in the absence of inhibitors (lanes labeled -I). Lysates of cells transfected with vector alone (pMT3) showed a single band migrating with an apparent molecular mass of 25 kDa. The immunoreactive band at 25 kDa is likely to be due to a cross-reacting COS cell protein. The band patterns seen in the hypotonic lysis lanes are similar to those seen in the detergent cell lysates. This result was expected since the hypotonic lysis procedure does not remove the cellular membrane fraction from the sample loaded onto the gel. Immunoblot analysis of the plasma membrane preparations showed a broad 55-75 kDa glucagon receptor band, an apparent dimer band at 110 kDa, and the 35 kDa band. However, the 35 kDa band appeared relatively less intense than in the detergent cell lysate lanes.
The 25 kDa band was not visualized in the plasma membrane preparations of either pMT5- or pMT3-transfected cells.

Deglycosylation of the Expressed Glucagon Receptor with N-Glycosidase F
The enzyme N-glycosidase F cleaves N-linked carbohydrates from glycoproteins and leaves an aspartic acid residue at the position originally occupied by asparagine. Plasma membrane and cell lysate preparations from COS cells transfected with pMT5 or pMT3 were digested with N-glycosidase F. Immunoblot analysis of the deglycosylated synthetic glucagon receptor in the plasma membrane and cell lysate preparations is shown in Fig. 3. The slight difference between the migrations of the plasma membrane and cell lysate bands results from the exposure of the cell lysate sample to dodecyl maltoside detergent before treatment with SDS gel-loading buffer. The apparent dimer band also migrated more rapidly after N-glycosidase F treatment. The faint band at 35 kDa was not affected by N-glycosidase F treatment.

Estimation of Level of Expression of Glucagon Receptor in Transiently Transfected COS Cells
The amount of glucagon receptor expressed in COS cells was estimated. A chimeric receptor tagged at the 3'-end with a nucleotide sequence encoding the epitope of an anti-rhodopsin monoclonal antibody, ID4, was constructed (Molday and MacKenzie, 1983). Quantitative immunoblot analysis was carried out on cell lysates transfected with the chimeric receptor using purified bovine rhodopsin as an internal standard. The most precise results were obtained from samples where both the chimeric glucagon receptor and rhodopsin were deglycosylated using N-glycosidase F so that each lane contained essentially only a distinct monomer and dimer band (not shown). The expression level of the glucagon receptor was estimated to be an average of 3.5 × 10^5 receptors/cell, assuming that 100% of the cells were expressing receptor.

FIG. 4. COS cells were transiently transfected with an expression vector containing the synthetic glucagon receptor gene (pMT5). Competitive displacement of 125I-labeled glucagon bound to transfected cells was determined by incubation with 125I-glucagon (0.5 nM) alone and with the indicated concentrations of unlabeled glucagon and the related peptide hormones GLP-1 (residues 7-37), secretin, and VIP. Cells transfected with control vector pMT3 showed insignificant binding of 125I-glucagon. The displacement curves for both glucagon and GLP-1 fit well to ideal sigmoid curves with four parameters. The concentration of unlabeled glucagon required to displace 50% of receptor-bound 125I-glucagon, the IC50 value, was determined from the curve fit to be 10.8 nM for glucagon and 9.9 μM for GLP-1. Secretin and VIP did not compete with 125I-glucagon for receptor-binding sites.

... (residues 7-37), secretin, and VIP. Data are presented as percent of total binding of the radiolabeled hormone versus the log of peptide concentration. Maximum binding (100% on the y axis) was less than 10% of total added radioactivity. Each symbol represents the mean of triplicate determinations and was curve-fitted where appropriate based on a single ligand-binding-site model as described under "Experimental Procedures." Cells transfected with control vector pMT3 showed insignificant binding of 125I-glucagon that was considered to be nonspecific. The concentration of unlabeled glucagon required to displace 50% of receptor-bound 125I-glucagon, the IC50 value, was calculated to be 10.8 nM for glucagon and 9.9 μM for GLP-1 based on the fits of the curves shown.
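The four-parameter logistic fit used to extract IC50 and EC50 values from such displacement and dose-response curves can be sketched as follows. This uses the standard 4PL parameterisation with the inflection point c; the data points are made up and scipy is assumed to be available, so it is only an illustration of the method, not the authors' fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Standard 4-parameter logistic: a = top, d = bottom, c = inflection point (IC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Made-up competition-binding data: % of maximal 125I-glucagon binding vs.
# unlabeled glucagon concentration (M).
conc = np.array([1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
bound = np.array([98.0, 90.0, 55.0, 18.0, 8.0, 5.0])

params, _ = curve_fit(four_pl, conc, bound, p0=[100.0, 1.0, 1e-8, 0.0])
a, b, c, d = params
print(f"Estimated IC50 ~ {c:.2e} M")  # on the order of 10 nM for these toy data
```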
Secretin and VIP did not compete with lZ6I-glucagon for receptor-binding sites. Glucagon-dependent Stimulation of Adenylyl Cyclase in Dansfected COS Cells-Adenylyl cyclase activity of COS cells expressing the synthetic glucagon receptor gene is shown in Fig. 5. COS cells were transfected with vector containing the synthetic glucagon receptor gene (pMT5) or with vector alone (pMT3). The increase in intracellular CAMP level was determined when cells were incubated with increasing concentrations of glucagon in the presence of 5 R~M theophylline. CAMP was quantitated using an assay method that measured the ability of CAMP in each sample to displace [8-3HlcAMP from a CAMP-binding protein. COS cells expressing the glucagon receptor responded to glucagon with a maximal increase in CAMP levels of approximately 6-fold over cells transfected with control vector pMT3. The effective concentration at 50% stimulation of adenylyl cyclase (EC,,) for cells expressing the glucagon receptor was 0.22 nM as determined from the fit of the curve shown in Fig. 5. Preparation and Expression of Mutant Glucagon Receptor Genes- The glucagon receptor mutant genes D64E, D64G, D64K, and D64N were prepared by site-directed mutagenesis of the synthetic gene for the rat glucagon receptor as described under "Experimental Procedures." The location of Asp64 on the NH,-terminal tail of the receptor is shown schematically in Fig. 1 and in a primary structure alignment in Fig. 6. The genes were expressed in COS cells in parallel with the native gene following transient transfection by a DEAE-dextran procedure. using an assay method that measured the ability of CAMP in each sample to displace [8-3HlcAMP from a CAMP-binding protein. Each symbol represents the mean of triplicate determinations and is plotted as picomoles CAMP/106 cells uersus the log of glucagon concentration. In COS cells transfected with pMT5, the rise in CAMP levels in response to treatment with the maximal glucagon concentration shown was approximately six times that in cells transfected with control vector pMT3. The effective concentration at 50% stimulation of adenylyl cyclase (EC,,) for cells expressing the glucagon receptor was 0.22 MI as determined by the fit of the curve shown. is shown in Fig. 7. Detergent cell lysates were prepared using dodecyl maltoside hypotonic buffer from COS cells expressing native glucagon receptor (pMTB), D64E, D64G, D64K, and D64N. Samples were divided and one fraction was treated with N-glycosidase F to remove N-linked carbohydrates. The pattern of immunoreactive bands for the native glucagon receptor (pMT5) is similar to that seen in Figs. 2 and 3. Digestion with N-glycosidase F caused the broad band migrating with an apparent molecular mass of 55-75 kDa to collapse to a single monomer band migrating with an apparent molecular mass of about 48 kDa. The band at 35 kDa was generally less apparent in this particular preparation. The band patterns of the undigested mutant receptors were similar to each other but differed somewhat from that of the native receptor. The mutant receptors showed four immunoreactive bands that might represent heterogeneity of glycosylation at the four putative N-linked glycosylation sites on the N-terminal tail of the receptor. After N-glycosidase F treatment, each of the mutant receptors showed a single distinct monomer band with the same apparent molecular weight as that of the native receptor. 
The levels of expression of the mutant receptors were also similar to that of the native receptor as judged by the immunoblot analysis. Characterization of the Mutant Glucagon Receptors- Immunoblot analysis of plasma membrane preparations of COS cells expressing native or mutant glucagon receptor genes is shown in Fig. 8. Samples were divided, and one fraction was treated with N-glycosidase F to remove N-linked carbohydrates. The pattern of immunoreactive bands for the native glucagon receptor (pMTS), with or without N-glycosidase F treatment, is similar to that seen in Fig. 3. The band patterns of the mutant receptors were similar to those seen in Fig. 7. 6. Primary structure alignment of related G proteincoupled receptors. The amino-terminal region of the rat glucagon receptor centered around Asp".' is aligned with other related G proteincoupled receptors. Abbreviations are as follows: rGR (rat glucagon receptor) (Jelinek et al., 1993), hGR (human glucagon receptor) (unpublished data; GenBank accession No. L20316), rSR (rat secretin receptor) (Ishihara et al., 19911, rGLPR (rat glucagon-like peptide 1 receptor) (Thorens, 1992), h V P R (human vasoactive intestinal peptide receptor) (Shreedharan et al., 1993), pCTR (porcine calcitonin receptor) (Lin et al., 1991J, oPTHR (opossum parathyroid hormone receptor) (Juppner et al., 1991), mCRFR (mouse growth hormone-releasing factor receptor) (Lin et al., 1993). The numbering is from the deduced amino acid sequence of each reported receptor clone. Amino acid residues Cys", Asp"", Cys", and Trp"!' (numbering based on the rat glucagon receptor) are conserved in all receptors listed. ATrp residue is found in all but one receptor a t position 63, and a Pro residue is found in all but one receptor a t position 70. An Asp to Gly mutation a t position 64 was shown to be responsible for the little (lit) mouse phenotype (Lin et al., 1993). Lanes labeled + were treated with N-glycosidase F, and lunes labeledwere untreated. Each lane contains material from one-tenth of a 100-mm culture plate. Glucagon Receptor Expression However, the plasma membrane preparations for each of the mutants showed three prominent immunoreactive bands rather than the four bands apparent in the detergent cell lysates (Fig. 7). In addition, the band with the highest electrophoretic mobility was enriched relative to the other two bands. Ligand Binding of Mutant Glucagon Receptors-Competition for '"I-glucagon binding to COS cell membranes expressing native or mutant glucagon receptor genes is shown in Fig. 9. Membranes from COS cells transiently transfected with the synthetic glucagon receptor gene (pMT51, mutant D64E, D64N, D64K, D64G, or vector alone (pMT3) were incubated with radiolabeled glucagon and increasing concentrations of unlabeled glucagon. The levels of receptor were essentially identical in each case as shown in Fig. 8. Each of the four replacement mutants at Asp"j resulted in the complete loss of the mutant receptor ability to bind glucagon. The IC,, value for inhibition of "'1-glucagon binding of native receptor in COS cell Membranes from COS cells transiently transfected with the synthetic glucagon receptor gene (pMT5), D64E, D64N, D64K, D64G, or vector alone (pMT3) were incubated with radiolabeled glucagon and increasing concentrations of unlabeled glucagon. Immunoblot analysis of each of the samples assayed is shown in Fig. 8. 
Total radioactivity bound (counts/min) is plotted uersus log of glucagon concentration where each symbol represents the mean of triplicate measurements. Each of the four replacement mutations a t Asp",' resulted in the complete inability of the mutant receptor to bind glucagon. The IC,, value for inhibition of ""I-glucagon binding of native receptor in COS cell membranes was 19.7 nM based on the fit of the curve shown. membranes was 19.7 nM as determined from the fit of the sigmoid curve plotted in Fig. 9. DISCUSSION The aim of this work was to develop a system to facilitate structure-function studies of the glucagon receptor and related peptide hormone receptors. The first phase of the work involved the design and synthesis of a gene for the rat glucagon receptor. The synthetic gene consists of a nucleotide sequence that encodes the proper amino acid sequence but contains a relatively large number of unique restriction endonuclease recognition Glucagon Receptor Expression 29327 sites. A synthetic gene is generally useful to facilitate sitedirected mutagenesis by restriction fragment replacement and to optimize for a particular expression system based on codon usage or base composition considerations. Synthetic genes have been used successfully for the study of other G protein-coupled receptors, particularly those of the opsin family of receptors (Ferretti et al., 1986;Oprian et al., 1991). A tentative schematic representation of the rat glucagon receptor primary and secondary structure is shown in Fig. 1. Seven putative transmembrane helices (helix A through helix G) based on previous models of G protein-coupled receptors are shown (Dratz and Hargrave, 1983;Sakmar et al., 1989;Baldwin, 1993). Four sites of potential N-linked glycosylation on the amino terminus are labeled with asterisks (*I. The COOH-terminal 18 amino acids of the rat glucagon receptor were used to design a peptide, ST-18, for anti-peptide antibody production. A highly specific, high affinity antibody was obtained, which was used for immunoblot analysis of the expression of the glucagon receptor gene and site-directed mutant genes. The immunoblot in Fig. 2 shows that the antibody reacted with the products of expression of the vector containing the synthetic rat glucagon receptor. A band with an apparent molecular mass of 25 kDa was the only background visible in cells transfected with vector alone. The antibody showed affinity for the monomer and dimer forms of the receptor (Fig. 2). As expected, the extent of receptor glycosylation did not affect the affinity of the antibody since the antibody epitope was the COOH-terminal tail of the receptor (Fig. 3). The affinity of the antibody was not precisely quantitated, but we estimate the affinity to be similar to that of the antirhodopsin monoclonal antibody ID4 for rhodopsin based on immunoblot analyses of a mutant glucagon receptor containing the ID4 epitope sequence added as a tag to the COOH-terminal tail of the glucagon receptor. A more complete description of the properties of this antibody and other antiglucagon receptor antibodies will be reported separately. High level expression of the synthetic rat glucagon receptor gene in a vector (pMT5) where transcription was under the control of the human adenovirus major-late promoter was obtained in transiently transfected COS cells. Immunoblots of COS cell detergent lysates, hypotonic lysates, and plasma membrane preparations are shown in Fig. 2. 
The major receptor band migrated as a broad band with an apparent molecular mass of 55-75 kDa. This range is consistent with the predicted molecular weight of the receptor (Jelinek et al., 1993) and with the electrophoretic mobility of native receptor labeled with a photoactivated glucagon analog (Iyengar and Herberg, 1984; Iwanij and Hur, 1985). A second major immunoreactive band probably corresponds to a receptor dimer. The presence of a glucagon receptor dimer was also reported (Iwanij and Vincent, 1990). Some G protein-coupled receptors, such as rhodopsin, have been reported to dimerize readily even under denaturing and reducing conditions (Oprian et al., 1987). The binding affinity and specificity of the expressed glucagon receptor was evaluated as shown in Fig. 4. COS cells transfected with the glucagon receptor gene bound labeled glucagon, which was competed with unlabeled glucagon according to an ideal four-parameter logistic function with an apparent dissociation constant of 10.8 nM. This value is similar to those reported previously under a variety of conditions for native receptor in hepatocytes (Sonne et al., 1978; Bharucha and Tager, 1990) and for an expressed cDNA clone (Jelinek et al., 1993). The concentration of unlabeled GLP-1 required to displace 50% of receptor-bound 125I-glucagon was determined to be 9.9 μM, which is also consistent with previous reports of glucagon receptors in hepatocytes (Hossein and Gurd, 1984). As expected, secretin and VIP did not compete with 125I-glucagon for receptor binding sites (Rodbell et al., 1971; Bataille et al., 1974). A partial dose-response curve for glucagon-dependent adenylyl cyclase stimulation in transfected COS cells is shown in Fig. 5. As expected, treatment of cells expressing the glucagon receptor with glucagon resulted in an increase in cAMP levels compared with control cells. The EC50 value of this response was 0.22 nM. This value is similar to those reported previously and is less than the apparent glucagon dissociation constant (Jelinek et al., 1993). This result indicates that the heterologously expressed glucagon receptor gene can couple to endogenous Gs and adenylyl cyclase to produce the expected cAMP response to glucagon. A calcium flux in transfected COS cells was also demonstrated in preliminary experiments (not shown), and the complete characterization of this response is the subject of ongoing work. The molecular basis of the little (lit) mouse phenotype was recently demonstrated (Lin et al., 1993). An Asp to Gly mutation at position 60 in the hypothalamic growth hormone releasing factor (GRF) receptor was shown to correlate with a reduced hormone-dependent cAMP response in transfected cells (Lin et al., 1993). It was postulated that the mutation affected the ligand binding properties of the mutant receptor. The GRF receptor is structurally related to the glucagon receptor, as shown in a primary structure alignment of related G protein-coupled receptors (Lin et al., 1993) (Fig. 6). The amino-terminal region of the rat glucagon receptor centered around Asp64 is aligned with other related G protein-coupled receptors. Asp60 in the GRF receptor corresponds to Asp64 in the glucagon receptor. In order to test the role of Asp64 in glucagon binding, four amino acid replacement mutants (D64E, D64N, D64K, and D64G) in the synthetic rat glucagon receptor gene were prepared and characterized. The mutant receptor genes were expressed at levels similar to that of the native receptor (Fig. 7).
The mutants were also found in the plasma membrane at levels similar to that of the native receptor (Fig. 8). The glycosylation pattern of each of the mutant receptors was somewhat altered as described above. This could be due to the fact that the Asp64 residue is located between the four putative glycosylation sites and particularly close to one of them (see Fig. 1). As shown in Fig. 9, each of the glucagon receptor mutants with a single amino acid replacement at Asp64 failed to bind glucagon. This ligand binding defect could indicate a direct interaction between Asp64 and glucagon in the native receptor. However, the same result might be expected if a mutation caused a more general structural perturbation of the extracellular domain of the receptor. In any case, it is likely that the structure of the extracellular domain is important for glucagon binding. These results are also consistent with the explanation proposed for the molecular defect in the GRF receptor (Lin et al., 1993) and with mutagenesis studies of the ligand-binding domains of other related receptors (Braun et al., 1991). The system described in this report should allow more detailed characterization of the pharmacology of the glucagon receptor using additional receptor mutants in conjunction with glucagon peptide analogs. In addition, it is expected that permanent cell lines expressing the synthetic gene for the rat glucagon receptor will facilitate the biochemical characterization of the dual G protein signaling systems coupled to the receptor.
8,559.6
1994-11-18T00:00:00.000
[ "Biology" ]
EGFR Exon-Level Biomarkers of the Response to Bevacizumab/Erlotinib in Non-Small Cell Lung Cancer Activating epidermal growth factor receptor (EGFR) mutations are recognized biomarkers for patients with metastatic non-small cell lung cancer (NSCLC) treated with EGFR tyrosine kinase inhibitors (TKIs). EGFR TKIs can also have activity against NSCLC without EGFR mutations, requiring the identification of additional relevant biomarkers. Previous studies on tumor EGFR protein levels and EGFR gene copy number revealed inconsistent results. The aim of the study was to identify novel biomarkers of the response to TKIs in NSCLC by investigating whole genome expression at the exon-level. We used exon arrays and clinical samples from a previous trial (SAKK19/05) to investigate the expression variations at the exon-level of 3 genes potentially playing a key role in modulating treatment response: EGFR, V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS) and vascular endothelial growth factor (VEGFA). We identified the expression of EGFR exon 18 as a new predictive marker for patients with untreated metastatic NSCLC treated with bevacizumab and erlotinib in the first line setting. The overexpression of EGFR exon 18 in tumor was significantly associated with tumor shrinkage, independently of EGFR mutation status. A similar significant association could be found in blood samples. In conclusion, exonic EGFR expression particularly in exon 18 was found to be a relevant predictive biomarker for response to bevacizumab and erlotinib. Based on these results, we propose a new model of EGFR testing in tumor and blood. Introduction The prognosis of patients with stage IV non-small cell lung cancer (NSCLC) continues to be poor. Despite standard cytotoxic chemotherapy, almost 50% will not survive more than 12-14 months [1,2]. In the past few years, improvements in survival rates have primarily been achieved by the discovery of predictive molecular markers which identified subgroups of patients deriving a substantial benefit from targeted treatment. Several randomized phase III trials have recently shown a significant benefit of epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) in chemotherapy naïve patients harboring an activating EGFR mutation [3][4][5][6]. EGFR mutations are found in about 10-15% of Caucasian patients [7]. In EGFR wild-type patients the first-line treatment with an EGFR-TKI might even harm compared to conventional chemotherapy [8]. However, in unselected chemotherapy-naïve patients the role of EGFR-TKIs is less clear and previous studies have demonstrated inferior outcomes with TKIs with or without bevacizumab compared to chemotherapy [9][10][11]. These results indicate, that there is a subgroup of EGFR wild-type patients who might benefit from treatment with a TKI or a TKI plus an anti-angiogenic agent. The same holds true for unselected and pretreated patients where the role of TKIs has been addressed in numerous trials and the efficacy and survival rates have shown to be comparable to conventional chemotherapy [12][13][14]. Furthermore, recent biomarker analyses of three large trials testing maintenance therapy with erlotinib clearly demonstrated, that a subset of EGFR wildtype patients also derive a significant benefit from EGFR-TKI therapy [15][16][17]. Beside EGFR other druggable oncogenic mutations in advanced NSCLC have been described [18,19]. 
Unfortunately, most patients with NSCLC do not harbor a corresponding molecular target; hence chemotherapy continues to be their first treatment of choice. Therefore, the identification of further subgroups of patients who may derive benefit from targeted treatment by exploring additional molecular markers is crucial. Treatment with bevacizumab and erlotinib (BE) has potential benefits over chemotherapy, particularly in regard to its more favorable toxicity profile. There is evidence that the addition of the vascular endothelial growth factor (VEGF) targeting monoclonal antibody bevacizumab to the EGFR-TKI erlotinib exhibits increased efficacy compared with erlotinib alone in unselected patients who were previously treated with chemotherapy [20]. This observation likely results from enhanced erlotinib activity, given the lack of efficacy of bevacizumab monotherapy in lung cancer. The Swiss Group for Clinical Cancer Research (SAKK) recently reported a median time to progression (TTP) of 4.1 months in patients with untreated advanced non-squamous NSCLC treated with BE [21]. This result appears to be inferior to what would be expected with modern chemotherapy combinations in similar patient populations [2,22]. In the current substudy, we aimed to identify a potential subgroup of patients participating in the SAKK 19/05 trial, particularly within the EGFR wild-type group, who may benefit from treatment with BE. The main goal of this study was to assess the correlation of exon-level expression variations of 3 specific genes [EGFR, V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS) and vascular endothelial growth factor A (VEGFA)] with the response to first-line BE therapy in patients who participated in the SAKK 19/05 trial. From these patients, tumor tissue for exon array analysis was obtained from 42 patients and blood samples from 75 patients (Table S1 in the Supporting Information). A detailed description of patient characteristics is provided in Table 1 (tumor tissue samples) and in Table 2 (blood samples). Tissue samples corresponded to our primary dataset used for biomarker identification. Blood samples were used for confirmatory purposes (validation set). Target gene expression analysis at the exon level. Epidermal growth factor receptor (EGFR). EGFR gene expression was measured at 451 loci, of which 51 were situated within exons, and 400 were situated outside of exons, i.e. intronic, intergenic or unreliable (Figure 1, upper panel). Thus, a total of 51 exon probeset expression intensities were measured within the EGFR gene. A summary measure of all these exon-level probesets was provided by PCA (scores on the first PC axis). The association between this score and TS12, TTP under BE, OS, and TTP under chemotherapy was evaluated. We found a significant correlation between EGFR PCA scores and TS12 after BE treatment (Spearman's r = 0.502, p = 0.006) (Figure 2A, left panel). A detailed analysis probeset-by-probeset revealed that 86% of the exonic probesets showed a significant correlation with tumor shrinkage without correction for multiple testing (p < 0.05) (Figure 2B, left panel). Two probesets showed a particularly strong correlation with TS12 (exon probesets ID 3002770 and 3002769), which remained significant after Bonferroni correction for multiple testing. These 2 probesets are located on exon 18 (chromosome 7, positions 55,238,440 and 55,238,092, respectively). No other significant associations were found. Six patients had TTP of 15 months or more. 
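As an illustration of the analysis just described, the sketch below summarizes exon-level probeset intensities by their first principal component and correlates both the composite score and the individual probesets with tumor shrinkage, applying a Bonferroni adjustment. The array dimensions, variable names (exon_intensity, ts12) and random data are assumptions for illustration only; the study itself performed these analyses in R, not with this code.

```python
# Sketch: PCA summary score plus probeset-by-probeset Spearman correlation
# with TS12 and Bonferroni correction. Synthetic placeholder data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_patients, n_probesets = 42, 51                              # e.g. EGFR: 51 exonic probesets
exon_intensity = rng.normal(size=(n_patients, n_probesets))   # placeholder RMA intensities
# Placeholder tumor shrinkage, deliberately correlated with one probeset
ts12 = 12.0 * exon_intensity[:, 17] + rng.normal(scale=8.0, size=n_patients)

# Composite score: projection of all probesets on the first PC axis
pca_score = PCA(n_components=1).fit_transform(exon_intensity).ravel()
rho, p = spearmanr(pca_score, ts12)
print(f"PCA score vs TS12: rho={rho:.3f}, p={p:.4f}")

# Probeset-by-probeset analysis with Bonferroni-adjusted significance
for j in range(n_probesets):
    rho_j, p_j = spearmanr(exon_intensity[:, j], ts12)
    if p_j * n_probesets < 0.05:
        print(f"probeset {j}: rho={rho_j:.2f}, Bonferroni-adjusted p={p_j * n_probesets:.3g}")
```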
Three of those had EGFR del19, and 3 were EGFR and KRAS wild-type. Figure 3 depicts the significant association of exon 18-EGFR expression intensity and TS12. The left panel shows a strong association between the expression intensity of exon 18-EGFR (probeset 3002770) and TS12 (Spearman's r = 0.69, p < 0.0001). The strong correlation between EGFR exon 18 expression and TS12 remained highly significant (Spearman's r = 0.61, p = 0.0015) after restricting the analysis to EGFR wild-type patients (see Figure S1 in the supporting information). This subanalysis indicates that the association between EGFR exon 18 expression and TS12 was independent of the EGFR mutation status. The ROC analysis (middle panel) shows the relationship between sensitivity and specificity depending on different cut-off levels of exon 18-EGFR (probeset 3002770) expression to classify patients into "responders" vs. "non-responders". For the purpose of this ROC analysis, the categorization "responders" vs. "non-responders" was derived from TS12. We proposed 3 alternative definitions of "responders" by setting the TS12 cut-off as greater than or equal to 0, 20, or 30%, depending on whether one included all or a fraction of stable disease patients in the "responders" category. Using the median expression of EGFR probeset 3002770 as test threshold provides a classification accuracy of 75% (sensitivity = 100%, specificity = 67%). As shown in the ROC curve, a higher classification accuracy can be expected by further fine-tuning this threshold (area under curve [AUC] = 0.93). The 2 exon 18-EGFR probesets showing the strongest correlation with TS12 also showed a significant association for the same endpoint when measured using blood (p < 0.05). The stability of our finding was assessed using bootstrapping and cross-validation strategies. The procedure confirmed the strong classification accuracy of exon 18-EGFR with a median ROC-AUC of 0.94 (95% CI: 0.70-1.00) and the specific association between the exon 18 region and tumor shrinkage at week 12 (see Figure S2 and Text S1 for the detailed procedure). Kirsten rat sarcoma viral oncogene homolog (KRAS) and vascular endothelial growth factor A (VEGFA). In total, 13 and 25 exon probeset expression intensities were measured within KRAS and VEGFA, respectively (Figure 1, central and right panels). The PCA scores obtained for both sets of probesets (KRAS and VEGFA) did not show a significant association with any of the clinical endpoints. A detailed analysis probeset-by-probeset did not reveal any significant association with either TS12 (Figure 2A, B, central and right panels) or the other investigated endpoints. Discussion To our knowledge, this is the first study exploring the correlation between gene expression assessed at a subgenic exonic level using Affymetrix Human Exon 1.0 ST arrays and response to treatment with an EGFR-TKI in combination with an anti-angiogenic agent. We investigated the exon intensity variations within 3 key genes (EGFR, KRAS and VEGFA) potentially associated with response to treatment with BE. We were able to demonstrate a strong association between the majority, but not all, of the 51 EGFR exon probesets and TS12 of first-line BE therapy in patients with untreated advanced non-squamous NSCLC. Exon 18-EGFR levels showed the best association with response to BE. Based on our previous experiments we assume that the signal we [23]. 
Furthermore, there was a quantitative relationship: higher EGFR mRNA levels were correlated with more pronounced tumor shrinkage, independently of EGFR mutational status. EGFR exon-level expression analysis might become a useful biomarker for daily clinical practice as it provides several advantages in comparison to conventional mutational analysis by gene sequencing. Typically, EGFR gene expression is measured using quantitative RT-PCR with primers binding to a single gene region, often near the 3'-end of the gene. However, as shown in our study, gene expression varied significantly over the span of the EGFR gene. Reasons for such expression variations include alternative splicing. The EGFR variant type III (EGFRvIII) has an in-frame deletion of exons 2-7 which has been found to be generated by gene rearrangement or aberrant mRNA splicing [24,25]. This alternative splicing form has been found in NSCLC [26,27]. In preclinical experiments, cells expressing EGFRvIII were resistant against reversible EGFR-TKIs, but remained sensitive to irreversible EGFR inhibitors [28]. We found the best correlation between TS12 and exon 18. At the extremities of the EGFR gene several exonic probesets did not show a significant association with outcome. Dziadziuszko and colleagues reported that high EGFR mRNA expression analyzed by quantitative RT-PCR was associated with increased response and prolonged PFS in patients treated with gefitinib [29]. In a Chinese study of 79 unselected patients treated with erlotinib, no significant correlation between EGFR mRNA expression, EGFR mutations, KRAS mutations and clinical endpoints was found [30]. Several trials demonstrated that clinical benefit with EGFR-TKIs was not restricted to patients with activating EGFR mutations [13,16,31]. On the other hand, the IPASS trial demonstrated that patients with EGFR wild-type tumors treated with gefitinib had a significantly shorter PFS compared with patients in the chemotherapy arm (hazard ratio (HR): 2.85; 95% CI: 2.05-3.98; p < 0.001) [8]. In the present study, we were able to identify 3 patients with EGFR wild-type tumors and high exon 18-EGFR expression levels (2 measured in biopsies and blood, and 1 measured in blood only) who had significant TS12 after treatment with BE. We believe that these results are of interest, because the incidence of activating EGFR mutations in Caucasian patients is 10-15% and our test may identify additional patients who could fare better with first-line EGFR-TKIs compared with chemotherapy. This hypothesis needs prospective validation. Interestingly, patients with rarer EGFR mutations (e.g. del L747-S751 and del R748-S752), for which the response to EGFR-TKIs has yet to be explored, were also found to have higher exon-level EGFR expression levels, which was correlated with TS12. Two probesets located on exon 18 showed the strongest association with tumor shrinkage. In an Italian single-institution study, rare EGFR mutations (exon 18 and 20 mutations and uncommon mutations in exons 19 and 21 and/or complex mutations) were found in 2.6% of patients. They reported a PR to erlotinib in a patient with an E709A+G719C double mutation and a response to erlotinib in a patient with a G719S mutation [32]. Other groups reported sensitivity to EGFR-TKI for the E709A+G719C double mutation and for the G719S mutation in exon 18 [33][34][35]. Interestingly, we observed tumor shrinkage in one patient with a KRAS mutation. This patient had a high EGFR exon expression. 
Patients with KRAS mutations represent approximately 25% of NSCLC patients and have been described as highly resistant to EGFR-TKI treatment, with RR close to 0% and worse outcome for mutated patients treated with EGFR-TKIs in some trials [36,37]. The biomarker analysis of the SATURN trial showed no detrimental effect on PFS with erlotinib in patients with KRAS mutant tumors [17]. Thus, high exon EGFR expression levels may be able to identify patients with KRAS mutations who derive benefit from first-line BE. Other potential molecular markers beyond EGFR mutations have been investigated for their predictive role for treatment with TKIs or TKIs in combination with VEGFR inhibitors. EGFR protein expression detected by immunohistochemistry (IHC) is present in 60-90% of NSCLC patients [13,38] and is therefore unlikely to be of use for clinical selection for TKI therapy. Although subgroup analyses of placebo-controlled phase III studies in pre-treated patients showed some predictive value of EGFR protein expression [13,39], these results were not confirmed either in the first-line or maintenance setting [17,40]. Similarly, high EGFR copy number, which occurs in 30-50% of patients with NSCLC, and gene amplification, which occurs in about 10% [41], have recently been shown to be "overruled" by EGFR mutations with respect to their predictive value for the response to EGFR-TKIs [40]. Determination of EGFR mRNA expression by quantitative PCR was correlated with EGFR FISH and IHC and was shown to be a predictive biomarker for gefitinib [29]. Neither EGFR protein expression nor EGFR FISH testing is currently used in clinical practice, and better molecular markers are therefore urgently needed. The EGFR gene gives rise to multiple RNA transcripts through alternative splicing and the use of alternate polyadenylation signals [42]. The EGFR gene spans nearly 200 kb and the full-length 170 kDa EGFR is encoded by 28 exons. Several alternative splicing variants have been described [43]. The most commonly used method to detect EGFR mutations is direct sequencing of the PCR-amplified exon sequences. The copy number of the mutant allele, imbalanced PCR amplification and the relative amount of contaminating wild-type allele from non-tumor cells can influence the sensitivity of mutant detection by direct sequencing [44]. Owing to concern regarding the sensitivity of the direct-sequencing method, a variety of other methods have been investigated to increase the sensitivity of the mutation assay. Here we investigated for the first time exon expression analysis. The array used enables gene expression analysis as well as detection of different isoforms of a gene. In this study we retrospectively identified a correlation between exon intensity levels within EGFR and patient outcome. The mechanism through which EGFR exon 18 expression determines an increased sensitivity to bevacizumab-erlotinib is unknown, although different hypotheses can be proposed. Exon array is still a very recent technology with high potential. It breaks with the common idea that gene expression is stable over the span of a whole gene. Therefore, it is not surprising that we obtained a stronger statistical correlation with EGFR expression near the region coding for the functional transmembrane part of EGFR. If the predictive value of this assay could be confirmed in a prospective trial, exon-level gene expression might identify patients deriving benefit from EGFR- and VEGFR-targeted therapies beyond the patients selected by conventional gene sequencing. 
There are certain limitations within the current study. It is a single arm design and has a relatively low number of patients from which tumor biopsies were available for analysis. In the first half of the SAKK 19/05 trial a treatment-naive biopsy was not required for study inclusion. In this period practically no biopsies were collected. After an amendment (October 2006) the biopsy became mandatory for study inclusion as a treatment-naive biopsy can be taken in almost every patient including advanced-stage NSCLC patients [23]. Exon array analyses were done with mixed cell tumor biopsies without any tumor-cell enriching technique like laser-capture microdissection. This is likely to lead to a certain dilution of the true tumor signal. Tumor-cell enriching techniques might further optimize the efficiency of biomarkers derived from exon array analyses. The validity of EGFR exon expression analysis as a biomarker of response to BE will need to be confirmed both using RT-PCR analysis targeting EGFR exon 18. The full accomplishment of the validation of the novel biomarker eventually requires further investigation using an independent prospective randomized trial. In conclusion, with the aid of a novel gene expression array technology with exonic coverage, we were able to identify exon 18-EGFR expression as a potential predictive biomarker for erlotinib and bevacizumab in patients with advanced, untreated NSCLC. SAKK 19/05 The SAKK 19/05 trial (ClinicalTrials.gov: NCT00354549) enrolled 103 patients with advanced non-squamous NSCLC, 101 patients were evaluable for further analysis [21]. Eligibility criteria included age w18 years, adequate bone marrow function, normal kidney and liver function and measurable disease. Patients with immediate need of chemotherapy, with large centrally located tumors, pre-existing tumor cavitations and brain metastases were excluded. Extra pre-treatment bronchoscopic biopsies for translational studies were taken in 49 patients, from which 42 were of sufficient quality for subsequent exon array analysis. For the present substudy, pretreatment blood samples were available from 95 patients, and samples from 75 patients had sufficient quality for exon arrays. Overall, 76 patients with either tumor or blood samples or both, were included in the current substudy. Written informed consent for translational research was obtained from all patients. The clinical trial as well as the current substudy were approved by the IRB of St. Gallen (EKSG 06/012). Pathology analysis The formalin-fixed and paraffin embedded specimens were reviewed and classified according to World Health Organisation (WHO) criteria. Mutational analyses of EGFR (exon [18][19][20][21] and KRAS (exon 12) were carried out from unstained tissue sections (3 mm) or Papanicolaou-stained cytological specimens using direct sequencing as previously described [45,46]. Tumor cell enrichment was achieved either by macrodissection or laser-capture microdissection and DNA sequence analysis. Exon-level gene expression analysis Total RNA from whole bronchoscopic biopsy samples were extracted and provided sufficient quality for microarray hybridization in 42 of 49 samples. Circulating RNA from peripheral blood samples was extracted and provided sufficient quality for microarray hybridization in all 75 samples. mRNA was hybridized on Affymetrix Human Exon 1.0ST arrays (Affymetrix, Santa-Clara, CA, USA) following standard recommendations from the manufacturer (detailed procedure available in Text S1). 
Raw data have been deposited in NCBIs Gene Expression Omnibus (GEO), and are accessible through GEO Series accession number GSE37138. The exon and gene level probesets were preprocessed, quality checked and normalized using the RMA procedure [47]. The tissue and blood datasets were analyzed independently without pooling the data. The tissue dataset was used for biomarker discovery whereas the blood dataset was used for internal validation. Statistical considerations The initial sample size calculation was based on the primary endpoint of the clinical study (DSR at week 12 (DSR12) under BE treatment). The 101 evaluable patients accrued guaranteed a high precision in the estimation of DSR12. In a targeted gene approach, 3 genes were specifically investigated: EGFR (ENSG00000146648), KRAS (ENSG00000133703) and VEGFA (ENSG00000112715). EGFR included 51, KRAS 13, and VEGFA 25 exonic probesets (Figure 1). The endpoints considered in this biomarker study included tumor shrinkage after 12 weeks (TS12) of BE treatment, TTP under BE and OS. OS was measured from registration until death of any cause. The result of previous tumor EGFR sequencing was used for substudy analysis. The univariate association between the exon-level intensities and time-to-event endpoints was assessed by Cox proportional hazards regression. The correlation between exon-level intensities and tumor shrinkage was measured using the Spearman's correlation coefficient r and tested for significant difference from 0. Bonferroni corrections were used to account for multiple testing. Principal component analysis (PCA) was used to summarize the information included in several exon-level probesets into composite scores (scores on the first principal components). Receiver Operating Characteristic (ROC) curves were used to estimate the sensitivity, specificity and accuracy of exon expression based predictors. In order to assess the stability of our findings, a crossvalidation strategy was used. The accuracy of the classification model was evaluated using bootstrapping. All analyses were done using the R statistical software (version 2.13.0; packages xmapcore, ade4, ROCR, Daim and survival) [48]. Figure S1 Association between EGFR exon 18 expression and tumor shrinkage at week 12 -sub-analysis. Only EGFR wild type patients were included in this analysis. The scatter plot depicts the correlation between the expression of EGFR exon 18 (probeset 3002770) and the tumor shrinkage at week 12. The vertical line shows the median expression intensity of EGFR exon 18. (TIF) Figure S2 Stability of the prediction ability of EGFR biomarkers using cross-validation strategies. The left panel depicts the ability of the EGFR biomarker most significantly associated with TS12 (#/.20%) using the original dataset (probeset 3002770) to classify BE responders. The best cut-off value, together with the associated false positive rate (FPR), true positive rate (TPR) and area under ROC curve (AUC) are given. The right panel depicts the averaged ROC curve obtained after .632 bootstrap cross-validation procedure. The boxplots show the distribution of the FPR throughout the re-sampled datasets. (TIF) Text S1 Additional material and methods information. Supporting Information The first paragraph provides an extended description of the exonlevel gene expression analysis. The second paragraph gives details about the assessment of the stability of the obtained results. (PDF)
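To illustrate the ROC and resampling steps described in the statistical considerations above, here is a hedged sketch that classifies "responders" from a single expression value using the median as threshold and then bootstraps the ROC-AUC to obtain a median and 95% CI, analogous to the reported stability analysis. All data are synthetic placeholders, the responder definition (TS12 ≥ 20%) is only one of the alternatives mentioned in the text, and the original analysis was performed in R with the packages listed above, not with this code.

```python
# Sketch: median-threshold classification and bootstrap ROC-AUC for a single
# expression-based predictor. Synthetic placeholder data; illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
expr = rng.normal(loc=8.0, scale=1.0, size=42)                  # placeholder exon 18 intensities
ts12 = 15.0 * (expr - 8.0) + rng.normal(scale=10.0, size=42)    # placeholder tumor shrinkage (%)
responder = (ts12 >= 20.0).astype(int)                          # one possible TS12-based definition

# Sensitivity/specificity using the median expression as the test threshold
cutoff = np.median(expr)
pred = (expr >= cutoff).astype(int)
sensitivity = pred[responder == 1].mean()
specificity = 1.0 - pred[responder == 0].mean()
print(f"median cutoff: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# Bootstrap distribution of the ROC-AUC (median and 95% CI)
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(expr), len(expr))   # resample patients with replacement
    if len(np.unique(responder[idx])) < 2:        # both classes are needed to compute an AUC
        continue
    aucs.append(roc_auc_score(responder[idx], expr[idx]))
aucs = np.array(aucs)
print(f"AUC median {np.median(aucs):.2f} "
      f"(95% CI {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f})")
```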
5,021.8
2013-09-10T00:00:00.000
[ "Biology", "Medicine" ]
Conjugate heat transfer of laminar mixed convection of a nanofluid through an inclined tube with circumferentially non-uniform heating Laminar mixed convection of a nanofluid consisting of water and Al2O3 in an inclined tube with heating at the top half surface of a copper tube has been studied numerically. The bottom half of the tube wall is assumed to be adiabatic (presenting a tube of a solar collector). Heat conduction mechanism through the tube wall is considered. Three-dimensional governing equations with using two-phase mixture model have been solved to investigate hydrodynamic and thermal behaviours of the nanofluid over wide range of nanoparticle volume fractions. For a given nanoparticle mean diameter the effects of nanoparticle volume fractions on the hydrodynamics and thermal parameters are presented and discussed at different Richardson numbers and different tube inclinations. Significant augmentation on the heat transfer coefficient as well as on the wall shear stress is seen. Introduction Many different industries such as electronic, automotive and aerospace have been facing heat transfer limitation for improving performance of their thermal systems. Heat transfer enhancement has been considered as one of the key parameter for developing more efficient and effective thermal devices. Thus this issue has been studied extensively. Different active and passive methods have been considered for the heat transfer augmentation. Improving the thermo-physical properties of the working fluids such as water, oil and ethylene glycol mixture is one of the possible methods. Therefore, there has been a strong motivation to develop a new heat transfer fluids with substantially higher thermal conductivity. Choi [1] presented a new generation of solidliquid mixtures that is called nanofluid. It demonstrates significant improvement over the thermal characteristics of the base fluids. Various nanofluids with different nanoparticle and base fluid materials have been prepared and their thermo-fluid characteristics have been investigated by many researchers. Among them, experimental studies of [2][3][4][5] on confined geometries could be cited. In general they found that the Nusselt number increases with the nanoparticle concentrations and significant heat transfer enhancement has been achieved. Many works have been dedicated to determine and model the effective physical properties of different nanofluid. For instance, investigations of Refs. [6][7][8][9][10][11][12][13] on the effective thermal conductivity or the works that have been done by [14,15] on the nanofluid effective viscosity could be mentioned. Convective heat transfer with nanofluids can be modelled using the two-phase or single-phase approach. The first provides the possibility of understanding the function of both the fluid phase and the solid particles in the heat transfer mechanisms. The second assumes that the fluid phase and particle are in thermal and hydrodynamic equilibrium. This approach is simpler and requires less computational time. Thus it has been used in several theoretical studies of convective heat transfer with nanofluids [16][17][18]. However, the concerns in single-phase modelling consist in selecting the proper effective properties for nanofluids and taking into account the chaotic movement of ultra fine particle. 
To partially overcome this difficulty, some researches [19][20][21] used the dispersion model which takes into account the improvement of heat transfer due to the random movement of particles in the main flow. In addition several factors such as gravity, friction between the fluid and solid particles and Brownian forces, the phenomena of Brownian diffusion, sedimentation and dispersion may coexist in the main flow of a nanofluid. This means that the slip velocity between the fluid and particle may not be zero [22]. Therefore, it seems that the two-phase approach could better model nanofluid behaviours. Behzadmehr et al. [23] studied the turbulent forced convection of a nanofluid in a circular tube by using a two-phase approach. They implemented the two-phase mixture model for the first time to study nanofluid. Their comparison with the experimental results showed that the two-phase mixture model is more precise than the single-phase model. Mirmasoumi and Behzadmehr [24] studied the laminar mixed convection of a nanofluid in a horizontal tube using two-phase mixture model. They showed that the twophase mixture model could better simulate the experimental results than the single-phase model. Recently, Lotfi et al. [25] studied two-phase Eulerian model that has been implemented to investigate such a flow field. Their comparison of calculated results with experimental values shows that the mixture model is more precise than the two-phase Eulerian model. This work intends to investigate conjugate mixed convection-conduction of a nanofluid though an inclined tube. The tube is subjected to a uniform heat flux on its top surface; it is insulated on its bottom surface. Therefore, the effects of tube inclinations and particle volume fractions on the hydrodynamic and thermal parameters have been presented over a wide range of Re-Gr combinations. Mathematical formulation Mixed convection of a nanofluid consists of water and Al 2 O 3 in a long copper tube with uniform heat flux at the top surface of tube wall has been considered. Figure 1 shows the geometry of the considered problem. The physical properties of the fluid are assumed constant except for the density in the body force, which varies linearly with the temperature (Boussinesq's hypothesis). Dissipation and pressure work are neglected. Thus, with these assumptions the conservation equations for steady state mixture model are as follows: Continuity equation: Momentum equation: Energy equation for fluid: The heat conduction throughout the solid wall: Volume fraction: Where are the mean axial velocity and shear stress, respectively, and j is the volume fraction of phase k. In Equation 2, V dr,k is the drift velocity for the secondary phase k, i.e. the nanoparticles in the present study: The slip velocity (relative velocity) is defined as the velocity of a secondary phase (p) relative to the velocity of the primary phase (f): The drift velocity is related to the relative velocity: The relative velocity is determined from Equation 10 proposed by Manninen et al. [26] while Equation 11 by Schiller and Naumann [27] is used to calculate the drag coefficient: The acceleration a in Equation 10 is: where: The physical properties in the above equations are: Effective density: Chon et al. 
[12] correlation which considers the Brownian motion and nanoparticle mean diameter has been used for calculating the effective thermal conductivity: (15) where Pr and Re in Equation 15 are defined as: L f = 0.17 nm is the mean free path of water, B c is the Boltzmann constant (1.3807 × 10 -23 J/K) and μ is calculated by the following equation: Thermal expansion coefficient Khanafer et al. [16]: An accurate equation is used for calculating the effective heat capacity [28]. Effective viscosity is calculated by the following equation proposed by Masoumi et al. [15] which considers the effects of volume fraction, density and average diameters of nanoparticle and physical properties of the base fluid: , δ = 3 π 6φ dp, VB = 1 dp Boundary condition This set of nonlinear elliptical governing equations has been solved subject to the following boundary conditions: At the tube inlet (Z = 0): At the interface between the tube wall (copper) and the fluid (r = r i ), the continuity condition for temperature and heat flux are applied so that: At the tube outlet: atmospheric static pressure is assumed. Numerical method and validation This set of coupled non-linear differential equations was discretized with the finite volume technique. For the convective and diffusive terms a second order upwind method was used while the SIMPLEC procedure was introduced for the velocity-pressure coupling. The discretization grid is uniform in the circumferential direction and non-uniform in the other two directions. It is finer near the tube entrance and near the wall where the velocity and temperature gradients are important. Several different grid distributions have been tested to ensure that the calculated results are grid independent. The selected grid for the present calculations consists of 160, 32 and 36 nodes, respectively, in the axial, radial and circumferential directions. As shown in Figure 2 increasing the grid numbers does not significantly change the velocity and temperature of the nanofluid. The grid test on the nanoparticle volume fraction is shown in Figure 2d. It is seen that the nanoparticle concentration does not change with increasing the grid numbers in the radial direction. Other axial and radial profiles have also been verified to be sure the results are grid independent. In order to demonstrate the validity and also precision of the model and the numerical procedure, comparisons with the previously published experimental and numerical results have been done. Figure 3a,b shows the comparison of the calculated Nusselt number with the experimental results of Barozzi et al. [29] and Peutkhov et al. [30] in a horizontal tube, respectively. As shown good agreement between the results are seen. A comparison has also been performed with the numerical results obtained by Ouzzane and Galanis [31]. As shown in Figure 4, axial evolution of the dimensionless temperatures and velocity is in good concordance with the present results. It should be mentioned that our numerical results were obtained using the two-phase mixture model and considering a very small volume fraction for the solid particles. Therefore, the numerical procedure is reliable and can well predict developing mixed convection flow in a tube. Results and discussions Calculations have been performed over wide range of Re-Gr combinations and nanoparticle concentrations. The Grashof number (or Richardson number) has been limited in order to respect the validity of the Boussinesq's approximation for the fluid density variation. 
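For orientation, the snippet below sketches the simple volume-fraction mixture rules for the effective density and heat capacity of the water/Al2O3 nanofluid discussed above. The property values are nominal room-temperature numbers chosen purely for illustration; the Brownian-motion-based correlations used in the paper for effective conductivity (Chon et al.) and viscosity (Masoumi et al.) involve additional terms that are not reproduced here.

```python
# Minimal sketch of volume-fraction mixture rules for a water/Al2O3 nanofluid.
# Property values are nominal room-temperature numbers used only for illustration.
def effective_density(phi, rho_f, rho_p):
    """Effective density: (1 - phi)*rho_f + phi*rho_p."""
    return (1.0 - phi) * rho_f + phi * rho_p

def effective_heat_capacity(phi, rho_f, cp_f, rho_p, cp_p):
    """Effective specific heat from volumetric heat capacities of both phases."""
    rho_eff = effective_density(phi, rho_f, rho_p)
    return ((1.0 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho_eff

phi = 0.04                       # nanoparticle volume fraction
rho_f, cp_f = 998.2, 4182.0      # water: density (kg/m^3), cp (J/kg K), nominal values
rho_p, cp_p = 3970.0, 765.0      # Al2O3: density (kg/m^3), cp (J/kg K), nominal values

print(f"rho_eff = {effective_density(phi, rho_f, rho_p):.1f} kg/m^3")
print(f"cp_eff  = {effective_heat_capacity(phi, rho_f, cp_f, rho_p, cp_p):.1f} J/kg K")
```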
The results presented here are for different Richardson numbers and three nanoparticle volume fractions. For a given nanoparticle mean diameter and concentration (d p = 28 nm, F = 0.04) the effect of tube inclinations on the secondary flow vector and dimensionless temperature are shown in Figure 5 for two different Richardson numbers. As mentioned the tube is considered to be made of copper which is a high thermal conductive metal. This transfers the heating energy from the top half surface of tube to the bottom half. Thus, the fluid at the bottom section could also be warm. The latter generates the secondary flow if it would be enough temperature differences. Since the warmer fluid tends to move upward and the colder goes down. In the case of higher Richardson number, the secondary flow is well established and significantly affects the fluid flow. Hot flow from the near wall region goes up and then backs downward at the centreline region. While at the lower Ri, where the circumferential temperature variation is low the strength of secondary flow is low. By tube inclination the warmer To see how the axial velocity profile is affected by the secondary flows, Figure 6 is presented. This figure shows the effect of tube inclination and the Richardson number on the axial velocity profile and dimensionless temperature profile. At a = 0, increasing the Richardson number shifts the position of maximum axial velocity toward the bottom section. Since, the strength of secondary flow augments with the Richardson number. As intended, by increasing the Richardson number the bottom half of tube is also more affected by the energy that is transferred from the top half of tube. The temperature variation at the tube cross-section augments. By increasing the tube inclination temperature variation at the tube cross-section becomes more uniform. The latter tends to shift the maximum axial velocity towards the upper part of tube. Where, axial component of the buoyancy forces is more important. As seen in Figures 7 and 8, these forces and the secondary flow induced by the cross-sectional component of the buoyancy forces affects the homogeneity of the dispersed nanoparticles. At the near wall region where the effect of viscous layer is more significant, nanoparticle concentration is more evident. In the other hand, secondary flow causes to see a region of lower nanoparticle concentration at the top of tube where the direction of circular cell changes and goes back toward the bottom of tube. Thus, higher tube inclinations improve the homogeneity of the nanoparticles distribution. As shown in Figure 8 using larger particle accelerates the migration of the nanoparticle and deteriorates the nanofluid homogeneity. For the particles with smaller mean diameter, this variation is not significant and thus homogeneous distribution could be considered. While increasing nanoparticle mean diameter, non-uniformity on the particles distribution becomes more important and single-phase approach may fail. These effects could significantly affect heat transfer throughout the tube. Axial evolution of the average peripheral convective heat transfer coefficient along the tube length is shown in Figure 9. In general h decreases and monotonically goes to its asymptotic value. Buoyancy forces components (axial and radial) significantly affected the variations of heat transfer coefficient. 
At the lower Ri for which the effect of buoyancy force is weak, maximum heat transfer coefficient could be seen in the case of horizontal tube (pure radial buoyancy force). While at the higher Richardson number the buoyancy forces augments and so both axial and radial components become considerable. Based on the value of the axial and radial components of the buoyancy force, the best tube inclination for which the highest heat transfer coefficient is achieved could be determined. For instance, at the low Ri, horizontal configuration gives the best heat transfer coefficient (among the other angle in Figure 9) while for the higher Richardson number (Ri = 5) it appears at tube inclination of a = 30. This behaviour is also seen for different nanofluids. However, using nanofluid enhances heat transfer coefficient. This enhancement becomes more important at the higher Richardson number. In spite of increasing heat transfer coefficient the peripheral average shear stress is also augmented. This is shown in Figure 10 for different tube inclination and nanoparticle volume fraction. As seen, increasing nanoparticle concentration augments the shear stress which means more pumping power is needed for the fluid pumping. This partially arises from the fact that nanofluid viscosity increases with the nanoparticle concentration. The axial buoyancy forces and near wall fluid acceleration has an important effect on the shear stresses. Thus, by increasing the tube inclinations axial buoyancy forces augments and the higher value of the average shear stress is observed. This is more evident in the case of higher Richardson number. To have a comparison between the heat transfer enhancement and pressure drop augmentation with nanoparticle concentration detail analysis of the corresponding data in the case of Re = 300 and Ri = 5, is presented as an example. It shows that increasing the nanoparticle volume fraction from 0 to 2%, pressure drop augments by about 31% while the heat transfer coefficient increases by about 5%. This is showed that despite of stability and homogeneity of the nanofluids the pumping power is also an important concern that must be well addressed. Conclusion Conjugate laminar mixed convection of water/Al 2 O 3 nanofluid in an inclined copper tube has been investigated numerically by using two-phase mixture model. The top half of tube wall is heated while the other half of tube is considered to be adiabatic. Copper is a good conductive material and transfers the heating energy from the upper part of tube to the lower half of tube by heat conduction mechanism. This could also increase the fluid temperature at this region. The latter could generate the secondary flow for which its strength depends on the nanoparticle volume fraction, the Richardson number and tube inclination angle. The buoyancy induced secondary flow augments with the nanoparticle volume fraction and the Richardson number. However, by tube inclination the axial component of the buoyancy forces increases and so the strength of secondary flow decreases. Nanoparticle concentration does not have significant effect on the axial velocity profile. However, at the high value of the Richardson number for which the effect of thermal energy is become more important than the hydrodynamic energy, nanoparticle concentration could affect the axial velocity profiles. Heat transfer coefficient is augmented with the nanoparticle volume fraction as well as the Richardson number. 
Combinations of the axial and radial component of the buoyancy forces could determine the inclination angle for which the maximum heat transfer enhancement occurs. However, the wall shear stress is significantly increased with the nanoparticle volume fraction. It is also augmented with the tube inclination because of increasing the axial component of the buoyancy forces.
3,756.2
2011-04-26T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Multi-lepton signatures for scalar dark matter searches in coannihilation scenario We revisit the scalar singlet dark matter (DM) scenario with a pair of dark lepton partners which form a vector-like Dirac fermionic doublet. The extra doublet couples with the SM leptonic doublet and the scalar singet via a non-SM-like Yukawa structure. As a result, (1) since the extra fermionic states interact with other dark sector particles as well as the SM via gauge and Yukawa interactions, it gives rise to new DM annihilation processes including pair annihilation as well as coannihilation channels, and (2) such a Yukawa structure opens up new production channels for leptonic final states giving much enhancement in cross sections to search for dark matter in the LHC. Using suitable kinematic observables, we train a Boosted Decision Tree (BDT) classifier to separate enhanced but still feeble light leptonic signal from the backgrounds in an effective manner. On the other hand, same technique is applied to study $\tau$-tagged jets in search for DM signals. I. INTRODUCTION Cosmological considerations and astrophysical observations have established beyond any reasonable doubt, the existence of the dark matter (DM). The satellite-borne experiments such as WMAP [1] and PLANCK [2] measured extremely precisely the cosmological relic abundance and it is given by Ω DM h 2 = 0.1199±0.0027, h being the reduced Hubble constant. Though DM constitutes about 27% of the energy budget of the Universe, the particle nature of it remains an enigma. The search for a suitable candidate for particle dark matter is a longstanding problem [3][4][5]. The so-called Weakly Interacting Massive Particle (WIMP) is the most widely explored sector to resolve the discrepancy. Within the WIMP paradigm, the scalar singlet dark matter or scalar "Higgs-portal" scenario is perhaps the most studied of all the relevant scenarios of dark matter to explain the relic density [6][7][8]. Consequently, it went through immense scrutiny theoretically as well as experimentally (see for example Refs. [9,10] for recent reviews of the current status of the Higgs portal scenario). We now know that the direct detection (DD) [11][12][13][14], indirect detection (ID) [15][16][17] and invisible Higgs decay [18][19][20] searches put a strong bound on the coupling of the Standard Model (SM) Higgs boson, h with the said scalar singlet, say S. Let us call this coupling λ hS . These experiments constrain λ hS to be very small. As a result, it gives an overabundance of relic density except around a small window around the resonance region, m S ∼ m h /2. However, one can improve the situation with scalar singlet DM using various alternatives, such as considering other symmetries within the dark sector [21][22][23] or adding new particles in the particle spectrum so as to arrange other portals [24][25][26] for DM annihilation without worsening the existing constraints. An interesting possibility in this context, called the coannihilation [27], is a widely studied feature in DM dynamics where the DM annihilates with another dark sector particle and the chemical equilibrium between the annihilating particles ensures the substantial depletion of DM number density. This feature is a very useful handle to revive the scenarios where direct detection bounds push relic density to overabundance. 
In such scenarios, coannihilation works efficiently as a DM number changing process without affecting the direct search measurements because the couplings involved in coannihilation are insensitive to direct detection channels. In the present work, we will revisit the scenario of the scalar singlet dark matter with a pair of accompanying dark leptons which form a vector-like Dirac fermionic doublet. This dark sector doublet couples with the SM leptonic doublet and the scalar singlet via a novel Yukawa interaction which is less explored in the literature. There are two distinct interesting features of this model: (1) Since the new dark sector fermions form a doublet they will interact with the SM via gauge interaction as well as the new Yukawa coupling, which in turn will give rise to new annihilation channels, and (2) Such a Yukawa structure will open up new production channels for leptonic final states giving much enhancement in crosssections to search for dark matter in collider environments like LHC through the said channel. Depending on the choice of parameters, here the DM annihilation can have three distinguishable stages, namely pair annihilation, coannihilation, and mediator annihilation. Here, it is to be noted that coannihilation scenarios in WIMP are mostly studied in the literature in the context of SUSY [28][29][30] and coloured coannihilating particles [31][32][33]. Our model discusses a leptophilic context and the coannihilation channels play an important role here due to the gauge interaction in the dark sector in addition to the new Yukawa coupling. This feature is significantly different from the cases explored in literature where the leptophilic Yukawa structure involves singlet dark sector partners with the DM candidate [34,35]. As mentioned above, since the coannihilating partner 1 couples to the SM with gauge as well as Yukawa coupling, the leptonic search channels get a boost in cross-section from it and it is only logical to probe the said channel for collider signatures. Moreover, the leptonic channel gives cleaner signals than the other channels. Still, the collider searches of dark matter is a very challenging prospect. Note that any lepto-philic DM model like ours contributes to the calculation of muon g − 2. Very good agreement between the theoretical calculation and experimental measurements of muon g − 2, ∆a µ = a Exp µ − a SM µ = 268(63)(43) × 10 −11 [36] put a strong constraint on the new Yukawa couplings of the light SM leptons. However, there is no such bound for the production of τ-leptons. Hence, it would be a good prospect to probe that channel for dark matter signatures in colliders. Our case is similar to the supersymmetric (SUSY) theories where stau is the coannihilating partner [28,[37][38][39][40][41][42][43]. Despite the leptonic channel getting a boost, the cross-section can still be smaller. So to probe light leptonic channels effectively, one must follow sophisticated techniques to separate signals from the backgrounds. The multivariate analysis is one such prospect. We perform Boosted Decision Tree (BDT) response to separate feeble light leptonic signal from the backgrounds in an effective manner. On the other hand, despite further enhancement in cross-section, the τ-leptons mostly decay into hadronic jets resulting in difficulty in their reconstruction. We used τ-tagged jets from the detector simulation with 60% τ-tagging efficiency to perform the BDT response. We organized the paper as follows. 
In Section II, we describe the contents of our model. The dark matter phenomenology, its formalism, and the observations from the relic density, direct and indirect detection calculations are discussed in Section III. Section IV contains the study of collider signatures at the LHC through multivariate analysis of the light di-lepton as well as the di-τ-lepton channels. Finally, we conclude our results in Section V. II. MODEL DESCRIPTION As we described briefly in the Introduction, we want a model where the dark sector consists of one or more coannihilating partners in addition to the scalar singlet dark matter. So, we consider a vector-like Dirac fermionic doublet, Ψ^T = (ψ^0, ψ^−), and a real scalar singlet φ in addition to the SM particles. To achieve the stability of the dark sector, both of the new fields are odd under a Z_2 symmetry whereas the SM fields are Z_2-even (see Table I). The real scalar singlet φ, which is our DM candidate, interacts with the SM via the Higgs portal. As the other dark sector particles (ψ^0, ψ^±) form an SU(2)_L doublet, they interact with the SM through the gauge bosons. Finally, the coannihilating doublet Ψ couples with the SM leptonic doublet L and the scalar singlet φ via a Yukawa interaction. This is novel in the sense that the widely used Yukawa structure in any new physics model consists of a scalar doublet which is a replica of the SM Yukawa interaction. Although this particular Yukawa structure is less explored in the literature, it fits the bill for all our requirements for this study. Hence the resulting Lagrangian contains, in addition to the SM Lagrangian L_SM, the gauge-covariant kinetic and mass terms of the new fields and the new Yukawa interaction, where M_Ψ = diag(m_ψ0, m_ψ−) is the diagonal mass matrix and D_μ = ∂_μ + i g_W t^a W^a_μ + i g_Y B_μ is the covariant derivative of the fermionic doublet Ψ. The mass of the scalar singlet φ is given by m_φ² = μ_φ² + λ_hφ v²/2. A discussion is in order here on the existing bounds that constrain the model parameters. The couplings which will play a significant role in the DM dynamics are the Higgs portal coupling λ_hφ and the Yukawa couplings y_ℓ, ℓ = e, µ, τ. To keep the bounds from the direct detection searches at bay we have considered λ_hφ ∼ 10⁻⁴, which also takes care of the invisible decay measurement. On the other hand, the muon g−2 measurement puts the value of the Yukawa couplings of the light leptons at y_e ∼ y_µ ∼ 10⁻⁹. This leaves the third-generation Yukawa coupling y_τ as the only one free from experimental constraints. However, one must note that to keep our model in the perturbative regime, we must have y_τ ≤ 4π. A. Formalism In the proposed model, the DM number changing processes are (i) pair annihilation (φφ → SM SM), (ii) coannihilation (φψ^{±,0} → SM SM) and (iii) mediator annihilation (ψ^{±,0}ψ^{∓,0} → SM SM). The choice of parameters will determine the relative contribution of these processes towards the relic density, as we discuss in the following sections. In agreement with the common assumption of thermal freeze-out, the dark sector particles are in equilibrium with the thermal bath in the early Universe. At the same time, they are also in chemical equilibrium with each other, due to the substantial interaction strength between themselves. Keeping all this in mind, one can write the Boltzmann equation as follows [27]: dn/dt + 3Hn = −⟨σ_eff v⟩ (n² − n_eq²), where n and n_eq are the DM number density and the equilibrium number density, respectively. 
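A minimal numerical sketch of the freeze-out picture encoded in this Boltzmann equation is given below: it integrates the comoving yield Y(x) for a constant effective ⟨σv⟩ under standard radiation domination and converts the asymptotic yield to a relic density. The numerical inputs (g*, ⟨σv⟩, m_φ) are illustrative assumptions, and the relic densities quoted in this paper were computed with micrOMEGAs, not with this snippet.

```python
# Sketch of a freeze-out calculation: integrate dY/dx = -(lambda/x^2)(Y^2 - Yeq^2)
# with a constant effective <sigma v>, assuming standard radiation domination.
# g*, <sigma v> and m_phi are illustrative values only.
import numpy as np
from scipy.integrate import solve_ivp

M_PL = 1.22e19        # GeV, Planck mass
G_STAR = 100.0        # effective relativistic degrees of freedom (assumed constant)
m_phi = 100.0         # GeV, DM mass (illustrative)
sigma_v = 3e-9        # GeV^-2, effective annihilation cross-section (~1 pb, illustrative)

def y_eq(x, g=1.0):
    # Non-relativistic (Maxwell-Boltzmann) equilibrium yield
    return 0.145 * (g / G_STAR) * x**1.5 * np.exp(-x)

def dY_dx(x, Y):
    lam = np.sqrt(np.pi / 45.0) * np.sqrt(G_STAR) * M_PL * m_phi * sigma_v
    return [-lam / x**2 * (Y[0]**2 - y_eq(x)**2)]

# Start well before freeze-out (x ~ 25) while Y still tracks equilibrium
sol = solve_ivp(dY_dx, (10.0, 1000.0), [y_eq(10.0)],
                method="Radau", rtol=1e-8, atol=1e-16)
Y_inf = sol.y[0, -1]
omega_h2 = 2.755e8 * m_phi * Y_inf   # standard conversion of asymptotic yield to relic density
print(f"Y_inf = {Y_inf:.3e},  Omega h^2 ~ {omega_h2:.3f}")
```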
Now, the effective velocity-averaged annihilation cross-section ⟨σ_eff v⟩ specific to this model can be written in the standard coannihilation form ⟨σ_eff v⟩ = Σ_{i,j} ⟨σ_ij v⟩ ḡ_i ḡ_j / ḡ_eff² (Eq. (3)), where ḡ_i = g_i (1 + Δm_i)^{3/2} e^{−x Δm_i} and ḡ_eff = Σ_k ḡ_k (Eq. (4)). In the expressions above, g_φ = 1 and g_ψ0 = g_ψ± = 2 are the internal degrees of freedom and x = m_φ/T. The Δm's are dimensionless mass splitting parameters defined as Δm_0 = (m_ψ0 − m_φ)/m_φ and Δm_ch = (m_ψ± − m_φ)/m_φ (Eq. (5)). As previously mentioned, the pair annihilation and coannihilation channels predominantly control the DM freeze-out. The mass splitting between φ and the other dark sector particles and the Yukawa couplings mainly determine the contribution of these processes towards the total DM annihilation cross-section. These mass splittings play a very important role, especially for the coannihilation and mediator annihilation processes, as the Boltzmann factor in Eq. (3) gives rise to a significantly increased annihilation cross-section for small values of the Δm's. Before we go into the details of the freeze-out mechanisms, let us discuss the parameters used in the analysis. Since the mass splitting parameters defined in Eq. (5) between DM and the other dark sector particles play an important role in the freeze-out of φ, we will use them as independent parameters along with the DM mass. As discussed in the previous section, the only important Yukawa coupling here will be y_τ, which couples φ to the third-generation SU(2)_L lepton doublet and the new fermionic doublet Ψ. In summary, we take the following values of the parameters throughout our analysis. Free parameters: m_φ, Δm_0, Δm_ch, y_τ. Fixed parameters: λ_hφ = 10⁻⁴, y_e ∼ y_µ ∼ 10⁻⁹. B. Analysis and observations Relic density In addition to the Higgs portal annihilation channels of the scalar singlet DM, the present model introduces a Yukawa interaction between the dark sector particles and the SM. Unlike the Higgs-DM quartic coupling (λ_hφ), the new Yukawa coupling (y_τ) is unconstrained except for the perturbative limits. This provides an excellent tool to explain the relic density over a wide parameter space even with negligible λ_hφ, which in turn alleviates the direct search bounds. We have performed the DM analysis using micrOMEGAs [44]. Due to the minuscule λ_hφ, the Higgs portal annihilation channels have a negligible contribution towards the DM relic density and hence we will focus on the newly introduced channels only. All these annihilation channels can be broadly classified into 3 categories: • Pair annihilation (φφ → SM SM) (Fig. 1) • Coannihilation (φψ^{±,0} → SM SM) (Fig. 2) • Mediator annihilation (ψ^{±,0}ψ^{∓,0} → SM SM) (Fig. 3). All these categories can coexist or supersede each other, depending on the choice of parameters. As is obvious from the figures below as well as Eq. (3), the two Δm's account for the strength of the coannihilation and mediator annihilation channels for the two heavier dark sector particles ψ^0 and ψ^±. Apart from this, the Yukawa coupling y_τ also plays a significant role. In this context, it is worth noting that for the pair annihilation channels in Fig. 1, the cross-section depends on y_τ⁴, while for the coannihilation channels there is only a y_τ² dependence, and the mediator annihilation channels, being mostly gauge mediated, have very little dependence on y_τ. One can easily verify this from the analytical expressions in Eqs. (A1) and (A2). Since all the above three categories of DM annihilation can coexist in the parameter space, it would be interesting to identify the limiting cases where the transition from one category to another is perceivable. 
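The role of the Boltzmann factors in Eqs. (3)-(5) can be made tangible with the short sketch below, which evaluates the relative weights ḡ_i/ḡ_eff of φ, ψ^0 and ψ^± at a typical freeze-out temperature for a few mass splittings. The internal degrees of freedom follow the text (g_φ = 1, g_ψ0 = g_ψ± = 2), while the chosen x and Δm values are illustrative assumptions only.

```python
# Sketch of the Boltzmann-suppression weights entering <sigma_eff v> in the
# standard coannihilation formalism: g_i (1+Dm_i)^{3/2} exp(-x*Dm_i), normalized.
# The product of two such weights sets the relative importance of the
# corresponding (co)annihilation channel, up to its cross-section.
import numpy as np

def weights(x, deltas, gs):
    """Normalized g_i (1 + Delta_i)^{3/2} exp(-x * Delta_i) for each species."""
    w = np.array([g * (1.0 + d) ** 1.5 * np.exp(-x * d) for g, d in zip(gs, deltas)])
    return w / w.sum()

x_freezeout = 25.0                      # typical value of x = m_phi / T at freeze-out
for dm in (0.01, 0.05, 0.2):            # common Delta m assumed for psi0 and psi+- here
    w_phi, w_psi0, w_psich = weights(x_freezeout, deltas=[0.0, dm, dm], gs=[1, 2, 2])
    print(f"Delta m = {dm:>4}: phi {w_phi:.2f}, psi0 {w_psi0:.2f}, psi+- {w_psich:.2f}")
```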
It is worth mentioning here that the interaction channels between φ and two other dark sector particles are exactly similar (see Figs. 1 and 2), so in the degenerate mass limit, m ψ 0 ∼ m ψ ± , their contribution will be the same. Hence, to see the effect of their interaction with DM, it is sufficient to take one ∆m small and fix the other at large values so that the large propagator suppression in Fig. 1 or Boltzmann suppression in Fig. 2 practically makes the relevant channels negligible. In Fig. 4, the above mentioned transition is depicted in the y τ vs. m φ plane for some fixed values of δm = m ψ ± − m φ . m ψ 0 is fixed at a much larger value (2 TeV) than m φ max so that only φ − ψ ± interactions are important. For pair annihilation, 〈σ eff v〉 has m ψ ± dependence only in the t-channel propagator, but for coannihilation, m ψ ± appears in the propagator and the initial state along with the Boltzmann factor (Eq. (4)). Now, for a very large value of δm, the coannihilation processes will be negligible due to substantially large Boltzmann suppression, and so the pair annihilation will predominantly dictate the DM annihilation. For these channels also, due to large propagator suppression, to obtain sufficient annihilation cross-section for the right relic, one has to go to very large coupling. This feature is clear from the green line in Fig. 4. For the magenta line, however, since the split is comparatively smaller, one can see an immediate effect in the reduction of the corresponding coupling. However, an interesting feature is rather around a higher mass range of m φ where one can see that for a fixed value of m φ , y τ is falling more sharply as we decrease the mass split. This is attributed to the exponential factor in the expression of 〈σ eff v〉 for coannihilation channels (Eq. (3)). For a fixed δm, larger m φ in this factor will make 〈σ eff v〉 more enhanced because of e −δm/m φ and hence, to obtain right relic, y τ has to decrease. This feature is gradually prominent as we decrease the mass split because of stronger dependence on the exponential factor, e.g in the red line, the exponential tail is visible even for a smaller value of m φ than the rest of the cases. However, it is important to mention that only for demonstration purposes, we have taken y τ up to 10. To stay well within perturbative limits, throughout our analysis, we have fixed the upper limit on y τ at 3. For convenience, we have shaded the allowed parameter space in the figure. From this constraint, however, one can get a limit on the maximum range of the mass splits between φ and ψ ± while m φ is varied over the entire range. But the more realistic picture will be for the case where both ψ 0 and ψ ± vary instead of fixing one. In Fig. 5a, the variation of ∆m 0 vs. m φ is plotted for all points satisfying right relic. The mass splitting between φ and ψ ± is fixed at 100 GeV to rule out significant contribution from φψ ± coannihilation channels. Major share in relic density comes from φψ 0 → τ ± W ∓ and φψ 0 → ν τ Z channels provided those are kinematically allowed. Larger values of ∆m 0 causes Boltzmann suppression in 〈σ eff v〉, which in turn is compensated by larger values of the coupling, as clearly seen in the plot. However, this feature is more prominent for smaller values of m φ , as for a fixed ∆m 0 , larger values of m φ imply large m ψ 0 − m φ which rules out any substantial effect from the coannihilation channels. In Fig. 
5b, a similar variation is observed between ∆m ch and m φ which shows the effect of the φψ ± coannihilation channels. These channels include Fig. 2) and φψ ± → hτ ± . The last one, however, will have very negligible contribution due to tiny hττ coupling. m ψ 0 − m φ is again fixed at 100 GeV to avoid large φψ 0 coannihilation contribution. However, unlike Fig. 5a, here y τ is varied over the entire range, ie, from 0 to 3. As mentioned previously, for very small values of ∆m's, the freeze-out of ψ 0 and ψ ± also contributes to the relic density of φ. This annihilation process is mediator driven. However, as observed from Eq. (3), the dependence on ∆m's is stronger in the Boltzmann factor of 〈σ eff v〉 than coannihilation. This leads to the fact that for very small values of ∆m's, the mediator driven annihilations almost entirely dominate the total DM annihilation. It is also worth noting that being mostly gauge mediated these channels substantially contribute to the DM annihilation even for very small values of y τ . For our choice of parameters, we have observed that mediator annihilation is effective for (m ψ ±0 − m φ ) 5 GeV and then the coannihilation processes take over. This feature is clear from Fig. 6 where we can see that the dependence on y τ is negligible for ∆m ch 0.1 and beyond that range, as ∆m ch increases, the required coupling also increases gradually. m φ and m ψ 0 are fixed at 100 and 250 GeV respectively. In Fig. 7, the variation of the relic density is plotted with ∆m 0 for some fixed Yukawa couplings and DM mass. As already argued, for a fixed m φ , larger coupling corresponds to larger ∆m 0 due to Boltzmann suppression in 〈σ eff v〉 as well as larger mass suppression of ψ 0 in the t-channel propagator of the coannihilation channels. This explains the shift along the X-axis from the red to the blue line where m φ is 100 GeV and the Yukawa coupling y τ varies from 0.5 to 1.0. We see the same trend for the green and magenta lines, but the amount of shift is relatively less. Because, in this case, m φ is larger (500 GeV), which automatically implies a fairly large splitting between m 0 ψ and m φ and consequently the coannihilation effect is not so prominent. We can argue that for a fixed value of y τ larger DM mass obtains the correct relic density with a relatively smaller ∆m 0 , hence the red line with m φ =100 GeV shifts left towards the green line with the same y τ but larger m φ =500 GeV. The same logic applies to the shift between the blue and the magenta line.This trend also agrees with Figs. 5a and 5b. m ψ ± is fixed at 1 TeV to rule out the effect of the φψ ± coannihilation channels. As expected, very small values of ∆m 0 gives underabundance for the choice of parameters due to a fairly large increase in the Boltzmann factor of Eq. (4). Direct and indirect detection • Direct search prospect : As known from the direct detection of scalar DM models, DM undergoes elastic scattering with detector nuclei through Higgs mediation. The spin-independent scattering cross-section in our model is [45], where the form factor ( f ∼ 0.3) contains all the contributions from the nuclear matrix elements. Throughout the study, we have fixed the DM-Higgs coupling λ φh at 10 −4 . This keeps σ SI well below the experimental bounds [46] as seen in Fig. 9. The new physics Yukawa coupling y τ being lepto-philic plays no role in direct searches. This is also quite clear from the plot. 
• Indirect search prospect : The indirect detection experiments further constrain the DM velocity averaged crosssection for relevant channels contributing to high energy γ ray flux in the Universe. In the context of our model, as far as these possibilities are concerned, due to DM-Higgs coupling λ φh = 10 −4 , 〈σv〉 γγ and 〈σv〉 bb contributions will be minuscule. However, the annihilation channels in Fig. 1 give rise to 〈σv〉 τ + τ − possibility. In Fig. 10, we have varied m φ and y τ over the full range to check that the parameter space region considered remains safely below the experimental limits from Fermi-LAT data [16]. The larger ∆m's considered in Fig. 10b suggest larger propagator suppression for the relevant channels and consequently shift the parameter space downwards along the Y-axis compared to Fig. 10a, which is visible from the plots. IV. COLLIDER SIGNATURES The challenges of discovering dark matter in colliders are manifold. They manifest themselves as missing energy (E miss T ). Hence, the focus shifts entirely on the characteristics, and precise measurements of associated production of visible particles. The charged multi-lepton channels are the most suitable to probe dark matter because of its clean signal, whereas, QCD backgrounds overshadow the multi-jet channel, and it is very difficult to separate signals from the backgrounds. Here, we are going to study the collider signatures of DM though charged multi-lepton +E miss T channels. Our analysis will include both light charged leptons as well as τ-leptons. We all know that τ-leptons mostly decay hadronically and hence demand a separate analysis. Hence, in this section, we separately discuss both scenarios. We have used FeynRules [47] to generate model files for our model. Events have been generated using MadGraph5 [48] and showered with Pythia 8 [49]. Finally, the detector simulation has been performed using Delphes [50]. We carried out our analysis for the LHC at the CM energy S = 13 TeV. We used the dynamic factorisation and renormalisation scale for the signal as well as the background events. For the generation of parton-level events, we apply minimum or maximum cuts on the transverse momenta p T and rapidities η of light jets, b-jets, leptons, photons, and missing transverse momentum. Also, distance cuts between all possible final objects in the rapidity-azimuthal plane are applied, with the distance between two objects i and j defined as where φ i and η i are the azimuthal angle and rapidity of the object i, respectively. The preliminary selection cuts used in the analysis are: • p T > 10 and |η| < 2.5 for all charged light leptons, • p T > 20 and |η| < 5 for all non-b-jets, and • ∆R i j > 0.4 between all possible jets or leptons. After this, the .LHE files obtained through parton level events are showered with final state radiation (FSR) with Pythia 8 where initial state radiation (ISR) and multiple interactions are switched off and fragmentation/hadronization is allowed. where, in general, stands for all three generations of charged leptons, namely, e, µ, τ. We will take up the study of light di-lepton in this subsection and will henceforth will mean only e and µ unless specifically mentioned. Please note the distinction between the signal process sets (1) and (2) as they will manifest themselves with more clarity in the studies of light-and τ-leptons. To highlight the features of our model clearly, we have selected the following benchmark points (see Table II). 
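Before turning to the benchmark points, the object-level preselection described above can be summarized in a short sketch. The p_T thresholds are assumed to be in GeV (units are not stated explicitly in the cut list), and the requirement of at least two selected leptons is added here purely to illustrate the di-lepton signature; it is not taken from the text.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in the rapidity-azimuth plane, as defined in the text."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    deta = eta1 - eta2
    return math.hypot(deta, dphi)

def passes_preselection(leptons, jets):
    """Apply the parton-level cuts listed above.
    Each object is a dict with keys 'pt', 'eta', 'phi' (pt assumed in GeV)."""
    leptons = [l for l in leptons if l["pt"] > 10 and abs(l["eta"]) < 2.5]
    jets = [j for j in jets if j["pt"] > 20 and abs(j["eta"]) < 5]
    objs = leptons + jets
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            if delta_r(objs[i]["eta"], objs[i]["phi"],
                       objs[j]["eta"], objs[j]["phi"]) < 0.4:
                return False
    return len(leptons) >= 2  # illustrative di-lepton requirement
```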
The significance of the choice in benchmark points will be clear as we elaborate on our analysis in the following subsections. For the benchmark points given in Table II we get the cross-sections as shown in Table III. Among the two classes of signal processes, the set (1) is subdominant. To understand its reason let us enumerate the subprocesses that take part in this. (i) pp → ψ + ψ − , followed by the decay of both ψ ± as ψ ± → ± φ ; The couplings which play a role in the above processes are only the ones involving light leptons and hence are very suppressed as shown in Eq. (6). As the couplings involved can also be either gauge couplings or that involving τ-lepton, both being considerably large, the set (2) of processes give us sufficient cross-sections to proceed with our analysis. This is clear from Table III. The major backgrounds at the LHC for the light di-lepton channel are as follows Bkg4 : pp → Z Z(γ * ), followed by leptonic decays Z → νν and Z(γ * ) → + − . Before getting involved in more intricate analysis, we shall first discuss the kinematic distributions for this channel. The kinematic observables at our disposal are only the 4-momenta of the leptons and the missing energy E miss T . We order them according to the magnitude of their transverse momentum p T . As a result, a leading lepton would always mean the leadingp T lepton. We can also construct other observables from them, such as the invariant mass of the lepton pair. In a similar vein, we would construct the so-called transverse mass of the lepton-E miss T system. We shall call this quantity the missing transverse mass and define it for each lepton-E miss T system as where E T = p T , which is the magnitude of the transverse momentum of a given lepton and ∆φ E miss T is the difference between the azimuthal angles of the lepton and missing transverse momentum. The missing transverse mass plays an important role in distinguishing the massless invisible particles (such as neutrinos) from the massive ones (as is the case for our dark matter candidates) and hence is very crucial for our analysis. In Fig. 11, we show some of the distributions for this channel. Before going further into the analysis, let us discuss the features of the benchmark points which we mentioned previously. It is clear from the distributions that the benchmark points 1 and 4 have similar patterns, whereas 2 and 3 are similar. For the first case, the distributions are more populated in the lower region of each observable. On the other hand, the distributions are more populated in the higher values for the latter case. In the next level of our study, we use Toolkit for Multivariate Data Analysis (TMVA) [51] in ROOT, to distinguish the signal events from the backgrounds efficiently. For this, we use the distributions of Fig. 11 and some other kinematic observables to train a boosted decision tree (BDT). The complete list of observables used to train BDT are as follows: • p T and η of the leading and sub-leading leptons and the invariant mass of the pair. • missing transverse momentum E miss T . • missing transverse mass M miss T of the leading and sub-leading leptons. • the difference of the azimuthal angles ∆Φ E miss T of the leading and sub-leading leptons with the missing transverse energy. We use these distributions as discriminators to the BDT analysis. The discrimination of the signal and background can be improved further using proper cuts in addition to the preliminary selection cuts to the signal and/or background events. 
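The missing transverse mass used as a discriminator can be computed directly from the lepton transverse momentum and the missing transverse momentum. The sketch below assumes the standard quadratic form M_T² = 2 E_T E_T^miss (1 − cos Δφ), which is what the definition above appears to describe; the numbers in the example are arbitrary.

```python
import math

def missing_transverse_mass(lep_pt, lep_phi, met, met_phi):
    """Transverse mass of a lepton + missing-momentum system,
    M_T^2 = 2 E_T E_T^miss (1 - cos dphi), with E_T = p_T for the lepton."""
    dphi = lep_phi - met_phi
    mt2 = 2.0 * lep_pt * met * (1.0 - math.cos(dphi))
    return math.sqrt(max(mt2, 0.0))

# Toy example: leading lepton pT = 80 GeV, MET = 120 GeV, back-to-back.
print(missing_transverse_mass(80.0, 0.0, 120.0, math.pi))  # ~196 GeV
```

As noted above, a massive invisible pair tends to populate larger M_T^miss than neutrinos from W/Z decays, which is why this observable is a useful input to the multivariate analysis.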
The resulting BDT-response functions give us an estimate of the signal efficiency vs. the rejection of the background. Fig. 12 shows the BDT-response curves (solid filled histograms are for the signal and the hollow ones are for the background) for the di-lepton channel for each benchmark points. Here we show two sets of BDT-responses: (1) the solid purple and the hollow red ones are before any additional cuts with only taking into account the preliminary selection cuts, whereas (2) the solid blue and the hollow black ones are after implementing carefully chosen additional set of cuts to improve the distinguishability of the signal from the backgrounds. We will elaborate on the cuts chosen later on in this subsection. We observe that the signal is separable from the background from the BDT-response curves of Fig. 12 after the use of additional cuts. However, we have not yet quantified the improvement. For this, we draw the Receiver Operating Characteristic (ROC) curves for each benchmark point using the gradual use of additional cuts. Fig. 13 shows the resulting curves for the signal efficiency vs. the rejection of the background. The area under each curve gives the quantitative estimate of the goodness of the separation of the signal from the backgrounds. The inset of Fig. 13 shows the area under the ROC curve for each cut. The value of the cut is in addition to all the preceding cuts. We can see the improvement in the separation of the signal from the backgrounds from these numbers. This feature is consistent with that of Fig. 12. The important point to be noted here is that the cuts were used only on the signal events for the benchmark points 1 and 4 leaving the background events untouched, whereas the opposite has been done for the benchmark points for 2 and 3. The reason for this can be understood as the population of events in the distribution plots of Fig. 11. B. Analysis for the di-τ-jet channel Although τ-lepton analysis poses more challenges, at the same time it unravels more unique features that can come handy in the analysis for any collider like LHC. The τ-lepton is the charged lepton of the third generation and the heaviest among them. It is even heavier than most of the light quark mesons. As a result, τ-leptons decay hadronically, which sets it apart from all other leptons. Due to the lepton number conserving weak interactions, the τ final states are always accompanied by one neutrino in the hadronic final states and two neutrinos in the leptonic final states. Since the neutrinos add to the missing energy, the full τ energy cannot be measured. The leptonic decays of the τ is difficult to distinguish from prompt leptons in a + E miss T final state. Therefore only the hadronically decaying τ's are suitable for the collider signatures. We use the τ-tagged jets from Delphes and reconstruct them with the help of FastJet [52] using anti-k T algorithm. The separation ∆R of two adjacent τ-jets is taken to be 0.4 and the τ-tagging efficiency is taken to be 60%. In this subsection, we will take up the discussion of τ-jet channel. Here we follow the same line of analysis as the light di-lepton. As in the case above, here also the most important modes of production are (1) pp → τ + τ − 2φ ; (2) pp → τ + τ − νν 2φ . Contrary to the light di-lepton channel, the signal process set (1) here is more dominant than set (2) as mentioned previously. 
This is because, although the large value of y_τ and the gauge couplings dictate both processes, process (2) is suppressed by branchings and phase space. Table V shows the signal cross-sections for the di-τ-jet channel. As mentioned previously, we can see the distinction between the cross-sections: that of process (1) is substantially greater than that of process (2) for this case. This can be understood from the sub-processes enumerated in the previous subsection, with τ in the final states instead of light leptons. The major backgrounds at the LHC for the τ-jet channel are as follows. Bkg1: pp → tt̄, followed by the top (anti-)quark decaying into the τ-jet channel, t(t̄) → τ± ν(ν̄) b(b̄). Bkg4: pp → ZZ(γ*), followed by Z → νν̄ and jet decays Z(γ*) → τ⁺τ⁻/2j. Next we use TMVA to distinguish the signal events from the backgrounds. The distributions used as discriminators to train a BDT are as follows: • p_T and η of the leading and sub-leading τ-jets and their invariant mass. • missing transverse momentum E_T^miss. • missing transverse mass M_T^miss of the leading and sub-leading τ-jets. • the difference of the azimuthal angles ∆Φ(τ, E_T^miss) of the leading and sub-leading τ-jets with the missing transverse energy. Same as above, we use these distributions as discriminators in the BDT analysis and draw the BDT-response curves. Fig. 15 shows the BDT-response curves (similar to the previous case, here also the solid filled histograms are for the signal and the hollow ones are for the background). The two sets of BDT-responses correspond to: (1) the solid purple and the hollow red ones, before any additional cuts, taking into account only the preliminary selection cuts; and (2) the solid blue and the hollow black ones, after implementing a carefully chosen additional set of cuts to improve the distinguishability of the signal from the backgrounds. Fig. 16 shows the ROC curve for the signal efficiency vs. the rejection of the background. The inset of Fig. 16 shows the area under the ROC curve for each cut. As in the previous case, the value of each cut is in addition to all the preceding cuts. We can see the visible improvement in the separation of the signal from the backgrounds from these numbers. Please note that, unlike the previous case, here the cuts were used only on the signal events. V. CONCLUSION In this work, we have proposed a singlet scalar DM with a vector-like fermionic doublet having the same dark symmetry. The minuscule Higgs portal coupling of the scalar DM keeps the direct detection cross-section below the experimental bound, which is an important handle in reviving the scenario of scalar singlet DM models. The new Yukawa coupling, on the other hand, which is irrelevant to direct search prospects, plays a vital role in dictating the relic density. We have shown that the model can provide a viable DM candidate through pair annihilation, coannihilation, and mediator annihilation channels over a wide range of parameter space, from the GeV up to the TeV scale. The transition from the pair annihilation to the coannihilation regime is demonstrated and the relevant limits of the parameters are discussed. We have observed that coannihilation processes have a substantial contribution to the relic density for a comparatively larger mass splitting between DM and the dark sector particles than what is usually discussed in the literature.
This may be attributed to the gauge couplings involved in these channels which is a substantial contribution thanks to the dark fermion being a doublet. This is an artifact of the unconventional BSM Yukawa structure considered in the proposed model. This arrangement, involving SM and dark sector lepton SU(2) L doublets and a scalar singlet appropriately highlights the important features in the work. Apart from the DM context, the gauge production of the fermionic doublet followed by decay to DM through the Yukawa coupling results in a substantially increased DM production at the colliders compared to scalar singlet scenarios. Using suitable kinematic observables in Boosted Decision Tree (BDT) classifier, we separate the signal events from the backgrounds in an effective manner. We have shown that with the use of proper cuts, we can achieve good results for both the light as well as τ leptonic channels. This model can also provide potential search prospects for long-lived particles because for mediator annihilation and coannihilation regimes, the mass splittings between DM and other dark sector particles are typically considered to be very small. This can lead to suppressed decay width of the coannihilating partner and the delayed decay can facilitate long-lived signatures (LLP) in the colliders which is recently being given wide attention in the literature. One can interpret a limitation of the proposed model in the sense that from the observed results in both dark matter and the collider analysis, there is no way to distinguish between the two coannihilating partners. We are pursuing a possible solution to address this issue in an ongoing work. Appendix A: Appendix Differential cross-section of the pair annihilation process φφ → τ + τ − is
8,754.2
2019-09-26T00:00:00.000
[ "Physics" ]
Electromagnetic field exposure (50 Hz) impairs response to noxious heat in American cockroach Exposure to electromagnetic field (EMF) induces physiological changes in organism that are observed at different levels—from biochemical processes to behavior. In this study, we evaluated the effect of EMF exposure (50 Hz, 7 mT) on cockroach’s response to noxious heat, measured as the latency to escape from high ambient temperature. We also measured the levels of lipid peroxidation and glutathione content as markers of oxidative balance in cockroaches exposed to EMF. Our results showed that exposure to EMF for 24, 72 h and 7 days significantly increases the latency to escape from noxious heat. Malondialdehyde (MDA) levels increased significantly after 24-h EMF exposure and remained elevated up to 7 days of exposure. Glutathione levels significantly declined in cockroaches exposed to EMF for 7 days. These results demonstrate that EMF exposure is a considerable stress factor that affects oxidative state and heat perception in American cockroach. Introduction Exposure to electromagnetic fields has become inescapable, especially at extremely low frequencies (30-300 Hz) given off by electrical appliances and overhead power lines. Therefore, more concerns are given about the potential adverse health effects of EMF exposure. It was shown that EMF can act as a stressor and may activate a wide spectrum of interacting neuronal, molecular and neurochemical systems that underpin behavioral and physiological responses (Levin 2003;Wyszkowska et al. 2006;Blank and Goodman 2009;Zeni et al. 2017). The effects of EMF exposure on insect morphology, physiology and behavior have been proved previously. The EMF exposure induced changes in: mosquito egg hatching (Pan and Liu 2004), ovipositon in Drosophila (Gonet et al. 2009), locomotor activity of desert locust and American cockroach (Wyszkowska et al. 2006(Wyszkowska et al. , 2016 or antioxidant defense in Baculum extradentatum (Todorović et al. 2012). EMF exposure has been also shown to induce a release of octopamine-an insect 'stress hormone' in American cockroach (Wyszkowska et al. 2006), whereas the static electric field exposure elevated octopamine levels in Drosophila brain (Newland et al. 2015). Exposure to EMF, similar to other stress factors, has been shown to trigger oxidative stress, observed as the increase of lipid and protein oxidative damage in various tissues. Moreover, significant changes in levels of antioxidants, such as glutathione, superoxide dismutase or catalase were observed (Kivrak et al. 2017). Zhang et al. (2016) demonstrated that thermal stress (35 °C) and EMF exposure (50 Hz, 3 mT) elicit a synergistic effect, strengthening the negative effect of EMF on lifespan, locomotion and oxidative stress in Drosophila melanogaster. Strong stress reduces the sensitivity to pain. However, it has been demonstrated that in mice, acute exposure to electromagnetic field suppresses the stress-induced analgesia and works in a similar way to nalaxone, an antagonist of the opioid system (Kavaliers and Ossenkopp 1994). Insects' nociceptors that respond to harmful stimuli, such as members of transient receptor potential (TRP) family, are the conserved molecular basis for the perception of noxious stimuli in vertebrates and invertebrates (Im and Galko 2012). It has been shown that nociceptive response is modified after EMF exposure. In rats exposed to 0.25 µT EMF analgesic response, equivalent to the effect of 4 mg/kg of morphine was observed (Martin et al. 
2004). Moreover, in the land snail Cepaea nemoralis, EMF exposure (60 Hz, 100 µT) attenuated the response to thermal nociceptive stimuli (Tysdale et al. 1991). Thus, we put forward the hypothesis that electromagnetic field alters the response to noxious heat in insects. To test this hypothesis, the effect of EMF exposure (50 Hz, 7 mT) on the cockroach's response to noxious heat was examined. Moreover, we evaluated the level of oxidative stress in cockroaches exposed to EMF. The parameters of exposure used in our experiments are commonly applied in magnetotherapy (Karpowicz 2015). Animals The experiments were performed on adult males of the American cockroach Periplaneta americana L. Cockroaches were reared in plastic cages at a constant temperature of 26 ± 2 °C, with relative humidity of 40% and a 12:12 light–dark regime. Electromagnetic field exposure system Electromagnetic field (EMF) with a dominant magnetic component was generated by a single 20 cm diameter coil (Elektronika i Elektromedycyna Sp. J.; Poland), as was previously described (Bieńkowski and Wyszkowska 2015) (Fig. 1a). The coil produced homogeneous, sine-wave alternating electromagnetic fields at 50 Hz with an intensity of 7 mT. The distribution of magnetic flux density within the coil along the Z and X axes is shown in Fig. 1b–d. The maximum homogeneity inside the coil was 10%. The magnetic field level was controlled before each experiment using a Gaussmeter (Model GM2, AlphaLab, Inc., USA). Animals were exposed to EMF inside the coil.
Fig. 1 The exposure system. Cockroaches in the magnetic coil (a). The coordinate system (b). The magnetic flux density distribution inside the solenoid along the Z (longer) axis (c) and the X (radial) axis (d).
The cockroaches (n = 20) were placed together in a cylindrical glass chamber (10 cm × 7.5 cm; volume 0.589 L) and their movement was not restricted. The cockroaches were divided into three control (CON) and three experimental (EMF) groups according to the duration of EMF exposure: (1) 24 h EMF exposure; (2) 72 h EMF exposure and (3) 7 day EMF exposure. In each animal, escape reaction time was measured only once. Control groups of insects were handled in an identical manner (the glass chamber was located in the sham exposure system for the same duration) to obtain similar experimental conditions, except for the presence of EMF. The temperature during the experiments was monitored using thermocouples mounted under each exposure system. Heat plate apparatus and experimental procedure The heat plate apparatus consisted of two aluminum chambers: a 'hot' one (50 °C) and a 'cool' one (30 °C) (Fig. 2). The hot chamber adhered to an aluminum container filled with hot water (65 °C) pumped from a water thermostat. This allowed a temperature of 50 °C to be maintained inside the chamber. At the other end, there was a second aluminum container which was filled with cold water (20 °C) pumped from another water thermostat. The temperature decreased linearly from the hot to the cold end, and in the cool area a temperature of approximately 30 °C was maintained. The chambers were separated by a 5-mm thick dark glass with a 1-cm hole that enabled the insect to escape. After placing the tested cockroach inside the hot chamber, a dark glass top cover was put in place and measurement of the escape reaction time was started. The end of the escape reaction time was determined when the cockroach's head appeared in the cold chamber.
Fig. 2 Scheme of the equipment used in the heat nociception assay.
Sample preparation Whole-body homogenates of the cockroaches were prepared using a glass Potter homogenizer (Kleinfeld Labortechnik, Gehrden, Germany).
The samples were homogenized in ice-cold phosphate buffer, pH 7.2 (Sigma), for 2-3 min and then were centrifuged at 12,000×g for 10 min at 4 °C. Supernatants were used for determination of MDA content and reduced glutathione (GSH) concentrations. Malondialdehyde (MDA) assay To determine the lipid peroxidation level, the thiobarbituric acid reacting substance (TBARS) was measured according to the method of Buege and Aust (1978), modified by Cheeseman and Slater (1993), and expressed in terms of MDA content. The samples were incubated with 15% trichloroacetic acid (TCA) and 0.37% thiobarbituric acid (TBA). The mixture was heated on a boiling water bath for 20 min with butylated hydroxytoluene (BHT) in ethanol, which prevented artefactual lipid peroxidation during the boiling step. After centrifugation (12,000×g for 15 min), the absorbance of the samples was measured spectrophotometrically at 535 nm. The molar extinction coefficient used to calculate MDA concentrations was 156 mM⁻¹ cm⁻¹. MDA content was expressed as µM/mg tissue. Reduced glutathione (GSH) assay To determine the reduced GSH concentration, the Ellman method (1959) was used. Whole-body homogenates were mixed thoroughly with a stock solution containing 10% TCA and 10 mM ethylenediaminetetraacetic acid (EDTA) and were centrifuged for 10 min at 10,000×g. After centrifugation, the supernatants were added to 2.3 mL of deionised water, 100 mL of 0.3 M EDTA, 300 mL of 0.32 M tris(hydroxymethyl)aminomethane (TRIS) and 100 mL of 0.086 mM 5,5′-dithiobis-2-nitrobenzoic acid (DTNB), and were maintained at 10 °C for 10 min. The absorbance of the samples was measured spectrophotometrically at 412 nm. The GSH concentration was expressed in µmol/g tissue. Data analysis All data were tested for normality (Kolmogorov-Smirnov test) and homogeneity of variance (Levene's test). Escape reaction time was analyzed using the Kruskal-Wallis test, and pairwise comparisons were determined using the Mann-Whitney U test. To assess the effect of EMF on lipid peroxidation and glutathione levels, two-way ANOVA was used with (1) exposure to EMF and (2) duration of exposure as fixed factors, followed by pairwise comparisons with Bonferroni correction. In all cases, p < 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics 24 software. Electromagnetic field alters cockroaches' response to noxious heat As shown in Fig. 3, exposure to electromagnetic field significantly affects the insects' response to noxious high ambient temperature. Time of exposure to EMF (24 vs. 72 h vs. 7 days) had a significant effect on the insects' response to high ambient temperature (Kruskal-Wallis test: χ² = 14.73; df = 2; p = 0.001). In the control groups, a significant increase in latency to escape from noxious heat with duration of exposure was also observed (Kruskal-Wallis test: χ² = 15.04; df = 2; p = 0.001). However, in EMF-exposed cockroaches, a significant prolongation of time spent at 50 °C compared to the control groups was observed. Escape reaction time in cockroaches exposed to EMF for 24 h was twice as long as that observed in control insects (12.9 ± 3.0 s; Mann-Whitney U test: U = 68.0, z = −3.43, p = 0.001). Time spent at noxious heat in cockroaches exposed to EMF for 72 h tripled in comparison to the value for the control group (26.8 ± 8.2 s; Mann-Whitney U test: U = 92.5, z = −2.19, p = 0.03).
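The nonparametric comparisons just quoted can be reproduced with standard statistical libraries. The sketch below applies the Kruskal–Wallis and Mann–Whitney U tests with SciPy to hypothetical escape-latency samples; the arrays are invented for illustration and are not the measured data, which were analyzed in SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical escape-latency samples (seconds); the published values are
# group means +/- SEM, so these arrays are illustrative only.
ctrl_24h = np.array([10.2, 14.1, 9.8, 16.0, 12.5, 11.9, 13.3, 15.2])
emf_24h  = np.array([22.4, 31.0, 18.7, 27.5, 35.2, 24.9, 29.8, 26.1])
emf_72h  = np.array([60.1, 88.4, 71.2, 95.6, 55.0, 80.3, 74.8, 69.5])

# Overall effect of exposure duration (Kruskal-Wallis) ...
h, p_kw = stats.kruskal(ctrl_24h, emf_24h, emf_72h)
# ... followed by a pairwise Mann-Whitney U comparison.
u, p_mw = stats.mannwhitneyu(ctrl_24h, emf_24h, alternative="two-sided")
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
print(f"Mann-Whitney U: U = {u:.1f}, p = {p_mw:.4f}")
```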
An increase in latency to escape was also observed in cockroaches exposed to EMF for 7 days (70.8 ± 15.4 s; Mann-Whitney U test: U = 14.5, z = −2.23, p = 0.02, compared to the control group).
Fig. 3 Latency to escape (s; mean ± SEM) from noxious heat in the cockroach Periplaneta americana L. exposed to electromagnetic field (EMF) for 24, 72 h or 7 days. *Significant differences between EMF-exposed and control groups (*p < 0.05; **p < 0.01 vs. control group; Mann-Whitney U test; n = 20); # significant differences between EMF-exposed groups (##p < 0.01; ###p < 0.001).
Exposure to EMF induces oxidative stress MDA levels in cockroaches were significantly increased after exposure to EMF, and their values depended on the time of exposure (Fig. 4). Two-way ANOVA showed that EMF exposure affects MDA level (F 1,91 = 17.59, p < 0.001). However, there was no significant interaction between exposure to EMF and its duration (F 2,97 = 0.28, p = 0.75). Significantly elevated MDA levels in comparison to the control group were observed after 24 h (1.56 µM/mg; p = 0.02), 72 h (2.31 µM/mg; p = 0.03) and 7 days of exposure (2.04 µM/mg; p = 0.003). The highest MDA level was observed in cockroaches exposed to EMF for 72 h and was significantly higher than that observed in cockroaches after 24 h of EMF exposure (p = 0.04).
Fig. 4 MDA levels (µmol/g; mean ± SD) in cockroaches exposed to electromagnetic field (EMF) for 24, 72 h or 7 days. *Significant differences between EMF-exposed and control groups (*p < 0.05, **p < 0.01); # indicates significant differences between EMF-exposed groups (#p < 0.05).
Exposure to EMF reduces glutathione levels Exposure to electromagnetic field resulted in a significant decrease in glutathione levels in the examined cockroaches (two-way ANOVA: F 1,72 = 5.97, p < 0.05). 24- and 72-h exposure to EMF did not affect the glutathione levels compared to the control groups (Fig. 5). A marked effect of EMF was observed after 7-day exposure, seen as a decline in glutathione level compared to control cockroaches (p < 0.001). A significant difference in glutathione level was also observed between cockroaches exposed to EMF of different durations. The lowest value of GSH was observed after 7 days of exposure, and it was significantly different from that observed after 24 h exposure (p < 0.001) and 72 h exposure (p < 0.001). Discussion The results of our study demonstrate that in cockroaches exposed to electromagnetic field, the response to noxious heat is altered. The longer the exposure to EMF was continued, the stronger the effect observed. After exposure to stressful stimuli, the phenomenon of pain suppression is observed, known as stress-induced analgesia (Butler and Finn 2009). Exposure to EMF affects both pain sensitivity and pain inhibition. Increased pain sensitivity after exposure to different ranges of magnetic environments has been shown to occur in a variety of animal species, including humans (Jeong et al. 2000). EMF exposure has been shown to reduce the effects of both exogenous and endogenous opioids in mediating analgesia. However, the effect of EMF on nociception depends on its intensity and duration of exposure (Del Seppia et al. 2007). For example, in the land snail Cepaea nemoralis, EMF exposure attenuated the response to thermal nociceptive stimuli. Stress in insects, including thermal stress, leads to a marked increase of oxidative stress as well as of heat shock protein (HSP) levels (Barclay and Robertson 2000; Robertson 2010) that play a key role in thermoprotection. Numerous studies have shown that exposure to EMF increases oxidative stress in mammals (Consales et al. 2012).
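The MDA quantification described above reduces to a Beer–Lambert conversion of the 535 nm absorbance using the quoted extinction coefficient. The sketch below stops at the MDA concentration of the assayed solution; normalizing to tissue mass, as reported in the paper, would additionally require the sample weight and is not shown.

```python
def mda_concentration(absorbance_535, path_length_cm=1.0, dilution_factor=1.0):
    """Convert TBARS absorbance at 535 nm to an MDA concentration (micromolar)
    via Beer-Lambert, using the extinction coefficient quoted in the text
    (156 mM^-1 cm^-1 = 0.156 uM^-1 cm^-1)."""
    epsilon_per_um = 156.0 / 1000.0  # mM^-1 cm^-1 -> uM^-1 cm^-1
    return absorbance_535 * dilution_factor / (epsilon_per_um * path_length_cm)

# Example: A535 = 0.25 in a 1-cm cuvette -> ~1.6 uM MDA in the assayed sample.
print(round(mda_concentration(0.25), 2))
```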
Our results clearly show that oxidative stress is a response to EMF exposure also in American cockroach. EMF induced the increase of MDA level, a marker of lipid peroxidation, in cockroach. The increase in this lipid peroxidation marker was observed after 24-h exposure and remained elevated until 7-day exposure. Zhang et al. (2016) have shown that effect of EMF on MDA levels is sex-dependent. They observed decline in MDA level in male, but not in female Drosophila exposed to EMF (50 Hz, 3 mT) for 12 h. However, in our experiments, the intensity and duration of EMF exposure was higher, what could act as a marked stressor. In our studies, we evaluated the level of low molecular antioxidant glutathione. The short-term EMF exposure did not affect its level. However, the prolonged exposure resulted in the glutathione decline. Reduced glutathione levels after EMF exposure was observed in mice (Arendash et al. 2010) and guinea pigs (Meral et al. 2007). Our results demonstrate that EMF (50 Hz, 7 mT) exposure may act as a stressor inducing oxidative stress observed as increase of the lipid peroxidation level and reduction of the glutathione level. However, how the EMF-induced changes in oxidative state are related to the function of nociceptors need to be further elucidated. There are also reports showing that EMF affects heat shock protein (HSP) accumulation in cells (Tokalov and Gutzeit 2004;Alfieri et al. 2006;Bernardini et al. 2007;Li et al. 2013;Wyszkowska et al. 2016). Thus, the increase in latency to escape from heat in the examined cockroaches may be a result of heat shock proteins accumulation, which act as molecular chaperones and help denaturated proteins to refold. It was suggested that a cellular response to EMF mimics the heat shock response (Kang et al. 1998); however, the data are inconsistent. Extremely low frequency magnetic fields affect heat shock proteins (HSPs) accumulation in cells, what suggests that at the molecular level, stress processes are affected by exposure to high levels of EMF (50 Hz, 680 µT-7 mT) (Alfieri et al. 2006;Wyszkowska et al. 2016). Recent studies on the effect of EMF of over 1 mT intensity have shown an increase in HSP70 transcription that affected neuronal activity in mice (Sun et al. 2016). On the other hand, Morehouse and Owen (2000) showed no significant effect of EMF (60 Hz, 8µT) on HSP70 level in HL60 cells. The studies on chick embryos have shown that repeated exposure to EMF (60 Hz, 8µT) led to reduced HSP70 levels and decline in cryoprotection (Carlo et al. 2001). These data suggest that the effect of EMF on HSP level depends on the type of the cell and dose of EMF (frequency and density, as well as duration exposure). In summary, our results proved that EMF alters the response of cockroaches to noxious heat. We presume that research on cockroach model in determining the role of EMF in pain sensitivity would be a useful tool for developing the strategies for pain inhibition.
3,809.4
2018-05-02T00:00:00.000
[ "Physics" ]
How Does China’s Economic Policy Uncertainty Affect the Sustainability of Its Net Grain Imports? : China is a considerable grain importer in the world. However, the sustainability of China’s grain imports has been greatly challenged by its increasing economic policy uncertainty (EPU). This paper constructs the indicators of economic and environmental sustainability of China’s net grain imports and analyzes the impact of its EPU index on these indicators with a Time-Varying Parameter Stochastic Volatility Vector Autoregression (TVP-SV-VAR) model to explore how China’s EPU affects the sustainability of its net grain imports. The main conclusions are as follows. (1) The sustainability of China’s net grain imports fluctuated from 2001 to 2019. (2) China’s EPU has a negative impact on the economic sustainability of its net grain imports. A higher EPU index leads to a lower net import potential ratio and higher trade cost. (3) China’s EPU has a significant negative impact on the environmental sustainability of its net grain imports. It has the greatest negative impact on virtual water imports and smaller impact on virtual land imports and embodied carbon emission. Therefore, China’s EPU affects the sustainability of its net grain imports negatively through its impact on its net grain import potential ratio, trade cost, and virtual land, virtual water, and embodied carbon emissions in net grain imports. Introduction China is a major grain importer in the world. Grain imports have increased remarkably since its WTO accession, from USD 3.42 billion in 2001 to USD 40.48 billion in 2019. China's grain imports have highly concentrated markets, with Brazil, the United States, Argentina, Canada, Ukraine, Uruguay, Australia, France, Thailand, Russia, Vietnam, and Pakistan as leading markets, accounting for 98.33% of its total grain imports in 2019 (as shown in Figure 1). Brazil and the United States are the two greatest markets and account for 57.01% and 17.20% of its total grain imports, respectively, in 2019. China's grain imports are also highly concentrated in products. Soybean is by far the biggest product for import, accounting for 82.24% of its total grain imports in 2001 and 86.99% in 2019 (UN-Comrade 2020). With a population of more than 1.4 billion and the transformation of people's consumption structure, China's demand for grain will continue to grow, and grain imports will increase accordingly [1]. The sustainability of China's net grain imports has a significant impact on its own food security, agricultural resources, and environment [2]. China's food security is greatly challenged, as its demand for grain expanded continuously with a growing population, rapid urbanization, and changing consumption patterns [3]. Therefore, the Chinese government has stipulated that the self-sufficiency rate of staple grain products (rice and wheat) should be maintained above 95% to ensure food security. Grain is highly water consumptive [4], and China's grain production is highly dependent on irrigation. The scarcity of water resources in China has brought great challenges to agricultural irrigation [5]. In addition, the spatial distribution of arable land and water resources in China does not match, with 65% of cultivated land located in northern China, which accounts for only 18% of its total water resources [6]. Therefore, China's grain production brings great pressure on land and water resources [7]. 
International grain trade minimizes consumption of agricultural resources and influences the environment of every country, by encouraging the most efficient production. Therefore, grain imports have become not only an important means for ensuring food security in China but also an important means for alleviating a shortage of resources and reducing pressure on the environment while pursuing sustainable development. Global grain trade, however, is increasingly influenced by economic policy uncertainty (EPU). For example, the outbreak of COVID-19, which led to obstruction of agricultural production and logistics interruption, has also led to restrictions on grain exports in many countries. Many ASEAN (Asia and Southeast Asia Nations) members have adopted policies for stabilizing price, limiting exports of grain products, and increasing financial support for agriculture to ensure effective supply and market stability of agricultural products. As a result, the approach of ensuring China's food security and achieving sustainable development through grain imports has been greatly challenged. Therefore, the current study aimed to examine the impact of China's EPU on the economic and environmental sustainability of China's net grain imports. It first evaluates the economic sustainability of China's net grain imports through trade potential ratio and trade cost, as well as through environmental sustainability through the flow of virtual water, virtual land, and embodied carbon emission in grain trade, and then adopts a Time-Varying Parameter Stochastic Volatility Vector Autoregression (TVP-SV-VAR) model to study the impact of its EPU on these two aspects of sustainability. It is expected that the findings can provide thoughtful advice that decision-makers can use to regulate and manage China's net grain imports and to achieve food security and sustainable development. The current study contributes to the existing literature in two ways. First, it examines the sustainability of China's net grain imports, from both an economic perspective, including import potential ratio and trade cost, and from an environmental perspective, including the flow of virtual water, virtual land, and embodied carbon emissions. Secondly, it focuses on the impact of China's EPU on different indicators of the sustainability of its net grain imports. Literature Review The research on sustainable trade can be traced back to Grossman andKrueger's research (1991) on the environmental impact of the North American Free Trade Agreement (NAFTA) [8]. According to International Chamber of Commerce (ICC 2018), Global trade: Securing Future Growth, "sustainable trade" is defined as "the business behavior or activity of trading commodities, goods and services that benefits all parties and minimizes the negative impact on society and environment while promoting global sustainable development" [9]. Similarly, the beginning of the WTO agreement put forward the idea of sustainable development of foreign trade-that is, to make reasonable use of world resources in accordance with the goals of sustainable development, seek the protection and maintenance of the environment, and strengthen the measures taken for this purpose in a way consistent with their respective needs and concerns at different levels of economic development. Therefore, sustainable development of foreign trade must consider both economic development and environmental protection. 
From the economic perspective, sustainability of trade is reflected in the realization of trade potential and the reduction of trade cost. Trade potential is the maximum trade flow that can be achieved through free trade without trade barriers [10]. Egger [11] took the fitted value of bilateral trade estimated by the traditional gravity model as "trade potential", which was widely accepted and adopted [12,13]. The trade potential ratio is the ratio of actual trade value to the fitted value [11].Trade cost is the total transaction and transportation cost related to the cross-border goods exchange, which, in a broad sense, includes transportation costs (freight and time costs), policy barriers (tariff and non-tariff barriers), information costs, contract implementation costs, return costs, laws and regulations costs, and local distribution costs [14,15]. As a form of transaction, international trade involves high transaction costs, which hinders international trade and economic integration [16]. Therefore, sustainable development of trade depends on the increase of the trade potential ratio and the decrease of trade cost. From the environmental perspective, international grain trade entails the transfer of virtual resources. Globally, Tuninetti et al. [17] pointed out that from 2050 to 2080, the world agricultural products trade network will change significantly, and nevertheless, compared with national self-sufficiency, international trade can save 40-60 m 3 of water per person per year. China, however, had a surplus in the virtual water trade of agricultural products from 2001 to 2013 [18]. For example, in agricultural trade with Trans-Pacific Partnership Agreement (TPPA) countries, China's virtual water trade surplus has been expanding [19]. China's net export of virtual water in agricultural products trade with Italy further aggravates the pressure on water resources [20]. In terms of virtual land resources, the total amount of virtual land trade in global agricultural products trade has been increasing, from 128 million ha in 1986 to 350 million ha in 2016 [21]. China's agricultural trade contributed an average of 3.27 million ha per year to global land conservation from 1986 to 2009 [22]. It is predicted that agricultural trade will continue to save water and land resources in China and the world [23]. China's grain imports, soybeans in particular, have been the leading contributor to China's virtual water and land imports. In addition, grain production also involves carbon footprint, which is the net embodied carbon emission of grain production [24], so grain trade also involves the transfer of carbon emissions. China and the United States are the largest importers of land resources and carbon emissions [25], and Brazil exported about 223.46 million tons of carbon emissions from soybean exports in 2010-2015, half of which were imported by China [26]. Economic policy uncertainty (EPU) is the risk of economic policy change that cannot be accurately predicted by market participants and that leads to economic fluctuations and changes in the macroeconomic environment [27].Since the formulation of the EPU index by Baker et al. [28],this index has been borrowed in a number of empirical applications [29,30]. The theoretical and empirical literature shows that high economic uncertainty can harm economic activity [31,32]. With the advancement of globalization and increasing interdependence of economies, international trade is strongly influenced by EPU, especially trade policy uncertainty. 
Trade policy uncertainty adds to the fixed cost of trade, so exporters are more cautious and often delay their entry into the market [33,34], while the decrease of trade policy uncertainty can remarkably reduce export cost, improve efficiency, and promote enterprise innovation [35]. Trade policy uncertainty is only a very small part of EPU [28]. Exporters are faced not only with trade policy uncertainty but also with the more general economic policy uncertainty in the world. Greenland et al. [36]found that a high EPU of a target market will reduce export to that country significantly. Researchers have also studied how EPU events influence grain production and trade [37].For example, the outbreak of COVID-19ledto the decline of agricultural production [38,39], instability of grain market, and price fluctuation [40]. Yao et al. [41] pointed out that if the pandemic hindered China's soybean imports, China would need to increase the cultivation area by 6.9 times to meet the demand, which would reduce its grain self-sufficiency rate to 63.4%, seriously affecting its food security. Moreover, EPU also has an important impact on the natural resources and environment in every country. For example, He et al. [] pointed out that the Sino-US trade conflict led to a surplus of soybeans and an increase of grain transportation mileage in the United States, which caused a significant increase in global environmental costs in the short term. Studies on China's grain trade show that changes in China's grain demand structure and production conditions have transformed China's position in international grain trade from a net exporter to a net importer. Increase of income and advancement of urbanization in China have brought great changes in food demand and consumption structure [42], while scarce agricultural resources and deterioration of the ecological environment constrain agricultural production [43]. To ensure of food security and protect agricultural resources and the environment, China must rely on the international market for supply. Since China's domestic market and international food markets are closely linked, China's food security will inevitably be greatly influenced by the international trade environment. Therefore, China's food security must be considered from a global perspective [44]. Despite the research interest in China's grain trade, few have studied the sustainability of China's net grain imports from both an economic and environmental perspective. In addition, EPU has considerable impact on trade sustainability, yet it is less studied. This paper studies how EPU affects the sustainability of China's net grain imports, and the following hypotheses were proposed. Data and Methods To study the sustainability of China's net grain imports, based on previous research, this study selected China's grain import potential ratio [11] and trade cost [16] as indicators of economic sustainability, and selected virtual land imports [23], virtual water imports [17], and embodied carbon emissions [26] as indicators of its environmental sustainability. The mechanism of the impact of EPU on China's net grain imports was ex-plored through the analysis of the dynamic impact of the EPU index on these indicators( Figure 2). Different techniques were adopted to achieve the objectives. The gravity model was used to measure import potential ratio [12,13], an indicator of economics sustainability. 
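A minimal sketch of the fixed-effects gravity estimation and the resulting import potential ratio is given below. The data frame is synthetic, the variable list is reduced (the paper also includes the exchange rate and TBT/SPS notifications), and all column names are invented for illustration; the point is only to show how the fitted ("potential") imports and the ratio of actual to fitted imports are obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per exporter-year (12 partners x 19 years).
n = 12 * 19
df = pd.DataFrame({
    "imports":  np.random.lognormal(20, 1, n),    # China's net grain imports (USD)
    "gdp_cn":   np.random.lognormal(9, 0.2, n),   # China's per capita GDP
    "gdp_ex":   np.random.lognormal(9, 0.5, n),   # exporter's per capita GDP
    "pop_ex":   np.random.lognormal(17, 1, n),    # exporter population
    "distance": np.random.uniform(5e3, 2e4, n),   # km
    "fta":      np.random.randint(0, 2, n),
    "exporter": np.repeat([f"c{i}" for i in range(12)], 19),
})

# Log-linear gravity equation with exporter fixed effects (the individual
# fixed-effect variable-intercept model selected by the Hausman test).
model = smf.ols(
    "np.log(imports) ~ np.log(gdp_cn) + np.log(gdp_ex) + np.log(pop_ex)"
    " + np.log(distance) + fta + C(exporter)", data=df).fit()

# Import potential ratio = actual imports / fitted ("potential") imports.
df["potential_ratio"] = df["imports"] / np.exp(model.fittedvalues)
print(df["potential_ratio"].describe())
```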
The Novy [45] model was adopted to measure the trade cost of China's grain imports, the other indicator of economic sustainability. Virtual land content, virtual water content, and embodied carbon emissions were adopted to measure virtual land imports, virtual water imports, and embodied carbon emissions in China's net grain imports, the indicators of the environmental sustainability of China's net grain imports. Measurement of Import Potential Ratio The import potential ratio is the ratio of the actual import value to the potential value. The trade gravity model proposed by Tinbergen [46] was adapted to measure the potential value. This paper selected the imports of wheat, rice, corn, soybean, and other major grain varieties as the explained variable. Explanatory variables first included per capita GDP, population, and geographical distance, as indicators of economic development, domestic demand, and transportation cost, respectively. Moreover, the exchange rate reflects the purchasing power of currency and influences bilateral trade [47]; thus, the exchange rate was included as an explanatory variable. FTA was also included, considering that it helps to reduce trade barriers and promotes trade [13]. The number of TBT/SPS notifications to the WTO on behalf of China reflects the non-tariff barriers in grain trade [48], so it was also included as an explanatory variable. The hypothesis behind this variable is that the higher the notification number, the lower the trade. The following trade gravity model was then established. A smaller import potential ratio means that the actual trade value is much smaller compared with the fitted value. Measurement of Trade Cost The Novy model [45] is a micro-founded measure of bilateral trade costs that indirectly infers trade frictions, and it was used to calculate China's grain trade cost: $TCO_{ijt} = \left( \frac{cx_{iit}\, cx_{jjt}}{cx_{ijt}\, cx_{jit}} \right)^{\frac{1}{2(\sigma - 1)}} - 1$, where $TCO_{ijt}$ is the tariff equivalent, which represents the grain trade cost between China and its partners, $cx_{iit}$ and $cx_{jjt}$ are the domestic grain trade volumes of China and its partners respectively, $cx_{ijt}$ and $cx_{jit}$ represent their exports to each other, and $\sigma$ represents the substitution elasticity. Given the trade flow between the two countries, the higher the substitution elasticity, the lower the trade cost between the two countries. This paper referred to Novy [45] for the value of the substitution elasticity. It can be inferred from the equation that an increase of bilateral trade as opposed to domestic trade means a decrease of trade costs between the two countries. It should also be noted that this trade cost measure is a relative value of bilateral trade costs to domestic trade costs, rather than an absolute value of bilateral trade costs. Measurement of Virtual Land The formula for calculating the virtual land content of grain is as follows. The green water footprint was calculated by CROPWAT 8.0. The green water evaporation for every 10-day period equals the minimum value between the effective precipitation ($P_{eff}$) and crop evaporation ($ET_{c,i}$). Effective precipitation was calculated by the method provided by the USDA SCS in CROPWAT 8.0, and the effective precipitation is different in different cities. $P_{eff,t}$ is the effective precipitation in the growing period of grain (mm). Blue Water Footprint of Grain Production The blue water footprint of grain production refers to the consumption of blue water during the growing period of grain. Blue water mainly comes from rivers, lakes, and underground aquifers. $ET_{blue}$ is the evaporation of blue water.
Total Water Footprint of Grain Production The water footprint of grain production refers to the total amount of water resources consumed in the process of crop growth per unit mass, including the green water footprint and the blue water footprint; it is also known as the grain water footprint. Measurement of Carbon Emissions Grain land embodied carbon emissions refer to the carbon emitted from the application of fertilizer, pesticide, and agricultural film as well as from irrigation and agricultural machinery. The embodied carbon emissions per unit of crops are calculated as

CAB_{it} = \sum_{k} A_{i}^{k} \, E_{k}    (13)

where CAB_{it} is the embodied carbon emissions per unit of agricultural land of grain i, A_i^k is the k-th main carbon source factor of grain species i, including the amount of chemical fertilizer, pesticide, agricultural film and diesel oil, the total planting area, and the effective irrigation area, and E_k is the embodied carbon emissions coefficient of each carbon source. Based on the calculation methods proposed by West et al. [49] and Dubey et al. [50], the corresponding emissions coefficients of six carbon sources were obtained, as shown in Table 1. The reduction in total carbon emissions due to net grain imports, INCA_t, is then obtained by multiplying the net grain imports, INX_i, by the corresponding embodied carbon emissions per unit and summing over grain varieties. TVP-SV-VAR Model The TVP-SV-VAR model can be derived from the general structural VAR model, which can be expressed as

A y_t = F_1 y_{t-1} + \cdots + F_s y_{t-s} + u_t, \qquad t = s+1, \ldots, n.

Assuming that B_i = A^{-1} F_i, stacking the elements of the B_i into a vector \beta, and defining X_t = I_k \otimes (y'_{t-1}, \ldots, y'_{t-s}), where \otimes stands for the Kronecker product, the reduced-form structural VAR can be expressed as

y_t = X_t \beta + A^{-1} \Sigma \varepsilon_t, \qquad \varepsilon_t \sim N(0, I_k).

Furthermore, assuming that all parameters are dynamically changing, the model is extended to the form of time-varying parameters:

y_t = X_t \beta_t + A_t^{-1} \Sigma_t \varepsilon_t.

The parameters in the TVP-SV-VAR model are assumed to follow random walk processes,

\beta_{t+1} = \beta_t + u_{\beta t}, \qquad a_{t+1} = a_t + u_{a t}, \qquad h_{t+1} = h_t + u_{h t},

where a_t stacks the free (lower-triangular) elements of A_t and h_t collects the log stochastic volatilities. The disturbances driving the different time-varying parameters are assumed to be mutually uncorrelated. The MCMC method was used to estimate the parameters of the TVP-SV-VAR model. Data Description The grain products in this study included wheat (HS 1001); barley (HS 1003); oats (HS 1004); maize (HS 1005); rice (HS 1006); grain sorghum (HS 1007); buckwheat, millet, and canary seeds (HS 1008); and soya beans (HS 1201). The sample interval in this study was 2001-2019. Because China's grain trade is dominated by imports, which accounted for 97.10% of total grain trade in 2019, this study focused on net grain imports, the difference between imports and exports. China's grain imports data were from the UN Comtrade database. Data on the per capita nominal GDP of China and the grain export countries, population, and the exchange rates of the export countries' currencies to the RMB were from the World Bank. Data on the geographical distance between China and the export countries came from Distance Calculator. Data about FTAs came from the Ministry of Commerce of China. TBT and SPS notifications on grain products submitted to the WTO on behalf of China were from the WTO/TBT-SPS Notification and Enquiry of China. Data on grain production were from the FAO, and a country's domestic trade volume was taken as the difference between its total grain production and its exports, following the method of Wei [52]. Data on planting area, yield per unit area, and total production of grain crops in China were from the National Bureau of Statistics of China. Data on China's rainfall came from the China Meteorological Administration.
Data on chemical fertilizer, pesticide, plastic film, and diesel oil consumption and on the irrigation area of China's grain crops were from the Compilation of Cost-Benefit Data of Agricultural Products in China. The annual CPI fixed-base index (base year 2001) was used to deflate all price-denominated data in order to eliminate the impact of inflation. The EPU index was from FRED Economic Data and was revised according to the research of Huang et al. [53]. Major events reflected in EPU included China's WTO accession, the outbreak of SARS, the financial crisis, China's abolishment of agricultural taxes, the 2010 Shanghai World EXPO, the change in the RMB fixing mechanism, the Sino-US trade conflict, and the outbreak of COVID-19 (Figure 3). China's EPU index fluctuated frequently but showed a significant upward trend, starting from 129.16. Economic Sustainability of China's Net Grain Imports 1. Net Grain Import Potential Ratio Panel data of 12 partner countries from 2001 to 2019 were used to estimate the fitted value of China's net grain imports, and it was important to select the specific form of the grain import model to improve the validity of the empirical results. The F test and the Hausman test were used for this purpose. In China's net grain import model, the p value of the Hausman test was 0.0000, rejecting the random-effects specification, so the individual fixed-effect variable-intercept model was adopted. This paper estimated the fitted value of net grain imports based on national panel data. The factors influencing net imports included per capita nominal GDP, population, geographical distance, exchange rate, FTA, TBT notification, etc., which vary from country to country. Therefore, the individual fixed-effect variable-intercept model could capture the regional differences in China's net grain imports more accurately. After the form of the model was determined, regression analysis of the variables in China's grain import equation was conducted, and the estimated results of each parameter are shown in Table 2. The estimation results show that the coefficient of China's per capita nominal GDP is significant at the 1% level and has the greatest positive impact on China's net grain imports. China's population, the population of the grain export countries, and China's TBT notification were significant at the 5% level. China's population had a positive impact on its net grain imports, while the population of the export countries had a negative impact. China's TBT notification had a small negative impact. The coefficients of the per capita nominal GDP and the exchange rate of the export countries were significant at the 10% level. The negative impact of the per capita nominal GDP of the export countries was the largest, while the exchange rate had a small positive impact. The coefficient of the geographical distance between China and the export countries was not significant, indicating that geographical distance was not the main factor affecting China's net grain imports, as a result of the continuous development of shipping technology. FTA had a positive, though not significant, impact on China's net grain imports, showing that signing an FTA was beneficial to China's net grain imports. The reason for its insignificance may be that China had not signed FTAs with the major grain export countries (such as the United States, Brazil, and Argentina). Note: *, **, and *** represent significance levels of 10%, 5%, and 1%, respectively.
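The estimation just described and the construction of the import potential ratio can be illustrated with the following minimal sketch (not the authors' code; the data file and column names are hypothetical, and the log-linear specification is an assumption):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Panel of China's grain imports: one row per (partner country, year); columns are illustrative.
df = pd.read_csv("grain_imports_panel.csv")  # hypothetical file

# Individual fixed effects are introduced through country dummies, C(country).
formula = ("np.log(imports) ~ np.log(gdp_pc_cn) + np.log(gdp_pc_ex) "
           "+ np.log(pop_cn) + np.log(pop_ex) + np.log(distance) "
           "+ np.log(exch_rate) + fta + tbt + C(country)")

fit = smf.ols(formula, data=df).fit()
print(fit.summary())

# Import potential ratio: actual imports relative to the fitted ("potential") value.
df["fitted_imports"] = np.exp(fit.fittedvalues)
df["potential_ratio"] = df["imports"] / df["fitted_imports"]
print(df.groupby("year")["potential_ratio"].mean())
```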
China's net grain import potential ratio from 2001 to 2019 can be evaluated based on the net import potential, which can be calculated with the estimated values of the parameters in China's grain import model. The changes in China's net grain imports, the fitted value of net imports, and the net grain import potential ratio are shown for 2001-2019. In the first period, from 2001 to 2008, the net import potential ratio was at a much higher level, despite frequent fluctuations. Since China's accession to the WTO, China's grain imports and import potential increased steadily, and the import potential ratio fluctuated frequently. In the second period, from 2009 to 2019, the net import potential ratio decreased more steadily, with a slight increase only in 2015. After that, China's grain import potential ratio decreased very rapidly in 2018 and 2019, when the Sino-US trade war greatly impacted China's net grain imports, especially soybean imports. Based on Equation (2), it should be noted that a smaller potential ratio means that the actual import value was much smaller than the fitted value. Therefore, the overall decline of the net import potential ratio indicates a widening gap between China's actual grain imports and the fitted value. Environmental Sustainability of China's Net Grain Imports The environmental sustainability of China's net grain imports was evaluated through three indicators: virtual land, virtual water, and embodied carbon emissions in China's net grain imports (Figure 6). TVP-SV-VAR Model Test This study adopted a TVP-SV-VAR model to analyze the impact of EPU on the sustainability of China's grain trade, because a VAR model is often used to deal with the dynamic relationship between time series and has strong explanatory power for the relationships between variables in an unstable system. Firstly, dimensionless normalization was used to eliminate the influence of variable units on the model results before the impulse response analysis. Secondly, stability tests of the EPU index, the import potential ratio (IPR), trade cost (TCO), virtual land imports volume (TVIL), virtual water imports volume (IWF), and embodied carbon emissions volume (INCA) were conducted, and a further cointegration test was needed for the non-stationary series. Finally, impulse response analysis of the TVP-SV-VAR model was carried out for the stationary sequences and the non-stationary sequences with cointegration. The ADF test results of each series are shown in Table 3. The series of the EPU index, IPR, and TVIL did not pass the stability test, while their first-order difference series all passed. Therefore, the cointegration test was carried out for these non-stationary series. The cointegration test results of the EPU index, import potential value, and virtual land imports volume series are shown in Table 4. It can be seen that there is cointegration among the non-stationary sequences, so they were used for the TVP-SV-VAR model analysis. Based on Nakajima [51], the stochastic volatility is specified as h_{jt} = \log \sigma_{jt}^2, and the prior assumption is made that \Sigma_\beta, \Sigma_a, and \Sigma_h (the covariance matrices of the random-walk innovations) are diagonal, with priors for their elements set following Nakajima [51]. The parameters of the TVP-SV-VAR model were estimated by using the MCMC method to simulate 10,000 samples. The results are shown in Table 5. Note: the reported parameters are the first two diagonal elements of the posterior distributions of \Sigma_\beta, \Sigma_a, and \Sigma_h, respectively; the results for the remaining diagonal elements are similar. 95% L is the lower limit of the 95% confidence interval, and 95% U is the upper limit of the 95% confidence interval.
Table 5 shows the posterior mean, posterior standard deviation, bounds of the 95% confidence interval, Geweke convergence diagnostic value, and inefficiency factor of each parameter in the two models. The posterior means of all parameters are within the 95% confidence intervals, and the Geweke convergence diagnostics are below the 5% critical value of 1.96. Therefore, the null hypothesis of convergence to the posterior distribution was not rejected. In this model, the maximum inefficiency factor of the parameters was 6.92, which indicates that the MCMC sampling results satisfy the posterior inference of the TVP-SV-VAR model, with at least 1445 effective samples obtained out of 10,000 draws. The first row of Figure 7 shows the sample autocorrelation coefficients, the second row shows the sample paths, and the third row shows the posterior densities. The autocorrelation coefficients decrease rapidly and the sample paths are basically stable, which indicates that the subsequent inference of the model is reliable. Impact of China's EPU on the Economic Sustainability of Its Net Grain Imports (1) Impact of China's EPU on Its Grain Import Potential Ratio As seen in the impulse response results, a one-standard-deviation shock to the EPU disturbance term had a significant negative impact on China's grain import potential ratio, and the negative impact was time-varying, as shown in Figure 8a. Firstly, EPU had a negative impact on China's grain import potential ratio. From 2001 to 2007, the EPU index was relatively low and China's grain import potential ratio was high; that is, China's actual grain import value was closer to the fitted value. China implemented a series of tariff reduction measures after its WTO accession at the end of 2001, and EPU decreased, leading to an increase in grain imports [54] and a significant increase in the import potential ratio. The EPU index decreased as China began to abolish agricultural taxes in 2006, and China's grain imports increased afterwards. The EPU index increased greatly with the outbreak of the world food crisis and the financial crisis in 2008, which played a significant role in the global trade collapse [55]. Many countries imposed restrictions on grain exports to ensure domestic grain supply, and China's grain import potential ratio decreased significantly. After 2008, the EPU index showed a significant upward trend, and China's grain import potential ratio showed a significant downward trend. The EPU index rose sharply during the Sino-US trade war in 2018 and the outbreak of COVID-19 in 2019 in particular, resulting in a remarkable decline in China's grain import potential ratio. Therefore, the increase of the EPU index led to the decrease of China's grain imports and of the import potential ratio, and had a negative impact on the economic sustainability of China's net grain imports. Secondly, the impact of EPU on China's grain import potential ratio was time-varying.
The overall change of the impulse response values over the whole sample shows that the negative impact of EPU on China's grain import potential ratio reached its maximum in the second response period in all sample years (2001-2019), indicating that EPU had a significant negative impact on grain import potential in the short term (a lag of two periods); the negative impact then gradually decreased to zero as the response period increased. In terms of the change of the impulse response value in each year, the negative impact of EPU on the grain import potential ratio was largest in the second response period of 2019, with a value of −0.288, while the impact was relatively small in the second response period of 2005, with a value of only −0.242. This shows that the negative impact of EPU on China's grain import potential ratio varies over time. (2) Impact of China's EPU on Its Grain Trade Cost As seen in the impulse response results, a one-standard-deviation shock to the EPU disturbance term had a large positive impact on China's grain trade cost, which was remarkably time-varying, as shown in Figure 8b. Firstly, the positive impact of EPU on China's grain trade cost shows that when EPU increases, trade cost increases accordingly. China's grain trade cost was the highest in 2001 and then decreased as EPU decreased after China's WTO accession at the end of 2001. The outbreak of the world food crisis and the financial crisis in 2008 led to higher EPU, and the world's major grain export countries tightened control of grain exports through export tariffs to ensure domestic supply. Consequently, China's grain trade cost increased significantly. After 2011, China's grain trade cost decreased as EPU decreased. From 2015 to 2017, EPU decreased with China's RMB exchange rate mechanism reform, and trade cost declined accordingly. However, the Sino-US trade conflict in 2018 and COVID-19 in 2019 led to an increase in EPU and a rise in China's grain trade cost [33]. Secondly, this positive impact was also time-varying. Therefore, EPU had a negative impact on the economic sustainability of China's net grain imports. The rise of the EPU index decreased China's grain import potential ratio while it increased trade costs, which reduced the economic sustainability of China's net grain imports. Figure 8. (a) Impact on import potential ratio; (b) impact on trade cost. 2. Impact of China's EPU on the Environmental Sustainability of Its Net Grain Imports Firstly, EPU had a negative impact on virtual land imports, virtual water imports, and embodied carbon emissions. The decrease of EPU promoted the growth of China's net grain imports, so it also had a positive impact on the import of virtual land and virtual water and on the reduction of carbon emissions. For example, EPU decreased during China's RMB exchange rate system reform from 2015 to 2017, and virtual land imports, virtual water imports, and embodied carbon emissions increased significantly. In 2015, the substantial increase of China's net grain imports, a result of the appreciation of the RMB against the Brazilian Real and the Canadian Dollar and the growing soybean demand in the Chinese market, led to a significant increase in virtual land imports, virtual water imports, and carbon emissions reduction. The decrease of China's net grain imports in 2016, caused by the deterioration of the international commodity trade environment and the depreciation of the RMB against the USD, led to a decrease in virtual land imports, virtual water imports, and embodied carbon emissions.
The Sino-US trade war in 2018 and the outbreak of COVID-19 in 2019 led to a sharp rise in the EPU index and to a sharp decline in virtual land imports, virtual water imports, and embodied carbon emissions in China's net grain imports, and thus increased the environmental costs of agricultural production [56]. Secondly, the negative impacts of EPU on virtual land imports, virtual water imports, and embodied carbon emissions differed in magnitude. EPU had the greatest negative impact on virtual water imports but less impact on virtual land imports and embodied carbon emissions. The maximum negative response of virtual water imports to an EPU shock was −0.422, while those of virtual land imports and embodied carbon emissions were −0.195 and −0.208, respectively. The main reason for this difference is that grain is highly water-consuming, so the virtual water embodied in grain imports is more significant than the virtual land imports and the carbon reduction. Thirdly, the negative impact of EPU on China's virtual land and water imports and embodied carbon emissions was time-varying. It can be seen from the overall change of the impulse response values over the whole sample that the negative impact of EPU on the three indicators reached its maximum in the third response period and then gradually decreased to zero as the response period increased. Therefore, EPU has a negative impact on the environmental sustainability of China's net grain imports. With the rise of the EPU index, virtual water imports, virtual land imports, and embodied carbon emissions in China's net grain imports decrease, which reduces the environmental sustainability of China's net grain imports. 3. Impact of China's EPU on the Overall Sustainability of Its Net Grain Imports EPU has a very significant negative impact on the sustainable development of China's net grain imports (Table 6). On the one hand, a higher EPU index leads to a lower import potential ratio and higher trade cost; therefore, EPU has a negative impact on the economic sustainability of China's net grain imports. On the other hand, the rise of the EPU index also leads to a decrease in virtual land imports, virtual water imports, and embodied carbon emissions, so it reduces the environmental sustainability of China's net grain imports. Therefore, the rise of EPU poses challenges to the sustainability of China's net grain imports. With the rise of anti-globalization and protectionism, EPU will affect the sustainability of China's net grain imports even more significantly. The Sino-US trade war, in particular, had a lasting and profound negative impact on the sustainability of China's net grain imports. The outbreak of COVID-19 and its global spread in 2020 urged all countries to lay more emphasis on national food security, with emergency measures to restrict grain exports. Therefore, it is imperative for China to find countermeasures in the face of increasing EPU in order to promote the sustainability of its net grain imports and to ensure its food security. Conclusions This study evaluated the sustainability of China's net grain imports from both an economic and an environmental perspective through the estimation of its grain import potential ratio, trade cost, virtual land imports, virtual water imports, and embodied carbon emissions from 2001 to 2019. It analyzed the impact of the EPU index on these indicators with the help of the TVP-SV-VAR model, exploring the impact of EPU on the sustainable development of China's net grain imports.
The main conclusions are as follows. (1) Among the events captured by EPU, China's WTO accession, the outbreak of the world food crisis and financial crisis, the reform of China's exchange rate system, and the outbreak of the Sino-US trade conflict caused significant changes in the economic and environmental sustainability of China's net grain imports. (2) From an economic perspective, EPU has a negative impact on the sustainability of China's grain imports. A higher EPU index leads to a lower import potential ratio and higher trade costs, which harms the economic sustainability of China's net grain imports. (3) Environmentally, the EPU index has a significant negative impact on environmental sustainability. It has a greater negative impact on virtual water imports than on virtual land imports and embodied carbon emissions. The outbreak of the Sino-US trade conflict, in particular, led to a remarkable increase in the environmental cost of grain production. Implications EPU significantly affects the sustainability of China's grain trade. In recent years, the frequent trade conflicts between China and the United States and the outbreak of COVID-19, in particular, have severely impacted the sustainability of China's net grain imports. In order to make full use of the international market and resources to ensure domestic food security and relieve the pressure of grain production on resources and the environment, while avoiding the severe impact of large grain imports on the domestic market, it is necessary for China to optimize its grain trading system and continuously improve grain production efficiency. China should further optimize its grain trade policy, strengthen international cooperation, and promote the construction of a trade information platform so as to enhance the sustainability of its net grain imports. First, China should further optimize its trade market structure and strengthen international cooperation. China's grain imports are severely challenged by American trade policy and international trade protectionism, so the domestic grain market is facing enormous pressure. In order to ensure food security and stabilize the international supply of food, China should strengthen strategic cooperation with major grain export countries through trade consultation and should develop bilateral trade based on comparative advantages to stabilize cooperative relations. Secondly, China should optimize its grain trade system and straighten out the relationships among the participants in all aspects of grain trade. It is important for China to optimize the grain tariff quota system, allow greater market self-regulation, weaken administrative intervention, strengthen market supervision, and improve the efficiency of grain import and export customs clearance. Thirdly, China's agricultural products trade market faces increasing uncertainty given the growing trend of anti-globalization and trade protectionism. Therefore, China should accelerate the construction of a Customs grain trade big data platform by subdividing the types of grain products and unifying data standards to enhance the reliability and availability of data. A data collection and analysis department should be established to analyze and forecast food security in the world and in China, and a supporting system of grain trade monitoring and early warning should also be set up.
Meanwhile, China should strive to improve domestic grain production efficiency and the utilization rate of agricultural resources to enhance its international competitiveness. Advanced production and processing technology and management experience should be introduced and applied by strengthening exchanges and cooperation with research institutions to improve grain yield. Improvement of the utilization rate of agricultural resources can be achieved through the application of advanced technologies. The efficiency of water utilization can be improved by strengthening the management of farmland soil moisture and by applying sprinkler irrigation, drip irrigation, and other more efficient irrigation technologies. China should strictly safeguard the area of cultivated land and conduct intensive land management to improve the efficiency of land utilization, for example, through the development of three-dimensional agriculture and inter-cropping. Meanwhile, it is also important to reduce the residue of harmful substances to improve land quality and productivity and to ensure the environmental sustainability of China's grain production.
9,167.2
2021-06-18T00:00:00.000
[ "Economics", "Environmental Science" ]
Towards Better Entity Linking with Multi-View Enhanced Distillation Dense retrieval is widely used for entity linking to retrieve entities from large-scale knowledge bases. Mainstream techniques are based on a dual-encoder framework, which encodes mentions and entities independently and calculates their relevance via rough interaction metrics, resulting in difficulty in explicitly modeling multiple mention-relevant parts within entities to match divergent mentions. Aiming at learning entity representations that can match divergent mentions, this paper proposes a Multi-View Enhanced Distillation (MVD) framework, which can effectively transfer knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders. Each entity is split into multiple views to avoid irrelevant information being over-squashed into the mention-relevant view. We further design cross-alignment and self-alignment mechanisms for this framework to facilitate fine-grained knowledge distillation from the teacher model to the student model. Meanwhile, we reserve a global-view that embeds the entity as a whole to prevent dispersal of uniform information. Experiments show our method achieves state-of-the-art performance on several entity linking benchmarks. Introduction Entity Linking (EL) serves as a fundamental task in Natural Language Processing (NLP), connecting mentions within unstructured contexts to their corresponding entities in a Knowledge Base (KB). EL usually provides the entity-related data foundation for various tasks, such as KBQA (Ye et al., 2022), knowledge-based language models, and information retrieval (Li et al., 2022). Most EL systems consist of two stages: entity retrieval (candidate generation), which retrieves a small set of candidate entities corresponding to mentions from a large-scale KB with low latency, and entity ranking (entity disambiguation), which ranks those candidates using a more accurate model to select the best match as the target entity. Figure 1: The illustration of two types of entities. Mentions in contexts are in bold, and key information in entities is highlighted in color. The information in the first type of entity is relatively consistent and can be matched with a corresponding mention. In contrast, the second type of entity contains diverse and sparsely distributed information and can match with divergent mentions. This paper focuses on the entity retrieval task, which poses a significant challenge due to the need to retrieve targets from a large-scale KB. Moreover, the performance of entity retrieval is crucial for EL systems, as any recall errors in the initial stage can have a significant impact on the performance of the latter ranking stage (Luan et al., 2021). Recent advancements in pre-trained language models (PLMs) (Kenton and Toutanova, 2019) have led to the widespread use of dense retrieval technology for large-scale entity retrieval (Gillick et al., 2019). This approach typically adopts a dual-encoder architecture that embeds the textual content of mentions and entities independently into fixed-dimensional vectors (Karpukhin et al., 2020) and calculates their relevance scores using a lightweight interaction metric (e.g., dot-product). This allows for pre-computing the entity embeddings, enabling entities to be retrieved through various fast nearest neighbor search techniques (Johnson et al., 2019; Jayaram Subramanya et al., 2019).
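As a concrete illustration of this precompute-then-search workflow, the following is a minimal sketch using a flat inner-product FAISS index (Johnson et al., 2019); the embedding dimension, file name, and candidate number are assumptions for illustration rather than details of any particular system:

```python
import numpy as np
import faiss  # nearest neighbor search library of Johnson et al. (2019)

dim = 768  # embedding size of a BERT-base encoder (assumed)
entity_emb = np.load("entity_embeddings.npy").astype("float32")  # hypothetical precomputed entity vectors

index = faiss.IndexFlatIP(dim)  # inner product = dot-product relevance
index.add(entity_emb)

def retrieve(mention_emb: np.ndarray, k: int = 64):
    """Return the ids and scores of the top-k candidate entities for one mention embedding."""
    scores, ids = index.search(mention_emb.astype("float32").reshape(1, -1), k)
    return ids[0], scores[0]
```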
The primary challenge in modeling relevance between an entity and its corresponding mentions lies in explicitly capturing the mention-relevant parts within the entity. By analyzing the diversity of intra-information within the textual contents of entities, we identify two distinct types of entities, as illustrated in Figure 1. Entities with uniform information can be effectively represented by the dual-encoder; however, due to its single-vector representation and coarse-grained interaction metric, this framework may struggle with entities containing divergent and sparsely distributed information. To alleviate the issue, existing methods construct multi-vector entity representations from different perspectives (Zhang and Stratos, 2021; Tang et al., 2021). Despite these efforts, all these methods rely on coarse-grained entity-level labels for training and lack the necessary supervised signals to select the most relevant representation for a specific mention from multiple entity vectors. As a result, their capability to effectively capture multiple fine-grained aspects of an entity and accurately match mentions with varying contexts is significantly hampered, ultimately leading to suboptimal performance in dense entity retrieval. In order to obtain fine-grained entity representations capable of matching divergent mentions, we propose a novel Multi-View Enhanced Distillation (MVD) framework. MVD effectively transfers knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders. By jointly encoding the entity and its corresponding mentions, cross-encoders enable the explicit capture of mention-relevant components within the entity, thereby facilitating the learning of fine-grained elements of the entity through more accurate soft-labels. To achieve this, our framework constructs the same multi-view representation for both modules by splitting the textual information of entities into multiple fine-grained views. This approach prevents irrelevant information from being over-squashed into the mention-relevant view, which is selected based on the results of cross-encoders. We further design cross-alignment and self-alignment mechanisms for our framework to separately align the original entity-level and fine-grained view-level scoring distributions, thereby facilitating fine-grained knowledge transfer from the teacher model to the student model. Motivated by prior works (Xiong et al., 2020; Zhan et al., 2021), MVD jointly optimizes both modules and employs an effective hard negative mining technique to facilitate the transfer of hard-to-train knowledge in distillation. Meanwhile, we reserve a global-view that embeds the entity as a whole to prevent dispersal of uniform information and better represent the first type of entities in Figure 1. Through extensive experiments on several entity linking benchmarks, including ZESHEL, AIDA-B, MSNBC, and WNED-CWEB, our method demonstrates superior performance over existing approaches. The results highlight the effectiveness of MVD in capturing fine-grained entity representations and matching divergent mentions, which significantly improves entity retrieval performance and facilitates overall EL performance by retrieving high-quality candidates for the ranking stage. Related Work To accurately and efficiently acquire target entities from large-scale KBs, the majority of EL systems are designed in two stages: entity retrieval and entity ranking.
For entity retrieval, prior approaches typically rely on simple methods like frequency information (Yamada et al., 2016), alias tables (Fang et al., 2019), and sparse retrieval models (Robertson et al., 2009) to retrieve a small set of candidate entities with low latency. For the ranking stage, neural networks have been widely used for calculating the relevance score between mentions and entities (Yamada et al., 2016; Ganea and Hofmann, 2017; Fang et al., 2019; Kolitsas et al., 2018). Recently, with the development of PLMs (Kenton and Toutanova, 2019), PLM-based models have been widely used for both stages of EL. Logeswaran et al. (2019) and Yao et al. (2020) utilize the cross-encoder architecture that jointly encodes mentions and entities to rank candidates, while Gillick et al. (2019) employs the dual-encoder architecture for separately encoding mentions and entities into high-dimensional vectors for entity retrieval. BLINK improves overall EL performance by incorporating both architectures in its retrieve-then-rank pipeline, making it a strong baseline for the task. GENRE (De Cao et al., 2020) directly generates entity names through an auto-regressive approach. To further improve retrieval performance, various methods have been proposed. Zhang and Stratos (2021) and Sun et al. (2022) demonstrate the effectiveness of hard negatives in enhancing retrieval performance. Agarwal et al. (2022) and GER (Wu et al., 2023) construct mention/entity-centralized graphs to learn fine-grained entity representations. However, being limited to a single-vector representation, these methods may struggle with entities that have multiple and sparsely distributed pieces of information. Although Tang et al. (2021) and MuVER construct multi-view entity representations and select the optimal view to calculate the relevance score with the mention, they still rely on the same entity-level supervised signal to optimize the scores of different views within the entity, which limits the capacity to match divergent mentions. In contrast to existing methods, MVD is primarily built upon the knowledge distillation technique (Hinton et al., 2015), aiming to acquire fine-grained entity representations from cross-encoders to handle diverse mentions. To facilitate fine-grained knowledge transfer of multiple mention-relevant parts, MVD splits the entity into multiple views to avoid irrelevant information being squashed into the mention-relevant view, which is selected by the more accurate teacher model. The framework further incorporates cross-alignment and self-alignment mechanisms to learn mention-relevant view representations from both the original entity-level and the fine-grained view-level scoring distributions; these distributions are derived from the soft-labels generated by the cross-encoder. Task Formulation We first describe the task of entity linking as follows. Given a mention m in a context sentence s = <c_l, m, c_r>, where c_l and c_r are the words to the left/right of the mention, our goal is to efficiently obtain the entity corresponding to m from a large-scale entity collection ε = {e_1, e_2, ..., e_N}; each entity e ∈ ε is defined by its title t and description d, as is the generic setting in neural entity linking (Ganea and Hofmann, 2017). Here we follow the two-stage retrieve-then-rank paradigm: 1) retrieving a small set of candidate entities {e_1, e_2, ..., e_K} corresponding to mention m from ε, where K ≪ N; 2) ranking those candidates to obtain the best match as the target entity.
In this work, we mainly focus on first-stage retrieval. Encoder Architecture In this section, we describe the model architectures used for dense retrieval. The dual-encoder is the most widely adopted architecture for large-scale retrieval, as it separately embeds mentions and entities into high-dimensional vectors, enabling offline entity embeddings and efficient nearest neighbor search. In contrast, the cross-encoder architecture performs better by computing deeply-contextualized representations of mention tokens and entity tokens, but it is computationally expensive and impractical for first-stage large-scale retrieval (Reimers and Gurevych, 2019; Humeau et al., 2019). Therefore, in this work, we use the cross-encoder only during training, as the teacher model, to enhance the performance of the dual-encoder through the distillation of relevance scores. Dual-Encoder Architecture Similar to prior work on dense entity retrieval, the retriever contains two-tower PLM-based encoders Enc_m(·) and Enc_e(·) that encode the mention and the entity into single fixed-dimension vectors independently, which can be formulated as:

E(m) = Enc_m([CLS] c_l [M_s] m [M_e] c_r [SEP])
E(e) = Enc_e([CLS] t [ENT] d [SEP])    (1)

where m, c_l, c_r, t, and d are the word-piece tokens of the mention, the context before and after the mention, the entity title, and the entity description. The special tokens [M_s] and [M_e] are separators to identify the mention, and [ENT] serves as the delimiter of titles and descriptions. [CLS] and [SEP] are special tokens in BERT. For simplicity, we directly take the [CLS] embeddings E(m) and E(e) as the representations for mention m and entity e; the relevance score s_de(m, e) is then calculated by a dot product, s_de(m, e) = E(m) · E(e). Cross-Encoder Architecture The cross-encoder is built upon a PLM-based encoder Enc_ce(·), which concatenates and jointly encodes mention m and entity e (removing the [CLS] token from the entity tokens), takes the [CLS] vector as their relevance representation E(m, e), and finally feeds it into a multi-layer perceptron (MLP) to compute the relevance score s_ce(m, e). Multi-View Based Architecture With the aim of preventing irrelevant information from being over-squashed into the entity representation and of better representing the second type of entities in Figure 1, we construct multi-view entity representations for the entity-encoder Enc_e(·). The textual information of the entity is split into multiple fine-grained local-views to explicitly capture the key information in the entity and match mentions with divergent contexts. Following the settings of MuVER, for each entity e, we segment its description d into several sentences d_t (t = 1, 2, ..., n) with the NLTK toolkit (www.nltk.org), and then concatenate each of them with the title t to form the t-th view e^t (t = 1, 2, ..., n):

E(e^t) = Enc_e([CLS] t [ENT] d_t [SEP])    (2)

Meanwhile, we retain the original entity representation E(e) defined in Eq. (1) as the global-view e^0 in inference, to avoid the uniform information being dispersed into different views and to better represent the first type of entities in Figure 1. Finally, the relevance score s(m, e_i) of mention m and entity e_i can be calculated with their multiple embeddings. Here we adopt a max-pooler to select the view with the highest relevance score as the mention-relevant view:

s(m, e_i) = max_t E(m) · E(e_i^t)

Multi-View Enhanced Distillation The basic intuition of MVD is to accurately transfer knowledge of multiple fine-grained views from a more powerful cross-encoder to the dual-encoder to obtain mention-relevant entity representations.
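Before turning to the distillation itself, the max-pooled multi-view scoring defined above can be sketched as follows (a minimal PyTorch-style illustration; tensor shapes and names are assumptions, not the authors' implementation):

```python
import torch

def multi_view_score(mention_emb: torch.Tensor, view_embs: torch.Tensor):
    """Max-pooled relevance between one mention and one entity.

    mention_emb : [dim]            mention embedding E(m)
    view_embs   : [n_views, dim]   embeddings of the entity's views (global + local)
    Returns the score of the best-matching view and the index of that view.
    """
    scores = view_embs @ mention_emb            # dot product for every view
    best_score, best_view = scores.max(dim=0)   # the max-pooler selects the mention-relevant view
    return best_score, best_view

# Toy usage
m = torch.randn(768)
views = torch.randn(5, 768)   # e.g., one global view plus four local views
print(multi_view_score(m, views))
```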
First, in order to provide more accurate relevance between mention m and each view e^t (t = 1, 2, ..., n) of the entity e as a supervised signal for distillation, we introduce a multi-view based cross-encoder following the formulation in Sec 3.2.3: the mention tokens m_enc and each view's tokens e^t_enc (t = 1, 2, ..., n), defined as in Eq. (1) and (2) respectively, are concatenated and jointly encoded, and the resulting [CLS] representation is fed into the MLP to produce a view-level score s_ce(m, e^t). We further design cross-alignment and self-alignment mechanisms to separately align the original entity-level scoring distribution and the fine-grained view-level scoring distribution, in order to facilitate the fine-grained knowledge distillation from the teacher model to the student model. Cross-alignment In order to learn the entity-level scoring distribution among candidate entities in the multi-view scenario, we calculate the relevance score s(m, e_i) for mention m and candidate entity e_i in the candidate set {e_1, e_2, ..., e_K} over all its views {e_i^1, e_i^2, ..., e_i^n}; the indexes of the relevant views i_de and i_ce for the dual-encoder and the cross-encoder are

i_de = argmax_t s_de(m, e_i^t),    i_ce = argmax_t s_ce(m, e_i^t).

Here, to avoid a mismatch of relevant views (i.e., i_de ≠ i_ce), we align the relevant views based on the index i_ce of the max-score view in the cross-encoder; the loss can be measured by KL-divergence as

L_cross = KL( s̃_ce(m, e_i) ∥ s̃_de(m, e_i) )    (6)

where s̃_de(m, e_i) and s̃_ce(m, e_i) denote the probability distributions of the entity-level scores, represented by the i_ce-th view, over all candidate entities. Self-alignment Aiming to learn the view-level scoring distribution within each entity for better distinguishing the relevant view from other irrelevant views, we calculate the relevance score s(m, e_i^t) for mention m and each view e_i^t (t = 1, 2, ..., n) of entity e_i; the loss can be measured by KL-divergence as

L_self = KL( s̃_ce(m, e_i^t) ∥ s̃_de(m, e_i^t) )    (8)

where s̃_de(m, e_i^t) and s̃_ce(m, e_i^t) denote the probability distributions of the view-level scores over all views within each entity. Joint training The overall joint training framework can be found in Figure 2. The final loss function is defined as

L_total = L_de + L_ce + αL_cross + βL_self    (10)

Here, L_cross and L_self are the knowledge distillation losses with the cross-encoder defined as in Eq. (6) and (8), respectively, and α and β are their coefficients. Besides, L_de and L_ce are the supervised training losses of the dual-encoder and the cross-encoder on the labeled data, which maximize s(m, e_k) for the golden entity e_k in the set of candidates {e_1, e_2, ..., e_K}; the loss can be defined as

L = -\log \frac{\exp(s(m, e_k))}{\sum_{j=1}^{K} \exp(s(m, e_j))}    (11)

Inference At inference we only apply the mention-encoder to obtain the mention embeddings, and then retrieve targets directly from pre-computed view embeddings via efficient nearest neighbor search. These view embeddings encompass both global and local views and are generated by the entity-encoder after joint training. Although the size of the entity index increases with the number of views, the retrieval time can remain sub-linear in the index size thanks to mature nearest neighbor search techniques. Hard Negative Sampling Hard negatives are effective information carriers for difficult knowledge in distillation. Mainstream techniques for generating hard negatives include utilizing static samples or top-K dynamic samples retrieved from a recent iteration of the retriever (Xiong et al., 2020; Zhan et al., 2021), but these hard negatives may not be suitable for the current model or may be pseudo-negatives (i.e., unlabeled positives).
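As a schematic illustration of the cross-alignment and self-alignment losses defined above, the following PyTorch sketch assumes score matrices of shape [K candidates x n views] from the dual-encoder (student) and the cross-encoder (teacher); the softmax normalization and the direction of the KL divergence are assumptions rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def alignment_losses(s_de: torch.Tensor, s_ce: torch.Tensor):
    """KL-based distillation losses for one mention.

    s_de, s_ce : [K, n_views] relevance scores of the dual-encoder (student)
                 and the cross-encoder (teacher) for K candidate entities.
    """
    # Cross-alignment: entity-level distribution over candidates, with each entity
    # represented by the view the teacher finds most relevant.
    i_ce = s_ce.argmax(dim=1)                              # teacher's relevant view per entity
    ent_de = s_de.gather(1, i_ce.unsqueeze(1)).squeeze(1)  # student scores at those views
    ent_ce = s_ce.gather(1, i_ce.unsqueeze(1)).squeeze(1)
    l_cross = F.kl_div(F.log_softmax(ent_de, dim=0),
                       F.softmax(ent_ce, dim=0), reduction="sum")

    # Self-alignment: view-level distribution within each candidate entity.
    l_self = F.kl_div(F.log_softmax(s_de, dim=1),
                      F.softmax(s_ce, dim=1), reduction="batchmean")
    return l_cross, l_self
```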
Aiming to mitigate the pseudo-negative issue noted above, we adopt a simple negative sampling method that first retrieves the top-N candidates and then randomly samples K negatives from them, which reduces the probability of pseudo-negatives and improves the generalization of the retriever. Datasets We evaluate MVD on two distinct types of datasets: the standard EL datasets AIDA-CoNLL, MSNBC, and WNED-CWEB, and the zero-shot dataset ZESHEL. Training Procedure The training pipeline of MVD consists of two stages: Warmup training and MVD training. In the Warmup training stage, we separately train the dual-encoder and the cross-encoder with in-batch negatives and static negatives. Then we initialize the student model and the teacher model with the well-trained dual-encoder and cross-encoder, and perform multi-view enhanced distillation to jointly optimize the two modules following Section 3.3. Implementation details are listed in Appendix A.2. Main Results Compared Methods We compare MVD with previous state-of-the-art methods. These methods can be divided into several categories according to the representations of entities. BM25 (Robertson et al., 2009) is a sparse retrieval model based on exact term matching. BLINK adopts a typical dual-encoder architecture that embeds the entity independently into a single fixed-size vector. SOM (Zhang and Stratos, 2021) represents entities by their tokens and computes relevance scores via the sum-of-max operation (Khattab and Zaharia, 2020). Similar to our work, MuVER constructs multi-view entity representations to match divergent mentions and achieved the best previous results, so we select MuVER as the main compared baseline. Besides, ARBORESCENCE (Agarwal et al., 2022) and GER (Wu et al., 2023) construct mention/entity-centralized graphs to learn fine-grained entity representations. For the ZESHEL dataset we compare MVD with all the above models. As shown in Table 1, MVD performs better than all existing methods. Compared to the previously best performing method, MuVER, MVD surpasses it significantly on all metrics, particularly R@1, which indicates the ability to directly obtain the target entity. This demonstrates the effectiveness of MVD, which uses hard negatives as information carriers to explicitly transfer knowledge of multiple fine-grained views from the cross-encoder to better represent entities for matching multiple mentions, resulting in higher-quality candidates for the ranking stage. For the Wikipedia datasets we compare MVD with BLINK and MuVER. As shown in Table 2, our MVD framework also outperforms the other methods and achieves state-of-the-art performance on the AIDA-b, MSNBC, and WNED-CWEB datasets, which verifies the effectiveness of our method on standard EL datasets. Ablation Study To conduct fair ablation studies and clearly evaluate the contributions of each fine-grained component and training strategy in MVD, we exclude the coarse-grained global-view to evaluate the capability of transferring knowledge of multiple fine-grained views, and utilize top-K dynamic hard negatives without random sampling to mitigate the effects of randomness on training. Fine-grained components Ablation results are presented in Table 3. When we replace the multi-view representations in the cross-encoder with the original single vector, or remove the relevant-view selection based on the results of the cross-encoder, the retrieval performance drops, indicating the importance of providing accurate supervised signals for each view of the entity during distillation.
Additionally, the removal of cross-alignment and self-alignment results in a decrease in performance, highlighting the importance of these alignment mechanisms. Finally, when we exclude all fine-grained components in MVD and employ the traditional distillation paradigm based on single-vector entity representations and entity-level soft-labels, there is a significant decrease in performance, which further emphasizes the effectiveness of learning knowledge of multiple fine-grained and mention-relevant views during distillation. Training strategies We further explore the effectiveness of joint training and hard negative sampling in distillation; Table 4 shows the results. First, we examine the effect of joint training by freezing the teacher model's parameters to perform static distillation; the retrieval performance drops due to the teacher model's limitation. Similarly, the performance drops considerably when we replace the dynamic hard negatives with static negatives, which demonstrates the importance of dynamic hard negatives for making the learning task more challenging. Furthermore, when both training strategies are excluded and the student model is independently trained using static negatives, a substantial decrease in retrieval performance is observed, which validates the effectiveness of both training strategies in enhancing retrieval performance. Comparative Study on Entity Representation To demonstrate the capability of representing entities from multi-grained views, we carry out comparative analyses between MVD and BLINK, as well as MuVER; these systems are founded on the principles of coarse-grained global-views and fine-grained local-views, respectively. We evaluate the retrieval performance of both entity representations and present the results in Table 5. The results clearly indicate that MVD surpasses both BLINK and MuVER in terms of entity representation performance, even exceeding BLINK's global-view performance in R@1 despite being a fine-grained training framework. Unsurprisingly, the optimal retrieval performance is attained when MVD employs both entity representations concurrently during inference. Facilitating Ranker's Performance To evaluate the impact of the quality of candidate entities on overall performance, we consider two aspects: candidates generated by different retrievers and the number of candidate entities used in inference. First, we separately train BERT-base and BERT-large based cross-encoders to rank the top-64 candidate entities retrieved by MVD. As shown in Table 6 (unnormalized accuracy, U.Acc., of a large-version ranker over candidates from different retrievers: BLINK 63.03; SOM (Zhang and Stratos, 2021) 67.14; MVD (ours) 67.84), the ranker based on our framework achieves the best results in two-stage performance compared to the other candidate retrievers, demonstrating its ability to generate high-quality candidate entities for the ranking stage. Additionally, we study the impact of the number of candidate entities on overall performance. As shown in Figure 3, as the number of candidates k increases, the retrieval performance grows steadily while the overall performance tends to stagnate. This indicates that it is best to choose an appropriate k to balance efficiency and efficacy; we observe that k = 16 is optimal on most existing EL benchmarks.
Qualitative Analysis To better understand the practical implications of fine-grained knowledge transfer and global-view entity representation in MVD, as shown in Table 7, we conduct a comparative analysis between our method and MuVER using retrieval examples from the test set of ZESHEL. In the first example, MVD clearly demonstrates its ability to accurately capture the mention-relevant information ("Rekelen were members of this movement" and "professor Natima Lang") in the golden entity "Cardassian dissident movement". In contrast, MuVER exhibits limited discriminatory ability in distinguishing between the golden entity and the hard negative entity "Romulan underground movement". In the second example, unlike MuVER, which solely focuses on local information within the entity, MVD can holistically model multiple mention-relevant parts within the golden entity "Greater ironguard" through a global-view entity representation, enabling matching with the corresponding mention "improved version of lesser ironguard". Table 7: Retrieval examples from the ZESHEL test set. Example 1 — Mention context: Rekelen was a member of the underground movement and a student under professor Natima Lang. In 2370, Rekelen was forced to flee Cardassia prime because of her political views. Entity retrieved by MVD — Title: Cardassian dissident movement. The Cardassian dissident movement was a resistance movement formed to resist and oppose the Cardassian Central Command and restore the authority of the Detapa Council. They believed this change was critical for the future of their people. Professor Natima Lang, Hogue, and Rekelen were members of this movement in the late 2360s and 2370s. ... Entity retrieved by MuVER — Title: Romulan underground movement. The Romulan underground movement was formed sometime prior to the late 24th century on the planet Romulus by a group of Romulan citizens who opposed the Romulan High Command and who supported a Romulan-Vulcan reunification. Its methods and principles were similar to those of the Cardassian dissident movement which emerged in the Cardassian Union around the same time. ... Example 2 — Mention context: Known as the improved version of lesser ironguard, this spell granted the complete immunity from all common, unenchanted metals to the caster or one creature touched by the caster. Entity retrieved by MVD — Title: Greater ironguard. Greater ironguard was an arcane abjuration spell that temporarily granted one creature immunity from all non-magical metals and some enchanted metals. It was an improved version of ironguard. The effects of this spell were the same as for "lesser ironguard" except that it also granted immunity and transparency to metals that had been enchanted up to a certain degree. ... Entity retrieved by MuVER — Title: Lesser ironguard. ... after an improved version was developed, this spell became known as lesser ironguard. Upon casting this spell, the caster or one creature touched by the caster became completely immune to common, unenchanted metal. Metal weapons would pass through the individual without causing harm. Likewise, the target of this spell could pass through metal barriers such as iron bars, grates, or portcullises. ... Conclusion In this paper, we propose a novel Multi-View Enhanced Distillation framework for dense entity retrieval. Our framework enables better representation of entities through multi-grained views and, by using hard negatives as information carriers, effectively transfers knowledge of multiple fine-grained and mention-relevant views from the more powerful cross-encoder to the dual-encoder.
We also design cross-alignment and self-alignment mechanisms for this framework to facilitate fine-grained knowledge distillation from the teacher model to the student model. Our experiments on several entity linking benchmarks show that our approach achieves state-of-the-art entity linking performance. Limitations The limitations of our method are as follows: • We find that utilizing multi-view representations in the cross-encoder is an effective method for MVD; however, the ranking performance of the cross-encoder may slightly decrease. Therefore, it is sub-optimal to directly use this cross-encoder model for entity ranking. • Mention detection is the predecessor task of our retrieval model, so our retrieval model is affected by errors in mention detection. Therefore, designing a joint model of mention detection and entity retrieval is a direction for improving our method. A.2 Implementation Details For ZESHEL, we use BERT-base to initialize both the student dual-encoder and the teacher cross-encoder. For the Wikipedia-based datasets, we finetune our model based on the model released by BLINK, which is pre-trained on 9M annotated mention-entity pairs with BERT-large. All experiments are performed on 4 A6000 GPUs, and the results are the average of 5 runs with different random seeds. Warmup training We initially train a dual-encoder using in-batch negatives, followed by training a cross-encoder as the teacher model via the top-k static hard negatives generated by the dual-encoder. Both models utilize multi-view entity representations and are optimized using the loss defined in Eq. (11); training details are listed in Table 10. MVD training Next, we initialize the student model and the teacher model with the well-trained dual-encoder and cross-encoder obtained from the Warmup training stage. We then employ multi-view enhanced distillation to jointly optimize both modules, as described in Section 3.3. To determine the values of α and β in Eq. (10), we conduct a grid search and find that setting α = 0.3 and β = 0.1 yields the best performance. We further adopt the simple negative sampling method of Sec 3.4, which first retrieves the top-N candidates and then samples K of them as negatives. Based on the analysis in Sec 5.1 that 16 is the optimal candidate number to cover most hard negatives while balancing efficiency, we set K = 16; then, to ensure high recall rates and sample high-quality negatives, we search over the candidate list [50, 100, 150, 200, 300] and eventually determine that N = 100 is the most suitable value. The training details are listed in Table 11. Inference MVD employs both local-view and global-view entity representations concurrently during inference; details are listed in Table 12.
6,365.8
2023-05-27T00:00:00.000
[ "Computer Science" ]
The DFT+U: Approaches, Accuracy, and Applications This chapter introduces the Hubbard model and its applicability as a corrective tool for accurate modeling of the electronic properties of various classes of systems. The attainment of a correct description of electronic structure is critical for predicting further electronic-related properties, including intermolecular interactions and formation energies. The chapter begins with an introduction to the formulation of density functional theory (DFT) functionals, while addressing the origin of the bandgap problem with correlated materials. Then, the corrective approaches proposed to solve the DFT bandgap problem are reviewed and compared in terms of accuracy and computational cost. The Hubbard model will then offer a simple approach to correctly describe the behavior of highly correlated materials, known as the Mott insulators. Based on the Hubbard model, the DFT+U scheme is built, which is computationally convenient for accurate calculations of electronic structures. Later in this chapter, the computational and semiempirical methods of optimizing the value of the Coulomb interaction potential (U) are discussed, while evaluating the conditions under which it can be most predictive. The chapter focuses on highlighting the use of U to correct the description of physical properties, by reviewing the results of case studies presented in the literature for various classes of materials. Introduction Density functional theory (DFT) is one of the most convenient computational tools for the prediction of the properties of different classes of materials [1,2]. Although its accuracy is acceptable as long as structural and cohesive properties are concerned, it dramatically fails in the prediction of electronic and other related properties of semiconductors, with errors of up to a factor of 2. Theoretical formulation Standard DFT problem Using exact HF or DFT solutions, the aim is always to reach, as closely as possible, the exact description of the total energy of the system. Unluckily, reaching this exact energy description is impossible, and approximations have to be employed. In DFT, electronic interaction energies are simply described as the sum of the classical Coulombic repulsion between electronic densities in a mean-field kind of way (the Hartree term) and an additive term that is supposed to encompass all the correlations and spin interactions [1]. This additive term, namely the exchange and correlation (xc) term, is founded on approximations that have the responsibility of recovering the exact energy description of the system. This approximated xc functional is a function of the electronic charge density of the system, and the accuracy of a DFT calculation is strongly dependent on the descriptive ability of this functional for the energy of the system [2]. It is generally difficult to model the dependence of the xc functional on the electronic charge density, and thus it can inadequately represent the many-body features of the N-electron ground state. For this reason, systems with physical properties that are controlled by many-body electronic interactions (correlated systems) are poorly described by DFT calculations. For these systems, the incorrect description of the electronic structure induces the so-called "bandgap problem," which in turn imposes difficulties in utilizing DFT to predict accurate intermolecular interactions, formation energies, and transition states [7].
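For reference, the Hartree and exchange-correlation contributions discussed above enter the Kohn-Sham total energy through the standard decomposition (written here in Hartree atomic units; this is the textbook form rather than an equation reproduced from this chapter):

E[n] = T_s[n] + \int v_{ext}(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r} + \frac{1}{2} \iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}' + E_{xc}[n],

where T_s[n] is the kinetic energy of the non-interacting reference system, the third term is the Hartree energy, and E_{xc}[n] is the approximated exchange-correlation functional whose incomplete cancellation of the self-interaction contained in the Hartree term underlies the delocalization error discussed next.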
The failure of DFT to describe correlated systems can be attributed to the tendency of xc functionals to over-delocalize valence electrons and to over-stabilize metallic ground states [5,6]. That is why DFT fails significantly in predicting the properties of systems whose ground state is characterized by a more pronounced localization of electrons. The reason behind this delocalization is rooted in the inability of the approximated xc to completely cancel out the electronic self-interaction contained in the Hartree term; thus, a remaining "fragment" of the same electron is still there that can induce added self-interaction, consequently inducing an excessive delocalization of the wave functions [5]. For this reason, hybrid functionals were formulated as a linear combination of explicit density xc functionals and HF exact exchange, which is self-interaction free; the explicit introduction of a Fock exchange term eliminates the extra self-interaction of electrons. However, this method is computationally expensive and is not usually practical when larger, more complex systems are studied. Nonetheless, the HF method, which describes the electronic structure with a variationally optimized single determinant, cannot describe the physics of strongly correlated materials such as the Mott insulators. In order to describe the behavior of these systems, full account of the multideterminant nature of the N-electron wave function and of the many-body terms of the electronic interactions is needed [6]. Therefore, it is expected that DFT calculations using approximate xc functionals, such as LDA or GGA, will poorly describe the physical properties of strongly correlated systems. Mott insulators and the Hubbard model According to conventional band theories, strongly correlated materials are predicted to be conductive, while they show insulating behavior when experimentally measured. This serious flaw of band theory was pointed out by Sir Nevill Mott, who emphasized that interelectron forces cannot be neglected, which leads to the existence of the bandgap in these falsely predicted conductors (Mott insulators) [8]. In these "metal-insulators," the bandgap exists between bands of like character, i.e., between sub-bands of the same orbital character (such as 3d), which originate from crystal-field splitting or Hund's rule. The insulating character of the ground state stems from the strong Coulomb repulsion between electrons that forces them to localize in atomic-like orbitals (Mott localization). This Coulomb potential, responsible for localization, is described by the term "U"; when electrons are strongly localized, they cannot move freely between atoms and instead jump from one atom to another by a "hopping" mechanism between neighboring atoms, with an amplitude t that is proportional to the dispersion (the bandwidth) of the valence electronic states. The formation of an energy gap can thus be viewed as the competition between the Coulomb potential U between 3d electrons and the transfer integral t of the tight-binding approximation of 3d electrons between neighboring atoms. Therefore, the bandgap can be estimated from U, t, and an extra term z that denotes the number of nearest-neighbor atoms as [6] E_gap ≈ U - 2zt. Since the problem is rooted in the band model of the systems, alternative models have been formulated to describe correlated systems. One of the simplest models is the "Hubbard" model [9].
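Before the Hamiltonian is written out formally in the next section, the competition between t and U can be made concrete with a minimal numerical toy model. The sketch below, which is an illustration only and not part of the chapter's own calculations, exactly diagonalizes a half-filled two-site Hubbard model in the Sz = 0 sector:

```python
import numpy as np

def hubbard_dimer_spectrum(t, U):
    """Exact Sz = 0 spectrum of a two-site, two-electron Hubbard model.

    Basis ordering: |up,down on site 1>, |up,down on site 2>,
                    |up on 1, down on 2>, |down on 1, up on 2>.
    """
    H = np.array([
        [U,   0.0, -t,   t ],
        [0.0, U,    t,  -t ],
        [-t,  t,  0.0, 0.0],
        [ t, -t,  0.0, 0.0],
    ])
    return np.linalg.eigvalsh(H)

t = 1.0
for U in [0.0, 2.0, 8.0, 20.0]:
    E = hubbard_dimer_spectrum(t, U)
    # Analytic values: 0, U, and (U +/- sqrt(U**2 + 16*t**2)) / 2.
    print(f"U/t = {U/t:5.1f}  spectrum = {np.round(E, 3)}")
```

For U = 0 the spectrum is that of a simple bonding/antibonding picture, whereas for U >> t the states split into a low-energy, singly occupied manifold and a high-energy, doubly occupied manifold separated by roughly U; this splitting is the dimer analogue of the Mott gap discussed above.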
The Hubbard model is able to include the so-called "on-site repulsion," which stems from the Coulomb repulsion between electrons in the same atomic orbitals, and can therefore explain the transition between the conducting and insulating behavior of these systems. Based on this model, a new Hamiltonian can be formulated with an additive Hubbard term that explicitly describes the electronic interactions. The additive Hubbard Hamiltonian can be written in its simplest form as [6] H_Hub = -t Σ_{⟨i,j⟩,σ} (c†_{iσ} c_{jσ} + h.c.) + U Σ_i n_{i↑} n_{i↓}. As expected, the Hubbard Hamiltonian depends on the two terms t and U, with ⟨i,j⟩ denoting nearest-neighbor atomic sites and c†_{iσ}, c_{jσ}, and n_{iσ} being the electronic creation, annihilation, and number operators for electrons of spin σ on site i, respectively. The hopping amplitude t is proportional to the bandwidth (dispersion) of the valence electrons, while the on-site Coulomb repulsion term is proportional to the product of the occupation numbers of atomic states on the same site, with strength U [6]. The system's insulating character develops when electrons do not have sufficient energy to overcome the repulsion potential of other electrons on neighboring sites, i.e., when t << U. The ability of the DFT scheme to predict electronic properties is fairly accurate when t >> U, while for large U values DFT fails significantly; likewise, the HF method, which describes the electronic ground state with a variationally optimized single determinant, cannot capture the physics of Mott insulators. DFT+U Inspired by the Hubbard model, the DFT+U method was formulated to improve the description of the ground state of correlated systems. The main advantage of the DFT+U method is that it stays within the realm of DFT; it thus does not require significant effort to be implemented in existing DFT codes, and its computational cost is only slightly higher than that of normal DFT computations. This "U" correction can be added to local and semilocal density functionals, yielding the LDA+U and GGA+U schemes. The basic role of the U correction is to treat the strong on-site Coulomb interaction of localized electrons with an additional Hubbard-like term. The Hubbard term describes the strongly correlated electronic states (d and f orbitals), while the rest of the valence electrons are treated by the normal DFT approximations. For practical implementation of DFT+U in computational chemistry, the strength of the on-site interactions is described by two parameters: the on-site Coulomb term U and the on-site exchange term J. These parameters, U and J, can be extracted from ab initio calculations, but are usually obtained semiempirically. The implementation of DFT+U requires a clear understanding of the approximations it is based on and a precise evaluation of the conditions under which it can be expected to provide accurate quantitative predictions [5,6]. The LDA+U method is widely implemented to correct the approximate DFT xc functional. LDA+U works in the same way as the standard LDA method to describe the valence electrons; only for the strongly correlated electronic states (the d and f orbitals) is the Hubbard model implemented for a more accurate modeling. Therefore, the total energy of the system (E_LDA+U) is written as the sum of the standard LDA energy functional (E_LDA) for all the states and the energy of a Hubbard functional (E_Hub) that describes the correlated states.
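The decomposition just described, which the following discussion refers to as Eq. (3), has the standard generic structure

$$E_{\mathrm{LDA}+U}[n] = E_{\mathrm{LDA}}[n] + E_{\mathrm{Hub}}\big[\{n^{I\sigma}_{m}\}\big] - E_{dc}\big[\{n^{I\sigma}\}\big],$$

where E_Hub acts only on the occupations n^{Iσ}_m of the correlated manifold and E_dc is the double-counting correction introduced next.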
Because of the additive Hubbard term, there will be a double-counting error for the correlated states; therefore, a "double-counting" term (E_dc) must be deducted, which describes the electronic interactions of the correlated states already contained in the LDA total energy in a mean-field way [5]. Therefore, it can be understood that LDA+U acts more like a substitution for the mean-field electronic interaction contained in the approximate xc functional. Nonetheless, the E_dc term is not uniquely defined for each system, and various formulations can be applied to different systems. The most dominant of these formulations is the FLL formulation [10][11][12]. It is based on the fully localized limit (FLL), appropriate for systems with strongly localized electrons in atomic orbitals. The popularity of this formulation is due to its ability to expand the width of the Kohn-Sham (KS) orbitals and to effectively capture Mott localization. Based on this formulation, the LDA+U functional can be written in its simplest form as E_LDA+U[n] = E_LDA[n] + (U/2) Σ_I Σ_{mσ ≠ m'σ'} n^{Iσ}_m n^{Iσ'}_{m'} - (U/2) Σ_I n^I (n^I - 1), where the n^{Iσ}_m are the localized-orbital occupation numbers identified by the atomic site index I, the state index m, and the spin σ, and n^I is their sum over m and σ. In Eq. (4), the second and third terms on the right-hand side are the Hubbard and double-counting terms specified in Eq. (3). The dependence on the occupation numbers is expected, as the Hubbard correction is only applied to the states that are most disturbed by correlation effects. The occupation number is calculated as the projection of the occupied KS orbitals onto the states of a localized basis set, n^{Iσ}_m = Σ_{k,v} f^σ_{kv} |⟨φ^I_m|ψ^σ_{kv}⟩|², where the coefficients f^σ_{kv} represent the occupations of the KS states (labeled by k-point, band, and spin indices), determined by the Fermi-Dirac distribution of the corresponding single-particle energy eigenvalues. According to this formulation, fractional occupations of the localized orbitals are reduced, assisting the Mott localization of electrons on particular atomic states [5]. Although the approach described in Eq. (4) is able to capture Mott localization, it is not invariant under rotation of the atomic-orbital basis set employed to define the occupation numbers n in Eq. (5). This makes the calculations undesirably dependent on the unitary transformation of the chosen localized basis set. Therefore, a "rotationally invariant" formulation of LDA+U, invariant under unitary transformations of the basis, was introduced [12]. In this formulation, the electronic interactions are fully orbital dependent, and it is thus considered the most complete formulation of LDA+U. However, a simpler formulation that preserves rotational invariance, theoretically based on the full rotationally invariant formulation, has proved to work as effectively as the full formulation for most materials [11]. Based on this simplified LDA+U form, it has been customary to use, instead of the interaction parameter U, an effective parameter U_eff = U - J, where the "J" parameter is the exchange interaction term that accounts for Hund's rule coupling. The U_eff form is generally preferred for its simplicity, although an explicit J parameter proves crucial for describing the electronic structure of certain classes of materials, typically those subject to strong spin-orbit coupling. Practical implementations of the Hubbard correction DFT+U is applicable to all open-shell orbitals, such as the d and f orbitals of transition metal elements, whose localized orbitals coexist with extended states, as in the case of many strongly correlated materials and perovskites, where localized 3d or 4f orbitals are embedded in extended s-p states.
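As a first practical illustration, the sketch below evaluates the simplified rotationally invariant (Dudarev-type) correction built on U_eff for a given on-site occupation matrix; the occupation matrix used here is an arbitrary illustrative example, not taken from any calculation cited in this chapter.

```python
import numpy as np

def dudarev_correction(n_up, n_dn, u_eff):
    """Simplified rotationally invariant +U correction.

    Energy:    E_U = (U_eff / 2) * sum_sigma Tr[ n_sigma - n_sigma @ n_sigma ]
    Potential: V_sigma = U_eff * (I/2 - n_sigma), acting on the correlated
    subspace; it penalizes fractional occupations.
    """
    e_u = 0.0
    potentials = []
    for n in (n_up, n_dn):
        e_u += 0.5 * u_eff * np.trace(n - n @ n)
        potentials.append(u_eff * (0.5 * np.eye(len(n)) - n))
    return e_u, potentials

# Illustrative 5x5 (d-shell) occupation matrices: three nearly filled and two
# nearly empty spin-up orbitals, all spin-down orbitals nearly empty.
n_up = np.diag([0.95, 0.95, 0.95, 0.15, 0.15])
n_dn = np.diag([0.10, 0.10, 0.10, 0.05, 0.05])

e_u, (v_up, v_dn) = dudarev_correction(n_up, n_dn, u_eff=4.0)  # U_eff in eV
print(f"+U energy penalty: {e_u:.3f} eV")
print("spin-up potential shifts (diagonal, eV):", np.round(np.diag(v_up), 2))
```

Occupations close to 0 or 1 incur almost no penalty, whereas fractional occupations are pushed toward integer values: nearly occupied orbitals are shifted down by roughly U_eff/2 and nearly empty ones up by roughly U_eff/2, which is what opens the gap within the correlated manifold.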
A complicated many-electron problem arises from the electrons living in these localized orbitals, where they experience strong correlations with one another together with a subtle coupling to the extended states. Isolating the few degrees of freedom relevant to the correlation is the idea behind the Hubbard model, where a screened or renormalized Coulomb interaction (U) is kept among the electrons of the localized orbitals [13]. In other words, the localized states in the bandgap (d- and f-states) lie too close to the Fermi energy. From that perspective, the U value should be used to push these states away from the Fermi level, as provided by the GGA+U theory, which adds to the Hamiltonian a term that increases the total energy when two d- or f-electrons are located on the same cation, preventing the unwanted delocalization of the d- or f-electrons [14]. It is worth mentioning that using too large a value of U will over-localize the states and lead to an unphysical flattening of the corresponding bands, which, unlike fitting to many other properties, will make the fit worse. Also, an increase in the U value can cause an overestimation of the lattice constants as well as a wrong estimation of the ground-state energy due to the electronic interaction error. Therefore, applying the Hubbard correction to solve the bandgap problem is necessary for predicting the properties of transition metal oxides. Figure 1 shows the effect of the U potential in correcting the failure of DFT to predict correct bandgaps for strongly correlated materials. Note the underestimation of the bandgap in the case of MnO and the incorrect prediction of metallic behavior for FeO [15]. Optimizing the U value From the case studies and examples presented within this chapter, one can intuitively conclude that the corrective LDA+U functional depends strongly on the numerical value of the effective potential U_eff, which is generally referred to in the literature as "U" for simplicity. However, the U value is not known a priori and in practice is often tuned semiempirically to achieve good agreement with experimental or higher-level computational results. However, the semiempirical way of evaluating the U parameter fails to capture its dependence on the volume, structure, or magnetic phase of the crystal, and also does not permit capturing changes in the on-site electronic interaction under changing physical conditions, such as chemical reactions. In order to take full advantage of this method, different procedures have been developed to determine the Hubbard U from first principles [13]. In these procedures, the U parameter can generally be calculated in a self-consistent and basis-set-independent way. These different ab initio approaches for calculating U have been applied to different material systems, where the U value is calculated for individual atoms. For each atom, the U value is found to depend on material-specific parameters, including its position in the lattice and the structural and magnetic properties of the crystal, and also on the localized basis set employed to describe the on-site occupation in the Hubbard functional. Therefore, the value of the effective interaction should be recomputed for each type of material and each type of LDA+U implementation (e.g., based on augmented plane waves, Gaussian functions, etc.). Most programs these days use the method presented by Cococcioni et al.
[16], where the values of U can be determined through a linear response method [17], in which the response of the occupation of the localized states to a small perturbation of the local potential is calculated. The U is determined self-consistently, in a way that is fully consistent with the definition of the DFT+U Hamiltonian, making this approach to calculating the potential fully ab initio; in this scheme, J is implicitly assumed to be zero in order to obtain a simplified expression [17]. Nonetheless, J can add some additional flexibility to DFT+U calculations, but it may yield surprising results, including reversing the trends previously obtained in the corresponding DFT+U calculations [18]. Despite its limitations for systems in which variations of the on-site electronic interactions are present, choosing the U value semiempirically is found to be the most common practice in the literature, where the value of U is usually tuned against the experimental bandgap. This semiempirical trend in the practical implementation of U persists because of the significant computational cost of the ab initio calculation of U; moreover, when studying static physical properties, the results obtained with a computed U are not necessarily better than those obtained with an empirical one. Within this practice, however, caution should be taken while pursuing the semiempirical method [19]. If it is possible to describe all the relevant aspects of a system except the bandgap with a reasonable U, one might then consider using a scissor operator or a rigid shift of the bandgap [20,21]. However, in particular cases where the calculations aim at understanding catalysis, it is natural to choose U to fit the oxidation-reduction energy, as catalysis is controlled by energy differences [14]. Conversely, one possible solution is to venture into negative values of the Hubbard U parameter; there is no obvious physical rationale for this yet, but the results may match experiment for both the magnetic moment and the structural properties, as illustrated later in this chapter. To illustrate the numerical U tuning procedure, three quick examples are presented below that show the correlation between the value of U and the predicted physical properties: • The body of theoretical studies treating the correlated nature of the cobalt 3d electrons in Co3O4 gives a good picture of how strongly the appropriate U value differs among the calculated properties. U values ranging from 2 to 6 eV have been used for properties including the bandgap [22], the oxidation energy [23], and the structural parameters [24], with each property favoring a different choice of U. The bandgap calculated at the generalized gradient approximation (GGA)+U level agrees well with the experimental value of 1.6 eV. On the other hand, the value calculated using the PBE0 hybrid functional (3.42 eV) is highly overestimated, owing to the neglect of screening inherited from the Hartree-Fock approximation [25]. • A study by Lu and Liu [26] on cerium compounds presented characteristic U values for Ce atoms in different configurations, as isolated atoms and as ions. They illustrated that the ion charge (Ce atoms, Ce in Ce3HxO7 clusters, or CeO2) does not significantly affect the value of U, and that when the ions are isolated, the values are much larger (close to 15 eV for Ce2.5+ and 18 eV for Ce3.5+).
• In a GGA+U study of BiMnO3, which has a strongly distorted perovskite structure, calculations show that distortions of the MnO6 octahedra, which are considered the main unit of the crystal structure, are very sensitive to the value of the Coulomb repulsion U. The study showed that a large U value decreases the 3d-2p hybridization and therefore weakens the bonding effects, which in turn increases the short Mn-O distances and thus overly expands the MnO6 octahedra [27]. Variation of U with calculation methods and parameters The parameters assigned for DFT calculations can significantly affect the choice of the optimum U value. These parameters include the pseudopotentials, the basis sets, the cutoff energy, and the k-point sampling. As pseudopotentials are used to reduce computational time by replacing the full electron system in the Coulombic potential by a system that explicitly takes into account only the "valence" electrons [28], the pseudopotential will strongly affect the U value. Thus, calculations have to be converged very well with respect to the cutoff energy and the k-point sampling, while also paying attention to the symmetry used in DFT+U calculations, because adding the U parameter often lowers the crystallographic symmetry, so that the number of k-points needs to be increased. Not only is the U value affected by the parameters applied, but it is also strongly dependent on the DFT method used. In a published review [29], a comparison of different U values calculated using different approaches was highlighted for several transition metal oxides. It was reported that with small U values the electrons were still not localized, and that the U value depends on the exchange-correlation functional used (LDA or GGA), the pseudopotential, the fitted experimental properties, and the projection operators [30]. In the computational study of strongly correlated systems, one usually finds in the literature that researchers refer to the utilization of the (DFT+U) method in their calculations, which may mean the generalized gradient approximation (GGA+U) [8], the local density approximation (LDA+U) [31], or both [32]. To choose the proper method of calculation for a studied system, one should know the limitations of each of the two methods for that specific system and to what extent each method has been shown to be closer to the experimental values. The optimum U value can be found empirically by applying different values of U with either GGA or LDA. From the following list of examples from the literature, an assessment of the performance of different values of U when applied to both GGA and LDA for the same system can be made: • Griffin et al. [33] studied the FeAs crystal comparing the GGA+U and LDA+U levels of accuracy, using U_eff = -2 to 4 eV. The results showed that for the bond distances and angles in the crystal, GGA+U gave results close to the experimental values when U ≤ 1 eV, whereas with LDA the structural properties were poorly predicted. It was observed that increasing the value of U in GGA+U increased the stabilization energy for antiferromagnetic ordering. Both GGA+U and LDA+U overestimated the value of the magnetic moment. However, only GGA+U could attain the experimental values of the magnetic moment, for negative U_eff [5]. • Cerium oxides (CeO2 and Ce2O3) were tested by Christoph et al. [34], comparing the GGA+U and LDA+U levels of theory while studying the effect of the U_eff value on the calculated properties.
It was found that the value of U_eff depends on the property under examination. The sensitivity toward the U_eff value was especially high for the properties of Ce2O3, because it has an electron in the 4f orbital, which is sensitive to changes in the effective on-site Coulomb repulsion due to the strong localization, in contrast to CeO2, which has an empty 4f orbital. GGA+U showed acceptable agreement with experiment at lower values of U_eff than LDA did, with U_eff values of 2.5-3.5 eV for LDA+U and 1.5-2.0 eV for GGA+U, which can be attributed to the more accurate treatment of correlation effects within the GGA potential. On the other hand, the structural properties of CeO2 were better represented by the LDA+U method. Regarding the Ce2O3 electronic structure, both LDA+U and GGA+U results showed similarly good accuracy, while for the calculated reaction energies the LDA+U results were more accurate [34]. • Sun et al. [35] studied the PuO2 and Pu2O3 oxides using both the GGA+U and LDA+U methods. Although PuO2 is known to be an insulator [36], its ground state was reported experimentally to be an antiferromagnetic phase [37]. For PuO2, at U = 0, the ground state was a ferromagnetic metal, which is different from the experimental results. Upon increasing the amplitude of U to 1.5 eV, the LDA+U and GGA+U calculations correctly predicted the antiferromagnetic insulating ground-state characteristics. For the lattice parameters, it was found that higher values of U (U = 4 eV) were needed with LDA+U than with GGA+U. At U = 4 eV, both LDA+U and GGA+U are expected to give a satisfactory prediction of the ground-state atomic structure of Pu2O3. However, the study showed that above the metallic-insulating transition, the reaction energy decreases with increasing U for both the LDA and GGA schemes. Therefore, for both Pu2O3 and PuO2, the LDA+U and GGA+U approaches, with U as large as 6 eV, failed to describe the electronic structure correctly. When the energy gap increases, the electrons become more localized, which makes it more difficult for them to take part in new reactions and consequently increases the reaction energy. When U exceeds 4 eV, the conduction-band electrons can be considered approximately ionized; the atoms (cores or ions) then have a better chance of reacting with other atoms, resulting in a reduction of the reaction energy. As noticed in the previous studies, the U value is material dependent, besides varying with the level of theory used. In general, the more localized the system is, the more sensitive it is to the value of U. The U value estimated for one material at a specific level of calculation should not be transferred to another system; rather, it should be recomputed for each material and even upon a change of the level of calculation. Researchers will need to perform calculations using different U values within different xc functionals to obtain the best prediction of the calculated properties in comparison with experimental measurements or with other computational results as a benchmark. The effect of U on pure and defected systems The chemical properties of transition-metal systems with localized electrons, mainly within d or f orbitals, are typically governed by the properties of the valence electrons.
Experimentally, these electrons are observed to be localized in their orbitals due to strong correlations [38], whereas computationally, the approximate xc functionals of DFT tend to overly delocalize them while over-stabilizing metallic ground states, thus underestimating the bandgap for semiconductors and possibly even falsely predicting metallic behavior for systems like the Mott insulators. The U correction can induce electronic localization because it explicitly accounts for the on-site electronic interactions. Another common problem in DFT calculations is the prediction of the properties of materials with defects, as the underestimation of the bandgap by DFT can cause the conduction band (CB) or the valence band (VB) to mask the true defect states. This is because defects can cause unpaired electrons and holes to form, which are overly delocalized by DFT as it attempts to reduce the Coulomb repulsion arising from the self-interaction error. Ref. [39] discusses an example of this problem for anatase TiO2, showing that the distribution of electrons in the unit cell, created by oxygen vacancies and hydrogen impurities, is wrongly predicted using the GGA-PBE scheme of DFT calculations. In the case of oxygen vacancies, their calculations predicted a 2.6 eV bandgap, which is about 0.6 eV smaller than that reported experimentally. The electrons left in the system upon vacancy formation are completely delocalized over the entire cell. These electrons are incorrectly shared over all the Ti atoms of the cell, and as a result, the atomic displacements around the vacancies are predicted to be symmetric. All these findings indicate the difficulty DFT methods have in describing the properties of defects in wide-bandgap metal oxides. The accuracy of the description of the electronic structure of partially reduced oxide systems within first-principles methods has also been reviewed and discussed [40]. The electronic structure of TiO2, both pristine and doped, is one of the examples most frequently studied in the literature. Typically, in the anatase and rutile phases, computational studies encountered the problem of a considerable underestimation of the bandgap, which presents a barrier to the prediction of further related properties. Titania is widely studied for various photoelectrochemical applications, and accurate theoretical assessment is required to be able to enhance its catalytic properties. In addition, to further improve the properties of TiO2 as a photocatalyst, an optimization of the band structure is required, including narrowing the bandgap (Eg) to improve visible-light absorption, and proper positioning of the valence band (VB) and the conduction band (CB) [41]. Efforts to narrow the bandgap of TiO2 have been made through doping with metallic and nonmetallic elements that typically replace Ti or O atoms and thus change the position of the VB and/or the CB, leading to a change in the bandgap [42]. In the following subsections, titania will be used as an example to assess the effect of the U correction by presenting results from the literature for both pristine and doped cases. We will monitor the behavior of the materials before and after the U correction, while assessing the significance of the U correction for the correct prediction of the material's properties.
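As a practical note on how TiO2 +U calculations of this kind are commonly set up, the sketch below writes the DFT+U portion of a VASP INCAR file with the Hubbard correction applied to the Ti 3d states. The choice of U_eff = 4.2 eV and the species ordering (Ti before O in the POSCAR) are illustrative assumptions only, not values prescribed by the studies cited in this chapter.

```python
# Minimal sketch: write the DFT+U tags of a VASP INCAR for anatase TiO2.
# Assumes the POSCAR lists species in the order Ti, O; the chosen
# U_eff = 4.2 eV on Ti 3d is an illustrative value only.
species = ["Ti", "O"]
ldaul = {"Ti": 2, "O": -1}       # l = 2 applies U to d states; -1 disables it
ldauu = {"Ti": 4.2, "O": 0.0}    # Hubbard U (eV) per species
ldauj = {"Ti": 0.0, "O": 0.0}    # exchange J (eV); U_eff = U - J (Dudarev)

incar_lines = [
    "LDAU = .TRUE.",
    "LDAUTYPE = 2",  # Dudarev's simplified rotationally invariant scheme
    "LDAUL = " + " ".join(str(ldaul[s]) for s in species),
    "LDAUU = " + " ".join(f"{ldauu[s]:.2f}" for s in species),
    "LDAUJ = " + " ".join(f"{ldauj[s]:.2f}" for s in species),
    "LMAXMIX = 4",   # commonly recommended when d electrons carry the +U term
]
with open("INCAR.dftu", "w") as f:
    f.write("\n".join(incar_lines) + "\n")
print("\n".join(incar_lines))
```

Applying the correction also to the O 2p states, as discussed later for Refs. [46,47], would simply amount to setting the oxygen entry of LDAUL to 1 and giving it a nonzero LDAUU.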
The bandgap problem: pristine TiO2 with U correction Regarding the electronic structure of titania, the bandgap was underestimated by standard DFT, while it was found to be overestimated when the hybrid functional of Heyd, Scuseria, and Ernzerhof (HSE06) was applied. However, the bandgap prediction was markedly improved by adding the Hubbard U correction. The band structures obtained using GGA-PBE showed bandgaps of 2.140 and 1.973 eV for anatase and rutile, respectively. However, upon localizing the excess electronic charge using the +U correction, the predicted bandgaps are accurate and in good agreement with the experimental results and with the computationally expensive hybrid functional (HSE06) results [43], Figure 2. In another study, for rutile TiO2, the prediction of the experimental bandgap was achieved with a U value of 10 eV, whereas the crystal and electronic structures were better described with U < 5 eV [19]. Dompablo et al. [19] compared the effect of the U parameter value (0 < U < 10 eV) within LDA+U and GGA+U on the calculated properties of anatase TiO2. Both LDA+U and GGA+U required a relatively small value of U (3 and 6 eV, respectively) to reproduce the experimental measurements, Figure 3. However, using very large U values leads to a mismatch, where the lattice parameters (a and c) and the unit-cell volume increase with increasing U, owing to the increased Coulomb repulsion. Note that standard DFT and the hybrid functional HSE06 failed to reproduce the crystal lattice. On the other hand, the bandgap calculated with the GGA+U and LDA+U methods was found to be in better agreement with experiments than the conventional GGA or LDA values, with a small difference in the required U value. The bandgap was shown to increase with increasing U value up to 8.5 eV, which gave a result close to the experimental bandgap and in agreement with those obtained in previous DFT studies [44]. For values of U larger than 8.5 eV, the bandgap was overestimated. It is worth mentioning that this value (8.5 eV) is considered high when compared to the U values used for other transition metal oxides [29]. In all these calculations, the Hubbard U parameter was applied to the d or f orbitals of the transition metals. However, when the Ti-O bonding is considered, applying the correction only to the 3d states can be expected to influence the Ti-O covalent bonding, since the Ti states are shifted while the 2p states of oxygen are left unchanged [45]. In this regard, several first-principles calculations have studied the electronic, structural, and optical properties of TiO2 polymorphs by applying the U correction to both the oxygen 2p and the titanium 3d orbitals [46]. In order to correct the bandgap while avoiding the use of large U values and the bonding problem, Ataei et al. [47] reported that with a value of 3.5 eV for both the O 2p and Ti 3d states, the results for the lattice constants, bandgap, and gap states are in good agreement with the experimental reports. Doped TiO2 with U correction In a recent study [40], a comparison was performed to elucidate the effect of different U values in representing the bandgap states produced by an interstitial hydrogen atom and an oxygen vacancy within the bulk anatase structure. A dependence on the method used was observed, besides that on the value of U within the GGA+U scheme; see Figure 4.
When the U correction was not applied, the bandgap was underestimated, as expected, and the electrons introduced by the oxygen vacancy or the hydrogen impurity were fully delocalized with conduction-band character. Upon applying the U correction, the states start to localize and become deeply localized in the gap as the value of U increases. In all these calculations, the Hubbard U correction was applied to the Ti 3d orbitals only; by also applying the correction to the oxygen 2p orbitals with U = 3.5 eV, the results were in agreement with previous results [47]. The intrinsic defects in TiO2 (vacancies) have been studied computationally, providing a fast, cheap method to guide researchers in choosing the defect position in the solid crystal. The oxygen vacancy in the rutile crystal was investigated [48] using DFT+U with a U value of 4.0 eV, indicating that an oxygen vacancy in the rutile crystal introduces four local states, two occupied and two unoccupied, with no change in the bandgap (2.75 eV) [48]. The effect of a Ti vacancy on the bandgap (Eg) was also studied [49] using GGA+U with a U value of 7.2 eV. It was found that a Ti vacancy caused ferromagnetism, besides widening the valence band and switching TiO2 from an n-type to a p-type semiconductor with higher charge mobility [49]. Modeling of organometallics The (+U) Hubbard correction is a computational tool that can be applied widely, not only to crystals but also to strongly correlated metals attached to other noncorrelated systems such as organic moieties. One important class of such systems is the metal organic framework (MOF). Metal organic frameworks (MOFs) MOFs are crystalline nanoporous materials in which a central transition metal is linked to different types of ligands, which provide a very large surface area [50] that can allow their use in supercapacitor and water-splitting applications. Most MOFs have open metal sites, which are coordinatively unsaturated metal sites with no geometric hindrance. While the whole material remains a solid, the structure allows the complex framework to be used in gas capture and storage, and the binding energy between the MOF and the gas or water molecules allows the prediction of the capture mechanism. The cage shape of the MOFs and the organic moiety allow their use in many applications such as drug delivery and fertilizers, while the magnetic behavior of MOFs allows researchers to predict how they can be used in further applications. A quantum mechanical framework is usually used to describe the full interaction between the central metal ion and the surrounding ligands, because the synthesis of these materials is costly in both time and money. The complex geometry resulting from the computational calculations is important for predicting the small changes in electronic structure upon application of external stimuli [51]. Density functional theory (DFT) has been used to model MOFs, as it allows the "mapping" of a system of N interacting electrons onto a system of N noninteracting electrons having the same ground-state charge density in an effective potential. However, DFT fails to describe electrons in open d- or f-shells [8]. Pure DFT calculations usually misestimate the bandgap and the ferromagnetic (FM) or antiferromagnetic (AFM) coupling of the central metal in MOFs. The reason for this misestimation is that the localized spin and the itinerant spin density are coupled via the Heisenberg exchange interaction [52,53].
In this interaction, a ferromagnetic sign is assumed if the hybridization of the conduction electrons (the dispersive LUMO band) with a doubly occupied or empty d orbital of the magnetic center is sufficiently strong. Owing to Hund's rule, in the d shell it is energetically favorable to induce spin polarization parallel to the d-shell spin. The itinerant spin density, however, forms at an energy penalty determined by the dispersion of the conduction band; the larger the density of states at the Fermi level, the easier it is for the itinerant spin density to form. The addition of an extra interaction term that accounts for the strong on-site Coulomb interaction (the U correction) has proved to lead to good results [54]. One more advantage of DFT+U is that it can be used to model systems containing up to a few hundred atoms [55]. The U parameter affects the predicted electronic structure and magnetic properties; in the following paragraphs, we discuss some MOF applications and how to fit a proper magnitude of U in DFT+U calculations: • The magnetic properties of the MOF of the complex dimethylammonium copper formate (DMACuF) were predicted correctly [56] using GGA+U with suitable U values (U = 4-7 eV) for the Cu 3d states to describe the effect of the electron correlation associated with those states. Also, the magnetic properties of MOFs of TCNQ (7,7,8,8-tetracyanoquinodimethane) and two different 3d transition metal atoms (Mn and Ni) were predicted correctly without synthesizing them. In this case, to properly describe the d electrons in the Ni and Mn metal centers, spin-polarized calculations using DFT+U with a U value of 4 eV were performed [53]. It can be noted that varying U in the range of 3 to 5 eV does not appreciably change the values of the Ni and Mn magnetic moments, nor the corresponding 3d level occupations, in particular that of the Ni 3dxz orbital that crosses the Fermi level [53]. • The binding energy of CO2 to Co-MOF-74 was predicted [57] using DFT+U with U values of 0-6 eV, and it was found that a value of U between 2 and 5 eV gives lattice parameters matching experiment; the Co-O bond length decreases with U, since U localizes the Co d states, which allows the CO2 molecule to come closer to the charged open metal site, increasing the electrostatic contribution to the binding energy [57]. • Cu-BTC [58], a material consisting of copper dimers linked by 1,3,5-benzenetricarboxylate C6O9H3 (BTC) units, was studied for its ability to absorb up to 3.5 H2O per Cu as the Cu binds to the closest oxygen of the water molecule [59]. The U parameter in the meta-GGA+U calculation of Cu-BTC was adjusted against the experimental crystallographic structure and the bandgap, by minimizing the absorption at 2.3 eV. The U values gave the best results at 3.08 eV for Cu and 7.05 eV for O, because those values reduced the calculated root-mean-square residual forces on the ions at their experimental fixed positions to a minimum. The nonzero U of oxygen greatly reduces the residual forces, while the value of U for the Cu ions controls the splitting of the Cu d levels, which has a great effect on the calculated bandgap [59]. Spin-crossover (SCO) Spin-crossover (SCO) is a unique feature in which the central transition metal ion linked to the surrounding ligands has the ability to attain different spin states, with different total spin quantum numbers (S), while keeping the same valence state [47].
This property allows MOFs, and organometallics generally, to reversibly switch between spin states, such as between low spin and high spin, upon application of temperature, pressure, light, or a magnetic field [60,61]. Both the SCO itself and the effect of temperature on it can be predicted effectively using the U correction. The use of DFT+U to model SCO was first demonstrated by Lebègue et al. [62]. SCO generally occurs for metals that are able to change between high spin and low spin owing to the small difference between the HOMO and LUMO levels. • Iron complexes have been studied [65] using DFT+U. It was found that high U values (> 8 eV) should be applied to the low-spin Fe site, while a low U value should be applied to the high-spin ion. The results showed good agreement with other DFT calculations. The generally used DFT-GGA failed to predict the high-spin state of the five-coordinate Fe complexes [68], but it could be obtained with DFT+U using U ~ 4 eV and J ~ 1 eV. The complexes Fe(phen)2(NCS)2 and Fe(btr)2(NCS)2 were tested using a U value of 2.5 eV [62]; the energy difference between the low-spin and high-spin states was in agreement with the experimental values, showing that the U Coulomb term was needed. The study showed the importance of magnetoelastic couplings through the correlation between the spin state and the structure [62]. • Another study, on the complex [Fe(pmd)(H2O)M2(CN)4]·H2O (pmd = pyrimidine and M = Ag or Au), showed an interesting SCO behavior as a function of temperature [66]. This complex forms chain polymers that contain two different Fe(II) ions, Fe1 and Fe2. Through hydration/dehydration, the SCO transition temperature of the Ag-based coordination polymer changes reversibly between 130 and 230 K; this change is due to the structural change caused by the water molecules in the network. For the Au-based complexes, only the SCO transition in the hydrated framework was different [66]. Such behavior could be explained using DFT+U calculations [67]. The low-spin to high-spin transition was found to occur only on the sixfold nitrogen-coordinated Fe1 ion, while the Fe2 ion coordinates with four nitrogen atoms and two oxygen atoms from the water molecules. For the dehydrated compounds, the effect of the Au atom caused a difference in the degree of covalent bonding, which resulted in a distinct behavior of the Au network as compared to the Ag network. The hydrated and dehydrated Ag networks were predicted to exhibit a low-spin to high-spin transition, whereas the dehydrated Au network was predicted to remain in a high-spin state [55]. • Fe-porphyrin molecules were found to have an intermediate spin state. The ground-state configuration was indicated to be (dxy)2(dπ)2(dz2)2 by Mössbauer [68][69][70], magnetic [71], and NMR [72,73] measurements. However, Raman spectroscopy predicted a ground state with the configuration (dxy)2(dπ)3(dz2)1 [74]. Therefore, a computational study was needed to explain the origin of these conflicting results. DFT+U was used to predict the electronic structure and magnetic properties of these Fe molecules for a range of Coulomb U parameters (U = 2-4 eV), which is reasonable for iron [10,75], and the results were then compared to the data available in the literature. It was found that GGA+U with a U value of 4 eV provided overall better agreement for the structural, electronic, and magnetic properties and the energy-level diagram of these systems [76,77]. To summarize, DFT+U is well suited to capturing the correlation of the central metal in organometallics.
Spin changes between FM and AFM states, as well as SCO, can all be well predicted by the Hubbard correction, while pure DFT fails because of the correlation in the d or f orbitals of the central metal. Solving the CO adsorption puzzle with the U correction Studying surface chemistry is of great significance for enhancing the overall efficiency of many electrochemical applications [78][79][80]. In catalysis, for example, understanding the adsorption mechanism of species on catalytic surfaces, mainly electrodes, is essential in order to formulate a design principle for the perfect catalyst that can reach the optimum efficiency for a desired electrochemical process [81][82][83]. Typically, the adsorption of CO on metal surfaces is widely acknowledged as the prototypical system for studying molecular chemisorption [84][85][86][87]. Despite the extensive experimental studies, a complete theoretical description of the "bonding model" has not yet been reached, owing to the inability of experimental tools to fully describe the details of the molecular orbital interactions and to perform a profound population analysis, which is based on studying the electronic structures of the substrate and surface particles [88,89]. To this end, DFT can be utilized to explicitly describe the electronic structures of the system particles in greater detail, which can help in extending the conceptual model of CO chemisorption [90][91][92][93][94]. Unfortunately, owing to DFT's inherently flawed description of the electronic structure, wrong predictions of the preferred CO adsorption sites are observed that contradict experimental results, especially for the (111) surface facets of transition metals, leading to the so-called "CO adsorption puzzle" [95,96]. The root of this DFT problem resides in the fact that both local density and generalized gradient approximation functionals underestimate the CO HOMO-LUMO gap, predicting wrong positions of the CO frontier orbitals, which results in an overestimated bond strength between the substrate and the surface molecules [97]. One of the popular solutions that has been exploited by researchers to resolve the adsorption-site prediction puzzle is the DFT+U correction [97,98]. In this approach, the position of the 2π* orbital is shifted to higher energies by adding the on-site Coulomb interaction parameter. By doing so, the interaction of the CO 2π* orbital with the metallic d-band is no longer overestimated, yielding an appropriate estimation of the CO adsorption site. Kresse et al. [99] first implemented this method and successfully obtained a site preference in agreement with experiment, emphasizing that such a simple empirical method is able to capture the essential physics of adsorption. DFT calculations utilizing GGA functionals predict adsorption on the threefold hollow site for Cu(111) and on the bridge site for Cu(001), instead of the experimentally preferred on-top site. Reference [98] implemented Kresse's method to investigate the adsorption of CO on the Cu(111) and (001) surfaces with 1/4 monolayer (ML) coverage on different adsorption sites. In that study, the HOMO-LUMO gap of the isolated CO molecule was demonstrated to increase with increasing value of U. Also, upon changing the U value, the corresponding adsorption energies of CO over the different adsorption sites were calculated. Reviewing the Cu(111) surface results, five different U values (0.0, 0.5, 1.0, 1.25, and 1.5 eV) were used in the calculations.
It was observed that the adsorption energy at the higher-coordinated hollow sites changed by only about 20 meV for U = 1.25 eV and about 0.03 eV for U = 1.5 eV. Nonetheless, the absolute value of the adsorption energy decreases linearly with increasing U, with the rate of reduction found to be larger for the higher-coordinated sites. The site preference between the top and bridge sites was observed to reverse around a U value of 0.05 eV, and between the top and hollow sites around U = 0.45 eV. Concerning the description of the adsorbate (surface) in the study, the calculated interlayer relaxations were the same as those calculated using the GGA (PW91) functional without the U correction. Not only does the U correction help in solving the adsorption puzzle, but it can also enhance the description of other related properties, such as the calculated work function and the vibrational spectra of the CO-metal complexes, which are also demonstrated in Ref. [98] (Figure 5). Figure 5. A schematic sketch of the molecular eigenstates of the CO molecule. The DFT+U technique shifts the LUMO orbitals to higher energies, but the energies of the occupied orbitals remain the same. Summary and outlook In this chapter, the corrective capability of DFT+U is reviewed and evaluated for a number of different classes of materials. Generally, the addition of the on-site Coulomb interaction potential (U) to the standard DFT Hamiltonian proved to provide significant changes to the predicted electronic structures, which can solve the inherent DFT bandgap prediction problem. The value of U can either be calculated theoretically or tuned semiempirically to match the experimental electronic structure. For the various case studies and applications reviewed, the importance of correcting the electronic structure predictions was evident, as it leads to significant improvements in the prediction of further electronic-related properties. Prior to the practical assessment, the theoretical foundation of the DFT+U method was briefly discussed and shown to be rather simple, adding only marginal computational cost to standard DFT calculations. Compared to other corrective approaches, the DFT+U formulation proved to be simpler in terms of theoretical formulation and practical implementation, with considerably lower computational cost, while having nearly the same predictive power; it can even capture properties of certain materials that cannot be captured by other higher-level or exact calculations. One of the most popular applications of the U correction is the description of the electronic structure of strongly correlated materials (Mott insulators). The behavior of these insulators cannot be captured by Hartree-Fock or band-theory-based calculations, as the root of this problem resides in the deficiency of band theory, which neglects the interelectron forces. One of the simplest models that explicitly accounts for the on-site repulsion between electrons in the same atomic orbitals is the Hubbard model. Based on this model, the DFT+U method is formulated to improve the description of the ground state of correlated systems. The theoretical and semiempirical techniques of U optimization were discussed. Semiempirical tuning is found to be the most common practice employed by researchers, owing to the significant computational cost that ab initio calculations of U can have, and also because the computed U is not necessarily better than the empirical one.
However, the semiempirical evaluation of U does not permit capturing changes in the on-site electronic interaction under changing physical conditions, such as chemical reactions. The practical implementations of the U correction were discussed, while assessing the effect of the DFT scheme employed and of the assigned calculation parameters on the numerical value of the optimum U. The corrective influence of the U correction was validated by reviewing different examples and case studies from the literature. Starting with the transition metal oxides, the effect of adding the U parameter to correctly describe the electronic structure of pure and defected TiO2 was reviewed, showing the different optimum values of U utilized at each level of calculation. Then, the implementation of the Hubbard correction for systems that combine molecules with solid-state crystals, such as organometallics, was reviewed. The addition of U to the DFT calculation provides a better understanding of the behavior of the metals inside the organometallic systems. One of the most intensively studied classes of organometallic systems is the metal organic frameworks (MOFs). Different examples from the literature were reviewed, showing the effect of the U correction and how it can significantly improve the prediction of the magnetic properties of such systems. Also, one of the unique features of organometallics, which can be influenced by the U correction, is the spin crossover (SCO). This property allows MOFs, and organometallics generally, to reversibly switch between spin states upon changing the external parameters. The SCO is shown to be predicted more effectively by applying the U correction, as demonstrated by the results presented in the literature. Finally, the significance of the DFT+U method is demonstrated in describing the adsorption mechanism of CO on transition metal systems. The influence of the U correction in solving the so-called adsorption puzzle is demonstrated: it leads to the correct prediction of the CO adsorption-site preference, a problem that remains unresolved when DFT calculations are applied alone. Upon reviewing the presented applications and the different case studies, where the U correction significantly improved the estimated results without changing the essential physics of the systems, we can expect the Hubbard correction to gain greater weight in the future of computational chemistry. Despite the convenience of the semiempirical tuning of U, the capabilities of the Hubbard correction cannot be fully exploited in this way, as it cannot then be used to study systems with variations of the on-site electronic interactions. On the other hand, despite the availability of theoretical methods for calculating U, their computational costs are considerably larger than those of the semiempirical methods. Therefore, further improvements to the ab initio calculation of U, with lower computational costs, are still required in order to realize the full potential of the U correction, which would be able to capture phase changes and chemical reactions in the studied physical systems.
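As a closing illustration of the ab initio route referred to above, the sketch below carries out the arithmetic of the linear-response scheme of Cococcioni et al.: a small potential shift α is applied to the correlated site, the bare and fully self-consistent occupation responses χ0 and χ are estimated by finite differences, and U is obtained as 1/χ0 - 1/χ. The occupation numbers used here are hypothetical placeholders; in a real calculation they would come from constrained DFT runs.

```python
import numpy as np

# Hypothetical d-shell occupations of one correlated site, recorded after
# applying a rigid potential shift alpha (eV) to that site:
#  - "bare": occupations from the first, non-self-consistent response
#  - "scf":  occupations after full self-consistency
alphas = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])          # eV
n_bare = np.array([8.418, 8.409, 8.400, 8.391, 8.382])      # placeholder data
n_scf  = np.array([8.410, 8.405, 8.400, 8.395, 8.390])      # placeholder data

# Response functions chi = dn/dalpha from a linear least-squares fit.
chi0 = np.polyfit(alphas, n_bare, 1)[0]
chi  = np.polyfit(alphas, n_scf, 1)[0]

# Linear-response Hubbard U (single-site case): U = 1/chi0 - 1/chi.
U = 1.0 / chi0 - 1.0 / chi
print(f"chi0 = {chi0:.3f} /eV, chi = {chi:.3f} /eV, U = {U:.2f} eV")
```

In a periodic solid the same construction is performed with response matrices over all perturbed sites (and a supercell large enough to decouple periodic images), with U read from the on-site element of the inverse-response difference.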
12,308.8
2018-05-16T00:00:00.000
[ "Physics", "Materials Science" ]
Opportunistic Scheduling for OFDM Systems with Fairness Constraints We consider the problem of downlink scheduling for multiuser orthogonal frequency-division multiplexing (OFDM) systems. Opportunistic scheduling exploits the time-varying, location-dependent channel conditions to achieve multiuser diversity. Previous work in this area has focused on single-channel systems. Multiuser OFDM allows multiple users to transmit simultaneously over multiple channels. In this paper, we develop a rigorous framework to study opportunistic scheduling in multiuser OFDM systems. We derive optimal opportunistic scheduling policies under three QoS/fairness constraints for multiuser OFDM systems: temporal fairness, utilitarian fairness, and minimum-performance guarantees. Our scheduler decides not only which time slot, but also which subcarrier, to allocate to each user. Implementing these optimal policies involves solving a maximal bipartite matching problem at each scheduling time. To solve this problem efficiently, we apply a modified Hungarian algorithm and a simple suboptimal algorithm. Numerical results demonstrate that our schemes achieve significant improvement in system performance compared with nonopportunistic schemes. INTRODUCTION Emerging broadband wireless networks, which support high-speed packet data with differing quality-of-service (QoS) requirements, demand more flexible and efficient use of the scarce spectral resource. In contrast to wireline networks, one of the fundamental characteristics of wireless networks is the time-varying and location-dependent channel conditions due to multipath fading. From an information-theoretic viewpoint, Knopp and Humblet showed that the system capacity can be maximized by exploiting the inherent multiuser diversity of the wireless channel [1]. The basic idea is to schedule the single user with the best instantaneous channel condition to transmit at any one time. The technology has already been implemented in current 3G systems, that is, 1xEV-DO [2] and high-speed downlink packet access (HSDPA) [3]. The idea has also recently been adopted in cognitive radio systems, which are novel intelligent wireless communication systems providing highly reliable and efficient communications by exploiting unused radio spectrum [4,5]. Orthogonal frequency-division multiplexing (OFDM) is a popular multiaccess scheme widely used in DVB, wireless LANs (e.g., 802.16, ETSI HIPERLAN/2), and ultra-wideband (UWB) systems [6]. It is also a promising modulation scheme of choice proposed for many future cellular networks such as cognitive radio systems [7,8]. OFDM divides the total bandwidth into many narrowband orthogonal subcarriers, which are transmitted in parallel, to combat frequency-selective fading and achieve higher spectral utilization. OFDMA, a multiuser version of OFDM, allows multiple users to transmit simultaneously on different subcarriers [9]. Good scheduling schemes in wireless networks should opportunistically seek to exploit the time-varying channel conditions to improve spectrum efficiency, thereby achieving a multiuser diversity gain. However, the potential to transmit at higher data rates opportunistically also introduces an important tradeoff between wireless resource efficiency and the level of satisfaction among individual users (fairness). For example, allowing only users close to the base station to transmit at high transmission rates may result in very high throughput, but may sacrifice the transmission of other users.
Such a scheme cannot satisfy the increasing demand for QoS provisioning in broadband wireless networks. To solve this problem, Liu et al. described a framework for opportunistic scheduling to exploit the multiuser diversity while at the same time satisfying three long-term QoS/fairness constraints [10][11][12]. In that work, only a single user can transmit at each scheduling time. The authors of [1] show that this is optimal for single-channel systems such as TDMA. However, the same is not the case for multiple-channel systems. In this paper, we propose an opportunistic scheduling framework for multiuser OFDM systems. We build on Liu's work by going from the single-channel to the multiple-channel case. We show how the system performance can be optimized by serving multiple users simultaneously over the different subcarriers. We focus on the downlink of an OFDM system. We derive our opportunistic scheduling policies under three long-term QoS/fairness constraints: temporal fairness, utilitarian fairness, and minimum-performance guarantees. These are similar in form to those of [12], but adapted to the setting of multiuser OFDM systems. We also state optimality conditions under each of these constraints. In particular, our scheduler decides not only which time slot but also which subcarrier to allocate to each user under the given QoS/fairness constraints. A stochastic approximation algorithm is used to calculate the control parameters of the policies online. To search over the optimal user subsets efficiently, we apply a modified bipartite matching algorithm. We also develop an efficient, low-complexity suboptimal algorithm; our experimental results illustrate that this algorithm achieves near-optimal performance. The remainder of this paper is organized as follows. In Section 2, we discuss related work on scheduling and fairness for OFDM. The system model is described in Section 3. In Section 4, we derive opportunistic scheduling policies under various fairness constraints and prove their optimality. In Section 5, we address some implementation issues, including control parameter estimation and the assignment problem that arises in implementing these policies. An optimal algorithm and an efficient suboptimal algorithm are proposed there. In Section 6, we present numerical results to illustrate the performance of our policies. Finally, concluding remarks are given in Section 7. RELATED WORK Wireless scheduling has attracted a lot of recent attention. The authors of [13,14] extend scheduling policies for wireline networks to wireless networks to provide short-term and long-term fairness bounds. However, they model a channel as being either "good" or "bad," which may be too simple in some situations. In [15][16][17], the authors study wireless scheduling algorithms when both delay and channel conditions are taken into account. Scheduling with short-term fairness constraints is also discussed in [10,18]. In [19,20], the authors present a scheduling scheme for the Qualcomm IS-856 (also known as HDR (high data rate)) system. Their scheduling scheme exploits time-varying channel conditions while satisfying a certain fairness constraint known as proportional fairness [21]. Although there have been considerable recent efforts on proportional fairness scheduling [22][23][24], to the best of our knowledge, there is currently no work considering multiuser OFDM systems with the three QoS/fairness constraints we mentioned above.
So in this paper we will focus on these three fairness constraints. Opportunistic scheduling exploits the channel fluctuations of users. In [22], the authors use multiple "dumb" antennas to "induce" channel fluctuations, and thus exploit multiuser diversity in a slow fading environment. The authors of [25] show that with multiple antennas, transmitting to a carefully chosen subset of users has superior performance. The resource management problem in OFDM systems has attracted a lot of research interest [26,27]. In [26], the authors propose an algorithm to minimize the total transmission power with minimum-rate constraints for users. Specifically, the algorithm allocates a set of subcarriers to each user and then determines the number of bits and transmission power on each subcarrier. In [27], the authors study the problem of dynamic subcarrier and power allocation with the objective to maximize the minimum of the users' data rates subject to a total transmission power constraint. All these studies show that dynamic resource allocation (in terms of bit, subcarrier, and power) schemes can achieve significant performance gains over traditional static allocations (such as TDMA-OFDM and FDMA-OFDM). However, none of the schemes described above exploit multiuser diversity. For delay-insensitive data service, we can expect to reap longterm performance benefits by exploiting multiuser diversity. OFDM has been used in several applications in cognitive radio. To enhance spectrum efficiency, the spectrum pooling system allows a license owner to share underutilized licensed spectrum with a secondary wireless system during its idle times [8]. A preferred transmission mode of the secondary system is OFDM due to its inherent flexibility. In [28], the authors discuss the desired properties in designing physical layers of cognitive radio systems and claim that the modulation scheme based on OFDM is a natural approach that satisfies the desired properties. Recently, there has been significant interest in opportunistic scheduling and fairness issues for multiple-channel systems [29][30][31][32][33]. In [31], the authors consider a totalthroughput maximization problem with deterministic and probabilistic constraints for multiple-channel systems. In [33], the authors consider opportunistic fair scheduling in downlink TDMA systems employing multiple transmit antennas and beamforming. In [34], the authors introduce cross-layer optimization for OFDM wireless networks. The interaction between the physical layer and media access control (MAC) layer is exploited to balance the efficiency and fairness of wireless resource allocation. The authors consider proportional and max-min fairness. SYSTEM MODEL In this section, we describe the system model, assumptions, notation, and formulation of the scheduling problem. The architecture of a downlink data scheduler for a single-cell multiuser OFDM system is depicted in Figure 1. 3 There is a base station (transmitter) with a single antenna communicating with N mobile users (receivers). Each user has different channel conditions over different subcarriers. By inserting pilot symbols in the downlink, the users can effectively estimate the channels. Every user should report its channel-state information over every subcarrier to the base station. All the channel-state information is sent to the subcarrier and bit allocation scheduler in the base station through feedback channels from all mobile users. 
The scheduling decision made by the scheduler is conveyed to the OFDM transmitter. The transmitter then assigns different transmission rates to scheduled users on corresponding subcarriers. The scheduler makes decisions once every time slot based on the channel-state information and the control parameters for fairness guarantees. We assume that the base station knows the perfect channel-state information for each user over each subcarrier. The channel conditions for different users are usually independently varying in a multiuser system. Owing to frequency-selective fading, one user may experience deep fading in some subcarriers, but relatively good in other subcarriers. By dynamically assigning users to favorable subcarriers, the overall performance of the network can be increased from the multiuser diversity. In practice, requiring "perfect" channel-state information results in significant feedback overhead burden, which might be difficult to implement. We can view our current work as providing fundamental performance bounds on what is achievable with channel feedback. The OFDM signaling is time slotted. The length of a time slot is fixed and the channel does not vary significantly during a time slot. The length of a time slot in the scheduling policy can be different from an actual time slot in the physical layer. It depends on how fast the channel conditions vary and how fast we want to track such changes. We assume that there is always data for each user to receive, that is, the system has infinite backlogged data queues. We also assume that the transmission power is uniformly allocated to all subcarriers. In principle, performance can be improved further by allocating a different power level to each subcarrier. In a system with a large number of users, this improvement could be marginal because of statistical effects [22]. In this paper, we will focus on scenarios with large numbers of users, or heavy-traffic systems, where the number of users is greater than the number of available OFDM subcarriers. These scenarios can be regarded as an extreme situation for OFDM. But it is important to determine the impact of a large number of users, such as in [22]. Our goal is to maximize the system performance by exploiting the time-varying and frequency-varying channel conditions while maintaining certain QoS/fairness constraints. Let i = 1, . . . , N be the index of users, and k = 1, . . . , K the index of subcarriers. Following [12], let ω t i,k be the instantaneous performance value that would be experienced by user i if it were scheduled to transmit over subcarrier k at time slot t. The ω t i,k comprise an N × K matrix, denoted as ω t . Usually, the better the channel condition of user i over subcarrier k, the larger the value of ω t i,k . Throughput (in terms of data rate bits/sec) is the most straightforward form of a time-varying and channel-condition-dependent performance measure. For convenience, the reader can think of throughput as the performance measure in this paper. However, our formulation applies in general. Let A = (A 1 , A 2 , . . . , A K ) represent a scheduling action, which is a vector of the indices of the users scheduled over all K subcarriers. The decision rule π t (·), which is a function of ω t , specifies which action should be chosen, that is, , where the value of A t k is the index of the user scheduled over subcarrier k at time t. We call π(·) = {π 1 (·), π 2 (·), . . . , π t (·), . . . } ∈ Π a policy, where Π is the set of all scheduling polices. 
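To make the notation concrete, the sketch below builds one instance of the per-slot performance matrix ω^t and applies a simple greedy per-subcarrier rule (used later in the paper as an upper-bound comparison). The matrix sizes and the exponential placeholder for ω are assumptions for illustration only; in the paper ω can be any nondecreasing function of the per-subcarrier SINR, such as throughput.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 16, 4  # illustrative sizes only; the simulations later use 64+ users and 64 subcarriers

# omega[i, k]: performance value of user i on subcarrier k in the current slot.
omega = rng.exponential(1.0, size=(N, K))

# Greedy rule (unconstrained baseline): per subcarrier, pick the user with the largest value.
# Note that this ignores fairness and may give one user several subcarriers.
A_greedy = omega.argmax(axis=0)
total_value = omega[A_greedy, np.arange(K)].sum()
print(A_greedy, round(total_value, 3))
```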
Note that a policy may involve a time-varying rule for deciding scheduling actions. We are only interested in the so-called feasible policies, those that satisfy specific QoS/fairness requirements (described in the next section). Let U T i (π) be the average throughput of user i up to time T, and R T i (π) the average resource consumption of user i up to time T, that is, , that is, U T (π) is the average overall throughput up to time T. Then we define which can be considered as the asymptotic best-case system performance of policy π. Using the above notation, our goal can be formally stated as follows: find a feasible policy π that maximizes the system performance U(π) while maintaining certain QoS/fairness constraints. In the following section, we derive optimal policies for three categories of scheduling problems, each with a unique QoS/fairness requirement. OPPORTUNISTIC SCHEDULING UNDER VARIOUS FAIRNESS CONSTRAINTS Good scheduling schemes should be able to exploit the timevarying channel conditions of users to achieve higher utilization of wireless resources, while at the same time guarantee some level of fairness among users. Fairness is central to scheduling problems in wireless systems. Without a good fairness criterion, the system performance can be trivially optimized, but might prevent some users from accessing the network resource. In this section, we will study scheduling problems under three fairness criteria for multiuser OFDM systems-temporal fairness, utilitarian fairness, and minimum-performance guarantees. These categories of fairness are adopted from [12] and are extended to multiuser OFDM systems. It turns out that the form of the optimal policies here bear a resemblance to those of [12]. Figure 1: Downlink scheduling over multiuser OFDM system. Temporal fairness scheduling A natural fairness criterion is to give each user a certain long-term fraction of time because time is the basic resource shared among users. The problem of multiuser OFDM scheduling with temporal fairness can be expressed as where r i denotes the minimum time fraction that should be assigned to user i, with r i ≥ 0 and N i=1 r i ≤ 1. Recall that R T i (π) is the average resource consumption of user i up to time T. The r i s are predetermined and serve as the prespecified fairness constraints. The value of r i denotes the minimum fraction of time that user i should transmit over all the subcarriers in the long run, which is usually determined by the user's class, the price paid by the user, and so forth. Define the policy π * as follows: where the control parameters v * i are chosen such that [10], we can think of v * = (v * 1 , . . . , v * N ) in (4) as an "offset" or "threshold" to satisfy the temporal fairness constraints. Under this constraint, the scheduling policy schedules the "relatively best" subset of users to transmit. The subset of users selected by action A t is "relatively best" is maximum over all actions. If v * i > 0, then user i is an "unfortunate" user, that is, the channel conditions it experiences over all subcarriers are relatively poor. (e.g., it is far from the base station.) Hence, it has to take advantage of other users (e.g., users with v * i = 0) to satisfy its fairness requirement. But to maximize the overall system performance, we can only give the "unfortunate" users their minimum time-fraction requirements, hence condition 3. The policy π * defined in (4), which represents our opportunistic scheduling policy, is optimal in the following sense. 
exists for all i for π * , then the policy π * is an optimal solution to the problem defined in (3), that is, it maximizes the average OFDM system performance under the temporal fairness constraints. Proof. Let π be a policy satisfying the temporal fairness constraints, and let v * i satisfy conditions 1-3. Hence, we have By the definition of π * , we have Thus, Therefore, Since where the second part of (11) equals zero because of condition 3 on v * i . Inequalities (5) It is possible that the optimal policy is confronted with a tie between two or more users. When ties occur in the argmax in the policy, they can be broken arbitrarily. Utilitarian fairness scheduling In the last section, we studied the opportunistic scheduling problem for multiuser OFDM with temporal fairness constraints. In wireline networks, when a certain amount of resource is assigned to a user, it is equivalent to granting the user a certain amount of throughput. However, the situation is different in wireless networks, where the performance value and the amount of resource are not directly related. Therefore, a potential problem in wireless network is that the temporal fairness scheme has no way of explicitly ensuring that each user receives a certain guaranteed fair amount of utility. Hence, in this section, we will describe an alternative scheduling problem that would ensure that all users get at least a certain fraction of the overall system performance. The problem of multiuser OFDM scheduling with utilitarian fairness can be expressed as where a i denotes the minimum fraction of the overall average throughput required by user i, with a i ≥ 0 and N i=1 a i ≤ 1. Recall that U T i (π) is the average throughput of user i up to time T using policy π, and U(π) is the average overall throughput. The a i 's are predetermined fairness constraints here. This constraint requires long-term fairness in terms of performance value (throughput) instead of resource consumption (time) as in Section 4.1. We define the policy π * as follows: where κ = 1 − N i=1 a i γ * i , and the control parameters γ * i are chosen such that , then γ * i = 0, for all i. Analogous to v * in the last section, γ * = (γ * 1 , . . . , γ * N ) in (14) can be considered as a "scaling" to satisfy the utilitarian fairness constraints. The scheduling policy always schedules the "relatively best" subset of users to transmit. Here, the subset of users selected by action A t is "relatively best" if is maximum over all actions. If γ * i > 0, then user i is an "unfortunate" user, and its average performance value equals its minimum requirement. The policy π * defined in (14), which represents our opportunistic scheduling policy, is optimal in the following sense. Theorem 2. If lim T→∞ U T i (π * ) exists for all i for π * defined in (14), then the policy π * is an optimal solution to the problem defined in (13), that is, it maximizes the average OFDM system performance under the utilitarian fairness constraints. Proof. Let π be a policy satisfying the utilitarian fairness constraints, and let γ * i satisfy conditions 1-3. Hence, we have EURASIP Journal on Wireless Communications and Networking Therefore, where the second part of (17) equals zero because of condition 3 on γ * i . Similar to the proof of Theorem 1, the properties of lim sup and lim inf are applied here. Minimum-performance guarantee scheduling So far, we have discussed two optimal multiuser OFDM scheduling policies that provide users with different fairness guarantees. 
However, while they satisfy a relative measure of performance (e.g., fairness), they do not consider any absolute measures such as data rate. This motivates the study of a category of scheduling problems with minimumperformance guarantees [11,35]. The problem to maximize the OFDM system performance while satisfying each user's minimum performance requirement can be stated as max π∈Π U(π) subject to lim inf where C = {C 1 , C 2 , . . . , C N } is a feasible predetermined minimum-performance requirement vector. Feasible here means that there exists some policy that solves (18). The QoS constraints here offer users a more direct service guarantee. For example, a user requires a minimum data rate guarantee, then the performance measure here can be data rate. Every user is guaranteed a minimum data rate, which may be more appealing from the user viewpoint. However, it can be quite difficult in practice to apply because of the difficulty to determine if a requirement vector is feasible. Suppose C = {C 1 , C 2 , . . . , C N } is feasible. We define the policy π * for the problem in (18) as follows: where the control parameters β * i are chosen such that (1) β * i ≥ 1, for all i; Note that the parameter β * = (β * 1 , . . . , β * N ) "scales" the performance values of users, and the scheduling policy always schedules the "relatively best" subset of users to transmit. Here, the subset of users selected by action A t is "rel- is maximum over all actions. If β * i > 1, then user i is an "unfortunate" user, and it is granted only its minimum-performance requirement. The policy π * defined in (19), which represents our opportunistic scheduling policy, is optimal in the following sense. Theorem 3. If lim T→∞ U T i (π * ) exists for all i for the π * defined in (19), then the policy π * is an optimal solution to the problem defined in (18), that is, it maximizes the average OFDM system performance under the minimum-performance guarantee constraints. Proof. Let π be a policy satisfying the minimum-performance guarantee constraints, and let β * i satisfy conditions 1-3. Hence, we have By the definition of π * , we get Therefore, where the second part of (22) equals zero because of condition 3 on β * i . Similar to the proof of Theorem 1, the properties of lim sup and lim inf are applied here. IMPLEMENTATION ISSUES In this section, several implementation issues including parameter estimation and efficient policy search methods will be considered. An optimal algorithm and a low-complexity suboptimal algorithm are developed here for policy search. Control parameter estimation The opportunistic scheduling policies described in Section 4 involve some control parameters to be estimated online: v * in temporal fairness, γ * in utilitarian fairness, and β * in the minimum-performance guarantee policy. Those parameters Input: anN × K nonnegative matrix [c ik ]. Step 1: initialization: (a) Append (N − K) all-zero columns to the matrix. (b) In each row, subtract the smallest entry from every entry in that row. In each column, subtract the smallest entry from every entry in that column. Step 2: cover all zeros with the minimum number of (horizontal and/or vertical) lines. If the minimum number = N, go to Step 4. Step 3: subtract the smallest uncovered entry from every uncovered entry; add it to every intersection of lines. Go to Step 2. Step 4: make the assignment at zeros. If any row or column has only one 0, make that assignment. Cross out the corresponding row and column, and move to the next assignment. 
are determined by the distribution of performance value matrix {ω t } and the predetermined constraints. In practice, the distribution is unknown, and hence we need to estimate the control parameters. In [12], Liu et al. give a practical stochastic approximation technique to estimate such parameters. The basic idea is to find the root of a unknown continuous function f (x). We approach the root by adapting the weighted observation error. For example, for user i in temporal fairness scheduling, the base station updates the parameter v t+1 using a stochastic approximation algorithm v t+1 where, for example, the step size t = 1/t. The initial estimate v 1 can be set to 0 or some value based on the history information. Using standard methods, it can be shown that v t converges to v * with probability 1 [36]. The computation burden above is O(N) per time slot, where N is the number of users, which suggests that the algorithm is easy to implement online. For our OFDM scheduling schemes, we have found that this stochastic approximation algorithm also works well. For the detailed procedure, we refer the reader to [12]. Optimal user subset search methods In our optimal OFDM policies (e.g., in the temporal fairness policy), all the "relative performance values" (ω t i,k + v * i ), denoted c ik for convenience, comprise an N × K matrix [c ik ]. Therefore, the operator arg max A t is to find an action A t that indicates which K elements in [c ik ] have the maximal sum over all K selected elements. This operator is obviously different from the arg max i in [12], which simply returns the index of the largest element from a vector. It is straight forward to compute the arg max if no hard physical limitations are considered. The operator can simply select the largest K elements. However, a common physical constraint is that in any time slot, the scheduler cannot assign two users to the same subcarrier, or two subcarriers to the same user. Mathematically, at any time slot t, for any two subcarriers j and k, j / =k ⇔ A t j / =A t k . When this physical constraint is considered, the computation of the arg max in the optimal policy is nontrivial. A brute-force approach is exhaustively searching over the ( N K ) possible assignments, which obviously has very high computational complexity. Since this optimal user subset search operation should be performed online at each slot, we need to use more efficient algorithms. It turns out that the problem of computing the arg max can be posed as an integer linear program (ILP) [37]: where the decision variables x ik indicate which elements to choose, and the weights c ik are relative performance values defined above. This problem is called the maximal weighted bipartite matching problem in graph theory, or the assignment problem in combinatorial optimization [38]. It is interesting to see that the arg max operator in optimal multiuser OFDM scheduling problem can be interpreted as a graph problem (U, S, E, w), where U represents the set of all users, S represents the set of all subcarriers, and E represents the set of all the feasible choices for specific users to select specific subcarriers. Each choice in E is weighted by a function w(E). The problem is to find a matching M ∈ E for U and S that maximizes the sum of the weights over all edges in M. The Hungarian algorithm is one of many algorithms that have been devised to solve the assignment problem in polynomial time (O(N 3 ) when N = K) [39]. 
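In practice the per-slot arg max over assignments, with entries c_ik = ω^t_{i,k} + v*_i in the temporal-fairness case, can be computed with an off-the-shelf solver for the (possibly rectangular) assignment problem. The sketch below assumes SciPy's linear_sum_assignment, which implements a Hungarian-type algorithm, rather than the authors' own modified routine; it should return an optimal action of the same kind.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule(c):
    """Pick one user per subcarrier maximizing the summed relative performance.

    c : (N, K) array with c[i, k] = omega[i, k] + v[i] in the temporal-fairness case
        (multiplicative scalings play the same role for the other two constraints), N >= K.
    Returns A with A[k] = index of the user scheduled on subcarrier k; distinct users
    are assigned to distinct subcarriers, as required by the physical constraint above.
    """
    users, subcarriers = linear_sum_assignment(c, maximize=True)
    A = np.empty(c.shape[1], dtype=int)
    A[subcarriers] = users
    return A
```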
We modify the Hungarian algorithm to solve our general unbalanced (N ≥ K) problem here by introducing a number of slack variables to convert the ILP problem into standard form. Note that the standard-form ILP with the slack variables is algebraically equivalent to the original problem [41]. It is proven in [39] that the Hungarian algorithm always finds the maximum assignment, that is, it is an optimal solution to this problem. Algorithm 1 is our modified Hungarian algorithm. Ideally, the OFDM scheduler should repeat the above procedure at every scheduling slot. However, this still poses a heavy computational burden on the base station. Hence suboptimal algorithms with lower complexity are of interest for practical implementation. We develop a suboptimal algorithm called "max-max" to perform the above arg max operation with much lower complexity. This algorithm is a variation of the "min-min" method for task mapping in heterogeneous computing [42]. The basic idea is this: first, find the overall maximal element in the matrix [c ik ], then assign the corresponding subcarrier to the corresponding user. Next, remove the newly assigned user-subcarrier pair from the selection table; in other words, the corresponding row and column are removed from the matrix. Continue to repeat the above procedure on the reduced matrix until all subcarriers are assigned. In the simulations in the next section, the suboptimal scheme shows near-optimal performance with a lower complexity. SIMULATION RESULTS In this section, we present numerical results to illustrate the performance of the various OFDM scheduling schemes developed in this paper. For the purpose of comparison, we also simulate two special scheduling policies. Round-robin [43] is a nonopportunistic scheduling policy that schedules users over all subcarriers in a predetermined order. It is simple but lacks flexibility. The round-robin policy can serve as a performance benchmark to measure how much gain results from using our opportunistic scheduling policies. The other policy for comparison is a greedy scheduling scheme that always selects the user with the maximum performance to transmit on each subcarrier at each time slot. The greedy policy will in general violate the QoS/fairness constraints, but it provides an upper bound on the system performance. It is used here to expose the tradeoff between the QoS constraints for individual users and the overall system throughput. The more relaxed the fairness constraints, the higher the overall achievable throughput, and therefore the closer we get to the performance of the greedy scheme. In our simulation, we consider the downlink of a heavy-traffic single-cell OFDM system with fixed 64 subcarriers. There is one base station serving all the users in the cell. Each user suffers from multipath Rayleigh fading with the bad-urban (BU) scenario of the COST 259 channel model [44,45], and we assume a path-loss exponent of four. Every user is assumed to be stationary or slowly moving so that the maximum Doppler shift is 20 Hz. The performance value used by different users is usually a nondecreasing function of their SINR, and can take various forms, such as linear functions, step functions, or S-shape functions. For simplicity, here we take all the performance values as linear functions of users' SINR (in dB).
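The "max-max" heuristic described earlier in this section can be sketched in a few lines; this is a straightforward reading of the description, not the authors' code.

```python
import numpy as np

def max_max(c):
    """Suboptimal 'max-max' assignment as described in the text (N >= K).

    Repeatedly take the overall largest remaining entry of c, assign that user to
    that subcarrier, and remove both from further consideration.
    """
    c = np.asarray(c, dtype=float).copy()
    K = c.shape[1]
    A = np.full(K, -1, dtype=int)
    for _ in range(K):
        i, k = np.unravel_index(np.nanargmax(c), c.shape)
        A[k] = i
        c[i, :] = np.nan  # user i is no longer available
        c[:, k] = np.nan  # subcarrier k is taken
    return A
```

Each of the K rounds scans the remaining matrix once, so the cost is roughly O(N K^2), cheaper than the O(N^3) Hungarian solve in the heavy-traffic case N ≥ K considered here, at the price of losing optimality.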
We assume that the physical limitation on scheduling discussed in Section 5.2 applies: at each time slot, no two users can be scheduled on the same subcarrier, and each user is scheduled on at most one subcarrier. Performance gain First, we assume the locations of all users are distributed uniformly in the cell, and examine the impact of the number of users on the average system throughput. We use the round-robin policy as the baseline, and define the system throughput gain as (U_S − U_R)/U_R, where U_S and U_R denote the average system throughput of a given scheduling policy and the round-robin policy, respectively. Figure 2 shows the system throughput gain relative to round-robin for the different policies in the temporal fairness scheduling simulations. For the purpose of simulation, we assume the time-fraction assignment is done using fair sharing, that is, the total resources are evenly divided among the users. Therefore, if there are N users in the cell, we set r_i = 1/N for all users. From Figure 2, it is evident that the system throughput gain increases with the number of users; this reflects the multiuser diversity gain. For 64 users, our optimal policy (Hungarian) achieves about 46% overall throughput gain, while the greedy policy achieves an improvement of 101%. This is not surprising, since the greedy policy achieves the highest overall performance at the cost of unfairness among the users. The suboptimal policy (max-max) shows surprisingly near-optimal performance: its gap relative to the optimal policy is less than 1-2%, and becomes even smaller as the number of users increases. Figure 3 shows the system throughput gain relative to round-robin for the different policies in the utilitarian fairness scheduling simulations. We also assume fair sharing in the throughput-fraction assignment; that is, we set a_i = 1/N for all users in an N-user system. As expected, an increasing trend similar to Figure 2 can also be seen here. For 64 users, our optimal policy (Hungarian) achieves about 32% overall throughput gain, while the greedy policy achieves an improvement of 102%. The suboptimal policy (max-max) also improves the system performance, by 27%. Next, we investigate the performance of the opportunistic scheduling schemes with minimum-performance guarantees. First, we run the simulation for 1,000,000 time slots using the round-robin policy, where the resource (time) is equally distributed among all users. Then, we compute an average performance value and use it as the minimum-performance requirement for each user. It is easy to see that this minimum-performance requirement vector is feasible. Figure 4 shows the system throughput gain relative to round-robin for the different policies in the minimum-performance guarantee scheduling simulations. For 64 users, our optimal policy (Hungarian) achieves about 31% overall throughput gain, while the greedy policy (which violates the minimum-performance requirements) achieves an improvement of about 100%. The suboptimal policy (max-max) also performs well, with 24% overall gain. Fairness Using the temporal fairness scheduling scenario as an example, we study the fairness among the users under the different policies. We use the same single-cell system with 64 subcarriers, and there are 128 users in the system. The users are divided into three "distance" groups: users 1-48 belong to the "far" group, users 49-80 to the "middle" group, and users 81-128 to the "near" group.
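Before turning to the fairness results below, note that the whole temporal-fairness experiment can be pieced together as a minimal simulation loop. The sketch is illustrative only: the offset recursion is one plausible Robbins-Monro form with step size 1/t (the paper defers the exact update to [12]), and the projection to nonnegative offsets and all parameter choices are assumptions.

```python
import numpy as np

def temporal_fairness_sim(omega_stream, r, n_slots, schedule):
    """Toy temporal-fairness loop (illustrative only, not the paper's algorithm).

    omega_stream(t) -> (N, K) performance matrix for slot t
    r               -> length-N vector of minimum time fractions (sum <= 1)
    schedule        -> assignment routine, e.g. the Hungarian or max-max sketch above
    """
    N = len(r)
    v = np.zeros(N)           # per-user offsets ("thresholds")
    served = np.zeros(N)      # number of slots in which each user was scheduled
    for t in range(1, n_slots + 1):
        omega = omega_stream(t)
        A = schedule(omega + v[:, None])      # relative performance values c_ik
        scheduled = np.zeros(N)
        scheduled[A] = 1.0
        served += scheduled
        step = 1.0 / t
        v = np.maximum(0.0, v + step * (r - scheduled))   # assumed update rule
    return served / n_slots, v
```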
Obviously a user in the "near" group has a much higher probability to get a strong SINR than a user in the "far" group. We set all users to have the same minimum timefraction requirement. Specifically, each user has a resource (time) requirement r i = 2/(3N) for an N-user system, where i r i = 2/3 < 1. Therefore, the system has the freedom to as- sign the remaining 1/3 portion of the resource to some "better" users (beyond their minimum requirements) to further improve the system performance. Figure 5 indicates the amount of resource consumed by selected users in the temporal fairness scheduling simulations. The first bar represents that of round-robin, where the resource is equally shared by all users. The second bar represents our optimal policy (Hungarian). The third bar is the greedy policy. The rightmost bar shows the minimum requirements of user. The second bar is higher than the fourth bar for all the users, which indicates that our temporal fairness optimal scheduling policy meets the minimum timefraction requirements for all users. In the greedy policy, users 1, 16, and 32 get very little resource (far below the minimum requirement line) while users 88, 96, and 128 have very large shares. As expected, the greedy algorithm is heavily biased though it achieves the highest overall performance. In the following, we simply check the fairness among the users with utilitarian fairness and minimum-performance guarantee scheduling. We use the same cellular system and user group settings as temporal fairness. In Figure 6, we show the average performance values of selected users in the utilitarian fairness scheduling simulations. The preset performance requirements of the selected users 1, 16, 32, 56, 64, 88, 96, and 128 are [0.001, 0.002, 0.001, 0.003, 0.003, 0.004, 0.005, 0.005]. The values represent the minimum fraction of overall average performance for individual users. In Figure 7, we show the average performance values of selected users in the minimum-performance guarantee scheduling simulations. Similar to the previous section, we first run a round-robin simulation, then use the obtained average performance as minimum-performance requirement for each user. From the figure, we see that our optimal scheduling policy (Hungarian) meets all the requirements and outperforms round-robin policy everywhere. In summary, the simulation results show that using our OFDM opportunistic scheduling policies, the system can achieve significant performance gains over the nonopportunistic round-robin policy while satisfying the various QoS/fairness requirements. Also, the low-complexity suboptimal policy shows near-optimal performance in every scenario. CONCLUSIONS Opportunistic transmission scheduling is a promising technology to improve spectrum efficiency by exploiting timevarying channel conditions. We investigated the application of opportunistic scheduling in multiuser OFDM systems, which dynamically allocates resource in both temporal and spectral domains. Optimal scheduling policies were presented and proven to be optimal under the temporal fairness, utilitarian fairness, and minimum-performance QoS constraints. We developed optimal and suboptimal algorithms to implement these optimal policies efficiently. The simulation showed that the schemes achieve improvements of about 30%-140% in network efficiency compared with a scheduling scheme that does not take into account channel conditions. 
Scheduling problems with multiple, mixed QoS/fairness constraints are an interesting direction for future work and are of clear practical interest. For example, a user might ask for both a minimum temporal fraction and a minimum performance guarantee, or might be constrained by both maximum and minimum requirements on wireless resources. We also plan to investigate the significant feedback overhead implied by assuming perfect channel-state information feedback in OFDM systems, especially over fast-fading channels. Scenarios with relatively small numbers of users in the system, where two or more subcarriers could be available to each user, will also be explored. The effects of finite-length data arrival queues or explicit delay requirements for certain users will also be studied. The application of multiple-channel opportunistic scheduling for MAC-layer QoS control in cognitive radio systems will be considered in our future work.
8,849.4
2008-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Transport equation driven by a stochastic measure We consider the stochastic transport equation where the randomness is given by the symmetric integral with respect to stochastic measure. For stochastic measure, we assume only $\sigma$-additivity in probability and continuity of paths. The existence and uniqueness of the weak solution to the equation are proved. Introduction We consider the stochastic transport equation that formally can be written in the form Here µ is a stochastic measure (SM), see Definition 1 below.We assume that µ is defined on the Borel σ-algebra of [0, T ], and the process µ t = µ((0, t]) has a continuous paths.Assumptions on b and u 0 are given in Section 3. Equation (1) we consider in the weak sense, definition of the solution is given in (6).We will prove existence and uniqueness of the solution.Similarly to other types of stochastic transport equation, we demonstrate that the solution is given by formula u(t, x) = u 0 (X −1 t (x)) where X t (x) satisfies the auxiliary equation (7). Stochastic integral with respect to µ is defined as symmetric integral.This Stratonovich-type integral was studied in [17], we recall it's definition and basic properties in subsection 2.2.SMs include many important classes of processes, but we can prove existence of the integral only for integrands of the form f (µ t , t) where f ∈ C 1,1 (R × [0, T ]).Thus, we will find our solution u having this form. For the stochastic transport equation driven by the Wiener process, the existence and uniqueness of the solution were proved under different assumptions on b and u 0 , see [3], [5], [7], [12], [24].It was shown that stochastic term in the transport equation leads to regularization of the solution, see [1], [6], [7].Equation in bounded domain was studied in [14].In these papers, the stochastic term is given by Stratonovich integral and solution is considered in the weak sense.In [25] the existence and uniqueness of stochastic strong solution are obtained, the renormalized weak solution was studied in [27]. Transport equation with other stochastic integrators is less studied.The existence and uniqueness of the solution to equation driven by Lévy white noise was proved in [16], to equation driven by fractional Brownian motion -in [15].In both papers, the Malliavin calculus approach was used. In this paper, we consider the rather general stochastic integrator.At the same time, we need some restrictive assumptions on b and u 0 , and study the case of one dimensional spatial variable. The recent results for the equations driven by stochastic measures may be found in [2], [9], [10].The rest of the paper is organized as follows.In Section 2 we recall the definitions and basic facts concerning stochastic measures and symmetric integral.Also we prove the analogue of the Fubini theorem for our integral that we will need below.In Section 3 we give our assumptions on the equation and formulate the main result.Section 4 is devoted to the proof of the existence of the solution, and we give the explicit formula for u.In Section 5, under some additional assumptions, we obtain the uniqueness of the solution. Stochastic measures Let L 0 = L 0 (Ω, F, P) be the set of all real-valued random variables defined on the complete probability space (Ω, F, P) (more precisely, the set of equivalence classes).Convergence in L 0 means the convergence in probability.Let X be an arbitrary set and B a σ-algebra of subsets of X. Definition 1.A σ-additive mapping µ : B → L 0 is called stochastic measure (SM). 
We do not assume the moment existence or martingale properties for SM.In other words, µ is L 0 -valued vector measure. Many examples of the SMs on the Borel subsets of [0, T ] may be given by the Wiener-type integral We note the following cases of processes X t in (2) that generate SM. 1. X t -any square integrable continuous martingale. is known as the Rosenblatt process, see also [21,Section 3]. The detailed theory of stochastic measures is presented in [19].The results of this paper will be obtained under the following assumption on µ. Processes X t in examples 1-4 are continuous, therefore A1 holds in these cases. Symmetric integral The symmetric integral of random functions with respect to stochastic measures was considered in [17].We review the basic facts and definitions concerning this integral.Definition 2. Let ξ t and η t be random processes on provided that this limit in probability exists for any such sequence of partitions. For Wiener process η t and adapted ξ t we obtain the classical Stratonovich integral.If η t and ξ t are Hölder continuous with exponents γ η and γ ξ , γ η + γ ξ > 1, then value of (3) equals to the integral defined in [26]. The following theorem describes the class of processes for which the integral is well defined. Fubini theorem for symmetric integral We will need the following auxiliary statement. Proof.Denote Theorem 1 and assumptions of the lemma and imply that the integrals in ( 5) are well-defined.Applying (4), we transform left-hand side and right-hand side of ( 5) The equalities hold by usual Fubini's theorem. 3 The problem.Formulation of the main result. We consider equation (1) in the weak form.This means that u : By C ∞ 0 (R) we denote the class of infinitely differentiable functions with the compact support.For our equation, we will refer to the following assumptions. Assumption A2. u 0 : R × Ω → R is measurable and has continuous derivative in x. ∂x is continuous and bounded. Note that, by A4, b is globally Lipschitz continuous in x. For each fixed ω ∈ Ω, we consider the following auxiliary equation Assumption A4 imply that ( 7) has a unique solution on [0, T ] for each x. By well known result of theory of ordinary differential equations, the solution has a continuous derivative We have Therefore, X ′ t (x) > 0, and the function X −1 t (x), where inverse is taken with respect to variable x, is well defined. Note that X t is the sum of a differentiable function of t and µ t , X ′ t is a differentiable function of t.This allows us to consider integrals The main result of the paper is the following. Remark 1.Note that u(t, x) = u 0 (X −1 t (x)) has a form h(µ t , t, x) from the second part of the theorem.This follows from Assumption A2 and standard statements about differentiability of inverse functions.We have that X t (x) = g(µ t , t, x), where g ∈ C 1,1,1 (R × [0, T ] × R).For the mapping (µ, t, x) → (µ, t, g(µ, t, x)), the matrix of the first derivatives is non-degenerated.Therefore, the inverse mapping is well-defined and smooth. Remark 2. Let us compare our assumptions with those made in other papers.Usually, it is supposed that u 0 is measurable and bounded (see, for example, [7], [12], [15], [24]).We additionally assume that u 0 has a continuous derivative, we need this to guarantee that the symmetric integral of u 0 (X −1 t (x)) be well-defined. 
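Since Remarks 1 and 2 refer repeatedly to equations (1) and (7), whose displays are elided in this extraction, a hedged reconstruction of how such equations are typically written, consistent with the surrounding statements that X_t is µ_t plus a differentiable function of t and that u(t, x) = u_0(X_t^{-1}(x)), is:

```latex
% Hedged reconstruction of the elided displays; the paper's exact form may differ.
d u(t,x) + b(t,x)\,\partial_x u(t,x)\,dt + \partial_x u(t,x) \circ d\mu_t = 0,
  \qquad u(0,x) = u_0(x), \tag{1}

X_t(x) = x + \int_0^t b\bigl(s, X_s(x)\bigr)\,ds + \mu_t, \tag{7}

u(t,x) = u_0\bigl(X_t^{-1}(x)\bigr).
```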
The condition of differentiability of b is standard; boundedness of ∂b/∂x may be assumed in some L^p norm (see [7], [24]) or uniformly ([15]). Note that in [12] the main result was obtained for arbitrary bounded measurable b. Our integrability condition A5 is technical and is important for our method. It is similar to the respective assumptions in [1], [3], [13]. Existence of the solution In this section, we prove the first statement of our theorem. By the chain rule (4), for ϕ ∈ C_0^∞(R) we have Applying the change of variables y = X_t(x), we get Lemma 1 may be applied here because ϕ has a compact support. Assumption A4 and (8) imply that ∂b(s, y)/∂x dy ds Thus, u(t, x) = u_0(X_t^{−1}(x)) satisfies (6). Uniqueness of the solution In this section, we prove the second statement of our theorem. We follow the standard approach (see, for example, the proofs of uniqueness of the solution in [3], [13]). Let u(t, x) satisfy (6) with u_0(x) = 0. We will show that u(t, x) = 0, which implies the uniqueness of the solution. For this case, from (6) Denote We have that G(0, y) = 0 because u(0, x) = 0, and Our solution has the form u(t, x) = h(µ_t, t, x), and, applying (4) and (11), we obtain Let φ_ε be a standard mollifier, We take the derivative with respect to t, use the notation B(t, z) = b(t, z + µ_t), and get In (*) we have used that φ_ε has a compact support and, by integration by parts, Thus, Lemma II.1 i) of [4] gives that for each fixed t provided that B(t, ·) ∈ W^{1,1}_loc(R), V(t, ·) ∈ L^∞_loc(R, dx), where W denotes the Sobolev space. These conditions hold due to the assumptions of our theorem.
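The defining display for the standard mollifier φ_ε is also elided; the customary choice, which is all the uniqueness argument uses (smoothness, compact support, unit mass), is:

```latex
% Customary definition of a standard mollifier; the paper's exact normalization may differ.
\varphi \in C_0^\infty(\mathbb{R}), \quad \varphi \ge 0, \quad
\operatorname{supp}\varphi \subset [-1, 1], \quad
\int_{\mathbb{R}} \varphi(x)\, dx = 1, \qquad
\varphi_\varepsilon(x) = \frac{1}{\varepsilon}\,\varphi\!\Bigl(\frac{x}{\varepsilon}\Bigr).
```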
2,130.8
2024-07-21T00:00:00.000
[ "Mathematics" ]
Synergistic cytotoxicity of the CDK4 inhibitor Fascaplysin in combination with EGFR inhibitor Afatinib against Non-small Cell Lung Cancer In the absence of suitable molecular markers, non-small cell lung cancer (NSCLC) patients have to be treated with chemotherapy, with poor results at advanced stages. Therefore, the activity of the anticancer marine drug fascaplysin was tested against primary NSCLC cell lines established from pleural effusions. Cytotoxicity of the drug and its combinations was determined using MTT assays, and changes in intracellular phosphorylation by Western blot arrays. Fascaplysin revealed high cytotoxicity against NSCLC cells and exhibited an activity pattern different from that of the standard drug cisplatin. Furthermore, fascaplysin synergizes with the EGFR tyrosine kinase inhibitor (TKI) afatinib to yield a twofold increased antitumor effect. Interaction with the Chk1/2 inhibitor AZD7762 confirmed the differential effects of fascaplysin and cisplatin. Protein phosphorylation assays showed hypophosphorylation of Akt1/2/3 and ERK1/2 as well as hyperphosphorylation of stress-response mediators in H1299 NSCLC cells. In conclusion, fascaplysin shows high cytotoxicity against pleural primary NSCLC lines that could be further boosted when combined with the EGFR TKI afatinib. Introduction Approximately 80% of all lung cancers are of the non-small cell lung cancer (NSCLC) type, which is often detected at an advanced stage and portends a dismal prognosis [1]. The standard first-line therapy employing platinum-based chemotherapy resulted in minor improvements in survival, but at the cost of side effects and poorer quality of life (QoL). The platinum drug combinations with either gemcitabine, docetaxel or pemetrexed have reached a plateau, offering a mean survival of approximately one year in advanced NSCLC [2]. Patients expressing immune checkpoint markers are amenable to treatment with monoclonal antibodies [3,4]. The focus of NSCLC treatment shifted significantly with the availability of inhibitors of targetable driver kinases, such as mutated epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) rearrangements, among others [5]. The first-generation EGFR tyrosine kinase inhibitors (TKIs) gefitinib and erlotinib bind reversibly to the kinase domain of the receptor, but second-generation drugs such as the pan-ErbB inhibitor afatinib show irreversible inhibition of the kinase activity [6]. In NSCLC, pancreatic cancer and colorectal cancer, afatinib resulted in an inhibition of cellular growth and induction of apoptosis [7]. Although afatinib is most effective against mutated EGFR, it is likewise active against the wildtype receptor. Unfortunately, the majority of NSCLC lacks actionable drivers and still has to be treated with cytotoxic combination chemotherapy. However, durable disease control is rare and the 5-year survival is below 5% [8]. Therefore, new agents with different mechanisms of antitumor activity may improve outcomes of NSCLC patients. Our previous studies revealed that fascaplysin exhibited high cytotoxicity against small cell lung cancer (SCLC) cell lines (mean IC50 0.89 µM) and against SCLC circulating tumor cell (CTC) lines (mean IC50 0.57 µM) [15,16]. Selected NSCLC lines exhibited a mean IC50 of 1.15 µM for fascaplysin, and the compound showed an additive cytotoxic effect with cisplatin.
Available permanent cancer cell lines have been adapted for vigorous in vitro growth and may not be truly representative of the in vivo situation in patients. Acquisition of NSCLC cells for tests is possible by routine thoracentesis in patients with advanced NSCLC. Malignant pleural effusion (MPE) is observed in half of advanced NSCLC cases and is associated with a short survival [17]. MPE samples frequently contain numerous tumor cells, that allow for the determination of driver gene status and chemosensitivity [18][19][20]. In the present study, a panel of primary NSCLC lines from pleural effusions was employed to compare their chemosensitivity against fascaplysin with that for cisplatin. Furthermore, both drugs were combined with the afatinib to test a possible synergistic activity and with the Chk1/2 inhibitor AZD7762 to investigate DNA damagemediated drug effects. The results demonstrate that afatinib acts synergistically with fascaplysin to sensitize the NSCLC cancer cells against this marine drug. Materials and methods Cell Culture and reagents Unless otherwise noted, all chemicals were obtained from Sigma-Aldrich (St. Louis, MO, USA). Dulbecco's phosphate buffered saline (PBS) was purchased from Gibco/Invitrogen (Carlsbad, CA, USA). Compounds were prepared as stock solutions of 2 mg/mL in either DMSO or 0.9% NaCl for cisplatin and aliquots stored at − 20 °C. Equivalent concentrations of DMSO were supplemented to medium controls. Established permanent cell lines were obtained from the American Type Culture Collection (Manassas, VA, USA) and primary lung cancer lines were established in our lab. Collection of pleural effusions of lung cancer patients, isolation of tumor cells and generation of cell lines was done according to the Ethics Approval 366/2003 by the Ethics Committee of the Medical University of Vienna, Vienna, Austria. In brief, pleural effusions were centrifuged and the tumor cells washed with tissue culture medium consisting of RPMI-1640 medium, supplemented with 10% FBS (Seromed, Berlin, Germany) and antibiotics Phosphokinase Array Relative protein phosphorylation levels of 38 selected proteins were obtained by analysis of 43 specific phosphorylation sites using the Proteome Profiler Human Phospho-Kinase Array Kit ARY003B/C (R&D Systems, Minneapolis, MN, USA) in duplicate tests carried out according to the manufacturer's instructions. Briefly, cells were rinsed with PBS, 1 × 10 7 cells/mL lysis buffer were solubilized under permanent shaking at 4 °C for 30 min, and aliquots of the lysates were stored frozen at − 80 °C. After blocking, membranes with spotted catcher antibodies were incubated with diluted cell lysates at 4 °C overnight. Thereafter, cocktails of biotinylated detection antibodies were added at room temperature for 2 h. Phosphorylated proteins were revealed using streptavidin-HRP/chemiluminescence substrate (SuperSignal West Pico, Thermo Fisher Scientific, Rockford, IL, USA) and detection with a Molecular Imager ChemiDoc MP imaging system (Bio-Rad, Hercules, CA, USA). Images were quantified using Image J (NIH, Bethesda, MD, USA) and Origin (OriginLab, Northampton, MA, USA) software. The different Western blot membranes were normalized using the 6 calibration spots included. Cytotoxicity Assay Aliquots of 1 × 10 4 cells in 200 µL medium were treated for four days with twofold dilutions of the test compounds in 96-well microtiter plates in quadruplicate (TTP, Trasadingen, Switzerland). 
The plates were incubated under tissue culture conditions and cell viability was measured using a modified MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay (EZ4U, Biomedica, Vienna, Austria). Optical density was measured using a microplate reader at 450 nm, and values obtained from control wells containing cells and media alone were set to 100% proliferation. For the assessment of the interaction of the test compounds, tests were performed comprising the individual drugs alone and in combination, followed by analysis using the Chou-Talalay method with the help of the CompuSyn software (ComboSyn Inc., Paramus, NJ, USA). Statistics Statistical analysis was performed using Student's t test for normally distributed samples (* p < 0.05 was regarded as statistically significant). Values are shown as mean ± SD. Cellular toxicity of fascaplysin, cisplatin and afatinib Cytotoxicity of fascaplysin, cisplatin and afatinib was determined in MTT assays employing primary NSCLC cell lines and the permanent NSCLC cell lines H23, H1299, PC-9 and A549 (Fig. 1A-C). IC50 values for fascaplysin varied from 0.48 to 1.37 µg/ml, with 8/17 cell lines exhibiting high chemosensitivity (Fig. 1A). A group of cell lines with high sensitivity (mean IC50 0.48 ± 0.14 µg/ml) contrasts with a more resistant NSCLC cell population exhibiting a mean IC50 value of 1.37 ± 0.18 µg/ml (p = 0.001). The difference in fascaplysin sensitivity between the permanent cell lines H23, H1299, PC-9 and A549 and the primary NSCLC lines is not statistically significant. The IC50 values for cisplatin show a distinct sensitivity pattern for the NSCLC cell lines tested (range: 1.42 to 6.48 µg/ml). In contrast, IC50 values for afatinib range from 2 µM to approximately 8 µM, indicating relatively low sensitivity for these primary NSCLC cell lines, with the exception of BH584 and BH659, which have undergone a NSCLC-to-SCLC transformation (mean IC50: 4.81 ± 2.05 µM; Fig. 1C). Accordingly, several of these primary NSCLC lines were obtained after progression under EGFR TKI therapy. The difference in afatinib sensitivity between the permanent cell lines H23, H1299, PC-9 and A549 and the primary NSCLC lines is not statistically significant. Due to the high variability of the IC50 values observed for the permanent lines, differences from the primary NSCLC lines were not significant for any of the drugs. Cellular toxicity of fascaplysin-afatinib combinations The cytotoxic effects of fascaplysin-afatinib combinations were tested in proliferation assays using 10 twofold dilutions of the single drugs and a combination of the two drugs at full concentrations. The effects of the combinations were calculated according to the Chou-Talalay method. The combination indices (CIs) are shown in Fig. 2; all tests revealed synergy of this combination, with CIs ranging from 0.08 to 0.67. The mean CI value for the fascaplysin-afatinib combinations across all cell lines was 0.324 ± 0.19. For the three ALK-rearranged cell lines, the fascaplysin-alectinib and fascaplysin-crizotinib combinations were synergistic for BH482 and BH827 but not for the alectinib-resistant cell line BH583 (data not shown). Comparison of IC50 values of fascaplysin-afatinib combinations versus fascaplysin as single drug A comparison of the IC50 values of fascaplysin alone with the IC50 values obtained from fascaplysin-afatinib combinations showed significantly increased drug sensitivity of the NSCLC lines in 8/14 cases (Fig. 4).
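For readers unfamiliar with the Chou-Talalay analysis that CompuSyn automates, the sketch below shows the underlying median-effect fit and combination index (CI) computation. It is a minimal illustration using the mutually exclusive (simplest) form of the CI equation; the function and variable names are ours, and CompuSyn performs additional fitting and dose-reduction-index calculations not shown here.

```python
import numpy as np

def median_effect_fit(dose, fa):
    """Fit the median-effect equation fa/(1 - fa) = (D/Dm)^m by log-log regression.

    dose : doses of a single agent; fa : fraction affected at each dose, strictly in (0, 1).
    Returns (Dm, m), the median-effect dose and the slope.
    """
    fa = np.asarray(fa, dtype=float)
    x = np.log(np.asarray(dose, dtype=float))
    y = np.log(fa / (1.0 - fa))
    m, intercept = np.polyfit(x, y, 1)
    Dm = np.exp(-intercept / m)
    return Dm, m

def combination_index(d1, d2, fa, fit1, fit2):
    """Chou-Talalay CI (mutually exclusive form) for combined doses (d1, d2) giving effect fa.

    fit1, fit2 : (Dm, m) for each single agent from median_effect_fit.
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    Dx1 = fit1[0] * (fa / (1.0 - fa)) ** (1.0 / fit1[1])
    Dx2 = fit2[0] * (fa / (1.0 - fa)) ** (1.0 / fit2[1])
    return d1 / Dx1 + d2 / Dx2
```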
H1299 NSCLC cell line: effects of fascaplysin on protein phosphorylation Changes in the phosphorylation of signaling proteins of H1299 cells in response to fascaplysin were analyzed with help of a Western blot profiler array that detects 43 kinase phosphorylation sites and 2 related proteins. Significant changes in the phosphorylation pattern of selected proteins are shown in Fig. 5. Specific sites were hypophosphorylated for Akt1/2/3, ERK1/2, GSK-3β and HSP27, whereas Chk2, src, c-Jun, PRAS40 and RSK1/2/3 become hyperphosphorylated in response to drug exposure. Discussion Therapy of NSCLC has changed dramatically with the advent of TKIs against driver kinases and the activation of antitumor immune responses by monoclonal checkpoint inhibitors (ICIs) [5]. However, the efficacy of such therapeutic modalities is restricted to approximately 30% of the patients and the majority of advanced NSCLC cases has still to be treated with cytotoxic chemotherapy. However, classical chemotherapy has reached a plateau at a low level in respect to overall survival (OS) [21]. The recent combinations of ICIs with chemotherapy revealed relatively low and unpredictable responses [22,23]. Thus, novel compounds that hit targets different from that of the platinumbased combinations may improve responses and prolong survival. We have demonstrated previously that fascaplysin has high cytotoxic activity against SCLC, SCLC CTCs and a limited range of NSCLC lines [16]. Here, the chemosensitivity of a panel of primary pleural NSCLC lines against fascaplysin was compared to the cytotoxic effects of cisplatin. The IC 50 values for fascaplysin ranged from 0.48 -1.37 µg/ml for the whole NSCLC cell line panel and from 1.42 -6.48 µg/ml for cisplatin, although most cell lines proved to be cisplatin-sensitive with IC 50 values below and around 3 µg/ml. Thus, fascaplysin displays considerable cytotoxicity against the primary NSCLC lines that may be further boosted in combination with TKIs directed to EGFR. The EGFR TKI afatinib is a second generation, irreversible ErbB family blocker, that exhibits inhibitory activity against EGFR, human EGFR 2 (HER2) and 4 (HER4), with IC 50 values of 0.5, 14, and 1 nM, respectively [6,24]. The IC 50 afatinib values for the whole primary NSCLC cell line panel of 4.81 ± 2.05 µM is a typical result for cell lines not dependent on mutated EGFR, such as breast cancer cell lines T47D and BT20, whereas IC 50 values for afatinib and cell lines addicted to mutated EGFR may be as low as 6-10 nM [25]. At extremes, NSCLC cell lines such as NCI-H460 and NCI-H226 exhibit afatinib IC 50 values of approximately 50 µM. A pharmacokinetics analysis revealed that plasma concentrations of afatinib peaked at 3 -4 h after administration and decreased with a half-life of 37 h at steady state [26]. Afatinib is administered at 40 mg PO/day resulting in approximately 0.2 µM peak plasma concentration after multiple dosing. Our results show that this TKI in combination with fascaplysin results in approximately twofold sensitization and a considerable decrease of the IC 50 values. Although afatinib is the standard drug for the treatment of lung squamous cell carcinoma (SCC) with EGFR overexpression, attempts have been made to use this irreversible blocker for other EGFR expressing tumors. Advanced head and neck squamous cell carcinoma (HNSCC) hold a poor prognosis and tumor progression is associated with overexpression of EGFR [27]. 
Afatinib increased the cytotoxicity of cisplatin when combined in different schedules of exposure against these HNSCC cell lines. In detail, cisplatin treatment followed by afatinib exposure showed higher activity against two EGFR wildtype HNSCC cell lines compared to other approaches. Furthermore, EGFR was found hyperphosphorylated in cisplatin-resistant wildtype EGFR NSCLC cells, H358 R and A549 R , and the cisplatin/ gefitinib combination applied promoted apoptotic cell death [28]. Another study employing five human EGFR wild-type HNSCC cell lines showed significant synergy of afatinib with cisplatin [29]. In detail, in three out of the five cell lines 0.625 µM afatinib in combination with cisplatin exerted antiproliferative effects and the remaining two lines showed responses for a combination with ≥ 1.25 µM afatinib. Since the EGFR TKI gefitinib showed similar effects to afatinib in sensitizing wildtype EGFR NSCLC cells to cisplatin, the effects of afatinib seem not to be linked by off-target effects due to reactions with non-EGFR protein cysteine residues [30]. In general, the synergistic toxicity may be based on the link of EGFR signaling to the response to DNA damage by chemotherapeutic agents including cisplatin [31]. The induction of the DNA repair system involves sensing of the damage by ATM (ataxia-telangiectasia mutated) and ATR (ATM-and Rad3-Related) kinases and activation of Chk1/2 downstream kinases [32]. The overexpression of Chk1 is associated with poorer outcomes and may contribute to therapy resistance in NSCLC [33]. AZD7762 is a potent inhibitor of Chk1/2 that blocks specifically the ATP binding pocket (IC 50 5 nM) [34]. AZD7762 has activity on a range of other kinases SRC family members, colony stimulating factor receptor (CSF1R), RET and others. In combination with DNA-damaging agents such as gemcitabine, topotecan, doxorubicin, and cisplatin, AZD7762 inhibits cancer cell growth in vitro via Chk1 inhibition and abrogation of the G2 and S phase checkpoints [35]. The sensitzing effect of this inhibitor over the DNA-damaging agents alone ranged from 5-to 20-fold. Furthermore, AZD7762 could enhance cisplatin-mediated apoptosis by inhibiting damage repair in vitro and enhanced xenograft apoptosis induced by cisplatin in vivo [36]. Surprisingly, in our experiments the synergistic effect of AZD7762 on tumor cell death proved to be higher in fascaplysin-AZD7762 combinations versus cisplatin-AZD7762 combinations. Studies has shown that the intercalation of fascaplysin is regarded as the major binding mode for DNA [37]. Fascaplysin displaces ethidium bromide from DNA that is known to bind to the minor groove of doublestrand DNA and, therefore, intercalation is hold to be responsible for the unique cytotoxicity of native fascaplasin versus nonplanar derivatives and induction of the DNA repair system [38]. Investigation of fascaplysin-induced changes in protein phosphorylation in H1299 NSCLC cells was assessed using Western blot arrays, as previously demonstrated for the A549 cell line [16]. The PI3K/AKT/mTOR pathway, which plays essential roles in cell proliferation and survival is frequently deregulated in cancer, in particular due to loss of PTEN, as in the case of H1299 [39]. The fascaplysin-induced decreases in Akt (Ser473) phosphorylation are correlated with lower cell survival due to induction of apoptosis [40]. 
Decreased phosphorylation of ERK1/2, the terminal master kinases of the mitogen-activated protein kinase (MAPK) pathway, results in diminished proliferation and was found here upon exposure to fascaplysin [41]. Phosphorylation of Chk2 and Chk1 triggers DNA repair, and hyperphosphorylation of c-Jun and Src is linked to the cellular stress response [42]. Hypophosphorylation of the multifunctional glycogen synthase kinase 3β (GSK3β) alters a key node of survival pathways mediated by Ser/Thr protein kinases related to Akt, protein kinase C (PKC), ERK1/2 and Wnt [43]. Furthermore, hypophosphorylation of the chaperone HSP27 is known to enhance the cytotoxicity of chemotherapeutics [44]. The p90 ribosomal S6 kinases (RSK1-4) comprise a family of serine/threonine kinases that lie at the terminus of the ERK pathway. RSKs promote silencing of the G2 DNA damage checkpoint in a Chk1-dependent manner, and activation of RSKs promotes resistance to DNA-damaging agents [45]. The cellular stress response observed in H1299 seems to result in activation of the RSK kinases. The proline-rich Akt substrate of 40 kDa (PRAS40) is a substrate of Akt and is phosphorylated in response to growth factors or other stimuli. PRAS40 is an important substrate of the Akt3 kinase, which regulates the apoptotic sensitivity of cancer cells and becomes activated in H1299 to counteract the cytotoxic effects of fascaplysin [46]. The fascaplysin-induced alterations in protein phosphorylation indicate efficient execution of cytotoxic effects and a failing intracellular stress response. In summary, fascaplysin promotes cell death of NSCLC cell lines in a manner different from the standard platinum drugs. This marine drug induces a DNA repair response and synergizes with the Chk1/2 inhibitor AZD7762 and with the EGFR TKI afatinib. Declarations Ethics approval The collection of patients' samples and the experimentation were done according to the Ethics Approval 366/2003 by the Ethics Committee of the Medical University of Vienna, Vienna, Austria. Informed consent In accordance with the Ethics Approval 366/2003, informed consent was obtained from all participants included in the collection of pleural effusions. Consent for publication All authors consented to publish this study in the journal Investigational New Drugs. Competing interests The authors declare no conflict of interest related to the present work. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
4,291
2021-10-01T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
The risk of future mpox outbreaks among men who have sex with men: a modelling study based on cross-sectional seroprevalence data Research in context Evidence before this study We searched PubMed from database inception to July 24th, 2023, using the search terms ("monkeypox" OR "mpox" OR "orthopoxvirus") AND ("immunity" OR "antibodies" OR vaccin*) OR (model*) OR ("epidemiology" OR "outbreak") OR ("seroprevalence" OR "serosurveillance"), with no date or language restrictions. We included the cited references of retrieved publications in our search. We found that there are crucial knowledge gaps with respect to monkeypox virus (MPXV)-specific immunity and the impact of risk group vaccination with third-generation smallpox vaccines on the prevention of future outbreaks. No seroprevalence studies following the 2022-2023 mpox outbreak have been published yet. Recent observational studies from Israel, the United Kingdom, and the United States reported adjusted vaccine effectiveness levels between 36% and 86% following one or two doses of MVA. After the massive 2022-2023 outbreak, the number of new cases is now at a very low level, although there are continuing reports of small clusters of infections, including breakthrough infections and re-infections. This long tail to the epidemic curve raises concerns about the potential for future outbreaks. We found no studies that predicted the risk of future mpox outbreaks after risk group vaccination based on measured seroprevalence data. Added value of this study The primary goal of this study was to estimate the risk of future mpox outbreaks among men who have sex with men (MSM) in the Netherlands using mathematical modelling based on real-world data. To this end, we used a combination of serological assays to determine the presence and abundance of (functional) poxvirus-specific antibodies in a Dutch MSM cohort. Next, we demonstrate the impact of rapid diagnosis (followed by isolation) of mpox cases on the size and duration of future outbreaks employing a mathematical model. Implications of all the available evidence A seroprevalence level of approximately 45% among MSM in the Netherlands visiting the Centres for Sexual Health was found. Modelling showed that this seroprevalence level alone may not be sufficient to prevent future outbreaks, which emphasises the importance of continued disease awareness and of maintaining population immunity and diagnostic capacities. These insights have important implications for public health policy making and fill critical knowledge gaps to help inform surveillance activities and guide vaccination strategies.
Introduction Routine smallpox vaccination was discontinued globally in the 1970s following the successful eradication of smallpox. As a consequence, there have been gradual changes in population susceptibility to orthopoxviruses (1), including mpox caused by the monkeypox virus (MPXV). This growing pool of susceptible individuals is thought to have directly contributed to the recent global mpox outbreak with over 85,000 reported cases, predominantly among men who have sex with men (MSM). (2,3) The Dutch 2022-2023 outbreak consisted of over 1,250 reported cases with a peak in July 2022. Of the first 1,000 cases, 99% were males with a mean age of 37, of whom 95% identified as MSM. (4) Prior to the 2022-2023 mpox outbreak, studies conducted in different regions of the world demonstrated significant variations in orthopoxvirus seroprevalence levels. Orthopoxvirus seroprevalence in blood donors was shown to be less than 10% in mpox non-endemic countries such as France, the Lao People's Democratic Republic, and Bolivia. (5) In contrast, seroprevalence levels of 51% and 60% were measured in the mpox-endemic countries Côte d'Ivoire and the Democratic Republic of the Congo, respectively. (6) Although different methodologies were used that limit direct comparisons, these findings underscore high susceptibility on a population level for MPXV infections in non-endemic countries prior to the 2022-2023 outbreak. A third-generation smallpox vaccine based on the replication-deficient poxvirus modified vaccinia virus Ankara (MVA) (MVA-BN; also known as Imvanex, JYNNEOS, or Imvamune) was rapidly employed in vaccination campaigns during the 2022-2023 mpox outbreak to interrupt MPXV transmission in high-risk populations. We have previously demonstrated that, while a two-dose MVA-BN immunisation series in non-primed individuals induced a cross-reactive immune response against MPXV, levels of MPXV-neutralising antibodies were comparatively low. (7) Recent studies from Israel (8), the United Kingdom (9), and the United States (10) reported a vaccine effectiveness of MVA-BN against mpox between 66% and 86%, which was comparable to that of the first-generation smallpox vaccine (58-85%), despite the relatively low immunogenicity as measured by serological assays. (4,11,12) Recently, breakthrough infections in previously vaccinated individuals and re-infections in individuals who had already contracted mpox have been reported, (13)(14)(15)(16)(17) raising concerns about the longevity of immune responses and the effectiveness of orthopoxvirus-specific immunity in preventing novel outbreaks.
In contrast to previous outbreaks of mpox in non-endemic areas, (18,19) the 2022-2023 outbreak exhibited several distinct epidemiological characteristics. (3,20) These included its unprecedented scale, the occurrence of disease mainly among MSM, and sexual contact as a primary mode of transmission. (21) A modelling study based on the United Kingdom outbreak highlighted a substantially higher R0 within the MSM sexual network compared to non-sexual household transmissions. (21) The outbreak's deceleration in the second half of 2022 was attributed to the lack of susceptible individuals, either due to vaccination- or infection-induced immune responses, combined with increased awareness and behavioural changes, particularly within the context of sexual interactions. (22)(23)(24) Despite recognising the importance of population immunity for the prevention of future outbreaks, none of these studies performed immunological assessments, and uncertainties persist regarding the current level of immunity among the at-risk population. To estimate the impact of population immunity on the size and duration of potential future mpox outbreaks, we assessed the seroprevalence of orthopoxvirus-specific antibodies among 1,065 MSM in the two largest cities in the Netherlands after the peak of the 2022-2023 mpox outbreak. The study population comprises MSM presenting at Centres for Sexual Health (CSH), who likely exhibit higher levels of sexual activity than the general Dutch MSM population. Consequently, they are more likely to have been invited for vaccination and/or to have been exposed to the virus, therefore representing the group at highest risk of MPXV infection. The observed seroprevalence levels, in combination with published literature data on vaccine effectiveness and infection dynamics, were subsequently used in a stochastic transmission model to estimate the extent of future mpox outbreaks. Study population Centres for Sexual Health (CSH) offer testing for sexually transmitted infections (STI) to those at high risk, such as MSM. Mpox testing at CSH was introduced during the early phases of the outbreak in 2022. In the Netherlands, mpox vaccination at Public Health Services started mid-July 2022. Vaccination was offered by CSH to MSM clients who were (current or prospective) HIV pre-exposure prophylaxis (PrEP) users, were HIV-infected or were at high risk for STI (defined as notified for STI exposure, having been diagnosed with an STI recently or having multiple sexual contacts). We analysed residual serum samples obtained from these MSM collected at the CSH in Rotterdam and Amsterdam. The sera were collected in September 2022, after the introduction of vaccination and the peak of the mpox outbreak in the Netherlands. The sample set included N=315 sera from Rotterdam and N=750 sera from Amsterdam. Individuals born before 1974 (cessation of smallpox vaccination for the general population in the Netherlands) were inferred to have received childhood smallpox vaccination. Ethical statement Prior to the study, we obtained approval from the Erasmus MC Medical Ethics Review Committee (MEC-2022-0675) granting permission to conduct research using residual materials.
The Amsterdam UMC Medical Ethics Review Committee granted permission for samples from Amsterdam (W22_428#22.506). A privacy impact assessment was made for combining data from the CSH Rotterdam clients, including vaccination indication and infection status, with their vaccination data. No additional data or samples were collected specifically for this study and all samples were pseudonymised, ensuring that no identifiable data from participants were collected, transferred, or analysed. This study did not involve any direct interaction with participants or any potential harm. Detection of VACV-specific IgG antibodies For the detection of VACV-specific antibodies, an in-house ELISA was employed using a VACV Elstree-infected HeLa cell lysate as antigen, as described previously. (7) Two separate assays were performed: (1) a screening assay in which sera were assessed for the presence of VACV-specific antibodies at two serum dilutions (1:10 and 1:50), and (2) an endpoint titration assay using a full five-fold dilution series to further assess sera from the Rotterdam cohort that were regarded as at least borderline-positive in the screening ELISA. Absorbance was measured at 450 nm using an Anthos 2001 microplate reader and corrected for absorbance at 620 nm. Values of optical density measured at a wavelength of 450 nm (OD 450 values) were obtained with mock-infected cell lysates and subtracted from the OD 450 values obtained with the VACV-infected cell lysates to determine a net OD 450 response. A positive control based on a pool of two sera from MVA-BN-vaccinated individuals, who had also received childhood smallpox vaccination, was included on every ELISA plate, generating a minimum-to-maximum OD 450 S-curve. OD 450 values generated by the dilution series per sample were transformed to this control S-curve, and 30% endpoint titres were calculated. Sensitivity and specificity of the VACV-specific IgG screening ELISA The screening ELISA was validated using a pre-determined set of 85 sera from poxvirus-naïve individuals (expected negative for VACV-specific antibodies), and a set of 57 sera from double-dose MVA-BN-vaccinated individuals collected 28 days after the second dose (expected positive for VACV-specific antibodies) (Supplementary Figure S1). The test performance was calculated using the 1:50 dilution results for the sera of vaccinated individuals, and the 1:10 dilution results for the naïve individuals. In these validations, we identified two OD 450 cut-offs for positivity: (1) a more stringent threshold of OD 450 =0.35, which demonstrated high sensitivity (98.2%) and maximum specificity (98.6%) in detecting VACV-specific IgG antibodies, and (2) a more relaxed cut-off of OD 450 =0.2 with maximum sensitivity (100%), while maintaining somewhat lower specificity (89.9%). The relaxed cut-off was included to allow for the detection of low OD 450 values in the early stages of infection or shortly after vaccination. Consequently, samples were interpreted as negative with an OD 450 <0.2, as borderline-positive with an OD 450 between 0.2 and 0.35, and as positive with an OD 450 above 0.35, independent of the dilution (1:10 or 1:50). For the estimation of seroprevalence levels, the relaxed cut-off (OD 450 =0.2) was used. The borderline-positive sera are displayed visually distinct from the positive samples in all figures.
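As a rough illustration of the cut-off logic described above, the classification of net OD 450 values and the sensitivity/specificity calculation on a validation panel might be sketched as follows; the thresholds (0.2 relaxed, 0.35 stringent) are taken from the text, whereas the OD arrays are hypothetical placeholders.

```python
# Sketch of the described OD450 cut-off classification and assay validation.
# Thresholds follow the text; the OD values themselves are hypothetical.
import numpy as np

RELAXED, STRINGENT = 0.2, 0.35

def classify(net_od450):
    """Classify a net OD450 value as negative, borderline-positive, or positive."""
    if net_od450 < RELAXED:
        return "negative"
    return "borderline-positive" if net_od450 < STRINGENT else "positive"

def sensitivity_specificity(od_vaccinated, od_naive, cutoff):
    """Sensitivity on expected-positive sera, specificity on expected-negative sera."""
    sens = np.mean(np.asarray(od_vaccinated) >= cutoff)
    spec = np.mean(np.asarray(od_naive) < cutoff)
    return sens, spec

od_vaccinated = [0.81, 0.55, 0.36, 1.20, 0.47]  # hypothetical post-vaccination sera (1:50)
od_naive = [0.05, 0.12, 0.22, 0.08, 0.15]       # hypothetical naive sera (1:10)

print([classify(od) for od in od_vaccinated + od_naive])
print(sensitivity_specificity(od_vaccinated, od_naive, STRINGENT))
```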
Detection of MVA-/MPXV-neutralising antibodies A plaque reduction neutralisation test (PRNT) was used to test serum samples for their capacity to neutralise either MVA (rMVA-GFP) or MPXV (MPXV_2022_NL001, clade IIB; EVAg: 010V-04721), as previously described. (7) All samples from the Rotterdam cohort identified as at least borderline-positive in the VACV screening ELISA (OD 450 >0.2) were included. The cells were imaged using the Opera Phenix spinning disk confocal HCS system (PerkinElmer) equipped with a ×10 air objective (NA 0.3) and 405-nm and 488-nm solid-state lasers. Infected cells were quantified using the Harmony software (version 4.9, PerkinElmer). The dilution that would yield a 50% reduction of plaques compared with the infection control was estimated by determining the proportionate distance between two dilutions, from which an endpoint titre was calculated. If no neutralisation was measured, the PRNT50 value was assigned as 10, one dilution factor below the lowest dilution tested. Inference of infections based on serological profile Based on the aggregated findings of the employed serological assays (VACV ELISA, and PRNT using MVA and MPXV), we identified individuals who had likely been infected with MPXV, as described previously. (7) The following criteria were used to infer prior MPXV infection: in MVA-BN-vaccinated individuals, an MPXV PRNT50 titre >100 and higher than the MVA PRNT50 titre, and in individuals who had not received any vaccination, the presence of an MPXV PRNT50 titre >20 in the absence of MVA-neutralising antibodies. This inference could not be made for subjects who likely received a childhood first-generation smallpox vaccination (born before 1974). Stochastic model A mathematical stochastic model was used to model mpox transmission. The model was calibrated to the cumulative number of individuals diagnosed with mpox in the 2022-2023 outbreak in the Netherlands, the seroprevalence at the end of the outbreak, and the number of individuals that were vaccinated (see Supplementary Figure S2, Supplementary Table S1, and Supplementary Methods). The model is seeded by 1-10 individuals that are initially infected with MPXV following an event with high numbers of potential exposures. The model stratifies individuals who were not infected based on vaccination status (not vaccinated, historically smallpox-vaccinated before 1974, and recently MVA-BN-vaccinated during the 2022-2023 outbreak). Using literature estimates, vaccination is assumed to reduce the risk of infection by 85% (range 75-95%) in historically vaccinated individuals (12) and by 78% (95% confidence interval [CI] 54-89%) in recently vaccinated individuals. (8-10) Upon infection, individuals first enter an exposed state in which they are not infectious to others. Individuals become infectious after a serial interval of 8.0 days (95% CI 6.5-9.9 days). (25) We assumed that individuals remain infectious to others until they are diagnosed (within 1 to 21 days after symptom onset) (26), after which they will end high-risk behaviour and will consequently not transmit MPXV to others. At the beginning of the outbreak in 2022, there was no awareness of mpox and diagnostic tests were not available. The outbreak in the Netherlands started between the middle of April and the middle of May 2022. After May 23rd, we assumed that awareness increased during the outbreak, resulting in a reduced time between symptom onset and diagnosis in health care, and a reduction in new sexual partners by up to 50%. (27,28) Individuals that recovered from mpox
are assumed not to be infectious to others. Statistical analysis A chi-square test for equality of two proportions was used to compare the seroprevalence percentages between Amsterdam and Rotterdam. Serological data are reported as geometric mean titres with 95% confidence intervals. No hypothesis testing was conducted to analyse differences in GMT between groups since the study was not designed for this purpose. Data were visualised using Prism (v10.0; GraphPad). Of these, 68/353 (19.3%) were borderline-positive (Figure 1A). The seroprevalence was lowest in 20-29-year-olds, who comprised most of the MSM; it was highest in the oldest group of 70-79-year-olds (Figure 1B). In all groups above 50 years of age, who have likely been historically vaccinated against smallpox, orthopoxvirus-specific antibodies were measured in at least 50% of individuals. Overall, the seroprevalence of orthopoxvirus-specific antibodies among MSM was comparable between Rotterdam and Amsterdam (χ2 = 0.257, p = 0.61). Additional serological profiling was performed on the 143 VACV IgG-positive serum samples (OD 450 >0.2) obtained from the Rotterdam cohort in order to analyse and correlate orthopoxvirus-specific immune responses in defined subgroups based on the available epidemiological and vaccination data (Supplementary Figure S3). The highest VACV IgG titres were observed The number of actual reported cases in the Netherlands during the same period was 1,259. A sensitivity analysis was conducted to examine the influence of distinct seroprevalence levels and a varied range of vaccine effectiveness on the stochastic model (Figure 2B). Mathematical modelling of future mpox outbreaks Next, we conducted simulations to investigate the impact of immunity conferred by prior infections and vaccinations on a potential future mpox outbreak (Figure 2C,D). By considering a range of seroprevalence levels between 35% and 55%, based on the data reported here, we observed a large reduction in the outbreak size, with a median of 179 cases (IQR 108-265), an average daily incidence of 1.5 cases, and a duration of approximately 17 weeks (scenario 1). This marks an 86.4% reduction in outbreak size compared to our model's reproduction of the 2022-2023 Dutch outbreak. In this simulation, similar to the outbreak, we assume that the population at risk reduced their number of sexual partners. We separately included a simulation in a vaccinated population, without any change in sexual partners. In this scenario, the total outbreak size was 344 cases (IQR 167-526), with an average daily incidence of 1.8 cases (scenario 3). Finally, we examined the influence of a reduced time-to-diagnosis. At the beginning of the 2022-2023 outbreak, there was a diagnostic delay, resulting in a delay in case isolation. The duration of infectiousness decreased during the later stages due to a reduced time-to-diagnosis and thus earlier isolation. In a simulation involving a partially vaccinated population with a time-to-diagnosis comparable to the later stages of the 2022-2023 outbreak, no outbreak occurred (median case count of 2, IQR 0-6; scenario 4). In this situation, a reduction in risk behaviour did not have any additional impact (scenario 2). Notably, even with lower vaccine effectiveness and seroprevalence ranges, the combination of vaccination and sufficient laboratory testing capacity was indicated to be effective in preventing outbreaks.
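The authors' calibrated stochastic model is only summarised in this excerpt (details are in their Supplementary Methods), but a minimal simulation with the ingredients mentioned in the text, namely an immune fraction taken from the seroprevalence, a transmission rate tied to the serial interval, and a time-to-diagnosis that truncates infectiousness, might look like the sketch below. All parameter values are placeholders drawn from the ranges quoted above, and the exposed (non-infectious) stage is omitted for brevity, so the numbers it produces are not comparable to the reported scenarios.

```python
# Minimal illustrative stochastic outbreak simulation (NOT the authors' calibrated model).
import numpy as np

rng = np.random.default_rng(1)

def simulate_outbreak(immune_frac=0.45, r0=2.0, serial_interval=8.0,
                      max_days_to_diagnosis=21, n_seed=5,
                      risk_population=250_000, max_days=730):
    """Each undiagnosed case transmits at rate r0/serial_interval per day, scaled by
    the fraction of the risk population still susceptible; diagnosis ends transmission
    (isolation / end of high-risk behaviour)."""
    susceptible = risk_population * (1.0 - immune_frac)
    daily_rate = r0 / serial_interval
    # 'active' holds the remaining infectious days of each undiagnosed case
    active = list(rng.integers(1, max_days_to_diagnosis + 1, size=n_seed))
    total_cases = n_seed
    for _ in range(max_days):
        if not active:
            break
        new = rng.poisson(daily_rate * len(active) * susceptible / risk_population)
        new = min(new, int(susceptible))
        susceptible -= new
        total_cases += new
        active = [d - 1 for d in active if d > 1]
        active += list(rng.integers(1, max_days_to_diagnosis + 1, size=new))
    return total_cases

sizes = [simulate_outbreak() for _ in range(200)]
print(f"median size {np.median(sizes):.0f}, "
      f"IQR {np.percentile(sizes, 25):.0f}-{np.percentile(sizes, 75):.0f}")
```

Shortening max_days_to_diagnosis or raising immune_frac in this toy model reproduces the qualitative finding above: faster isolation and higher population immunity together suppress the outbreak size.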
Discussion Here, we show a seroprevalence of orthopoxvirus-specific antibodies of 45.4% and 47.1% among MSM visiting the CSH in Amsterdam and Rotterdam, the Netherlands, respectively. Using mathematical modelling, we demonstrate that these seroprevalence levels will reduce the likelihood and size of future mpox outbreaks. However, to achieve complete prevention of future outbreaks, it is essential to keep the time-to-diagnosis short, similar to the later stages of the 2022-2023 Dutch outbreak. This requires maintained diagnostic capacities, and sustained disease awareness among healthcare professionals and the at-risk groups alike. We utilised a stochastic approach to model outbreaks of mpox. First, it is important to emphasise that our model estimated the risk of future mpox outbreaks based on the current level of immunity within the at-risk population. Consequently, it cannot be used to estimate the effect of the vaccination campaign on the decline of the 2022-2023 Dutch mpox outbreak itself. Notably, the peak of the outbreak in the Netherlands had already occurred before vaccination campaigns commenced. It was recently suggested that the decline of the outbreak in the Netherlands could have occurred due to infection-induced immunity and behavioural adaptations among highly sexually active MSM. (30) Like any modelling study, the reliability of our findings depends on the underlying assumptions and data used. A strength of our study is that we conducted simulations using the seroprevalence levels measured among MSM in the Netherlands, and combined them with literature data on the serial interval (25) and vaccine effectiveness (8)(9)(10)(11) that emerged during the outbreak. Furthermore, we included the intended reduction of sexual risk practices among MSM in response to the 2022-2023 outbreak (22,31). It is, however, important to recognise that different sexual activity groups have varying contact rates and that the probability of mpox transmission per sexual encounter is unknown. The limited data led us to choose a non-assortative mixing assumption in our model. Predominant transmission within closely related networks of sexually active MSM might affect the outcome of this model; we acknowledge that it might result in an overestimation of future outbreak risks. These combined factors highlight the intricacies of the actual transmission dynamics and potential biases in the model, urging a careful interpretation of its projections. Serum samples used for our cross-sectional analysis were collected in September 2022, a period characterised by a rapid decline in the incidence of mpox cases. The vaccination campaign in the Netherlands commenced in July 2022, targeting high-risk groups. (4) Due to the recent and non-uniform administration of vaccinations to the majority of participants, the timing of this serosurvey might have been premature to capture all seroconversions. To account for the possibility of low antibody titres shortly after vaccination, cut-off values in the VACV IgG ELISA were carefully defined while ensuring high sensitivity and specificity. Considering the potential higher seroprevalence rates in subsequent months, a wider range of seroprevalence levels (up to 55%) was included in the sensitivity analysis of the stochastic model. Even at the upper end of this seroprevalence range, the levels of immunity were insufficient to completely prevent future outbreaks in our model. In addition, vaccination- or infection-induced immunity against mpox is expected to further decline over time.
Seroepidemiology provides insight into disease burden, vaccine coverage, and age-specific immunity gaps. (32)(33)(34) These insights are crucial for informing public health policy decisions. However, conducting seroepidemiological studies for emerging pathogens such as MPXV presents unique challenges. The first challenge stems from the lack of standardised tests available for emerging pathogens. To ensure accurate comparisons across studies and populations, it is imperative to establish standardised protocols and reference materials for assessing (neutralising) antibodies. The second challenge revolves around the need to differentiate between natural infection and vaccination. A strength of our study lies in the recent development of binding and neutralising antibody assays specifically designed for VACV, MVA, and MPXV. (7) These assays facilitate comprehensive immunological profiling, allowing us to discern between past infection and MVA-BN vaccination. Moreover, the practical applicability of mathematical modelling is enhanced by the integration of real-world population immunity data. These contributions provide valuable insights into disease dynamics and support the formulation of targeted public health responses, including effective vaccination strategies. To prevent future mpox outbreaks, public health policy should take several factors into consideration. Firstly, immunity against mpox can be diminished due to demographic changes within the risk group: as new unexposed and unvaccinated MSM enter the sexually active population and older individuals who have been vaccinated or recovered from mpox leave the sexually active population, the proportion of susceptible individuals is expected to rise. Secondly, little is known about the durability of immunity induced by third-generation smallpox vaccines or previous infection. The occurrence of breakthrough infections indicates a potential waning of immunity over time. (13,14,35) In order to assess these changes, longitudinal or repeated cross-sectional studies are needed to monitor immunity levels in at-risk populations. Such activities can identify gaps in vaccination coverage across different age groups, enabling focused efforts on younger, previously unvaccinated individuals. Alternatively, it could be necessary to offer booster vaccinations to those previously vaccinated. (36) Thirdly, without ongoing outreach and education efforts, there is a concern that MSM may become less aware of the symptoms of mpox over time, potentially leading to increased transmission of the disease. In conclusion, our study underlines the importance of maintaining mpox-specific immunity in the at-risk population, alongside diagnostic capacities, continuous surveillance, and sustained awareness among healthcare professionals and the at-risk group. These measures are vital for promptly identifying cases and implementing necessary control strategies. Future research should focus on understanding the durability of vaccine-induced protection, contributing to a more comprehensive understanding of mpox epidemiology, and facilitating targeted preventive measures.
Figure S3A).MVA-(Supplementary Figure S3B) or MPXV-neutralising antibodies (Supplementary Figure S3C) were detected in 88/143 (61.5%) and 72/143 (50.3%) of VACV IgG-positive sera, respectively.Individuals with a PCR-confirmed mpox diagnosis had the highest MPXV-neutralising antibody titres among all subjects.Based on the combination of serological assays among the Rotterdam participants born after 1974, a total of 9 individuals were inferred to have contracted mpox (comprising four unvaccinated subjects and five MVA-BN vaccinated subjects; four with a single dose, one with a double dose).Together with the 5 PCR confirmed infections, these comprise 5.1% (14/272) of Rotterdam participants born after 1974 in the Rotterdam cohort. Figure 1 : Figure 1: Seroprevalence of orthopoxvirus-specific antibodies among men who have sex with men (MSM) in Rotterdam and Amsterdam.(A,B) Detection of VACV-specific IgG in n = 1,065 serum samples from MSM visiting the Centres for Sexual Health in Rotterdam (n = 315) and Amsterdam (n = 750) using an in-house screening ELISA grouped by location (A) or age (B).Samples were considered as positive with an OD 450 >0.35 (green), as borderline-positive with an OD 450 between 0.35 and 0.2 (yellow), and as negative with an OD 450 <0.2 (red).Seroprevalence levels were estimated based on the more relaxed cut-off of OD 450 >0.2, including borderline-positive samples, and are shown as donut graphs.Bold numbers above the plots indicate the number of seropositive participants among the respective population.VACV, vaccinia virus. Figure 2 . Figure 2. Monkeypox virus (MPXV) transmission model among men who have sex with men (MSM) in the Netherlands.The stochastic transmission model uses the seroprevalence data reported here (range between 35% and 55%; Figure 1) in combination with available literature data on infection dynamics and vaccine effectiveness, and was calibrated to parameters derived from the Dutch 2022-2023 mpox outbreak.(A) Comparison of daily incidence from the model-generated simulation (dotted black line) to the real-world epidemiology data (orange line) of the Dutch 2022-2023 mpox outbreak.The start of the vaccination campaign and the sample collection period are indicated.The peak of the curve from real-world epidemiology data was overlayed with the peak of the model-generated curve.(B) Sensitivity analysis of the impact of a different seroprevalence or vaccine effectiveness on the cumulative number of mpox cases in a new outbreak.The number in the first column represent the four different modelled scenarios: (1) a partially vaccinated population (seroprevalence 35-55%) with a reduction of sexual partners within the risk group comparable to the original outbreak, (2) a partially vaccinated population (seroprevalence 35-55%) with a reduction of sexual partners within the risk group comparable to the original outbreak, and decreased time-to-diagnosis comparable to the end of the outbreak, (3) same as scenario 1 but Table 1 . 
The characteristics of the 315 and 750 MSM who visited CSHs in Rotterdam and Amsterdam in September 2022, respectively, are summarised in Table 1. Of the 315 MSM visiting the CSH in Rotterdam, the mean age was 34 (IQR [interquartile range] 28-42), and 13.6% (43/315) of the subjects were born before 1974 and likely received a childhood first-generation smallpox vaccination. Most of the participants were invited for MVA-BN vaccination due to (current or prospective) PrEP usage (59.4%, 187/315), high-risk behaviour (8.3%, 26/315), or HIV infection (1%, 3/315). At the time of this cross-sectional study, 62/315 (19.7%) subjects had received one dose of the MVA-BN vaccine with a median time between sampling and vaccination of 26 days, and 46/315 (14.6%) subjects had received two doses with a median time between sampling and last dose of 9 days. Five individuals (1.6%, 5/315) among those who visited the Centre for Sexual Health in Rotterdam had tested positive for mpox since May 2022. Of the 750 MSM visiting the Centre for Sexual Health in Amsterdam, the mean age was 32 (IQR 27-40), and 13.3% (100/750) were born before 1974 and presumably received childhood smallpox vaccination. Data on MVA-BN vaccination and MPXV infection were not available for the Amsterdam cohort. Table 1. Characteristics of 1,065 MSM visiting the Centres for Sexual Health in Rotterdam and Amsterdam in September 2022. MVA-BN, modified vaccinia Ankara-Bavarian Nordic; PrEP, (HIV) pre-exposure prophylaxis; PLWHA, people living with HIV/AIDS; PCR, polymerase chain reaction; IQR, interquartile range.
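The between-city comparison reported earlier (a chi-square test for equality of two proportions, χ2 = 0.257, p = 0.61) can be sketched in a few lines. The seropositive counts below are back-calculated approximations from the reported percentages (45.4% of 750 in Amsterdam, 47.1% of 315 in Rotterdam), so the statistic only roughly reproduces the published value.

```python
# Sketch of the chi-square test for equality of two proportions used to compare
# seroprevalence between cities; counts are approximate, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

seropos = np.array([341, 148])                    # Amsterdam, Rotterdam (approximate)
totals = np.array([750, 315])
table = np.array([seropos, totals - seropos]).T   # rows: city, cols: positive/negative

chi2, p, dof, _ = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.3f}, p = {p:.2f}, dof = {dof}")
```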
6,022.8
2023-08-22T00:00:00.000
[ "Medicine", "Environmental Science", "Mathematics" ]
Accurate and precise characterization of linear optical interferometers We combine single- and two-photon interference procedures for characterizing any multi-port linear optical interferometer accurately and precisely. Accuracy is achieved by estimating and correcting systematic errors that arise due to spatiotemporal and polarization mode mismatch. Enhanced accuracy and precision are attained by fitting experimental coincidence data to curves simulated using measured source spectra. We employ bootstrapping statistics to quantify the resultant degree of precision. A scattershot approach is devised to effect a reduction in the experimental time required to characterize the interferometer. The efficacy of our characterization procedure is verified by numerical simulations. Introduction Linear optics is important in quantum computation and communication. The simulation of a linear optical interferometer is computationally hard classically, subject to reasonable conjectures [1]. Single-photon detectors and linear optical interferometers allow for efficient universal quantum computation via linear optical quantum computing (LOQC) [2]. Linear optics can simulate the quantum quincunx [3] and quantum random walks [4]. Linear optics coupled with laser-manipulated atomic ensembles enables long-distance quantum communication [5]. A wide class of communication protocols can be realized with coherent states and linear optics [6]. The accurate and precise characterization of linear optics is important in quantum information processing tasks such as BosonSampling, LOQC and quantum walks. BosonSampling involves sampling from the output photon-coincidence distribution of an interferometer on single-photon inputs to each mode. Sampling from this distribution is computationally hard classically but is easy with a linear-optical interferometer. The classical hardness of the BosonSampling problem crucially depends on the error in the linear optical interferometer [24]. Similarly, the practical applications of BosonSampling, in quantum metrology and in the computation of molecular vibronic spectra, rely on the accurate implementation and characterization of linear optics [25,26]. Accurate and precise characterization is important in LOQC because a high success probability of the employed non-deterministic linear-optical gates relies on implementing the desired gates with high fidelity [27]. Furthermore, linear interferometers used in photonic quantum walks, which display strong non-classical correlations, require accurate characterization, especially if quantum walks are employed for solving classically hard problems [28][29][30]. In other words, accurate and precise characterization of interferometers enables a verifiable quantum speedup of linear-optical protocols over classical computers. Classical-light procedures [31,32] for linear optics characterization are unsuitable for Fock-state-based experiments because the interferometer parameters change during the coupling and decoupling of classical light sources and of homodyne detectors at the interferometer ports. This change could result from drift of the interferometer parameters in the time required to couple sources and detectors, or as a result of the mechanical process of coupling itself.
Characterization procedures that rely on Fock-state (rather than classicallight) inputs are thus more desirable in BosonSampling and LOQC implementations; such procedures would enable interferometer characterization without altering the experimental setup and would thus be accurate. The Laing-O'Brien procedure [33] uses one-and two-photons for characterizing linear optical interferometers and is stable to the length scale of a photon packet. This procedure assumes perfect matching in source field and large-number statistics on the detected photons. Hence, implementations of this procedure are inaccurate due to spatiotemporal and polarization mode mismatch in the source field and imprecise due to shot noise. We aim to devise an accurate and precise procedure that uses one-and two-photons for the characterization of linear optical interferometers and to devise a rigorous method to estimate the standard deviation in the linear optical interferometer parameters [34]. Furthermore, we aim to provide a correct alternative to the χ 2 -test, which has been used to estimate the confidence in the characterized interferometer parameters in current BosonSampling implementations [11,35,36] ‡. Here we devise a procedure to characterize a linear optical interferometer accurately and precisely using one-and two-photon interference. Four strengths of our approach over the Laing-O'Brien procedure [33] are that our procedure (i) accounts for and corrects systematic error from spatial and polarization source-field mode mismatch via a calibration procedure (Section 3); (ii) increases accuracy and precision by fitting experimental coincidence data to curve simulated using measured source spectra (Section 3); (iii) accurately estimates the error bars on the characterized interferometer parameters via a bootstrapping procedure (Section 4); and (iv) reduces the experimental time required to characterize interferometers using a scattershot procedure (Section 5). Background This section provides the background for our one-and two-photon characterization procedure. The action of a multi-port linear optical interferometer on single photons entering one or two input ports and vacuum entering the other ports is detailed. Specifically, we calculate the probability of detecting a photon at a given output port when a single photon is incident at a given input port. The section concludes with expressions for the probability of detecting a coincidence measurement when two controllably delayed photons are incident on the interferometer. Action of a linear optical interferometer In this subsection, we define linear optical interferometers by their action on single photons. We parameterize the unitary transformation effected by an interferometer and ‡ The χ 2 -test [37][38][39] is used to quantify the goodness of fit between probability distribution functions of two categorical variables, which can take a fixed number of values. Coincidence-count curves and visibilities are not probability distribution functions of categorical variables, but rather are collections of many categorical variables (variables that can take on one of a fixed finite number of possible values), one variable corresponding to each time-delay value chosen in the experiment. Hence, quantifying the goodness of fit between two coincidence curves using the χ 2 -test is incorrect. 
This incorrectness undermines the claim that the data are consistent with quantum predictions and disagree with classical theory [11,36] and leaves the choice of unitary matrices [35] unjustified. present our treatment of losses and dephasing at the interferometer ports. Consider a single photon entering the i-th mode of an m-mode interferometer. The monochromatic photonic creation and annihilation operators acting on the i-th and the j-th ports obey the canonical commutation relation § for positive real frequencies ω 1 , ω 2 . The state of a single photon entering the i-th mode is where f i (ω) is the normalized square integrable spectral function, |0 is the m-mode vacuum state. The state of two photons entering modes i and j = i of the interferometer is with exchange symmetry holding if f i (ω) = f j (ω). One-and two-photon states are transformed into superpositions of one-and of two-photon states respectively under the action of the linear interferometer. We treat linear interferometers as unitary quantum channels acting on the state of the incoming light. The interferometer transforms the photonic creation and annihilation operators according to and its complex conjugate, where V (ω) is the transformation matrix of the interferometer. Photon-number conservation imposes unitarity of the transformation matrix V (ω) for all real ω. In general, the elements {V ij (ω)} of the transformation matrix depend on the frequency of transmitted light. We assume that the spectral functions of the incoming light are narrow compared to frequencies over which the entries {V ij } change noticeably and thus treat V to be frequency-independent. If only Fock states are incident at the interferometer and only photon-numbercounting detection is performed on the outgoing light, then the measurement outcomes are invariant under phase shifts at each input and output port. That is, interferometer V = D 1 V D † 2 produces the same measurement outcome as V for any diagonal unitary matrices D 1 and D 2 . Mathematically, if D 1 , D 2 are diagonal unitary matrices, then is an equivalence relation. Members of the same equivalence class defined by this equivalence relation produce the same number-counting measurement outcomes on receiving Fock-state inputs. § Two monochromatic photons are distinguishable based on the ports that they occupy and on their respective frequencies ω 1 and ω 2 . U U lossy Figure 1: Schematic diagram of the interferometer. U effects a unitary transformation on a multimode state of light. The dotted lines represent the couplings of the interferometer with light sources and detectors. The beam splitters at the input and output modes model the linear losses because of imperfect coupling and detector inefficiency. The vacuum input to these beam splitters is not shown. One of the beam splitter outputs enters the interferometer while the other one is lost. The triangles represent the random dephasing at the input and output ports. The dashed box labelled U lossy represents the combined effect of the dephasing, the losses and the unitary interferometer. The constraints θ i1 ≡ 0, θ 1i ≡ 0 ∀i ∈ {1, 2, . . . , m} on the input and output phases of the transformation matrix are obeyed in the following parameterization of U Thus, the values {λ i }, {α ij }, {θ ij }, {µ j } completely parametrize the class representative matrix U . The input and output ports of the interferometer are amenable to time-dependent linear loss and dephasing. 
We model losses using parameters ν j and κ i , which are the respective probabilities of transmission at the input mode j and output mode i. Dephasing is modelled using parameters ξ j and φ i , which are the arbitrary multiplicative phases at the input and output ports. Hence, the actual transformation effected by the interferometer is given by the matrix U lossy , which has matrix elements Figure 1 depicts the relation between the representative matrix U and the actual transformation U lossy ij that is effected by the interferometer. This completes our parameterization of the linear optical interferometer. Our characterization procedure employs one-and two-photon inputs to estimate the values of parameters {λ i }, {α ij }, {θ ij }, {µ j } of (9). In the next subsection, we recall the expectation values of measurements performed on interferometer outputs when one-and two-photon states are incident at the input ports. Figure 2: Schematic diagram of single-photon counting at the output of an interferometer when single photons are incident at one input port. The star symbol represents a source of single-photon pairs. Single photons are incident at one of the input ports while vacuum state is input to the remaining input ports (not shown in figure). The semicircles at the output ports represent single-photon detectors and the circles with the included # represent the photon-counting logic connected to the detectors. One-and two-photon inputs to linear optical interferometer Our characterization procedure employs single-photon counting to estimate the amplitudes {α ij } of the representative matrix U entries. The arguments {θ ij } of U are estimated using two-photon coincidence counts. In this subsection, we give expressions for one-and two-photon transmission probabilities, which are employed in our characterization procedure (Section 3). We first consider the case of single-photon transmission. The interferometer transforms the single-photon input state (2) to the state at the output ports according to (9). A photon is detected at the i-th output port with a probability when a single-photon is incident on the j-th input port. Whereas the values of {α ij } are estimated using single photon counting, {θ ij } values are estimated using two-photon coincidence measurement. We now present probabilities of detecting two-photon coincidence at the interferometer outputs when controllably delayed pairs of photon are incident at the input ports. If a controllably delayed photon pair is incident at input ports j and j , then the probability C ii jj (τ ) of coincidence measurement at detectors placed at output ports i and i is On substituting according to (7), we obtain [40] where τ is the time delay between the two photons, f j (ω), f j (ω) describe the spectrum of light just before it enters the detectors and γ is the mode-matching parameter, which we described in the remainder of this section. Two-photon coincidence probabilities (12) depend on the mode matching in the source field. Spatial and polarization mode mismatch is quantified by the mode-matching parameter γ [40]. Perfectly indistinguishable light sources, such as light from a singlemode fibre, have relative mode matching γ = 1 whereas γ = 0 indicates that the sources are completely distinguishable. Figure 4 depicts how imperfect mode matching, i.e., γ < 1, alters the observed two-photon coincidence counts. 
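For intuition about the role of γ, the sketch below evaluates the familiar partial-distinguishability (Hong-Ou-Mandel-type) coincidence expression for a 2×2 (sub)matrix, assuming identical Gaussian spectra and a real balanced beam splitter rather than the paper's parameterization. It is an assumption-laden stand-in for equation (12), which instead integrates the measured spectra numerically.

```python
# Illustrative Hong-Ou-Mandel-type coincidence curve for a 2x2 (sub)matrix with
# identical Gaussian spectra; gamma < 1 reduces the interference term, i.e. the
# visibility of the dip. Simplified stand-in for the full spectral integral.
import numpy as np

def coincidence(tau, U, gamma, sigma_omega):
    """C(tau) up to a multiplicative factor, for photons into ports (0, 1) and
    coincidence detection at output ports (0, 1) of the 2x2 matrix U."""
    a = U[0, 0] * U[1, 1]          # amplitude: photon 0 -> out 0, photon 1 -> out 1
    b = U[0, 1] * U[1, 0]          # exchanged amplitude
    overlap = np.exp(-(sigma_omega * tau) ** 2)   # Gaussian temporal overlap
    return abs(a) ** 2 + abs(b) ** 2 + 2 * gamma * (a * np.conj(b)).real * overlap

bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # balanced beam splitter
tau = np.linspace(-3e-12, 3e-12, 7)               # delays in seconds
for g in (1.0, 0.7):
    print(g, np.round(coincidence(tau, bs, g, sigma_omega=1e12), 3))
```

At γ = 1 the balanced beam splitter gives the full dip (zero coincidences at zero delay); γ < 1 raises the dip floor, which is exactly the effect the calibration step described next is designed to measure and correct.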
Our calibration procedure estimates and accounts for imperfect mode matching, which is assumed to be constant over the runtime of the characterization experiment. The calculation of the expected coincidence probabilities as a function of the time delay between the photons is detailed in Algorithm 1. The next section describes how single-photon transmission probabilities (10) and two-photon coincidence probabilities (12) are used for characterizing the linear optical interferometer. U Figure 3: Schematic diagram for coincidence measurement the interferometer output when single-photon pairs are incident on two different input ports of an interferometer. The star symbol represents a source of single-photon pairs and the semicircles at the output ports represent single-photon detectors. The coincidence logic, which is depicted by ⊗, counts two-photon coincidence events at the detectors. Characterization of linear optical interferometer In this section, we describe our procedure to characterize linear optical interferometers. The outline of this section is as follows. Subsection 3.1 describes the experimental data required by our characterization procedure. This experimental data are processed by various algorithms to determine the transformation matrix (8). The algorithm to determine the amplitudes {α ij } of the transformation-matrix elements is presented in Subsection 3.2. In Subsection 3.3, we describe the calibration of the source field by determining the mode-matching parameter γ. The estimation of {θ ij } using two-photon interference is detailed in Subsection 3.4. Maximum-likelihood estimation is employed to find the unitary matrix U that best fits the calculated {α ij }, {θ ij } values and serves as the representative matrix (8). We discuss the calculation of the best-fit unitary representative matrix in Subsection 3.5. Experimental procedure and inputs to algorithms Our characterization procedure relies on measuring (i) the spectral function f j of the source light, (ii) single-photon detection counts, (iii) two-photon coincidence counts from a beam splitter and (iv) two-photon coincidence counts from the interferometer. The measurement data constitute the inputs to our algorithms, which then yield the representative matrix. Before presenting the algorithms, we detail the experimental procedure and the inputs received by the algorithm in this subsection. We characterize the spectral function f (ω i ) of the incoming light for a discrete set Ω = {ω 1 , ω 2 , . . . , ω k } of frequencies. The integer k of frequencies at which the spectral function is characterized is commonly equal to the ratio of the bandwidth to the frequency step of the characterization device. The characterized spectral function f (ω i ) is used to calculate the coincidence probabilities as detailed in Algorithm 1. Algorithm 1 Coincidence: Calculates the expected coincidence rate for twophoton interference for a given 2 × 2 submatrix of an arbitrary SU(m) transformation. Input: Frequencies at which f 1 , f 2 are given. Time delay values. Mode-matching parameter of photon source. Output: • C : T → R + Two-photon coincidence probabilities correct up to multiplicative factor. for τ in T do 3: Numerically integrate RHS of (12) over ω i , ω j with κ i = κ i = ν j = ν j = 1. return C 6: end procedure The amplitudes {α ij } are determined by impinging single photons at the interferometer and counting single-photon detections at the outputs. 
Single-photon counting is repeated multiple (B ∈ Z + ) times in order to estimate the precision of the obtained {α ij } values. Specifically, the number of single-photon detection events are counted at all m output ports {i} for single photons impinged at the j-th input ports in the b j -th repetition. The counting is then performed for each of the input ports j ∈ {1, . . . , m} of the interferometer. Algorithm 2 uses N ijb j , b j ∈ {1, . . . , B} values to estimate α ij and the standard deviation of the estimate. The experimental setup for {α ij } measurement is depicted in Figure 2. Arguments {θ ij } are calculated by fitting curves of measured coincidence counts to curves calculated using measured spectra according to (12). Appendix B elucidates the inputs and outputs of the curve-fitting procedure, such as the Levenberg-Marquardt algorithm [41,42], employed by our algorithms. Before calculating {θ ij }, we calibrate the source field for imperfect mode matching by measuring coincidence counts on a beam splitter of known reflectivity. Controllably delayed single-photon pairs are incident at the two input ports of the beam splitter and coincidence counting is performed on the light exiting from its two output ports. Algorithm 3 details the estimation of γ using coincidence counts C cal (τ ) for time delay τ between the incoming photons. The absolute values and the signs of the arguments {θ ij ∈ (−π, π]} are calculated separately. To estimate the absolute values {|θ ij |} of the arguments, pairs of single photons are incident at two input ports 1 and j ∈ {2, . . . , m} and coincidence measurement is performed at two output ports 1 and i ∈ {2, . . . , m}. The choice of the input and output ports labelled by index 1 is arbitrary. The signs of the arguments are estimated using an additional (m − 1) 2 coincidence measurements. Algorithm 6 details the choice of input and output ports for estimating {sgn θ ij }. A schematic diagram of the experimental setup for {θ ij } estimation is presented in Figure 3. Single-photon transmission counts to estimate {α ij } (Algorithm 2) Now we present our procedure to estimate {α ij } values using single-photon counting. Single-photon transmission probabilities are connected to the amplitudes {α ij } according to the relation The amplitudes {α ij } are determined by estimating transmission probabilities. The probabilities P 11 , P i1 , P 1j , P ij of single-photon detection at output ports 1, i when single photons are incident at input ports 1, j are expresses in terms of the α ij values according to The probabilities P 11 , P i1 , P 1j , P ij are estimated by counting transmitted photons. The definition (8) of α ij implies that α 11 = α i1 = α 1j = 1. Hence, the values of α ij are connected to the single-photon transmission probabilities according to which is independent of the losses at the input and the output ports. The transmission probabilities P ij are estimated by counting transmitted photons as follows. The estimated values of {α ij } are random variables that are amenable to random error from under-sampling and experimental imperfections. Thus, data collection is repeated multiple times. For accurate estimation of α ij and its standard deviation δα ij , the number B of repetitions is chosen such that the standard deviation of Algorithm 2 AmplitudeEstimation: Uses single-photon detection counts to calculate the amplitudes of the complex entries of the transformation matrix.• represents our estimate of •. 
Input: • m ∈ Z + , Number of modes of interferometer. Number of times single-photon counting is repeated . Output: The probabilities P ij are estimated by counting single-photon detection events. Suppose N ijb j photons are transmitted from input port j to the detector at output port i when N b j photons are incident and b j ∈ {1, . . . , B}. For large enough B, the transmission probability converges according to Likewise, the amplitudes {α ij } are estimated by averaging the single-photon detection counts according to The estimate of α ij relies on single-photon counts measured by impinging photons at the first input port repeatedly (repetition index b 1 ∈ {1, . . . , B}) and independently at the j-th input port (with repetitions labelled by a different index b j ∈ {1, . . . , B}). Henceforth, we represent our estimate of any parameter • by•. The estimatẽ α ij calculated using (18) is independent of N b j and thus resistant to variations in the incident-photon number N b j over different input modes j and different repetitions b j . Thus, our estimates {α ij } are accurate in the realistic case of fluctuating light-source strength and coupling efficiencies. Finally, the standard deviations σ(α ij ) of our estimates are calculated according to which converges for a large enough B. In line with standard nomenclature, we refer to these standard deviations as error bars. Algorithm 2 details the estimation of {α ij } and error bars on the obtained estimates. Calibration to estimate mode-matching parameter γ (Algorithm 3) In this subsection, we describe the procedure to calibrate our light sources for imperfect mode matching. The mode-matching parameter γ is estimated using one-and two-photon interference on an arbitrary beam splitter. First, the reflectivity of the beam splitter is determined using single-photon counting [33]. Next, controllably delayed photon pairs are incident at the beam splitter inputs and coincidence counting is performed on the beam splitter output . We introduce a curve-fitting procedure to estimate the value of γ such that (12) best fits the measured coincidence counts. The beam-splitter reflectivity, which is denoted by cos ϑ, is estimated as follows. A beam splitter of reflectivity cos ϑ effects the 2 × 2 transformation U bs = cos ϑ i sin ϑ i sin ϑ cos ϑ. which is in the form of (8) with α 22 def = cot 2 ϑ. The value of α 22 is estimated using singlephoton counting as described in Algorithm 2. The estimated beam-splitter reflectivity is The error bar on cosθ is estimated by repeating the photon counting along the lines of Algorithm 2. Algorithm 3 Calibration Calculates the mode-matching parameter γ of sourcefield using a beam splitter of known reflectivity. Input: Frequencies at which f 1 , f 2 are given. 2: A ← {cos ϑ, sin ϑ, sin ϑ, cos ϑ} Beamsplitter of reflectivity R (20) 3: Beamsplitter of reflectivity R (20) 4: Least-squares curve fitting to obtain the value of γ that minimizes . The argument 1/C cal (τ ) is the weight function [44] that accounts for experimental noise, which is assumed to be proportional to C(τ ). Ignore values of τ at which C(τ ) = 1. Appendix B details the choice of initial guesses to the algorithm. 6: end procedure Next we estimate γ using two-photon coincidence counting. Controllably delayed pairs of photons are incident at the two input ports of the beam splitter. Coincidence measurement is performed at the output ports for different values of time delay between the two photons. 
A curve-fitting algorithm is employed to find the best-fit value of γ, i.e., the valueγ that minimizes the squared sum of residues between the measured counts and the coincidence counts expected from (12) for the beam splitter matrix (20). Algorithm 3 details the calculations ofγ, which is used to estimate {θ ij } values accurately. 3.4. Two-photon interference to estimate {θ ij } (Algorithms 4-6) In this subsection, we describe our procedure to estimate the arguments {θ ij } of the representative matrix U (8). Our procedure requires the measurement of coincidence counts for 2(m − 1) 2 different choices of input and output ports. Of these measurements, (m − 1) 2 are used to estimate the absolute values {|θ ij |} of the arguments and the remaining (m − 1) 2 are used to estimate the signs {sgn θ ij }. The absolute values {|θ ij |} are estimated as follows. Single-photon pairs are incident at input ports 1 and j and coincidence measurements are performed at output ports 1 and i for i, j ∈ {2, . . . , m}. The state (3) of a photon pair is transformed under the action of the 2 × 2 submatrix of U labelled by the rows 1 and i and columns 1 and j. The probability of detecting a coincidence at the output ports 1, i is which is obtained by setting i = j = 1 in (12). The measured coincidence counts are used to estimate the value of |θ ij | as follows. The shape of the coincidence-versus-τ curve (23) depends on the values of α ij and θ ij . The shape does not depend on the parameters κ 1 , κ i , λ 1 , λ i , µ 1 , µ j , ν 1 , ν j , which lead to a constant multiplicative factor to the coincidence expression. Furthermore, the shape is unchanged under the transformation θ ij → −θ ij for θ ij ∈ (−π, π] if the spectral functions are identical. Hence, |θ ij | can be estimated using the shape of the coincidence function (23) and the values {α ij } estimated using Algorithm 2. A curve-fitting algorithm estimates the value |θ ij | ∈ [0, π] that best fits the measured coincidence counts. The calculation of {|θ ij |} is detailed in Algorithm 4. Our procedure computes the signs by using an additional (m − 1) 2 coincidence measurements. First we arbitrarily set θ 22 as positive sgn θ 22 = 1 (24) because of the invariance of one-and two-photon statistics under complex conjugation U → U * [33]. The signs of the remaining arguments {θ ij } are set using the coincidence Expectation values of Fock-state projection measurement with Fock-state inputs are unchanged under U → U * if the spectral functions are equal f 1 (ω) = f 2 (ω). Otherwise, the sign of −α 22 can be ascertained using the difference in the τ > 0 and τ < 0 coincidence counts in C 2,2,1,1 (τ ). Algorithm 4 Argument2Port: Calculates the unknown complex argument in the entries of a 2 × 2 transformation using a two-photon coincidence curve. Input: Three complex arguments of submatrix. • γ Mode-matching parameter of photon source. Output: • |θ ij | Estimated magnitude of the unknown complex argument. Set of three known phases and one unknown phase. 3: . 5: end procedure counts between output ports {i, i } when photon pairs are incident at input ports {j, j } for a suitable choice of {i , j } as we describe below. The coincidence probability at the output ports i, i is where Curve fitting is employed to estimate the value of β ii jj that best fits the measured coincidence counts. The estimated value of β ii jj is employed by Algorithm 5 to ascertain the sign of θ ij . Algorithm 5 relies on the identity and on known values of to ascertain the sign of θ ij . 
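The sign-inference logic that Algorithm 5 applies to the fitted β values can be illustrated with a small helper. The β± identity is the one quoted in Appendix A, β±_{ii′jj′} = |θ_{i′j′} − θ_{ij′} − θ_{i′j} ± |θ_{ij}||; the argument names below are illustrative and assume the three other phases are already known.

```python
import numpy as np

def infer_sign(beta_measured, theta_ipjp, theta_ijp, theta_ipj, abs_theta_ij):
    """Infer sgn(theta_ij) from a curve-fitted beta_{ii'jj'} value.

    Evaluates beta_plus/minus = | theta_i'j' - theta_ij' - theta_i'j +/- |theta_ij| |
    and returns the sign whose predicted beta is closer to the fitted value,
    mirroring the comparison performed by Algorithm 5.
    """
    base = theta_ipjp - theta_ijp - theta_ipj
    beta_plus = abs(base + abs_theta_ij)
    beta_minus = abs(base - abs_theta_ij)
    return +1 if abs(beta_measured - beta_plus) <= abs(beta_measured - beta_minus) else -1

# Example with already-known phases (radians) and a fitted beta of 2.0:
print(infer_sign(beta_measured=2.0,
                 theta_ipjp=0.9, theta_ijp=-0.4, theta_ipj=0.5, abs_theta_ij=1.2))  # +1
```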
If the sign of θ ij is positive, then β ii jj = β + ii jj and (27) returns a positive sgn θ ij . Otherwise, β ii jj = β − ii jj , in which case (27) gives a negative sign. In summary, sgn θ ij is determined using the values of β ii jj , which are estimated by curve fitting, and of β ± ii jj , which are computed using the signs and amplitudes of θ ij , θ i j , θ i j . Algorithms 4-6 detail the step-by-step procedure to determine the absolute values and the signs of {θ ij }. For certain interferometers U , the ordering of indices ii jj depicted in Figure 5 can lead to instability in the characterization procedure. Appendix A elucidates on this instability and presents strategies to counter the instability. This completes our procedure to characterize the matrix A for representative matrix U = LAM . In the next subsection, we present a procedure to estimate the matrix that is most likely for the characterized matrix A. The experimentally determinedà is different from the actual A because of random and systematic error in the experiment, where we denote the experimentally determined (33) forà (rather than A) differ from the actual L and M respectively. The estimatedŨ =LÃM is thus a non-unitary matrix and is not equal to U in general. Furthermore,Ũ is a random matrix, which depends on the random errors in the one-and two-photon experimental data. We employ maximum-likelihood estimation to calculate the unitary matrix W that best fits the collected data. First, bootstrapping techniques are used to estimate the probability-density function (pdf) of the entries of the random matrixŨ [46,47]. Next, standard methods in maximum-likelihood estimation [48] are employed to find the unitary matrix W . Maximum-likelihood estimation simplifies under the assumption that the error onŨ is a Gaussian random matrix ensemble, i.e, that the matrix entries Ũ ij are complex independent and identically distributed (iid) Gaussian random variables centred at the correct matrix entries. In this case, the most likely unitary matrix W is the one that minimizes the Frobenius distance ¶ fromŨ [49]. The unitary matrix minimizes the Frobenius-norm distance fromŨ [50]. Thus, if the random errors {U ij −Ũ ij } in the matrix elements are iid Gaussian random variables with mean zero, then W is the best-fit unitary matrix. Figure 6 is a depiction of the actual, the estimated and the most likely transformation matrices. Algorithm 7 computes W . This completes our procedure to estimate the most-likely unitary matrix W that represents the linear optical interferometer. In the next section, we present a procedure to estimate the error bars on the entries of the estimated representative matrix W accurately. Figure 6: A depiction of the error in reconstruction of the interferometer matrix U . The matrix U represents the unitary transformation effected by the interferometer.Ũ is the complex-valued transformation matrix returned by the reconstruction procedure. Algorithm 7 returns W , which represents the unitary matrix that is most likely to have generated the data collected in the characterization experiment. ¶ The Frobenius norm of a matrix A m×m is defined as SU(m) The Frobenius-norm distance between matrices U and V is defined as and is a symmetric, positive-definite and subadditive distance function on the set of matrices. Bootstrapping to estimate error bars (Algorithm 8) In this section, we present a procedure to estimate the error bars on the matrix entries {W ij } of the characterized representative matrix W . 
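Before detailing the bootstrap, here is a minimal sketch of the maximum-likelihood projection step described above (Algorithm 7): under the iid-Gaussian-error assumption, the most likely unitary is the Frobenius-nearest unitary, i.e. the unitary factor of the polar decomposition of Ũ, which can be read off an SVD.

```python
import numpy as np

def closest_unitary(U_tilde):
    """Return the unitary matrix W that is closest to U_tilde in Frobenius norm.

    If the reconstruction errors in U_tilde are iid Gaussian, this W is the
    maximum-likelihood unitary described in the text.  The minimizer is the
    unitary factor of the polar decomposition, obtained here from an SVD:
    U_tilde = V diag(s) X^dagger  =>  W = V X^dagger.
    """
    V, _, Xh = np.linalg.svd(U_tilde)
    return V @ Xh

# Example: perturb a 3x3 unitary and project the result back onto the unitaries.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
U_tilde = Q + 0.01 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
W = closest_unitary(U_tilde)
print(np.allclose(W.conj().T @ W, np.eye(3)))                       # True: W is unitary
print(np.linalg.norm(W - U_tilde) <= np.linalg.norm(Q - U_tilde))   # True: W is the closest unitary
```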
The entries {W ij } computed by Algorithms 1-7 are random variables because of random error in experiments. Obtaining accurate error bars on these random variables is important for using characterized linear optical interferometers in quantum computation and communication. Current procedures compute error bars under the assumption that Poissonian shot noise is the only source of error in experiment [21,23]. We choose to employ bootstrapping on the data determine error bars [46,47,[51][52][53]. Monte-Carlo simulation is widely used but this technique is not applicable here because the Poissonian shot noise assumption is not reliable given the presence of other sources of error some of which are not understood. Bootstrapping is preferred because the nature of the error need not be characterized and instead relies on random sampling with replacement from the measured data. Bootstrapping can be employed toyield estimators such as bias, variance and error bars. Algorithm 8 calculates the error bars σ(W ij ) using estimates of the {W ij } pdf's, which are obtained using bootstrapping as follows. The algorithm simulates N characterization experiments using the one-and two-photon data, i.e., the inputs to Algorithms 1-7. In each of the N rounds, the one-and two-photon data are randomly sampled with replacement (resampled) to generate simulated data. The data thus simulated are given as inputs to Algorithms 1-7, which return the simulated representative matrices The pdf's of the simulated-matrix entries {W b ij : b ∈ {1, . . . , N }} converge to the pdf's of the respective elements {W ij } for large enough N [54,55]. The simulated data are obtained in each round by resampling from the one-and two-photon experimental data as follows. Single-photon detection counts are simulated by resampling from the set {N ijb j : b j ∈ {1, . . . , B}} of experimental detection counts (Line 17 of Algorithm 8). Two-photon coincidence counts are simulated by shuffling residuals obtained on curve-fitting experimental data. Specifically, the algorithm (Line 12) resamples from the set of residuals obtained by fitting experimentally measured coincidence counts to function C ii jj (τ ) (12). The resampled residuals are added to the fitted curve to generate the simulated data (Line 14) + . Algorithms 1-7 are used to obtain the simulated elements + The pdf of the residuals is different for different values of τ . We assume that the pdf's for different τ are of the same functional form, albeit with different widths. The distribution of the residuals for different values of τ are determined using standard methods for non-parametric estimation of residual distribution [56,57]. Algorithm 8 normalizes the residuals before resampling from the residual distribution. W ij of the representative matrix. Finally, the error bars on the {W ij } are estimated by the standard deviation of the pdf of the elements. This completes the characterization of representative matrix W and the error bars on its elements. The next section details a procedure for the scattershot characterization of the interferometer to reduce the experimental time required for characterizing a given interferometer. Scattershot characterization for reduction in experimental time In this section, we present a scattershot-based characterization approach to effect a reduction in the characterization time [58,59]. 
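Before describing the scattershot setup, a minimal sketch of the bootstrap idea outlined above. The full procedure resamples the one- and two-photon data and reruns Algorithms 1-7 in every round; in the sketch below a simple ratio estimator stands in for that pipeline, purely to illustrate resampling with replacement and reading off the error bar from the bootstrap spread.

```python
import numpy as np

def bootstrap_error_bar(counts_j, counts_1, n_boot=1000, seed=0):
    """Minimal bootstrap sketch for a single estimated quantity.

    counts_j and counts_1 are the B repeated single-photon counts that enter
    some estimator (here a plain ratio stands in for the full reconstruction).
    Each bootstrap round resamples the B repetitions with replacement and
    re-evaluates the estimator; the error bar is the standard deviation of the
    resulting bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    counts_j = np.asarray(counts_j, float)
    counts_1 = np.asarray(counts_1, float)
    B = len(counts_j)

    def estimator(cj, c1):
        return np.mean(cj) / np.mean(c1)   # stand-in for Algorithms 1-7

    boot = np.empty(n_boot)
    for n in range(n_boot):
        idx_j = rng.integers(0, B, size=B)   # resample with replacement
        idx_1 = rng.integers(0, B, size=B)
        boot[n] = estimator(counts_j[idx_j], counts_1[idx_1])
    return estimator(counts_j, counts_1), boot.std(ddof=1)

value, sigma = bootstrap_error_bar([510, 492, 530, 488, 505],
                                   [1010, 995, 1022, 987, 1003])
print(f"{value:.3f} +/- {sigma:.3f}")
```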
Our scattershot approach reduces the time required to characterize an m-mode interferometer from O(m⁴) to O(m²) with constant error in the interferometer-matrix entries. The straightforward approach of characterization involves coupling and decoupling light sources successively for each one- and two-photon measurement. In contrast, the scattershot characterization relies on coupling heralded nondeterministic single-photon sources to each of the input ports of the interferometer and detectors to each of the output ports. Controllable time delays are introduced at two input ports, which are labelled as the first and second ports. All sources and detectors are switched on and the controllable time-delay values are changed first for the first port and then for the second port. Single-photon data are collected by selecting the events in which exactly one of the heralding detectors and exactly one of the output detectors register a photon simultaneously. Two-photon coincidence events at the outputs are counted when two heralding detectors register photons. The controllable time delays introduced at the first and second input ports ensure that each of the 2(m − 1)² coincidence measurements is performed. Note that our characterization procedure (Algorithms 1-8) yields accurate estimates of interferometer parameters even when photon sources with different spectral functions are used. In summary, the required characterization data are collected by selectively recording one- and two-photon events. The setup for the scattershot characterization of an interferometer is depicted in Figure 7. Now we quantify the experimental time required in the characterization of a linear optical interferometer. Our characterization procedure requires Bm² single-photon counting measurements and 2(m − 1)² coincidence-counting measurements to characterize an m-mode interferometer. We estimate the time required for each of these measurements such that the random errors in the estimated {α ij } and {θ ij } values are held fixed; the relevant single-photon transmission and two-photon coincidence probabilities decrease with the number of modes roughly as 1/m and 1/m², respectively. More photons need to be incident at the interferometer input ports to offset this decrease in transmission probabilities. Therefore, maintaining a constant standard deviation in the {α ij } and {θ ij } measurements requires O(m) and O(m²) scaling respectively in the number of incident photons, which amounts to an overall O(m⁴) scaling in the experimental time requirement. Scattershot characterization allows (m − 1)² different sets of the one- and two-photon data to be collected in parallel, thereby reducing the time required to characterize the interferometer by a factor of (m − 1)². The overall time required for the characterization decreases from O(m⁴) to O(m²) if the scattershot approach is employed. Our analysis of scattershot characterization assumes that the coupling losses are small and that weak single-photon sources are used, i.e., that the probability of multiphoton emissions from the heralded sources is small as compared to single-photon emission probabilities. These assumptions are expected to hold for on-chip implementations of linear optics that have integrated single-photon sources and detectors. Light sources used at each input port in our scattershot-based characterization procedure generally differ spectrally.
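The event selection underlying scattershot data collection can be sketched as follows. The record format (sets of heralding and output detectors firing in a common time window) is an illustrative assumption, not a format specified in the text.

```python
from collections import Counter

def select_events(records):
    """Sort raw scattershot detection records into one- and two-photon data.

    `records` is assumed to be an iterable of (heralds, outputs) pairs, where
    `heralds` is the set of heralding detectors that fired and `outputs` is the
    set of interferometer output detectors that fired in the same time window.
    """
    singles = Counter()       # keys (j, i): input port j -> output port i
    coincidences = Counter()  # keys (j, j', i, i')
    for heralds, outputs in records:
        if len(heralds) == 1 and len(outputs) == 1:
            singles[(next(iter(heralds)), next(iter(outputs)))] += 1
        elif len(heralds) == 2 and len(outputs) == 2:
            j, jp = sorted(heralds)
            i, ip = sorted(outputs)
            coincidences[(j, jp, i, ip)] += 1
        # events with any other multiplicity are discarded
    return singles, coincidences

singles, pairs = select_events([({1}, {2}), ({1, 3}, {0, 2}), ({2}, {2})])
print(singles)   # Counter({(1, 2): 1, (2, 2): 1})
print(pairs)     # Counter({(1, 3, 0, 2): 1})
```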
Our characterization procedure remains accurate despite this spectral difference because we measure the source-field spectra and use these data in the curve-fitting procedure. We have developed the scattershot approach, which has advantages and disadvantages but, on balance, is a superior experimental approach to consecutive measurement. The advantage is that the time requirement for characterization is reduced by a factor that scales as O(m²). The disadvantage is the overhead of requiring one source at each input port and one detector at each output port. The disadvantage is not daunting because these requirements are commensurate with other active investigations of QIP such as LOQC and scattershot BosonSampling. In fact, state-of-the-art implementations [59] meet our increased requirements for scattershot characterization. Summary of procedure and discussions In this section, we summarize our characterization procedure for a less formally-oriented audience. We describe the processing of the collected experimental data by the various algorithms presented in Section 3. We compare our procedure with the existing procedure for the characterization of linear optics using one- and two-photon interference [33]. We provide numerical evidence that our characterization procedure promises enhanced accuracy and precision even in the presence of shot noise and mode mismatch. The experimental data required by our procedure to characterize an m-mode interferometer include the following one- and two-photon measurements. The number N ijb j (13) of single-photon detection events is counted at the i-th output port when single photons are incident at the j-th input port. This single-photon counting is repeated B times for each of the input ports and output ports, where B is chosen such that the cumulants of the set {N ijb j : b j ∈ {1, . . . , B}} converge. The single-photon counts {N ijb j } are received by Algorithm 2, which returns the {α ij } (8) estimates using Eq. (18). The spectral function f j (ω) (2) of the light incident at each input port j is measured. This function is used by Algorithm 1 to calculate the expected two-photon coincidence curves using Eq. (12). Fitting experimental data to these coincidence curves yields an accurate estimate of the mode-matching parameter during calibration and of the arguments {θ ij } in the argument-estimation procedure. Thus, the spectral function f j (ω) serves as an input to the algorithms for the estimation of the mode-matching parameter and of the arguments {θ ij } (Algorithms 3-6). The mode-matching parameter γ is estimated by performing coincidence measurement on a beam splitter that is separate from the interferometer but is constructed using the same material. First, we use single-photon data to estimate the reflectivity cos ϑ of the beam splitter according to Eq. (21). Imperfect mode matching changes the shape of the coincidence curve, and we find γ by comparing the shapes of (i) the curve expected for reflectivity cos ϑ and (ii) the curve obtained experimentally. The estimated beam-splitter reflectivity, the measured spectra and the coincidence counts are received as inputs by Algorithm 3, which returns an estimate of γ. Algorithm 6 uses two-photon coincidence counts to estimate the arguments {θ ij }. Coincidence counts are measured for the input ports j, j′ and output ports i, i′ over the 2(m − 1)² choices of ports described above. Bootstrapping is employed to test the goodness of fit between the experimental curve and the expected curves [61].
Experiments [11,36] can employ bootstrapping instead of the incorrect χ 2 -confidence measure to test if the data are consistent with quantum predictions or with the classical theory. Finally, we recommend a scattershot approach for reducing the experimental time required to characterize interferometers. The approach involves coupling heralded nondeterministic single-photon sources at each of the input ports and single-photon detectors at each of the output ports. All the sources and the detectors are switched on in parallel. Single-photon counts are recorded selectively as two-photon coincidences between the heralding detectors and the output detectors, while two-photon events are recorded when two heralding detectors and two output detectors record photons. Controllable time delays are introduced at the first and second input ports so coincidences between each of the 2(m − 1) 2 choices (39) of input and output ports are recorded. The scattershot approach reduces the experimental time required to characterize an m-mode interferometer from O (m 4 ) to O (m 2 ). Now we compare and contrast our procedure with the Laing-O'Brien procedure [33]. Our procedure is inspired by the Laing-O'Brien procedure in that it employs (i) a 'real-bordered' parameterization (8) of the representative matrix and modelling of linear losses at the interferometer ports, (ii) a ratio of single-photon data to estimate the complex amplitudes of the matrix elements and (iii) an iterative procedure that uses two-photon data to estimate the amplitudes of the complex arguments and to estimate the signs of the complex arguments. Our procedure differs from the Laing-O'Brien [33] procedure in that we use averaged value (18) Figure 8: The fitting of coincidence data to curves obtained from spectral functions using (12) and to Gaussian functions. Coincidence counts are simulated using experimentally measured spectra. Another advance in our method is the curve-fitting procedure for estimating complex arguments of interferometer matrices. The Laing-O'Brien procedure requires coincidencecurve visibilities to estimate complex arguments α ij . Whereas the Laing-O'Brien procedure recommends coincidence probabilities be measured at zero time delay and also at time delays large as compared to the temporal spread of the wave-packet, in practice, current implementations determine the visibilities by fitting experimental data to Gaussian curves [35,[63][64][65][66][67]. These implementations are flawed because source spectra differ from Gaussian in general. Our procedure is accurate because the data are fit to curves computed from spectral functions, rather than fitting to Gaussians. Figure 8 illustrates the distinction between fitting experimental coincidence counts to the coincidence function (12) simulated using spectra and fitting to Gaussian functions. Figure 9 demonstrates the increase in accuracy and precision of characterization by using the correct curve-fitting function. We introduce the calibration subroutine, which relies on the estimation of the mode mismatch in the source field. Spatial and polarization mode mismatch is not an issue of major concern in waveguide-based interferometers, which typically operate in the single-photon regime. In these interferometers, the calibration step of our procedure can be neglected without decreasing accuracy. The mode-mismatch parameter γ, which is an input of the curve-fitting procedure, is set to unity. 
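The difference between fitting to Gaussians and fitting to curves computed from measured spectra can be reproduced in a few lines. Since Eq. (12) is not reproduced in this excerpt, the sketch below uses the textbook two-photon coincidence expression for a 50:50 beam splitter with identical pure-state photons as a stand-in; the point is only that a non-Gaussian spectrum produces a non-Gaussian coincidence curve, which a Gaussian fit cannot follow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in coincidence model: C(tau) = (1 - |g(tau)|^2)/2, with g(tau) the
# Fourier transform of the normalized single-photon spectrum |f(omega)|^2.
omega = np.linspace(-6.0, 6.0, 2001)
d_omega = omega[1] - omega[0]
spectrum = np.exp(-(omega - 1.5) ** 2) + 0.7 * np.exp(-(omega + 1.5) ** 2)  # bimodal, non-Gaussian
spectrum /= spectrum.sum() * d_omega

tau = np.linspace(-6.0, 6.0, 301)
g = np.array([(spectrum * np.exp(1j * omega * t)).sum() * d_omega for t in tau])
C_true = 0.5 * (1.0 - np.abs(g) ** 2)

# A Gaussian dip, as used by many implementations, cannot reproduce the side
# lobes generated by the bimodal spectrum.
gauss_dip = lambda t, A, w, t0, c: c - A * np.exp(-((t - t0) / w) ** 2)
p, _ = curve_fit(gauss_dip, tau, C_true, p0=[0.5, 1.0, 0.0, 0.5])
print("rms residual of the Gaussian fit:",
      np.sqrt(np.mean((gauss_dip(tau, *p) - C_true) ** 2)))
```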
In the context of bulk optics, our calibration step ensures accuracy and precision if (i) γ is identified as the maximum-possible source overlap in the spatial and polarization degrees of freedom and (ii) the experimentalist adjusts the setup to maximize coincidence visibility for the calibrating beam splitter and for each choice of interferometer input ports. (Figure caption: One- and two-photon interference data were simulated for a five-channel interferometer using experimentally measured spectra and simulated Poissonian shot noise. Characterization was performed by fitting coincidence curves to Gaussians (red curve) and to the correct curves according to our procedure (blue curve). MATLAB code for the simulations depicted in this figure is available on GitHub [62].) Such an adjustment will ensure that the source overlap acquires its maximum-possible value γ in each of the coincidence-curve measurements. This maximum value is a property of the sources used and is independent of source alignment and focus, so it is expected to remain unchanged between different coincidence measurements. Figure 10 demonstrates the increase in accuracy and precision of characterization by using the calibration procedure. Other advances made in our characterization procedure over existing procedures include (i) a maximum-likelihood estimation approach to determine the unitary matrix that best fits the data, (ii) a bootstrapping-based procedure to obtain meaningful estimates of precision and (iii) a scattershot-based procedure to improve the experimental requirements of characterization. Conclusion In conclusion, we devise a one- and two-photon interference procedure to characterize any linear optical interferometer accurately and precisely. Our procedure provides an algorithmic method for recording experimental data and computing the representative transformation matrix with known error. The procedure accounts for systematic errors due to spatiotemporal mode mismatch in the source field by means of a calibration step and corrects these errors using an estimate of the mode-matching parameter. We measure the spectral function of the incoming light to achieve good fitting between the expected and measured coincidence counts, thereby achieving high precision in the characterized matrix elements. We introduce a scattershot approach to effect a reduction in the experimental requirements for the characterization of the interferometer. The error bars on the characterized parameters are estimated using bootstrapping statistics. Bootstrapping computes accurate error bars even when the form of the experimental error is unknown and is, thus, advantageous over the Monte Carlo method. Hence, our bootstrapping-based procedure for estimating error bars can replace the Monte Carlo method used in existing linear-optics characterization procedures. We open the possibility of applying bootstrapping statistics for the accurate estimation of error bars in photonic state and process tomography. Appendix A. Removal of instability in characterization procedure In this section, we describe an instability in our characterization procedure, which can yield large error in the {W ij } output for small error in the experimental data C exp ii jj (τ ) in the case of certain interferometers W . We present a strategy to circumvent this instability by means of collecting and processing additional experimental data. The instability in the characterization procedure arises because of an instability in the estimation of {sgn θ ij } (Algorithm 5).
Small error in the measured coincidence counts can lead to the wrong inference of sgn θ ij , which can lead to a large error W − U in the characterized matrix W . Recall that Algorithm 5 uses the identity ii jj (27) to determine the sign of the arguments, where β ± ii jj def = |θ i j −θ ij −θ i j ±|θ ij || and the values of β ii jj , θ i j , θ i j , θ ij , |θ ij | are estimated by curve fitting. Random and systematic error in measured coincidence counts can lead to estimate of variables β ii jj , θ i j , θ i j , θ ij , |θ ij | differing from their actual values. The estimation of sgn θ ij is unstable if the θ i j − θ i j − θ ij term (27) is close to 0 or π because, in this case, a small error in the β ii jj estimate can lead to an incorrect sgn θ ij estimate. In other words, the sign estimates are unstable if the values of are small compared to the error in our β ii jj , θ i j , θ i j , θ ij , |θ ij | estimates. We mitigate the sign-inference instability by making two modifications to our characterization procedure; the first modification removes instability from the signinference of the second row and second column elements whereas the second modification prevents incorrect inference of the remaining signs. The inference of {sgn θ i2 }, {sgn θ 2j } (Lines 14-17, Figures 5b, 5c) is unstable if is small as compared to the error in the β i2j2 , θ 22 , θ 2j , θ i2 , |θ ij | estimates. Hence, we relabel the interferometer ports such that θ 22 is as far away from 0 and π as possible. Specifically, after the amplitudes of the phases have been estimated (Line 8 of Algorithm 6), we choose i, j for which |θ ij − π/2| is minimum, and we swap the labels of input ports 2, j and output ports 2, i. We measure two-photon coincidence counts based on this new labelling and process it using Algorithm 6. The instability in the procedure for estimation of the {θ i2 }, {θ 2j } signs is removed as a result of the relabelling. The second modification is aimed at removing the instability in the remaining signs. The procedure estimates the remaining signs by using {C exp ii jj (τ )} values for i = j = 2. The estimation of θ ij is unstable if θ ref i2j2 is small as compared to the error in the β i2j2 , θ 22 , θ 2j , θ i2 , |θ ij | estimates. We make a heuristic choice of a threshold angle θ T that accounts for the error in these variables, and we reject any sgn θ ij inferred using θ ref i2j2 ≤ θ T . Additional two-photon coincidence counting is performed and employed to estimate these values of θ ij , as detailed in the following lines that can be added to the algorithm to remove the instability if θ r i2j2 < θ T then 3: Choose i = 1, i and j = 1, j such that |θ i j − θ ij − θ j j | is closest to π/2. 4: C exp ii jj (τ ) ← Coincidence counts for input ports j, j and output ports i, i . Appendix B. Curve-fitting subroutine Our characterization procedure employs curve fitting in Algorithm 3 to estimate the mode-matching parameter γ and in Algorithms 4-6 to estimate {θ ij } values. The curvefitting procedure determines those values of unknown parameters that maximize the fitting between experimental and expected coincidence data. In this section, we describe Figure B1: Simulated coincidence counts for output ports i, i and input ports j, j of interferometer with α ii = α i j = √ 3/4 and α ii = α ij = 1/4 and for different values of β ii jj . The value of β ii jj in each respective figure is (a) π, (b) 0, (c) π/3 and (d) 2π/3. 
The coincidence counts corresponding to τ = 0 and τ → ∞ are marked on each plot by C exp (0) and C exp (∞) respectively. the inputs and outputs of the curve-fitting subroutines. We present heuristics to compute good initial guesses of the fitted parameters. The curve-fitting subroutine receives as input (i) the choice of parameters to be fitted; (ii) the coincidence counts {C exp ii jj (τ )}; (iii) an objective function, which characterizes the least-square error between expected and experimental counts; and (iv) the initial guesses for each of the fitted parameters. The output of the curve-fitting subroutine is the set of parameter values that optimize the objective function. The first input to the subroutine is the choice of the parameters to be fit. The curve-fitting subroutine fits three parameters. One of these three (namely the modematching parameter γ in Algorithm 3 or the |θ ij | or β ii jj value in Algorithm 6) is related to the shape of the curve, whereas the other two are related to the ordinate scaling and the abscissa shift of the curve respectively. The ordinate scaling factor comprises the unknown losses {κ i , ν j }, transmission factors {λ i , µ j } and the incident photon-pair count. The horizontal shift factor accounts for the unknown zero of the time delay between the incident photons. The algorithm returns the values of the shape parameter, the abscissa shift and the ordinate scaling that best fit the given coincidence curves. The objective function quantifies the goodness of fit between the experimental data and the parameterized curve. We use a weighted sum τ ∈T w(τ )|C exp (τ ) − C (τ )| 2 (B.1) of squares between the experimental data and the fitted curve as the objective function [44] for weighs w(τ ). We assume that the pdfs of the residues are proportional to C exp (τ ) and we assign the weights to the squared sum of residues. In case the pdf's of the residuals for different values of τ is not known, standard methods for non-parametric estimation of residual distribution can be employed to estimate the pdf's [56,57]. Thus, the curve fitting algorithm returns those values of the fitting parameters that that minimize weighted sum of squared residues between experimental and fitted data. The curve-fitting procedure optimizes the fitness function over the domain of the fitting parameter values. Like other optimization procedures, the convergence of curve fitting is sensitive to the initial guesses of the fitting parameters. The following heuristics give good guesses for the three fitting parameters. We guess the ordinate scaling as the ratio C exp (∞) C ii jj (∞) (B.3) of the experimental coincidence counts to the coincidence probability C ii jj (∞) for large (compared to the temporal length of the photon) time-delay values. The γ value is guessed for Algorithm 3 as the ratio of the visibility of the experimental curve to the expected visibility in the curve. The initial guesses for ϑ ≡ |θ ij | and ϑ ≡ β ii jj are based on the known estimate of γ and the visibility V = 2γ cos 2 ϑ sin 2 ϑ cos 4 ϑ + sin 4 ϑ . (B.5) of the curve. As there are four kinds of curves (see Figure B1) possible for different values of the shape parameter (γ, |θ ij |, β ii j ), another approach is to perform curve fitting four times, each time with a value from the set π/4, 3π/4, 5π/4, 7π/4 of initial guesses and choose the fitted parameters that optimize the objective function. 
Finally, the initial value of the abscissa shift parameter is guessed such that the global maximum or minimum of the coincidence curve (whichever is further from the mean of the coincidence-count values over τ) lies at zero time delay. In summary, the curve-fitting procedure uses the measured coincidence counts, the objective function and the initial guesses to compute the best-fit parameters. This completes our description of the curve-fitting procedure and of the heuristics that can be employed to compute the initial guesses for the fitted parameters.
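As a concrete companion to this appendix, here is a minimal weighted curve-fitting sketch. The coincidence model is left as a user-supplied callable because Eq. (12) is not reproduced in this excerpt; the weighting and the three fitted parameters (shape, ordinate scaling, abscissa shift) follow the description above.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_coincidence_curve(tau, C_exp, model, p0):
    """Weighted least-squares fit of measured coincidences to an expected curve.

    `model(tau, shape)` should return the expected coincidence curve for the
    given shape parameter (gamma, |theta_ij| or beta_{ii'jj'} in the paper); the
    ordinate scaling and abscissa shift are applied by this wrapper.  Residuals
    are weighted by 1/C_exp, matching objective (B.1) with w = 1/C_exp, and
    zero-count points are dropped as recommended in Algorithm 3.
    """
    tau, C_exp = np.asarray(tau, float), np.asarray(C_exp, float)
    keep = C_exp > 0
    tau, C_exp = tau[keep], C_exp[keep]

    def residuals(p):
        shape, scale, shift = p
        return (C_exp - scale * model(tau - shift, shape)) / np.sqrt(C_exp)

    return least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt

# Toy usage with an assumed, purely illustrative dip-shaped model:
toy_model = lambda t, v: 1.0 - v * np.exp(-t ** 2)
tau = np.linspace(-3.0, 3.0, 61)
data = 800.0 * toy_model(tau - 0.2, 0.5) + np.random.default_rng(2).normal(0.0, 10.0, tau.size)
fit = fit_coincidence_curve(tau, data, toy_model, p0=[0.4, 700.0, 0.0])
print(fit.x)   # ~ [0.5, 800, 0.2]: shape, ordinate scaling, abscissa shift
```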
12,726
2015-08-02T00:00:00.000
[ "Physics" ]
Inequalities in the Use of Family Planning in Rural Nepal This paper explores inequalities in the use of modern family planning methods among married women of reproductive age (MWRA) in rural Nepal. Data from the 2012 Nepal Household Survey (HHS) were utilized, which employed a stratified, three-stage cluster design to obtain a representative sample of 9,016 households from rural Nepal. Within the sampled households, one woman of reproductive age was randomly selected to answer the survey questions related to reproductive health. Only four out of every ten rural MWRA were using a modern family planning method. Short-acting and permanent methods were most commonly used, and long-acting reversible contraceptives were the least likely to be used. Muslims were less likely to use family planning compared to other caste/ethnic groups. Usage was also lower among younger women (likely to be trying to delay or space births) than older women (likely to be trying to limit their family size). Less educated women were more likely to use permanent methods and less likely to use short-term methods. To increase the CPR, which has currently stalled, and continue to reduce the TFR, Nepal needs more focused efforts to increase family planning uptake in rural areas. The significant inequalities suggest that at-risk groups need additional targeting by demand and supply side interventions. Introduction In Nepal, the National Health Policy (1991), the Second Long-Term Health Plan , and the National Reproductive Health Strategy (1998) have all emphasized the need to improve equitable access to quality reproductive health services. Since 2010, it has been government policy to provide at least five different family planning methods at all levels of health facility from health post and above [1]; however, just 8% of health posts have met this target [2]. In the community, injectable contraceptives are available from Community Health Workers (CHWs), and Female Community Health Volunteers (FCHVs) undertake educational and promotional activities on family planning and distribute oral contraceptive pills and condoms [1]. Many barriers prevent the use of family planning and result in unplanned pregnancies [3]. These barriers are multifactorial, including both client-related factors such as a lack of education and exposure to media resulting in poor knowledge about family planning methods and services [3], low economic status [3], and concerns and experience of side effects [3] and health system factors such as poor coverage of health facilities [4], lack of outreach services [4], stockouts and poor method mix [3], limited providers and poor provider competence [4], and lack of advice and counselling [4]. Both client-and system-related barriers contribute to increased inequality in the utilization of family planning services. Between 1996 and 2006 the national contraceptive prevalence rate (CPR) increased by 69% in Nepal, from 26% in 1996 [5] to 44% in 2006 [6], but between 2006 and 2011 it stalled [7] (Figure 1). Similarly, the CPR in rural areas increased from 24% in 1996 [5] to 42% in 2006 [6] but then stalled between 2006 and 2011. A different pattern was observed in urban areas: the CPR increased from 45% in 1996 to 56% in 2001 [8] but then declined to 50% in 2011 [7]. Despite the stalled CPR in rural areas and the decreased CPR in urban areas, the TFR has continued to decline in both: from 2.8 in 1996 [6] to 1.6 in 2011 [7] in urban areas and from 4.8 in 1996 [6] to 2.8 in 2011 [7] in rural areas. 
However, the decline in the TFR may be due to the momentum gained in last decade, and if substantial efforts are not put in place now to increase the CPR it is unlikely that this decline will continue. Given that most of the Nepalese population live in rural areas (83%), the national CPR and total fertility rate (TFR) are aligned to the rural figures and a better understanding of inequalities in family planning use within rural areas is required to inform efforts to meet the national MDG target for the TFR of 2.5. Many studies have documented lower availability and use of family planning methods in rural areas [9,10] and inequalities between different groups [11]. Rural women often have lower levels of education and lower socioeconomic status, which may reduce access to family planning [12], and input into decision making [13]. The objective of this paper is to assess the prevalence and inequalities (by age, level of education, economic status, caste/ethnicity, access to health facility, and ecological zone) in the use of modern family planning methods, among married women of reproductive age in rural Nepal. Study Design and Sampling. This paper used data collected between August and September 2012 for the nationally representative, cross-sectional 2012 Nepal Household Survey (HHS 2012) coordinated by authors of this paper, in collaboration with the Ministry of Health and Population (MoHP). The primary objective of the survey was to provide national estimates for key reproductive, maternal, neonatal, and child health indicators [14]. Since some of the survey questions related to the uptake of family planning, the MoHP was keen to see further analysis of these data to explore inequalities in family planning in rural Nepal. Nepal consists of 75 districts divided into three ecological zones (mountain, hill, and Terai), five administrative regions, and 13 subregions. Districts are divided into village development committees (VDCs) (considered to be rural) and municipalities (considered to be urban). These in turn are divided into wards, with each VDC having nine wards. A stratified, three-stage cluster design was employed in the HHS 2012, first selecting districts, then wards, and then households. Districts were the primary sampling units (PSUs), and one PSU was randomly selected from each of the 13 subregions. This resulted in three districts being selected from the mountain zone, five from the hill zone, and five from the Terai zone. Within these 13 PSUs, wards were used as the basis for clusters, and 180 clusters were selected with probability proportionate to size (PPS) (based on the number of households as per the National Population and Housing Census 2011) [15]. From each cluster, 57 households were selected using systematic sampling to obtain a representative sample of 10,260 households of which 9,016 were in rural areas. Within the sampled households, one woman of reproductive age (15-49 years) was randomly selected to answer the survey questions related to reproductive health. However, the current analysis is based on responses from 7442 married women of reproductive age (15-49 years) from the rural households. A more detailed description of the sampling methodology is presented in the HHS 2012 report [14]. Data Entry and Coding. All questionnaires were checked by a supervisory level at the time of data collection and coding of data was undertaken prior to data entry. All data were double entered into a CSPro 4.0 database and any inconsistencies were corrected. 
Data entry was closely supervised by a data manager. Prior to analysis data were checked for any anomalies and, where necessary, data were cross-checked with the original questionnaires. Variables Included. The main outcome variable was the use of any modern contraceptive. This was broken down into permanent, long-acting, and short-acting methods, and these categories were used in the multinomial logistic regression analysis as outcome variables. The use of these broader categories, as opposed to showing results by individual method, is more relevant for government efforts to improve service delivery. Three levels of predictor variables were available from the HHS 2012 and were included in the analysis: individual (mother's age: 15-24 years, 25-34 years, and 35-49 years; maternal education: never attended school to higher education); household (caste/ethnic group categorized based on the classification recommended by Bennett et al. [16]; wealth quintile and distance to health facilities: less than 30 minutes, 30-60 minutes, and more than 60 minutes), and community (ecological zone: mountain, hill, and Terai). Data Analysis. All analyses in this paper were conducted using STATA 12 SE Version. Prevalence values were weighted by sample weights to provide population estimates. The prevalence and 95% confidence intervals (95% CI) were calculated taking into consideration the complex survey design of the HHS 2012. The crude and adjusted odds ratios were assessed through binomial and multinomial logistic BioMed Research International 3 regression to estimate the inequalities, and a < 0.05 was considered as statistically significant. All of the predictors (mother's age, education, economic status, caste/ethnicity, distance to health facilities, and ecological zone) were used in the final adjusted model. Any Modern Methods. Forty-one percent of MWRA in rural Nepal were using a modern family planning method ( Table 1). The uptake of modern family planning methods increased with age ( Table 2). Compared to Brahmins/Chhetris, Newars were nearly twice as likely (AOR: 1.9; 95% CI: 1.4-2.7) to use a modern method while Muslims and Terai Madhesi other castes were least likely. Women residing in hill districts were less likely to use a modern method than those in the mountain districts (AOR: 0.7; 95% CI: 0.5-0.9). No significant differences in the use of modern methods were noted between wealth quintiles, levels of education, and the time taken to reach the nearest government health facility. The use of short-term (21%) and permanent methods (18%) was far higher than the use of long-acting reversible contraceptives (LARCs) (2%) ( Table 1). Permanent Methods. Permanent methods (18%) were the second most commonly used group of modern methods. The likelihood of using permanent methods increased with age (as attainment of desired family size increases with age) and was more common among those who had never been to school. The multinomial regression supported this finding showing that the likelihood of using a permanent method decreased with increasing education level ( Table 2). Permanent methods were most likely to be used in the Terai (23%) and the least likely to be used in the hill districts (12%). There were large differences in the use of permanent methods by caste/ethnic group, with Terai Madhesi other castes having the highest use (27%) and Muslims the lowest use (4%) ( Table 1). 
The multinomial logistic regression analysis showed low use of permanent methods among Janajatis (AOR: 0.7; 95% CI: 0.5-0.9) and Muslims (AOR: 0.1; 95% CI: 0.1-0.2) compared to Brahmins/Chhetris. No significant differences were observed in wealth quintile, time taken to the nearest government health facility, or ecological zone ( Table 2). Long-Acting Reversible Methods. Use of LARCs (implants and IUCDs) was very low, with around 2% of MWRA using each method. Use of LARCs increased slightly for each agegroup up to the peak use at 35-49 years ( Table 1). The multinomial logistic regression analysis also showed use of LARCs to be higher among those aged 35 years or above compared to those aged below 25 (AOR: 2.0; 95% CI: 1.2-3.5). Use of LARCs among Terai/Madhesi other castes (AOR: 0.2; 95% CI: 0.1-0.7) was lower than among Brahmins/Chhetris ( Table 2). Use of LARCs was slightly higher among those who lived less than 60 minutes travel time from a government health facility in comparison to those living more than 60 minutes away (Table 1). Multinomial logistic regression analysis also showed that use of LARCs was lower among those who resided more than 60 minutes away from their nearest government health facility (AOR: 0.5; 95% CI: 0.3-0.9) in comparison to those living less than 30 minutes away. No significant association was noted by ecological zone, wealth quintile, or level of education (Table 2). Short-Term Methods. Short-term methods (21%) were the most commonly used group of methods of contraception. MWRA aged 25-34 years were most likely to use short-term methods (24%) among all age-groups, and those living in mountain (25%) or hill (25%) districts were more likely to use them compared to those living in Terai districts (15%). Use of short-acting methods was the lowest for those who had never attended school (17%) and the highest amongst the most educated (30%). This was supported by the multinomial logistic regression analysis, which showed that women with higher education were nearly twice as likely (AOR: 1.8; 95% CI: 1.1-2.6) to use a short-acting method compared to those who never attended school. The likelihood of using a shortacting method was higher among Newars and Janajatis than Brahmins/Chhetris, while Muslims were less likely to use short-acting method ( Table 2). Those in the highest wealth quintile (23%) were more likely to use short-acting methods compared to other wealth quintiles (Table 1). No significant association was observed with time taken to reach the nearest government health facility (Table 2). Discussion In rural Nepal the challenging topography and lack of road infrastructure and transportation mean that many have to walk long distances over difficult terrain to reach health facilities. The 2012 HHS found that just over half of the rural population (53%) were within half an hour travel time of their closest health facility, compared to 80% in urban areas [14]. Distance to the nearest health facility has been identified as a barrier to family planning uptake in other studies [12,17]. A study in Bangladesh revealed that couples who resided more than 30 minutes travel time from a facility were 25% less likely, and those living between 15 and 30 minutes were 20% less likely, to use FP methods in comparison to women who lived at a distance of less than 15 minutes [18]. However, except for LARCs, no significant association was found in this study between distance to a health facility and use of modern family planning methods. 
This may be partly attributed to the increased availability of family planning methods (injectables, pills, and condoms) at community level through outreach clinics, private pharmacies, and FCHVs, whereas use of LARCs, which are only available at facilities, decreased with increasing distance. Male sterilization was higher among those living further from a government health facility (data not shown). Males often use mobile camps for sterilization [7,19,20] and may be more likely to opt for sterilization if they face greater difficulties in accessing services for other family planning methods. Use of female sterilization was most common among those living in the Terai, while male sterilization was least common in the Terai (data not shown). Male sterilization 3577 Note: Numbers may not sum to total due to rounding. is sometimes believed to lead to impurity and exclusion from rituals and also cause physical weakness [21], but it is not clear whether this belief is more common in the Terai. Newars were most likely to use family planning methods and Muslims and Terai Madhesi other castes were least likely to use family planning methods. Similar findings have been reported in other studies [22]. Caste-based discrimination has been reported by Dalits, Muslims, and Terai Madhesi other castes at health facilities in regard to reduced access to care, delayed care, and poor quality of care, including reluctance by service providers to touch Dalits leading to fewer physical examinations and discourteous behaviour [23]. Continuing social exclusion also results in families not visiting health facilities to avoid potential discrimination and poor quality care [23][24][25]. Studies have reported increased use of family planning with increased education [25][26][27]. This paper showed that the use of permanent methods (male and female) was greater among those who have never attended school. This may reflect the higher use of sterilization among older couples, as they have already attained their desired family size, who were less likely to have attended school [7]. The use of shortterm methods was almost double among those who had higher education compared to those who had never attended school. Use of contraceptives increased with age, contrary to the NDHS, which found that use was lower among younger and older women [7]. Other studies have reported higher contraceptive use among those with higher economic status [25][26][27], but this paper did not show a significant association. The findings from current analysis present some implications for policy and future research. First, since use of LARCs has been found to be significantly associated with distance between health facility and home, women residing far away need to be reached through satellite, mobile, or outreach clinics [28] or by supplying LARCs through lower-level health facilities. Furthermore, short-term family planning methods can also be promoted through these clinics [20]. This could increase the CPR in rural areas because LARCs, especially implants, are becoming popular among MWRA in rural Nepal [29]. Second, efforts should also be made to increase FP use by Muslims, Dalits, and Terai Madhesi other castes. One approach to increase FP adoption could be to train the same caste health workers [30] to provide FP services in such areas or to orientate existing health workers in accountability and interpersonal communication. 
Third, given the link between education and use of family planning, female education and women's empowerment should be high on the agenda. Fourth, interaction with health providers during antenatal, delivery, postnatal care, and child health visits is an ideal opportunity to promote FP use, especially given that younger women had significantly higher unmet need for spacing during 2 years postpartum [31]. Conclusions To increase the national CPR, which has currently stalled, and to ensure that the TFR continues to decline, additional efforts need to be focused on rural Nepal, including addressing the significant inequalities that exist. The findings from this paper suggest that efforts to supply LARCs within 30 minutes walking distance from homes in rural areas are likely to increase uptake. High risk groups, with lower use of family planning, such as Muslim, younger, and less educated women, need additional targeting by demand and supply side interventions. Ethical Approval Ethical approval for the 2012 HHS was received from the Ethical Review Board of the Nepal Health Research Council. Permission was also obtained from the relevant authorities in the selected districts and VDCs. Verbal consent was taken from all of the participants, after explaining the purpose of the research and guaranteeing the confidentiality of any information given.
4,137
2014-08-28T00:00:00.000
[ "Economics" ]
Meson life time in the anisotropic quark-gluon plasma In the hot (an)isotropic plasma the meson life time τ is defined as a time scale after which the meson dissociates. According to the gauge/gravity duality, this time can be identified with the inverse of the imaginary part of the frequency of the quasinormal modes, ωI, in the (an)isotropic black hole background. In the high temperature limit, we numerically show that at fixed temperature(entropy density) the life time of the mesons decreases(increases) as the anisotropy parameter raises. For general case, at fixed temperature we introduce a polynomial function for ωI and observe that the meson life time decreases. Moreover, we realize that (s/T3)6, where s and T are entropy density and temperature of the plasma respectively, can be expressed as a function of anisotropy parameter over temperature. Interestingly, this function is a Padé approximant. Introduction A new phase of quantum chromodynamics, quark-gluon plasma (QGP), is produced at relativistic heavy ion collider (RHIC) or these days at large hardon collider (LHC) by colliding two heavy nuclei such as gold (Au) or lead (Pb), relativistically. Experimental observations imply that the plasma is strongly coupled [1,2] and hence the perturbative calculation is not trustworthy. Therefore non-perturbative methods such as gauge/gravity duality may be applied to explain various properties of the plasma. The gauge/gravity duality claims that for certain strongly coupled gauge theories the dynamics of the quantum fields can be described by the dynamics of the classical fields living in a higher dimensional space-time [3]. In particular, N = 4 super Yang-Mills theory (SYM) in the limit of large colors N and large but finite t'Hooft coupling λ, which is expected to behave in a similar way with the strongly coupled QGP, is dual to type IIb supergravity on AdS 5 ×S 5 background [4]. Similarly a thermal SYM theory corresponds to the supergravity in an AdS-Shwarzschild background where the temperature of the SYM theory is identified with the Hawking temperature of AdS black hole [5]. Moreover Mateos and Trancanelli have introduced an interesting generalization of this duality to the thermal and spatially anisotropic SYM theory [6,7]. In order to add matter (quark) in the fundamental representation of the corresponding gauge group, one needs to introduce a D-brane into the background in the probe limit [8]. The probe limit means that D-brane does not back-react the geometry. Then the asymptotic shape of the brane gives the mass and condensation of the matter field. In addition, the shape of the brane can be classified into two types, one is the Mikowski embedding (ME) and the other is black hole embedding (BE). While the ME does not see the horizon, the BE crosses it. Various aspects of these embeddings have been studied in the literature, for instance see [9]. The results reported in [10] show that the mesons living in the QGP can be described by quasinormal modes. They are considered as certain small fluctuations around the BE with a complex frequency. Therefore, they are unstable modes where the imaginary part of their frequencies is identified with the inverse of the meson life time. The question we JHEP06(2014)115 would like to answer in this paper is how the anisotropy affects the mass of the meson and its life time. Quasinormal modes The background we are interested in is an anisotropic solution of the IIb supergravity equations of motion. 
This solution in the string frame is given by [7] where a is a constant. χ and φ are axion and dilaton fields, respectively. H, F and B depend only on the radial direction, u. In terms of the dilaton field, they are where the dilaton field satisfies a third-order equation (see equation (13) in [7]). In order to find the solution one needs to solve the equation of motion for dilaton field. Then the above equations for metric components and suitable boundary conditions will specify the solution. For more detail see [7]. Note also that the solution also contains a self dual five-form field. The function F(u) in the temporal and radial components of the metric is the blackening factor. Therefore the horizon is located at u = u h where F(u h ) = 0 and the Hawking temperature is given by The boundary lies at u = 0 and the metric approaches AdS 5 × S 5 asymptotically. The coordinates of the spacetime where the gauge theory lives are (t, x, y, z) where there is a U(1) symmetry in the xy-plane. We call x and y the transverse directions and the longitudinal direction is z. An anisotropy is clearly seen between the transverse and longitudinal directions. The entropy density per unit volume in the xyz-directions is given by In order to add the fundamental matter to the SU(N ) gauge theory we have to introduce a D7-brane into the anisotropic background in the probe limit. The probe limit means that the D7-brane does not modify the geometry. Flavour D7-branes in this background have been studied previously, for example see [11]. In fact the open strings stretched between probe D7-brane and the D3-D7 system leading to the geometry (2.1) give rise to the JHEP06(2014)115 matter in the fundamental representation of the gauge group. The dynamics of the open strings is described by the DBI action where in the large N and t' Hooft coupling limits the D3-D7 system is replaced with g M N given by (2.1). The D7-brane is extended along t, x, y, z, u and wrapped around S 3 ⊂ S 5 . Although the four-form and the axion fields are non-zero in the background, in such an embedding the Chern-Simon action has no contribution to the action. The shape of the brane is given by the transverse directions θ and ϕ where we choose ϕ to be zero. Since we do not like to study the effect of the gauge field living on the brane, we also set A a to be zero. Because of the translational symmetry of the metric components in xyz directions and the rotational symmetry in Ω 3 directions, we consider that θ depends on the radial direction and time as it is shown in (2.6). Therefore, the Lagrangian reduces to The physical parameters we are interested in can be found from the asymptotic solution to θ(u) equation of motion, θ c (u) = θ 0 u + θ 2 u 3 + . . . [12], where m = θ 0 2πα is the mass of the fundamental matter and c = θ 2 − 1 6 θ 3 0 corresponds to condensation that is proportional to ψ ψ . It is well known that the small fluctuations about the shape (the equilibrium configuration) of the probe branes represent the low spin mesons [10]. They are classified into two types according to their frequencies. In the MEs the normal modes, which are the fluctuations with discrete real frequencies, only exist. However, in the case of the BH embeddings, the fluctuations fall into the black hole and the corresponding frequencies, the so-called quasinormal modes, are complex. Applying the AdS/CFT corresponding, the meson will be dissociated in the QGP after the life time, which is given by the inverse of the imaginary part of the frequency i.e. 
τ ∝ ω_I^{-1} [10]. In order to find the meson life time τ, we start with an ansatz for the fluctuation near the horizon, in which the +(−) sign corresponds to ingoing (outgoing) modes. On the other hand, the near-boundary equation can be solved analytically. To find the quasinormal modes we have to force the source term, ζ_1, to vanish, or equivalently ζ(u)|_{u=0} = 0. With a suitable field redefinition one can see that ψ(u) has a regular expansion near the horizon. Since the equation for ψ is linear, ψ_0 can be set to 1, and the other coefficients are determined from the equation of motion for ψ(u). Interpolating between the two asymptotic solutions (2.7) and (2.8) is possible only for a set of discrete complex values of ω, which can be found by standard methods such as the shooting method. We emphasize that the meson in its ground state, corresponding to the first quasinormal mode, is considered in this paper. High temperature limit Fortunately, in the high temperature limit, T ≫ a, the anisotropic solution has been found analytically in [7]. In this limit, up to leading order in a, the functions F, B and the dilaton field are given in [7], as are the temperature and the entropy density of the solution in terms of the anisotropy parameter. On the other hand, in the low temperature limit, i.e. a ≫ T, the entropy density scales as in (2.14), where c_ent ≈ 3.2 [7]. We would like to find the frequency of the quasinormal modes in the high temperature background. Setting the anisotropy parameter equal to zero, both the real and imaginary parts of the frequency increase linearly as one raises the temperature, consistent with the results reported in [10]. As expected from the metric components in the high temperature limit, ω_{R,I} depends on the anisotropy parameter as a² for any given value of the temperature, i.e. as in (2.15), where ω⁰_{R,I}(T, m) are the frequencies of the quasinormal modes in the isotropic case, a = 0. Figure 1a shows that at fixed temperature α_R is almost constant with increasing mass of the fundamental matter, while α_I decreases (a few values of ω⁰_{R,I} are given in table 1). It is important to notice that in a region around m = T we expect a first-order phase transition between black hole and Minkowski embeddings [13], and therefore our results are not reliable in this region. Moreover, we numerically observe that for any given value of the mass an increase in the anisotropy parameter increases ω_I. In turn, as is clearly seen from figure 1b, this means that τ/τ_0 decreases. Note that τ_0 is the value of the meson life time at a = 0 for each corresponding mass. As a result, the mesons melt sooner in the QGP. This indicates that the anisotropy parameter and the temperature behave similarly, in agreement with the results in [14,15]. We observe that the decrease in τ/τ_0 is almost the same for different masses. In the case of fixed entropy density, the behaviour of the real and imaginary parts of the frequency is similar to (2.15); the corresponding coefficients are shown in figure 1a. Compared to the mass dependence of α_{R,I}(T, m) in the fixed-temperature case, a notable increase can be seen for α_{R,I}(s/N², m). Opposite to the fixed-temperature case, raising the anisotropy in the system increases the value of τ/τ_0. General case In this section we compute the real and imaginary parts of the frequency for arbitrary values of the temperature and anisotropy parameter.
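These frequencies are obtained with the shooting procedure sketched above: the fluctuation is integrated from the horizon with regular (ingoing) data, and the complex ω is tuned until the boundary source term vanishes. The following is a minimal illustration of that strategy only; the differential equation used here is a placeholder (the actual fluctuation equation in the anisotropic background is not reproduced in this excerpt), and the cutoffs and initial guess are arbitrary.

```python
# Hedged sketch of the shooting method for complex quasinormal frequencies.
# The ODE below is a PLACEHOLDER standing in for the true fluctuation equation
# around the black-hole embedding; only the search strategy is the real point.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

u_h, u_b = 1.0, 1e-3            # horizon and near-boundary cutoff (placeholder units)

def fluctuation_rhs(u, y, omega):
    """y = (psi, psi'); placeholder equation psi'' - (2/u) psi' + omega^2 psi = 0."""
    psi, dpsi = y
    return [dpsi, (2.0 / u) * dpsi - omega**2 * psi]

def boundary_source(omega_ri):
    """Integrate from the horizon with regular initial data and return the
    coefficient that must vanish at the boundary, split into (Re, Im)."""
    omega = omega_ri[0] + 1j * omega_ri[1]
    sol = solve_ivp(fluctuation_rhs, (u_h, u_b), [1.0 + 0j, 0.0 + 0j],
                    args=(omega,), rtol=1e-10, atol=1e-12)
    source = sol.y[0, -1]        # in the real problem: the zeta_1 coefficient
    return [source.real, source.imag]

# Root-find in the complex omega plane, starting from an isotropic-like guess.
omega_R, omega_I = fsolve(boundary_source, x0=[3.0, -0.5])
print(f"quasinormal frequency (toy): {omega_R:.4f} {omega_I:+.4f} i")
print("meson life time ~ 1/|Im(omega)|:", 1.0 / abs(omega_I) if omega_I else np.inf)
```

In the actual computation the placeholder equation is replaced by the fluctuation equation derived from the DBI Lagrangian, and the root search is repeated for each value of the temperature, anisotropy and quark mass.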
Our numerical computations show that both ω_R and ω_I grow linearly with increasing temperature over a limited range of mass, 0 < m < T, when a = 0. For instance, at fixed temperature, in the zero-mass case we find ω_R = 6.86 T + δω_R (2.17), together with an analogous relation for ω_I. The second terms in these equations, δω_R and δω_I, are a consequence of the anisotropy parameter. The function describing the deviations is complicated, but, for example, in the massless case δω_R and δω_I may be approximated by the polynomials given in (2.18). We emphasize that these functions can be applied in the range of our numerics (0.5 < T < 15 and 0 < a < 30, provided that a < 9T). One can also calculate the quasinormal modes when the entropy density is kept fixed. However, it is not easy to find suitable functions for δω_{R,I}(s, m) which fit our numerical results. Instead, our data turn out to be well fitted by the function (2.19). Using the expansion of the entropy density in the high temperature limit [7], α_4 and β_2 can be obtained in terms of α_2, as in (2.20). Notice that this function gives the correct expression for the entropy density of N = 4 super Yang-Mills theory (a = 0). The value of α_2 can be found from the best fit to the entropy density and is obtained as α_2 = 1/4 with an error of less than 0.1% (see figure 2b). Surprisingly, this value leads to c_ent ≈ 3.205, which is in perfect agreement with (2.14); indeed, (2.19) clearly reduces to (2.14) in the low temperature limit. The above discussion may be generalized by considering higher-order terms. As a result we suggest a generalized expression which is a [(2n + 2)/2n] Padé approximant in a/T for f = (s/T^3)^6. In principle, all of its coefficients can be obtained in terms of α_2 by using the higher-order expansion of the entropy density in terms of a [7]. At fixed entropy density we found that the effect of anisotropy on the frequencies is very small (less than 1%). In principle, from (2.19), for a fixed value of the entropy density and a given a, the temperature can be found; inserting the resulting temperature into (2.18) gives δω_R and δω_I. Although it is promising that we can obtain the real and imaginary parts of the frequency at fixed entropy density in this way, unfortunately the effect of anisotropy on the frequencies (1%) is smaller than the error of the polynomials (2.18) (4%), and therefore the error washes out the effect. Discussion The main aim of this paper is to understand the effect of the anisotropy on the life time of the mesons living in the plasma. As already mentioned, according to the gauge/gravity duality the life time and the mass of the meson are described by ω_I^{-1} and ω_R, respectively. By recalling (2.17), one can calculate the corresponding ratios, and it follows that the mesons dissociate more readily in the presence of anisotropy. This conclusion is in agreement with the result reported in [17], where it was shown that the screening length decreases as a function of the anisotropy, indicating that the life time of the bound states becomes shorter in the anisotropic plasma.
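The fit that determines α_2 in the entropy-density parametrization above can be set up along the following lines. This is a hedged sketch: the [4/2] rational form is an assumed concrete instance of (2.19) (the relations (2.20) tying α_4 and β_2 to α_2 are not reproduced in this excerpt, so all three coefficients are fitted independently), and the data points are placeholders standing in for the numerically computed entropy density.

```python
# Hedged sketch: fitting (s/T^3)^6 as a low-order Pade approximant in x = a/T.
# The rational form is an assumed stand-in for eq. (2.19); the data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def pade_42(x, a2, a4, b2):
    """[4/2] Pade approximant, normalised so that x = 0 reproduces the
    isotropic N = 4 SYM value (scaled here to 1)."""
    return (1.0 + a2 * x**2 + a4 * x**4) / (1.0 + b2 * x**2)

# Placeholder data: x = a/T and y = (s/T^3)^6 normalised to its a = 0 value.
rng = np.random.default_rng(0)
x_data = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
y_data = pade_42(x_data, 0.25, 0.02, 0.01) * (1 + 1e-3 * rng.standard_normal(x_data.size))

params, _ = curve_fit(pade_42, x_data, y_data, p0=[0.2, 0.0, 0.0])
alpha2, alpha4, beta2 = params
print(f"alpha_2 = {alpha2:.4f}, alpha_4 = {alpha4:.4f}, beta_2 = {beta2:.4f}")

# Large-x behaviour ~ (alpha4/beta2) * x**2, consistent with the (a/T)^2 scaling
# of (s/T^3)^6 expected from the low-temperature entropy density quoted above.
```

The same structure extends to the [(2n+2)/2n] generalization by adding higher even powers to the numerator and denominator.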
Furthermore, at RHIC (LHC) energies an increase in the mass of the mesons of about 12% (8%) occurs. Since the QGP produced in the laboratory is intrinsically anisotropic, one cannot measure the mass of a meson living in the QGP at a = 0. Interestingly, however, this reference mass can be eliminated from our results, and we then have (M_meson)_RHIC / (M_meson)_LHC ≈ 1.037 (3.4). In other words, the effect of anisotropy can be observed experimentally by comparing the mass of the meson at RHIC and at the LHC. In fact, at LHC energies the meson is lighter.
3,733.8
2014-06-01T00:00:00.000
[ "Physics" ]
The Acoustics of the Double Elliptical Vault of the Royal Palace of Caserta (Italy) This work investigates the acoustic characteristics of the double elliptical vault which overlooks the Grand Staircase of the Royal Palace of Caserta (Italy). The Royal Palace was built by the architect Luigi Vanvitelli in the eighteenth century and is the largest royal building in Italy. The double elliptical vault has a great scenographic effect. Inside the vault, on the planking level, musicians used to play for the king and his guests when the royal procession, going up the grand staircase, entered the royal apartments, creating astonishment among the guests, who heard the music without understanding where it was coming from. Since the musicians were inside the vault, the long reverberation made the listeners perceive themselves to be enveloped by the music. To investigate this effect, the acoustic characteristics of the double vault were measured, placing the sound source on the planking level of the vault, while the microphones were positioned along the staircase and in the vestibule towards the royal apartments. Finally, the spatial distribution of several acoustic parameters is evaluated, also using architectural acoustic simulations. Introduction The Royal Palace of Caserta, considered one of the most significant works of the Italian Baroque, was commissioned by Carlo III of Bourbon and designed by the architect Luigi Vanvitelli. The construction of the palace started in 1752 and was completed in 1845. The Palace was built by the Bourbon King in response to the Palace of Versailles in Paris and the Royal Palace in Madrid [1]. The Royal Palace of Caserta is among the 51 world heritage sites designated by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) [2]. The building exemplifies the Italian way of bringing together a magnificent palace with a magnificent park. Inside the Royal Palace, the Grand Staircase connects the lower vestibule to the upper vestibule, giving access to the Royal apartments. The Grand Staircase is overlooked by a double vault with an elliptical opening (Figures 1 and 2). The double vault has two caps, with the lower one assuming the role of a large oval cornice. A similar architectural solution was widely used in the realization of domes: generally, the first dome was blunt, while the upper dome was raised, both to improve the points of view and to meet the structural requirement of stabilizing the weights on the horizontal structures. In the 16th century, numerous projects featured double vaults and domes [3,4]. Double-dome structures were used for the dome of St. Paul's Cathedral by Christopher Wren in London as well as the dome of Les Invalides in Paris, where Mansart developed three caps in 1680. The double-cap model was widely diffused in Italy too, as proved by the dome of Santa Maria del Fiore in Florence, as well as the dome of St. Peter's church in Rome.
In the Royal Palace of Caserta, the double elliptical vault has a great scenographic effect: inside it, on the planking level, musicians used to play for the king and his guests during royal receptions. The musicians played when the royal procession, going up the grand staircase, entered the royal apartments, creating astonishment among the guests, who listened to the music without understanding where it was coming from. In fact, the large volume between the two vaults allowed the musicians to hide. This means that anyone going up the stairs had the sensation of being enveloped by the music, which was generally played by stringed instruments inside the double vault. The aim of this work is to investigate the acoustic characteristics of the double vault of the Royal Palace of Caserta in order to verify the sound effects created by the musicians inside the vault for the listeners on the underlying staircase and in the entrance vestibule of the Royal apartments. For this purpose, a sound source was positioned on the planking level of the double vault and microphones were arranged along the steps of the grand staircase as well as in the entrance vestibule of the Royal apartments. Furthermore, the spatial distribution of the acoustic parameters along the staircase and in the entrance vestibule of the Royal apartments is also evaluated using the architectural acoustics software Odeon [5]. The goal is to describe this ancient 'sonic wonderland', in Cox's sense [6], which used to be created in this UNESCO world heritage site.
Description of the Double Elliptical Vault From the central gate of the Royal Palace of Caserta, there is a large atrium where a long gallery with three naves begins. In the middle of the central nave, the lower vestibule is connected to the upper vestibule by a grand staircase overlooked by a double vault with a central elliptical hole (Figure 1). Vanvitelli designed the staircase on the axis of the octagonal vault, in correspondence to the central vestibule, to avoid interrupting the series of sixty-four Doric columns towards the park's natural backdrop. The staircase has a 'fork' or 'E' shape, with a central flight and two side flights overlooked by a double vaulted structure. This type of double vault was widely used in the architecture of the XVI century to expand the space. The staircase is composed of a central flight, almost 8 m wide, which leads to a first landing from where two parallel flights start towards the upper vestibule. The large stair has 117 steps, with the double flight being 18.50 m wide and 14.50 m high. The volume which houses the staircase is approximately 20 m wide, 24 m long, and 34 m high up to the intrados of the first vault, while the overall height is around 42 m (Figures 3-5). Due to its size, the royal staircase is one of the most complex elements of this Royal Palace, with a footprint of more than 600 m², which develops over the entire height of the palace and is crowned by a double-vault structure. From the stairs, it is possible to see both vaults simultaneously thanks to the large central elliptical hole in the lower vault. The elliptical hole has axes of 10.85 m by 14.60 m, while its planking level has an area of about 500 m². From the pierced planking level, it is possible to see the second vault, where, at the intrados, there is a painting by Girolamo Starace Franchis titled 'The Four Seasons and the Royal Palace of Apollo'.
From a structural point of view, the decision to realize a dome with a double vault was probably due to structural needs, since otherwise a single vault would have been 30 m high. Lateral thrusts on the walls have the static function of compressing the resilient ring and limiting the thrust of the lower dome, thus reducing the weight on the wall. In a letter sent to his brother, dated 14 July 1767, Vanvitelli described all of his satisfaction with the palace, writing that it 'will be so good when completed, it will surprise everybody, while everything that you do enhances beauty'. For Vanvitelli, the double vault had to have a scenographic function thanks to its oval frame. Below the vault, there should have been an iron railing which would have made the view of the staircase more beautiful and the palace, for the people on the top floor, more comfortable; however, this railing was never realized. Only later, at the time of King Ferdinand II, did the elliptical vault start to accommodate the orchestra during royal parties, inspiring surprise among those going up to the apartments of the King accompanied by music that enveloped them fully. While designing the Grand Staircase, Vanvitelli did not know much about architectural acoustics and the behaviour of sound in large spaces. However, several centuries before Vanvitelli, the 'De Architectura' by Vitruvius had been reprinted (originally it was written in the first century B.C.). This old text, which described some fundamentals of theatrical acoustics [7], was probably known by Vanvitelli, together with the studies of vaulted spaces by Athanasius Kircher, who worked on how the geometric shape of a room influences its acoustic behavior [8,9]. One of the most interesting studies by Kircher regarded the elliptical shape of ceilings and the ability of this geometry to reinforce voices. Kircher understood that the ellipse could be used for the construction of a room with an ellipsoidal vault, and that the foci of the ellipse could be used to help people communicate with each other at larger distances [9]. In Section IV of the first book of the 'Phonurgia Nova', Kircher described the echo that could be perceived in the interior structure of the Palace of the Powerful Elector of Heidelberg. This room, due to its circular vaulted ceiling, allowed the amplification of sound, creating surprising acoustic effects [10].
Kircher's works express the typical Baroque vision of the 'marvellous world', with machines revealing a strong alliance between science and magic. For example, among Kircher's creations, he is often known for his 'talking statues', devices that were able to capture whispers from the square [9]. However, it is important to remember that the rational understanding of the rules of modern architectural acoustics only began at the end of the 19th century with Sabine, almost 200 years after the death of Vanvitelli. Consequently, it is safe to assume that the acoustic effects of the double vault were more a result of the Baroque preference for unique scenography and large decorated volumes than the rational result of an understanding of the acoustic implications of the double vault. Acoustics Measurements In order to understand the acoustic characteristics of the double vault and the resulting spatial distribution in the underlying grand staircase, a series of acoustic measurements was carried out. The measurements were taken using an omnidirectional sound source on the planking level of the vault. In room acoustics studies, the standard for performing indoor measurements is ISO 3382-1 [11]. This standard has been defined for performance spaces such as theatres or concert halls. While the authors agree that the investigated space may not fall under the category of classical performance spaces, it was considered appropriate, given the use of this space, to follow the standard. Two source positions, close to each other, were selected on the vault plane in order to reach the 'engineering' accuracy; the averaged results are reported below. While more source positions were originally planned, this was not possible because it was not fully safe to stay with all the instruments at the height of the vault, which has no balustrade, making it hard to move freely at that height. The acoustic parameters early decay time (EDT), reverberation time (T30), clarity (C80), and definition (D50), as defined in ISO 3382-1, were analyzed. In room acoustics, the reverberation time (T30) is the most common parameter, and it is often described as the persistence of a sound after the source has stopped. The early decay time (EDT) has often been shown to correlate better with the perceived reverberation in a room, since it focuses on the early part of the decay only. The clarity (C80) measures the balance between useful and detrimental sound for listening perception, and it is based on an early-to-late arriving sound energy ratio. In particular, C80 is defined as the ratio, expressed in decibels, of the early energy (that reaching the listener in the first 80 ms) over the late reverberant energy (all the sound energy reaching the listener from 80 ms onwards). The definition (D50) considers the ratio of the early arriving sound energy over the total sound energy, and in order to reflect speech intelligibility it is calculated using 50 ms as the early time limit. The sound source used to perform the acoustic measurements was a dodecahedron loudspeaker Peeker Sound JA12 (Peeker Sound Corporation). MLS signals of order 16 with a length of 5 s were generated by a 01dB Symphonie system. The impulse response was recorded with a 1/2" GRAS 40 microphone connected to a 01dB PRE 12 H preamplifier. The microphone measurement points were distributed on the staircase at a constant pitch in order to obtain spatially averaged values of the monaural parameters.
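As an illustration of how these parameters are derived from a measured impulse response (in the study this processing was done with dedicated measurement software), the following sketch applies the standard definitions to a single impulse response; the sampling rate and the synthetic exponentially decaying impulse response are placeholders standing in for a measured one.

```python
# Hedged sketch: EDT, T30, C80 and D50 from a single (placeholder) impulse response.
import numpy as np

fs = 48000                                            # sampling rate in Hz (placeholder)
rng = np.random.default_rng(1)
n = fs * 3
h = rng.standard_normal(n) * np.exp(-np.arange(n) / (fs * 0.4))   # synthetic IR

energy = h ** 2
t = np.arange(n) / fs

# Schroeder backward integration, expressed in dB relative to the total energy.
edc = np.cumsum(energy[::-1])[::-1]
edc_db = 10 * np.log10(edc / edc[0])

def decay_time(db_from, db_to, extrapolate_to=60.0):
    """Fit the decay curve between two levels and extrapolate to -60 dB."""
    mask = (edc_db <= db_from) & (edc_db >= db_to)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second (negative)
    return -extrapolate_to / slope

EDT = decay_time(0.0, -10.0)      # early decay time: 0 to -10 dB range
T30 = decay_time(-5.0, -35.0)     # reverberation time from the -5..-35 dB range

# Early-to-late energy ratios.
i80, i50 = int(0.080 * fs), int(0.050 * fs)
C80 = 10 * np.log10(energy[:i80].sum() / energy[i80:].sum())
D50 = energy[:i50].sum() / energy.sum()

print(f"EDT = {EDT:.2f} s, T30 = {T30:.2f} s, C80 = {C80:.1f} dB, D50 = {D50:.2f}")
```

In practice these quantities are evaluated per octave band after band-pass filtering the impulse response, exactly as done for the 125 Hz to 4 kHz bands reported below.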
All the acoustic measurements were taken under unoccupied conditions.Figure 5 shows the section of the grand staircase and double vault, with the 16 receiver positions and the position of the sound source in the double vault.Due to the symmetry of the space and to the double set of stairs, each microphone position on the stairs indicates two points of measure on each stair ramp.The acoustic measurements provided the values of the impulse responses that were processed with the Dirac 4.0 software to calculate the values of the acoustic parameters (EDT, T 30 , C 80 and D 50 ) in the octave bands from 125 Hz to 4 kHz. Figure 6 reports the measured values of EDT, T 30 , C 80 , and D 50 , averaged among the fifteen receiver locations, together with the intervals of the standard deviation for each band from 125 Hz to 4.0 kHz.These parameters have been defined in order to better describe the perception of a sound field, even though their prediction depends on many factors, such as the relative position of the sources and receivers.Using recent studies on the typical listening preference, the criteria established in [12] were used to assess the results of the acoustics of this space.The results suggested that the hall is over-reverberant and not well suited for chamber music or for speech perceptions, given its low speech intelligibility.In particular, the clarity assumes values that are well below those suggested for the sound perception, which confirms the condition of the sound envelope that the listeners experienced without having a clear perception of its direction. The results at different frequencies allow the important role played by the air absorption of the large volume to be seen.In fact, while the hard marble and rich decorations of the room guarantee long reverberation up to 1000 Hz and extremely low clarity and definition, at the frequencies where the sound absorption may not be neglected anymore such as at 4 kHz, the reverberation time is significantly shorter (below 4 s in all the positions), while the clarity assumes an average value of −8 dB.The relative high standard deviations for clarity and definition are due to the different sound-receiver positions and the fact that the distances from the sound source in the vault for the positions at the end of the stairs are much shorter than at the bottom of the stairs. 
In order to look in more detail at the distribution of the acoustic parameters in the room, Figure 7 reports the values obtained for the different parameters in the different positions, organized according to the typical procession from the entrance to the balustrade at the end of the stairs. The results of the measurements suggest that the EDT and the T30 do not vary significantly across the different positions. For example, the EDT is around 5 s at high frequency (2000 Hz) in practically all the positions, while it assumes a value between 7 and 8 s in the frequency bands of 250 and 500 Hz. This difference is clearly due to the high-frequency absorption provided by the large volume of air in this space. The results of the reverberation time in general reflect those of the EDT, with the exception of the low-frequency values, while some unexpected variations were recorded at the top of the stairs. The view of the authors is that the long reverberation, the multiple spaces with the potential coupling of their volumes, and the open door towards the royal apartments (whose effect was hence more evident only for the last receivers) are all possible factors behind this variation. To assess the temporal distribution of the sound energy and, in particular, the clarity and the definition, the preferred values reported in [12] were used. According to these, the clarity is considered adequate in the range from −2 dB to 2 dB, while the definition should assume higher values (above 0.5) only if speech perception is important; otherwise it may have values below 0.5.
The results show that the clarity (C80) is particularly low at the entrance of the staircase, while, as the listener climbs the stairs, the clarity improves significantly, which is also a result of the smaller distance between the sound source in the double vault and the stairs. The definition generally remained at particularly low values, confirming that this space is not suitable for speech perception. Computer Simulations Computer simulations were carried out using the software Odeon to obtain simulated data and further investigate the spatial distribution of the acoustics along the grand staircase. The reason for developing a detailed software model was also related to the importance of modelling the effects of different source positions within the double vault, a possibility that was impractical during the measurement sessions due to safety reasons. Odeon is software that uses the principles of geometrical acoustics and adopts a hybrid calculation method that combines the image source method (in the first part of the simulation) and the ray-tracing method (in the second part of the simulation) [5]. The software allows a virtual model realized with a 3D CAD tool to be easily imported. The value of this software is confirmed by its use in similar studies [13,14]. In situ measurements made possible the realization of a reticulated version of the geometry of the space (Figure 8), with a total number of surfaces equal to 3849 and a total inner surface equal to 15,864 m². The software simulation usually starts from the development of a model of the space as it is and for which acoustic measurements are available. Based on previous studies, the simulations were performed by fixing the following parameters: a transition order equal to 2 (so the image source method was used up to reflections of the second order), an impulse response length equal to 10 ms, and a number of rays equal to 50,000. The acoustic model calibration was done by setting the absorption coefficients and scattering coefficients for all the virtual surfaces. A second step consisted of comparing the measured and simulated parameters in order to allow a suitable calibration of the acoustic model. This made it possible to reduce the difference between the measured and simulated acoustical parameters to minimal values, or at least below the value of a just noticeable difference [15,16].
Most of the internal surfaces of the Grand Staircase are flat marble, which is an acoustically reflective material. The vault is made of a thick structure composed of cocciopesto, a lime mortar with crushed pottery [17,18]. A relatively small fraction of the inner envelope is composed of windows, which generally show high sound absorption. On the ground floor, the staircase communicates with the outside through a wide opening, which hence has a unitary absorption coefficient. Conversely, on the upper floor, the vestibule which leads to the royal apartments, where the floor is marble and the lateral surfaces are plaster walls, is characterized by reflective surfaces. In order to validate the acoustic model, the measured averaged values of the different parameters were compared with the corresponding values calculated with the software. An iterative procedure was used to reduce the difference between the measured and calculated values; this implied slight adjustments to the sound absorption and scattering coefficients. In order to maintain the simplicity of the model together with its practicality, richly decorated surfaces were simplified as flat ones, while their absorption and scattering coefficients were modified accordingly. Discussion The values of the standard deviations are higher for T30 and EDT, while they increase with frequency for the early-to-late energy distribution parameters (C80 and D50). This means that the values of these acoustic parameters change significantly from point to point, as evident in Figure 7, as a result of the change in the source-receiver distance. The measured reverberation time was about 8 s at low frequencies and about 6 s at middle frequencies. The measured T30 and EDT are long due to the large volume of the room which encloses the staircase and to the presence of the plastered wall surfaces and marble floor, which are acoustically reflective. To better understand the spatial distribution of the acoustic parameters, their spatial distribution was computed on a mesh with a width of 1 meter. Figures 8-10 show the spatial distributions of T30, C80, and D50 in the octave band of 1 kHz, respectively. The acoustic parameter values on the staircase were plotted on an inclined surface. The maps of the acoustic characteristics provided by the elaboration of the virtual model with the software Odeon give the following indications: the reverberation time is slightly shorter (around 6.0 s) at the bottom of the staircase, and it is around 7.0 s in the area of the vestibule in front of the entrance of the royal apartments. In this area, the reverberation time remains almost uniform, as if the room were perfectly diffuse. At the frequency of 1 kHz, the parameter D50 along the whole staircase is always below 0.1, confirming that speech intelligibility is particularly poor. In the area of the vestibule on the first floor, the D50 increases slightly, reaching values up to 0.4 only in front of the door to the royal apartments.
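The calibration procedure described above can be summarised schematically as follows. This is only a sketch: simulate() is a placeholder for re-running the Odeon model (in practice a manual, GUI-driven step), and the target value, just-noticeable-difference threshold, materials and step size are illustrative assumptions rather than the values used in the study.

```python
# Hedged sketch of the iterative model-calibration logic described above.
# simulate() is a placeholder for re-running the room-acoustics model; the
# measured target, JND and step sizes are illustrative only.
measured_T30 = 6.2          # s, spatially averaged at 1 kHz (placeholder)
jnd_T30 = 0.05              # relative just noticeable difference (~5%)

absorption = {"marble": 0.02, "plaster": 0.03, "cocciopesto_vault": 0.04}

def simulate(absorption_coeffs):
    """Placeholder for a simulation run returning the averaged T30."""
    # Crude stand-in: more total absorption -> shorter reverberation.
    return 0.75 / sum(absorption_coeffs.values())

for iteration in range(20):
    simulated_T30 = simulate(absorption)
    rel_error = (simulated_T30 - measured_T30) / measured_T30
    if abs(rel_error) < jnd_T30:
        print(f"calibrated after {iteration} iterations: T30 = {simulated_T30:.2f} s")
        break
    # Too reverberant -> slightly raise all absorption coefficients, and vice versa.
    factor = 1.05 if rel_error > 0 else 0.95
    absorption = {k: v * factor for k, v in absorption.items()}
else:
    print("did not converge within the allowed iterations")
```

In the actual study the adjustments were applied per material and per octave band, and the comparison was made for all four parameters (EDT, T30, C80, D50), not just the reverberation time.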
Conclusions The decision of Vanvitelli to build a dome with a double elliptical vault was due to structural requirements and scenographic needs, with the cornice running along the vault welcoming music maestros during receptions. In fact, when going up the wide staircase and standing in the vestibule on the first floor, the royal procession could hear the music without seeing where it was coming from. Based on the measurement results and with the help of the architectural acoustic simulations, it was possible to evaluate the spatial distribution of the acoustic parameters that depend on the early-to-late energy distribution. The results indicate that the acoustic parameters vary greatly along the staircase and are relatively more uniform, and generally more favorable, closer to the entrance of the vestibule towards the royal apartments. Overall, this study confirms that the practice of placing the musicians in the vault had a great scenic and acoustic effect, which seems to be in line with the Baroque decoration of the Palace. Thanks to the large volume of the room with the doubled symmetrical staircase and to the abundance of decorated marble surfaces, which increase the reverberation and guarantee sufficient diffusion, this room represents a good example of the 'marvellous world' typical of Baroque culture.
Figure 1. The grand staircase and the double vault of the Royal Palace of Caserta.
Figure 2. Photo of the double vault seen from the grand staircase.
Figure 3. Photo of the double vault taken from the upper level between the two vaults.
Figure 4. Plan of the double vault.
Figure 5. Section of the grand staircase and double vault with the 16 receivers (in black) and the source position (in blue). Please consider that, due to the symmetrical space and, in particular, to the double stairs (Figure 1), the microphone positions on the stairs indicate two points of measure.
Figure 6. Average measured acoustic parameters among the 15 receiver positions and relative standard deviations.
Figure 7. Measured acoustic parameters in the different receiver positions for different frequency bands.
Figure 8. Spatial distribution of reverberation time (T30) in the octave band of 1 kHz.
Figure 9. Spatial distribution of clarity (C80) in the octave band of 1 kHz.
Figure 10. Spatial distribution of definition (D50) in the octave band of 1 kHz.
7,550.2
2017-03-02T00:00:00.000
[ "Physics" ]
Revocable Anonymisation in Video Surveillance: A “Digital Cloak of Invisibility” . Video surveillance is an omnipresent phenomenon in today’s metropolitan life. Mainly intended to solve crimes, to prevent them by realtime-monitoring or simply as a deterrent, video surveillance has also become interesting in economical contexts; e.g. to create customer pro-files and analyse patterns of their shopping behaviour. The extensive use of video surveillance is challenged by legal claims and societal norms like not putting everybody under generalised suspicion or not recording people without their consent. In this work we propose a technological so-lution to balance the positive and negative effects of video surveillance. With automatic image recognition algorithms on the rise, we suggest to use that technology to not just automatically identify people but blacken their images. This blackening is done with a cryptographic procedure allowing to revoke it with an appropriate key. Many of the legal and ethical objections to video surveillance could thereby be accommodated. In commercial scenarios, the operator of a customer profiling program could offer enticements for voluntarily renouncing one’s anonymity. Customers could e.g. wear a small infrared LED to signal their agreement to being tracked. After explaining the implementation details, this work outlines a multidisciplinary discussion incorporating an economic, ethical and legal viewpoint. Introduction Today, life in urban areas is hardly imaginable without omnipresent video surveillance (VS).Screens showing the recorded images are installed in prominent locations to remind us that we are constantly being watched or even recorded.Ideally, this makes us feel more secure; but it might also reveal intimate details about our lives and make us change our behaviour in subtle yet profound ways, thereby threatening our rights to political liberty and personal self-determination. VS can of course help to convict a criminal, preemptively detect imminent danger, or chase a fleeing suspect more effectively.It is also reported that the visible installation of cameras does in fact reduce crime in that respective area.Thus, from a crime fighter's point of view there are clearly advantages of having as much VS as possible.With more installed cameras the monitoring and evaluation of recorded data becomes insurmountable for human operators.Therefore, efforts are made towards automatising the video analysis through computer algorithms -as it was e.g. the goal of the infamous EU project INDECT. But not only crime fighters are interested in VS.In an emerging trend, VS has also come into the focus of commercial applications.Similar to internet users being tracked and analysed, people can be automatically identified and tracked on video recordings.Thus, e.g. a supermarket can track the paths customers take through the aisles, analyse where they stop or which advertisements catch their attention.The resulting data allows to optimise the arrangement of products or send customised promotions or discount offers based on the costumer's behaviour.Again, there are obvious advantages of VS in these scenarios: both for the shop owner (optimisation of products and advertising) and for the customers (individual discounts and a more seamless shopping experience). 
However, in spite of legal norms governing the allowable use of VS, the public debate on its drawbacks or even threats to an open, free society is not ceasing. A most prominent example is the so-called 'Big Brother Award', an annual ironic award by civil-rights activists to persons or organisations who, in their view, have greatly contributed to shifting society towards George Orwell's dystopia from '1984'. Among the German awardees, there were particularly VS-related cases in the years 2000 (German Railways, surveillance of station platforms), 2004 (Lidl supermarkets, surveillance of employees) and 2013 (University of Paderborn, surveillance of lecture halls and computer labs). In this work, we discuss a possible reconciliation between these concerns about already present VS and its advantages for both crime fighting and economic endeavours. The 'Digital Cloak of Invisibility' (DCI) is a generally applicable concept for anonymising personal information in data collected at large scale [4] that is here applied to VS. This anonymisation, however, can be partially revoked if necessary. While there have been several studies about automatic privacy and intimacy preservation in VS, and even some about revocable anonymisation, we first suggest an alternative method to achieve revocable anonymisation and - to the best of our knowledge for the first time - present a scenario of how such a technology could be implemented in a modern society. In contrast to purely technical approaches, this work's main contribution is the multidisciplinary discussion of VS with revocable anonymisation within its societal (legal, economic and ethical) context. Section 2 outlines the computer-science details of the DCI, preparing the ground for a multidisciplinary discussion of the approach. Section 3 evaluates VS and the DCI from a legal perspective, exemplarily taking into account the German legislation. In order to provide a more holistic discussion of the societal implications of VS and the DCI, Section 4 discusses the DCI from an economic point of view, while Section 5 provides an ethical analysis of VS and how the respective concerns are met by the DCI. To preserve the scope of this paper, these viewpoints are kept very brief. The intent is to initiate a debate, whose main points and future directions are summarized in the final section. Technological implementation The problem of compromised privacy in VS has been addressed by several works; e.g. [10], [13], [17], [20], [24,25]. Most approaches automatically detect and irreversibly obfuscate privacy-critical image regions like human silhouettes, faces or car licence plates. Some approaches like [7,8,9] have also suggested methods for revocable obfuscation. In contrast to these purely technical approaches, this work's main contribution is the multidisciplinary discussion of VS with revocable anonymisation within its societal (legal, economic and ethical) context. We therefore draft a rather simple yet efficient way of achieving revocable image obfuscation, namely to XOR the pixel values of the detected regions with a pseudo-random cipher stream generated from a secret key seed. This scheme is sufficient to demonstrate the relevant concepts of embedding it into the societal context, but it could also be interchanged for any other, possibly more sophisticated, reversible obfuscation technique.
As more and more of the recorded video footage is going to be analysed automatically by pattern recognition algorithms, we propose to use the same algorithms not just to identify persons but to blacken them before the footage is stored or viewed by a human. This blackening is done by a cryptographic method that allows the original image to be restored with a key. This key is securely stored in the camera and by a publicly accepted key keeper authority (KKA). Whenever video footage is required to identify criminal suspects after an event, the crime fighter requests the required key from the KKA. For cases of imminent danger, a "break glass" functionality can immediately grant a key, leaving a log entry for the KKA to double-check. For commercial applications, the DCI allows shop owners to track filmed customers - however, only those who have agreed to being tracked, similar to the loyalty program 'Payback', where people agree to their shopping receipts being recorded and analysed in exchange for monetary compensation. ('Payback' was incidentally awarded a Big Brother Award in 2000.) People who agree to being tracked could signify their approval e.g. by wearing an inconspicuous tag on their clothes or by inserting a personal smartcard into their shopping cart. As with classical VS, the recordings are made by a camera we assume to be digital, i.e. the video image is processed by digital circuits before the data is digitally transmitted out of the camera - an assumption that is valid for many VS cameras today and will in the future be true for all VS. The DCI extends such a camera with additional internal circuitry that performs a certain post-processing on the video data before it leaves the camera's hardware. The workflow is depicted in Figure 1. First, an image recognition algorithm identifies all persons in each video frame. Perfectly reliable implementations of such algorithms are still in their early stages ([3], [6], [11], [28]), but the future will most certainly see them running reliably on embedded systems like those of digital cameras. Each DCI-enhanced camera has a unique cryptographic key securely embedded in its hardware, called the Camera Master-Key (CMK). For each video frame and image region showing a person, an individual Sub-Key (SK) is created by feeding the CMK, together with the frame number and region coordinates, into a hash function [26]. Strong hash functions have the property that the input cannot be derived from the output; thus, it is not possible to derive the CMK from the SK - even if the used frame number and region coordinates are known. From each Sub-Key, a pseudo-random bit sequence is generated: the SKs are used to produce a pseudo-random cipher stream of bits that is XORed with the pixel data of the corresponding region in the original video frame. In the resulting video, this region appears obscured (in fact the pixels have random colours). Pseudo-random means that the generated bits look random, but the sequence depends solely on the respective SK, so that it can always be reproduced. The XOR function (⊕) is reversible: (data ⊕ stream) ⊕ stream = data. Thus, the blackening of a region in a frame can be undone when the respective SK is known. This is applied in the DCI deanonymisation scheme shown in Figure 2. If a crime is recorded, the crime fighter makes a request to the KKA, which verifies its legitimacy and then grants the SKs for the requested frames and image regions. Only the suspect persons in a recording can be deanonymised, while all others remain anonymous.
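A minimal sketch of this scheme, using only the Python standard library, is given below. The primitive choices here (HMAC-SHA-256 for deriving sub-keys and SHA-256 in counter mode for the cipher stream) are illustrative assumptions rather than the primitives specified by the implementation, and the pixel data is a placeholder byte string.

```python
# Hedged sketch of the DCI blackening scheme: sub-key derivation from the
# camera master key, keystream generation, and reversible XOR obfuscation.
# HMAC-SHA-256 and SHA-256-in-counter-mode are illustrative primitive choices.
import hashlib
import hmac
import os

CMK = os.urandom(32)  # camera master key, stored only in the camera and at the KKA

def derive_sub_key(cmk: bytes, frame_no: int, region: tuple) -> bytes:
    """One sub-key per (frame, region); the CMK cannot be recovered from it."""
    msg = f"frame={frame_no};region={region}".encode()
    return hmac.new(cmk, msg, hashlib.sha256).digest()

def keystream(sub_key: bytes, length: int) -> bytes:
    """Pseudo-random cipher stream: hash the sub-key with a running counter."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(sub_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_region(pixels: bytes, sub_key: bytes) -> bytes:
    """XOR is its own inverse: applying it twice with the same stream
    restores the original pixel data."""
    ks = keystream(sub_key, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))

# Placeholder "pixel data" for one detected person region in frame 1024.
region_pixels = bytes(range(256)) * 4
sk = derive_sub_key(CMK, frame_no=1024, region=(120, 80, 64, 128))

blackened = xor_region(region_pixels, sk)   # what is stored / transmitted
restored = xor_region(blackened, sk)        # after the KKA releases the SK
assert restored == region_pixels
print("region restored correctly:", restored == region_pixels)
```

A production system would presumably replace the counter-mode hash with a vetted stream cipher and bind the keystream to the full region geometry, but the reversibility argument is the same.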
To cater for cases of imminent danger, a "break glass" functionality is implemented such that a sequence of SKs can be requested remotely (e.g. via internet) and is automatically granted.This, however, leaves a log entry with the KKA such that the request's legitimacy and whether the "break glass" was justified can be verified afterwards.In a first proof of concept, we implemented a DCI camera as an opt-out system instead of opt-in.I.e.instead of anonymising everybody by default except those who opt-in, nobody is anonymised except those who opt-out (conceptually similar to [24]).This was done to firstly abstract from the person-identifying image recognition.We designed an infrared LED beacon that is picked up by the camera to subsequently anonymise the region around this beacon.Figure 3 shows the practical results.The anonymisation is done with the cryptographic scheme as described above.With sufficiently reliable person-identifying algorithms, the system can easily be transformed into the DCI opt-in variant. Legal considerations In 1995, the European Union issued the Data Protection Directive (95/46/EC) to be implemented by all member states.In this section, we exemplarily focus on the German implementation of the directive in its Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG).The legal basis regulating the use of VS ( § 6b BDSG [1]) only allows it under specific circumstances.The VS has to be both sufficient to reach the intended purpose and necessary; i.e. there has to be no less severe economically reasonable alternative [12, paragraph 236].Furthermore, a weighing of interests must be fulfilled between the intended VS purpose and the constitutional personal rights of the affected (Article 2 paragraph 1 of the Basic Law for Germany), i.e. in particular the right to one's own image and the right to informational self-determination [5, paragraph 22]. The sufficiency of VS is mostly given, insofar as it is assumed to fulfil its typical purposes: crime prevention, detection and deterrence.But also the necessity is generally easy to prove with the argument that high personnel costs are hardly an economically reasonable alternative to the comparably cheap VS equipment [5, paragraph 21].The weighing of interests is mostly decided in favour of the intended purpose, as § 6b BDSG allows VS to be used for exercising one's right to domestic authority, or -even more generally -to exercise any justified interest for a concretely defined purpose; and justifications -like the state's obligation to avert danger and prosecute crime or the individual's interest in protection of one's property -mostly outweigh the mentioned personal rights of the VS affected, as long as the VS is not done covertly but clearly signified.Furthermore, recordings must not be stored longer than required to fulfil the respective purpose, which of course can allow for rather long time spans depending on the purpose interpretation. Evaluating the necessity of classical VS versus the DCI, it can be asserted that the DCI is in fact a less severe alternative.As all people are anonymised by default, there is no infringement of personal rights any more.These benefits should outweigh the slightly higher costs in most cases, such that the DCI can also be considered an economically reasonable alternative. 
Whether it is also sufficient in the same way as classical VS requires a more thorough analysis.The foremost purpose of VS is to identify recorded suspects in hindsight, which is definitely also provided by the DCI.If recordings are to be analysed in a typically already protracted criminal proceeding, the relatively short delay of requesting the SKs from the KKA does no harm.For emergencies, there is the "break glass" functionality to immediately get a set of SKs.Another purpose of VS is the deterrent effect, which is also catered for by the DCI.Because people will be aware that they will be deanonymised if the crime fighter convinces the KKA of the crime having taken place.This will in most cases be possible by pointing out the respective scenes in the anonymised recordings, because most suspicious actions are still recognisable, even if the "protagonists" are obscured.This is also the reason why DCI-enhanced VS is just as suitable for real-time monitoring.Turmoils or robberies, for example, show typical patterns of movement that are easily spotted irrespective of whether the persons are obscured or not.It can thus be concluded that the sufficiency is fulfilled. In economical scenarios, where customers renounce their anonymisation in a loyalty program (cf.Section 4), the DCI is legally rather unproblematic.The operator simply has to comply with § 6b BDSG by signifying the use of VS and to let the participating customers sign his general terms and conditions (cf.§ 4 and § 28 BDSG). Of course, this exemplary discussion of the German legal context is not exhaustive and other legal contexts could be included.Furthermore, technical concepts like the DCI have hardly been taken into account in the legal practice.Thus, in addition to the following economic consideration, Section 5 extends the limited normative discussion presented above by including an ethical analysis.This will allow us to look more broadly at normative issues and conflicts introduced by VS and how the DCI can address these in a constructive way. Economic Applications DCI systems can not only be utilised for protecting individuals' privacy in the context of VS-based crime prevention and detection.They also allow for conducting economically motivated video surveillance in a privacy-aware manner. In the following, the potential of the here presented system in the context of customer analysis for marketing in brick and mortar stores is discussed. Store owners have long used video surveillance systems not only to deter shoplifters but also for being able to present evidence in case of incidents within their premises.However, video surveillance systems are also suited to precisely track customers' movement and even their direction of view [18,19].This allows shop owners to gain valuable insights that can be used for marketing, e.g. for shop design or advertising campaigns.In Germany, however, customers' high privacy concerns are an impediment to the adoption and usage of such analysis methods.The here presented DCI system has the potential of addressing these concerns on the one hand and to guarantee that only the movements and behaviour of customers who have consented are tracked, on the other hand. 
The DCI system can be used analogously and complementary to the currently popular loyalty cards in order to restrict tracking and behavioural analysis within the store to customers who have consented on the one hand, and, on the other hand, to reduce the privacy concerns of customers who have not consented.Two options to harness this potential exist.In its current state of implementation, the presented DCI system can be utilised as an easy opt-out mechanism.Through wearing a respective signal emitter, e.g. on their clothes or on their shopping cart, customers can opt-out of movement tracking and behavioural analysis.However, this application would be in stark contrast to the "privacy-by-design" requirement as laid down in the current draft of the new European General Data Protection Regulation [14,Art. 23].Still, the DCI can also serve as an opt-in mechanism.Customers who consent to being tracked within the store can signal this through signal emitters on their clothes or shopping carts.For example, infrared LEDs could be used in this scenario, emitting light signals that correspond to a customer account or profile.This would also allow for combing the DCI with existing loyalty programs.In that scenario, the VS system would have to encrypt the whole video by default except for regions in which a respective signal is detected.A problem to be solved in the commercial scenario is the selection of an appropriate KKA.Further, the presented system has to be extended in order to prevent "bycatch".In case two customers, one who consented to tracking and analysis and one who did not, are standing close to each other in the store, the will of the customer who did not consent should be prioritised and both customers are anonymised. Ethical impact assessment Due to the complex nature of both society and technology development, an ethical impact assessment should not be considered an accurate prediction of the future.Rather, it can be seen as a projection of intended and unintended consequences of technology use and of the potential moral risks and chances.Especially with regard to unintended consequences (side effects) of using new technologies, legal frameworks often lag behind and do not address emerging conflicts adequately.Ethical impact assessment then sketches plausible scenarios and outcomes that can be used as a normative basis for deciding how to deal with technological change in society.In many cases, as done here with regard to the DCI, this normative basis can then be used constructively in the development process.In this way, at least some of the foreseeable moral risks -even if they are not yet fully covered by the legal framework -can be addressed by technological means [22]. 
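The bycatch rule just described amounts to a simple precedence check between consenting and non-consenting regions. A minimal sketch is given below; representing detected persons as bounding-box rectangles is our own simplification and not part of the presented system.

```python
# Minimal sketch of the "bycatch" rule: if the region of a consenting (tracked)
# customer overlaps the region of a non-consenting customer, the non-consenting
# customer's will is prioritised and both regions are anonymised.
def overlaps(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def split_regions(consenting, non_consenting):
    tracked, anonymised = [], list(non_consenting)
    for c in consenting:
        if any(overlaps(c, n) for n in non_consenting):
            anonymised.append(c)   # bycatch: anonymise the consenting customer as well
        else:
            tracked.append(c)
    return tracked, anonymised

# example: one consenting customer standing right next to a non-consenting one
print(split_regions([(0, 0, 100, 200)], [(80, 0, 180, 200)]))
```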
If we take a closer look at the unintended ethical impact of implementing VS technologies in public places, two argumentative perspectives can be differentiated: (1) the unintended impact can affect specifiable individuals, especially with regard to their fundamental rights and liberties; or (2) the unintended impact can affect the character of a society as a whole, especially by contributing to developments that make it more restrictive. The latter perspective becomes especially important in cases where the impact for most specifiable individuals is comparably small or mostly indirect, but where, in sum, we can still foresee a considerable impact on the openness of society. Examples of this are the subtle but constant expansion of security technologies over a longer period of time (sometimes called the 'boiling frog argument' [27]) or the ex post expansion of the purpose of data collection ('mission creep argument' [21]).
In the remainder of this section, we present a brief assessment of the ethical impact of VS with and without the use of the DCI. This is done by means of four metaphors [16] that are commonly invoked by critics in the relevant public and scientific debates in Germany. Regarding the ethical impact on specifiable individuals, we first look at the commonly perceived risk that the private lives of customers or citizens become "transparent" to commercial and governmental actors (gläserner Kunde/Bürger). Afterwards we discuss the fear that persons under VS are subject to what is called a "generalised suspicion" (Generalverdacht). Regarding the ethical impact on society as a whole, we look at the metaphors of an "Orwellian" and a "Kafkaesque surveillance society".
Individual centric perspective
In the discussion about spatially limited VS in publicly accessible places, the metaphor of the transparent customer predominantly denotes the fear that commercial actors may collect and process data about their customers to an extent that they can infer facts about their wishes, intentions and living situations. Such information often includes private or even intimate facts that are widely considered to be worthy of protection based on cultural norms of modesty (e.g. regarding sexuality or illness) or based on the societal fear that some individuals may be affected disproportionately (e.g. due to their financial or social situation). Furthermore, since intimacy depends on selective sharing of information, the control over information about oneself has been recognised as a necessary precondition to establish relationships with varying levels of intimacy as well as for the development of our personality free from the impingement of others [15]. In contrast to this, the metaphor of the transparent citizen predominantly denotes the fear of far-reaching data collection and processing on behalf of state actors. Here, the reason for considering certain information private and worthy of protection is founded additionally in the fear of governmental overreach and an overly powerful state. In democracies, therefore, state-sponsored VS must always be viewed in relation to rights and liberties that defend the individual against the state. In comparison to commercial actors, state actors are therefore usually subject to stricter checks of proportionality and more often require the implicit or explicit consent of the affected individuals.
In both cases, however, such privacy intrusions are often considered justifiable (especially in a legal sense) if they allow the protection of other societal values -for example if there is the suspicion of criminal activities.Here, the metaphor of the generalised suspicion expresses the fear that security measures such as VS may be used indiscriminately so that all individuals may be subject to privacy intrusions on the basis that some few individuals could be said to have criminal intentions. In classic forms of VS, individuals can generally be identified either manually or automatically by making use of biometric facial recognition or other optical criteria.In addition to that, the tracking of their movements throughout the area under surveillance can allow to establish buying patterns, to infer personal intentions or to reveal intimate aspects of their living situations.How much time did this customer spend in front of the shelves with the condoms and how often does she come to buy liquor?Does this person commonly use the public transport system during working hours?How long does that man talk to the preacher in the public square, how long to the people from the election campaign?Especially techniques of long-term storage and automated analysis of video data can present a severe infringement of individual privacy and the free development of personality that goes far beyond what people have to assume anyway when they move in public places. By obscuring the information that allows the identification of individuals in the video images, this moral risk can be mitigated effectively as the recorded data cannot be directly related to specific individuals.Depending on the implementation of the KKA, such an intrusion is only allowed in cases where the reason for it is checked and considered legitimate by an independent instancefor example to collect evidence in case of shop lifting or assault.Furthermore, even in such cases, only specific pieces of information can be revealed, such as data relating to concrete individuals during a specific time frame.At the same time, the intended benefit of the VS -e.g.police officers watching a public area to react quickly in case of assaults or a shop detective watching the customers to spot shop lifters -can still be achieved.Both scenarios show that the usage of a DCI system can protect the privacy and intimacy of customers or citizens much better than classic forms of VS.Furthermore, they allow restricting legitimate intrusions to the necessary information. 
Society centric perspective The term surveillance society is used to express concerns about the prevalence of surveillance measures of commercial or state actors throughout society.In the context of VS, the term does therefore not refer to singular, isolated instances or strictly limited locations, but rather to the gradual proliferation of this measure and the threat that those systems could be networked bit by bit and the recorded information merged to a large pool of surveillance data.In this context, the metaphor of the Orwellian surveillance society denotes the concern that the proliferation of surveillance measures may lead to a situation in which we are almost constantly monitored and where we can never be sure how our behaviour will be interpreted or which negative consequences might ensue later on.From a democratic point of view, this implies the risk that the realisation of some of our rights and liberties may fall victim to a form of self-control -for example because we fear a more negative credit rating or being classified as a high risk airline passenger.This can be seen as a pressure towards a certain standard of normalcy that limits the open character of our society to a considerable extent. In addition to that, the metaphor of the Kafkaesque surveillance society expresses the fear that we may loose the de facto possibility of achieving an effective remedy in case we suffer illegitimate negative consequences.This is especially relevant in cases where it is highly opaque why the relevant decisions were taken, based on which information and on which criteria and how those decisions can be disputed.With regard to VS measures, this risk becomes material especially in those cases where recorded data is handed over or even sold to third parties without the consent of affected persons, since this would facilitate the consolidation and misuse of data from different sources -for example in order to allow pattern recognition for the detection of potential criminals or insurance risks. In classic forms of VS, it is very difficult to effectively limit the circulation and processing of the recorded data.Even in cases where signs inform customers or citizens explicitly about the VS, it is almost impossible to foresee what information can be inferred from the recorded data and how it could be used in the future.Some of these risks can be mitigated to a certain extent when surveillance actors promise to restrict themselves in the use of such data -but it is unclear what would be the incentive for commercial actors to do so and how misuse could be sanctioned effectively.The use of a DCI system by commercial or governmental actors, on the other hand, allows the use of the KKA as an independent party to effectively mitigate the risk of circulation, consolidation from different sources and misuse of the recorded data.This is especially true if only those pieces of data are revealed that are strictly necessary for a certain legitimate purpose.Furthermore, for commercial surveillance actors, it could be an incentive for the use of a DCI system if they can advertise their use of higher standards of protection of their customers' privacy.For both the commercial and governmental case, we can thus conclude that a DCI system allows to also effectively mitigate the society centred ethical risks of circulation of surveillance data, of data consolidation from different sources and of misuse of that data. 
Discussion and Conclusion
The advantages of a DCI-enhanced VS system over the kind of VS that is already massively in use today have been demonstrated in each perspective of our multidisciplinary discussion. Still, there might be remaining criticisms towards the DCI that shall be addressed in the following.
A concern raised by someone generally opposing VS could be that a DCI technology would merely be a fig leaf for VS, leading to a higher rate of social acceptance followed by an implementation of even more VS systems. That concern would be justified if VS were in fact used only sparsely today. Reality, however, shows that we are already living in a society where VS is implemented on a large scale, mostly accepted or not pondered over by large parts of the populace. Replacing it with DCI systems will be given just as little thought by those, but people concerned about VS today will experience a real improvement.
Another such concern could be about the security of the DCI. The whole DCI system relies on the CMK's secure storage, both in the respective camera and in the KKA's central storage. If the CMK is obtained, e.g. by a hacker attack, all recordings made with the respective camera could be deanonymised. As a precaution, it is therefore advisable to keep the same strict constraints on how long recordings may be stored that are in place today for classical VS. Then, even if the DCI security should get compromised, it would only be as "bad" as it is right now without DCI. To keep a CMK safe in the hardware of a camera, there is a manifold of techniques from the field of hardware security and trusted hardware, e.g. tamper-sensing meshes [2] or Physically Unclonable Functions (PUFs) [23]. Equivalently, the KKA's storage has to be secured with state-of-the-art security measures.
Another question is: "Who is the KKA?" The trust of the populace in the KKA's integrity is essential. Thus, one could consider a democratic board unsuspicious of collaborating with the respective camera operator. We presume that among the civil-rights activists now fighting against VS, many would volunteer to be part of a KKA and that they would be trusted, particularly by those sceptical of VS. Another possibility could be a judge or ombudsman deciding when a key request is granted. In any case, transparency and accountability of the KKA's decisions and procedures are paramount for the creation of trust.
One also has to be aware that DCI-enhanced VS does not provide perfect anonymity. An obscured person might still be identified by a diligent analyst, e.g. by the fact that she was walking a dog or because he was - although anonymised - observed leaving his residence. With an extension of the DCI algorithm these sources of identification could be further hampered, but one should not harbour illusions about the limits of anonymisation.
A last concern could come from advocates of classical VS, namely that a DCI is more expensive than classical VS. This is of course true. The extra hardware in the cameras and the administrative effort to regulate the exchange of requests and keys between the crime-fighting instances and the KKA do not come for free. The question we have to ask ourselves as a society is whether a decrease of intrusion into our privacy and intimacy, as well as a decrease of infringements of our civil liberties, would be worth that extra cost.
Fig. 1. The schematic concept of a DCI camera system.
Fig. 2. Deanonymisation is only possible with the SKs granted by the KKA.
Fig. 3. A first proof-of-concept implementation of the DCI as opt-out: only regions surrounding a detected infrared beacon are anonymised.
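As a complement to Fig. 3, the following sketch illustrates the opt-out behaviour in code. It is our own illustration rather than the authors' implementation: the HMAC-based derivation of sequence keys from the camera master key and the use of AES-GCM are assumptions made for the sake of the example.

```python
# Illustrative sketch (not the authors' implementation): anonymise the region
# around a detected infrared beacon by blacking it out in the published stream
# and encrypting the original pixels under a sequence key (SK) derived from the
# camera master key (CMK), so that only a holder of the SK granted by the KKA
# can restore the region.
import hmac, hashlib, os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CMK = os.urandom(32)  # camera master key, kept in the camera's secure hardware

def sequence_key(cmk: bytes, seq_id: int) -> bytes:
    """Derive the per-sequence key SK_i from the CMK (assumed derivation scheme)."""
    return hmac.new(cmk, f"sequence-{seq_id}".encode(), hashlib.sha256).digest()

def detect_beacon(frame: np.ndarray, threshold: int = 250, pad: int = 40):
    """Bounding box around the brightest (infrared beacon) pixels, or None."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + pad, frame.shape[1]), min(int(ys.max()) + pad, frame.shape[0]))

def anonymise(frame: np.ndarray, seq_id: int):
    box = detect_beacon(frame)
    if box is None:
        return frame, None
    x0, y0, x1, y1 = box
    nonce = os.urandom(12)
    token = AESGCM(sequence_key(CMK, seq_id)).encrypt(nonce, frame[y0:y1, x0:x1].tobytes(), None)
    out = frame.copy()
    out[y0:y1, x0:x1] = 0                    # blacked out in the published stream
    return out, (box, nonce, token)          # ciphertext kept for authorised deanonymisation
```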
7,043.4
2016-09-07T00:00:00.000
[ "Computer Science" ]
Hepatoprotective Triterpene Saponins from the Roots of Glycyrrhiza inflata Two novel oleanane-type triterpene saponins, licorice-saponin P2 (1) and licorice-saponin Q2 (3), together with nine known compounds 2, 4–11, have been isolated from the water extract of the roots of Glycyrrhiza inflata. The structures of these compounds were elucidated on the basis of spectroscopic analysis, including 2D-NMR experiments (1H–1H COSY, HSQC, HMBC and ROESY). In in vitro assays, compounds 2–4, 6 and 11 showed significant hepatoprotective activities by lowering the ALT and AST levels in primary rat hepatocytes injured by D-galactosamine (D-GalN). In addition, compounds 2–4, 6, 7 and 11 were found to inhibit the activity of PLA2 with IC50 values of 6.9 μM, 3.6 μM, 16.9 μM, 27.1 μM, 32.2 μM and 9.3 μM, respectively, which might be involved in the regulation of the hepatoprotective activities observed. Introduction The genus Glycyrrhiza consists of about 30 species with a nearly global distribution, of which 18 species are found in China. Among them, three species named Glycyrrhiza uralensis, Glycyrrhiza glabra and Glycyrrhiza inflata, have been used as traditional Chinese medicine for the treatment of hepatitis, spasmodic cough, gastric ulcer, and so on. Phytochemical studies have showed that triterpenoid saponins and flavonoids were the two of major kinds of active substances of Glycyrrhiza, which have a variety of pharmacological activities, including hepatoprotective [1,2], antiviral [3], anti-inflammatory [4] and antioxidative [5] effects. Recently, we reported the chemical constituents of G. uralensis and G. glabra, as well as their cytotoxic or neuraminidase bioactivities [6,7]. As part of our ongoing research on the genus Glycyrrhiza, an extensive phytochemical investigation on the roots of G. inflata has now led to the isolation of two new oleanane-type saponins 1, 3 and nine known saponins 2, 4-11. All compounds were screened for their protective activities against D-galactosamine (D-GlaN) induced toxicity in vitro. In addition, the inhibitory activities on phospholipase A2 (PLA2) were presented. Herein, we report the isolation and structural elucidation of these saponins, along with the investigation of their protective activities. Results and Discussion The total saponin fraction of G. inflata was prepared by co-application of polyamide and macroporous resin column chromatography [7]. The resulting extract was subjected to ODS column chromatography and preparative HPLC to afford two new oleanane-type saponins 1, 3 together with nine known ones 2, 4-11. Their structures were shown in Figure 1. Structural Determination Compound 1 was obtained as a white amorphous powder and showed a protonated peaks in the low-resolution positive HR-ESI-MS spectrum at m/z 861. 3929 , five methines (including one oxygenated methine and one unsaturated methine), and nine quaternary carbons (including one carbonyl quaternary carbon, one unsaturated quaternary carbon and one carboxyl carbon). Therefore, compound 1 was considered to be an oleanane-type triterpene glucuronide bearing a 12(13)-double bond and a keto group at C-11. In the HMBC spectrum, correlations of δH 5.03 (H-1′) to δC 91.3 (C-3) and δH 5.38 (H-1′′) to δC 85.5 (C-2′) could be observed. In addition, the correlations in the HMBC spectrum from H-1′ at δH 5.03, H-23 at δH 1.27 and H-24 at δH 1.08 to C-3 at δC 91.3 helped in assigning one oxygenated methine at C-3. 
Detailed analysis of the above 1D-NMR data and 2D-NMR correlations indicated that 1 is an oleanane-type saponin derivative and is structurally related to the known compound licorice-saponin G2 (4). The comparison of the NMR data of 1 with those of 4 suggested that the hydroxyl group at C-24 in 4 was transposed to C-29 in 1. The HMBC correlations from δH 3.98, 4.06 (H-29) to δC 39.1 (C-19) and δC 180.2 (C-30) and the 1 H-1 H COSY correlations between the proton signal at δH 2.49 (H-18) and δH 1.98, 2.24 (H-19) confirmed that hydroxyl group was connected to C-29 in compound 1 ( Figure 2). Hepatoprotective Activity All the separated compounds were assessed for their hepatoprotective activities against the increase of AST and LDH levels in primary rat hepatocytes injured by D-GalN. The maximum nontoxic concentrations of tested compounds on primary rat hepatocytes were in the range of 120-240 μM. A set of cells in culture medium treated with D-GalN was used as the model group, and in comparison to the model group, macedonoside A (2), licorice-saponin Q2 (3), licorice-saponin G2 (4), 22β-acetoxy-glycyrrhizin (6) and glycyrrhizin (11) notably lowered AST (10.3-16.5 U·L −1 ) and LDH (200.7-242.8 U·L −1 ) in the range of concentration 30-120 μM. (Table 2). Comparing the activities of these saponins, compound 5 and 7 was shown to have significantly weaker hepatoprotective activities than the compound 2 and 11 owing to presence of a lactone ring at position 22(30). Compound 11 showed stronger activity than 1. That might be because an additional CH2OH group is preferable to improve the steric hindrance, thus resulting in a decrease in the bonding capacity with active targets. Interestingly, compound 3 displayed higher activity than compound 4. The reason might be that compound 3 with a 18α-H group was found to be favorable for the anti-liver injury activity. On the basis of the above analysis, it seemed that a carboxyl residue at position 29 or 30 was possibly the necessary group for hepatoprotective activity. Enzyme Inhibition Activity As a regulator associated with the stability of the liver cell membrane, phospholipase A2 (PLA2) is a promising target for hepatoprotective drug development [16]. To examine whether the compounds inhibit activities on PLA2, the enzyme inhibitory potency of all isolated compounds was conducted and the results were summarized in Table 3. Among these, two saponins (compounds 2 and 3) and glycyrrhizin (11) exhibited efficient inhibitory activity with IC50 value of 6.9 μM, 3.6 μM and 9.3 μM, respectively. Compounds 4, 6 and 7 showed moderate inhibitory activities with IC50 values of 16.9 μM, 27.1 μM and 32.2 μM, respectively. What was noteworthy, is that analysis of the two assays of 1-11 showed that there was good relationship between PLA2 inhibitory activities and hepatoprotective effects, leading to the hypothesis that inhibition of PLA2 was one of the possible mechanisms of the hepatoprotective effect of licorice saponins. Table 3. Inhibitory activities of isolated saponins on PLA2. Material The roots of Glycyrrhiza inflata were collected in Weli County, Xinjiang Uygur Autonomous Region, China, October 2013. A voucher sample (No. 20131015) was preserved in Nanjing University of Chinese Medicine, and identified by Prof. Qi-Nan Wu. Extraction and Isolation The roots of G. 
inflata (dry weight, 25 kg) were exhaustively extracted two times with boiling water ( Acid Hydrolysis The configuration of the sugars of compounds 1 and 3 was determined by acid hydrolysis and GC experiments based of the literature procedure [6,9]. The specific steps were as follows: a solution of compounds 1-3 (1.0 mg each) in 1 N HCl (1 mL) was stirred at 90 °C for 2 h. After cooling, the solution was evaporated under a stream of N2. Anhydrous pyridine solutions (0.1 mL) of each residue and L-cysteine methyl ester hydrochloride (0.06 N) were mixed and warmed at 60 °C for 1 h. The trimethylsilylation reagent trimethylsilylimidazole (0.15 mL) was added, followed by warming at 60 °C for another 30 min. After drying the solution, the residue was partitioned between H2O and CH2Cl2 (1 mL, 1:1 v/v). The CH2Cl2 layer was analyzed by GC/MS. The peaks of authentic sample of D-glucuronic acid after treatment in the same way were detected at 14.23 min. The final result was to compare the retention times of monosaccharide derivatives with standard sample. The absolute configuration of sugar was confirmed to be D-glucuronic acid (D-glucuronic acid for compound 1 with retention time 14.21 min; D-glucuronic acid for compound 3 with retention time 14.22 min). Cell Assay Isolated rat hepatocytes were prepared from male Wistar rats by a collagenase perfusion technique as described previously [17]. The D-GalN concentration used for cell culture treatment was previously determined according to a modification of the method of Morikawa et al. [18]. The cultured cells in logarithmic growth phase were made into a single-cell suspension and seeded in 96-well plates (1 × 10 4 cells/well) in the DMEM/F 12 with 2% FBS complete medium for 24 h at 37 °C. Then, the hepatocytes were exposed to 2 mM D-GalN for 2 h to induce hepatotoxocity. The medium with silibin meglumine (as positive drug, purity 95.6%, Hunan Xieli Pharmaceutical Co., Ltd., Zhuzhou, China) and different concentrations of test compounds was mixed in cell medium (final test compounds concentration were 30 μM, 60 μM and 120 μM, respectively), and incubated for 24 h. The obtained reacted supernatant was directly used to detect ALT and AST levels. The control group was a set of cells maintained in culture medium, while the model group was a set of cells maintained in culture medium and treated only with D-GalN. All data are expressed as the mean ± SD of at least three independent experiments as indicated. The test for the paired samples was used to determine statistical difference between parameters. These differences were considered significant for p < 0.05 or 0.01. Assay for Inhibition against PLA2 The PLA2 inhibitory assays of compounds 1-11 and the positive drug diethylenetriaminepentaacetic acid (Purchased from Aladdin, Los Angeles, CA, USA, purity > 98.0%) were carried out according to the literature [19]. First of all, each tube was added with 1 mL fresh substrate buffer solution (pH = 8.2). After that, 50 μL tested compounds at various concentrations were placed at reaction tube and blank tube, respectively. As for control tube, 50 μL deionized water was instead. Then each tube incubated at 40 °C for 10 minutes. The reaction tube and blank tube were followed by the treatment with PLA2 enzyme (5 μL) at the concentration of 5 μg/mL. Before put them into the incubator at the temperature of 40 °C to react 30 minutes, the content of the tube should be fully blending. 
The optical density value of each tube was then read in an ELISA plate reader using a wavelength of 495 nm. The IC50 values were
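As a general illustration, not taken from this paper, of how IC50 values are typically derived from optical-density readings at a series of inhibitor concentrations, one can fit a four-parameter logistic dose-response curve; the concentrations and inhibition percentages below are hypothetical.

```python
# Hedged sketch: estimate an IC50 by fitting a four-parameter logistic model to
# percent-inhibition values computed from OD(495 nm) readings (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # increases from `bottom` at low concentration to `top` at high concentration
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])         # uM, hypothetical
inhibition = np.array([5.0, 18.0, 42.0, 68.0, 85.0, 93.0])  # %, hypothetical

popt, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 10.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.1f} uM")
```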
2,321.2
2015-04-01T00:00:00.000
[ "Chemistry", "Biology" ]
Higher-order Galilean contractions A Galilean contraction is a way to construct Galilean conformal algebras from a pair of infinite-dimensional conformal algebras, or equivalently, a method for contracting tensor products of vertex algebras. Here, we present a generalisation of the Galilean contraction prescription to allow for inputs of any number of conformal algebras, resulting in new classes of higher-order Galilean conformal algebras. We provide several detailed examples, including infinite hierarchies of higher-order Galilean Virasoro algebras, affine Kac-Moody algebras and the associated Sugawara constructions, and $W_{3}$ algebras. Introduction The Galilean Virasoro algebra appears in studies of asymptotically flat three-dimensional spacetimes, see [1] and references therein. It can be constructed [2,3,4,5,6] as an Inönü-Wigner contraction [7,8,9,10] of a commuting pair of Virasoro algebras. The Galilean W 3 algebra [11,12,13,14] likewise follows by contracting a pair of W 3 algebras [15]. Many other Galilean conformal algebras with extended symmetries have been worked out [16,13,14], including contractions of higher-rank W N algebras [17,18,19,20,21]. These constructions are all based on contractions of pairs of symmetry algebras, or equivalently, contractions of tensor products of two vertex algebras. In this note, we present a generalisation to allow for inputs of any number of symmetry algebras. This solidifies ideas put forward in [14] and gives rise to new infinite hierarchies of higher-order Galilean conformal algebras. In Section 2, we outline the generalised contraction prescription and illustrate it by working out the higher-order Galilean Virasoro and affine Kac-Moody algebras. In Section 3, we construct a Sugawara operator [22] for each Galilean Kac-Moody algebra; its central charge is given by the product of the contraction order and the dimension of the underlying Lie algebra. We also show that the Sugawara construction commutes with the Galilean contraction procedure. In Section 4, we apply the Galilean contractions to the W 3 algebra and thereby obtain an infinite hierarchy of higher-order W 3 algebras. Section 5 contains some concluding remarks. Galilean contractions 2.1 Operator-product algebras and star relations It is often convenient to combine the generators of the symmetry algebra of a conformal field theory into generating fields of the form where ∆ A is the conformal weight of A. We are interested in the corresponding operator-product algebra (OPA) A, where the operator-product expansion (OPE) of the two fields A, B ∈ A is given by Here, if nonzero, [AB] n is a field of conformal weight ∆ A + ∆ B − n. As the nontrivial information of an OPE is stored in the singular terms, one often ignores the non-singular terms, writing The normal ordering of A, B ∈ A is given by (AB) = [AB] 0 . We use I to denote the identity field. An OPA A is said to be conformal if it contains a distinct field T generating a Virasoro subalgebra. In that case, a field A ∈ A is called a scaling field if with structure constants C Q A,B and Compactly, we may represent the OPE (2.5) by the so-called star relation where {Q} represents the sum over n. We refer to [14,23] for more details on the algebraic structure of an OPA. Contraction prescription For N ∈ N, we consider the tensor-product algebra where, for simplicity, A (0) , . . . , A (N −1) are copies of the same OPA A, up to the value of their central parameters (such as central charges). 
For ǫ ∈ C, let where A (j) (respectively c (j) ) denotes the field A ∈ A (j) (respectively the central parameter c), and ω is the principal N th root of unity: ω = e 2πi/N . For ǫ = 0, the map (and similarly for the central parameters) is invertible, with In the special case N = 2, we have ω = −1 and with inverses In [14], these fields are denoted by (2.14) For ǫ = 0, the map (2.10) is singular (unless N = 1), indicating that a new algebraic structure emerges in the limit ǫ → 0, where If the resulting algebra is a well-defined OPA, we refer to it as the N th-order Galilean OPA A N G . In particular, if A is an OPA of Lie type (that is, the underlying algebra of modes is a Lie algebra), then all the corresponding higher-order Galilean contractions are indeed well-defined and readily obtained. This is illustrated by the Virasoro and affine Kac-Moody algebras in Section 2.3. Galilean Virasoro and affine Kac-Moody algebras The Virasoro OPA Vir of central charge c is of Lie type and generated by T , with star relation The Galilean Virasoro algebra of order N , Vir N G , is generated by the fields T 0 , . . . , T N −1 , with central parameters c 0 , . . . , c N −1 and star relations This yields an infinite family of extended Virasoro algebras, Vir 2 G is the familiar Galilean Virasoro algebra [2,3,4,5,6,13,14]. For small N , the Galilean Virasoro algebras Vir N G have recently appeared in [24]. The OPE of two fields in an affine Kac-Moody (or current) algebra g (where the central element K has been replaced by k I, with k the level) is given by where f ab c are structure constants and κ the Killing form of the underlying finite-dimensional Lie algebra g. (As is customary, the summation over the basis label c is not displayed.) The corresponding OPA is of Lie type, and we find that extending to general N the construction of the Takiff algebras considered in [25,26]. We similarly have Generalised Sugawara constructions In [14], we constructed a Sugawara operator for Galilean affine Kac-Moody algebras (of order 2), and showed that this process commutes with the Galilean contraction procedure. We find that a similar result holds for the higher-order Galilean affine Kac-Moody algebras, manifested by the commutativity of the diagram Gal Gal Gal Sug To verify this, separate analyses of the two branches are presented in the following two subsections: The lower branch is considered in Section 3.1; the upper one in Section 3.2. Galilean Sugawara construction For the generators of Vir N G , we make the ansatz where κ ab are elements of the inverse Killing form on g. The task is now to determine the coefficients λ r,s i such that We show below that this is indeed possible. It subsequently follows that where h ∨ is the dual Coxeter number of g, arising through the relation κ bc f ab d f dc e = 2h ∨ δ a e . To satisfy (3.2), the first sum must equal J a i+j (w)/(z − w) 2 while the second sum must vanish. The second-sum constraint implies that The first-sum constraint then requires that For each i, this translates into a lower-triangular system of linear equations: where k ′ 0 = k 0 + N h ∨ , and where the only nonzero component on the right-hand side is a 1 in position i + 1. To solve these systems, we must assume that k N −1 = 0, in which case the problem reduces to inverting the lower-triangular Toeplitz matrix The inverse is itself a lower-triangular Toeplitz matrix with 1's on the diagonal, and we find that the nontrivial matrix elements are given by ip i , p = (p 1 , . . . , p n ). 
(3.12) It follows that so the unique expression for T i of the form (3.1) is given by (3.14) For N = 2, we thus recover the Galilean Sugawara construction obtained in [14], 15) whereas for N = 3, we find the new expressions For each i = 0, . . . , N − 1, the value of the central parameter c i follows from the leading pole in the OPE T 0 (z)T i (w). Using (3.14), we compute suppressing all subleading poles. Since k a = 0 for a ≥ N , this term is zero unless n + i = 0, that is, unless n = i = 0. From κ ab κ ab = dim g, we then obtain the announced result (3.3). Sugawara before Galilean contraction On the individual factors of g ⊗N , the Sugawara construction is given by Changing basis as in (2.9) introduces Now, using that a lower-triangular N × N Toeplitz matrix of the form (3.8) decomposes as where I is the identity matrix and η the N × N matrix we can use the result for A −1 in (3.10)-(3.11) to expand the expression for T i,ǫ in powers of ǫ. We thus find that where b 0,ǫ = 1, b n,ǫ = p∈(N 0 ) n (−1) |p| δ ||p||,n |p|! p 1 ! · · · p n ! a p 1 1,ǫ · · · a pn n,ǫ , n = 1, . . . , N − 1. (3.23) The summation over j yields a factor of the form (3.24) and since N − 1 + i − ℓ − ℓ ′ + n > −N , it follows that the T i,ǫ -coefficients to ǫ m for m negative are 0. The limit ǫ → 0 is therefore well-defined, resulting in whose nonzero terms are seen to match the expression in (3.14). For the central parameters, we evaluate from which it follows that Galilean W 3 algebras Higher-order Galilean contractions can also be applied to W-algebras. Below, we present the results for the W 3 algebra. W 3 algebra The W 3 algebra [15] of central charge c is generated by a Virasoro field T and a primary field W of conformal weight 3, with star relations is quasi-primary. Galilean W 3 algebra of order 2 Following [13,14], we now recall the structure of the second-order Galilean W 3 algebra [11,12,29]. It is generated by the four fields T 0 , T 1 , W 0 , W 1 , with central parameters c 0 and c 1 , and nontrivial star relations and are quasi-primary. We note that a nonzero c 1 can be scaled away by renormalising as T 1 , Infinite hierarchy For any N ∈ N, the algebra W First, it straightforwardly follows that To determine W i * W j in (W 3 ) N G for i + j = 0, . . . , N − 1, we compute the corresponding star relation Recycling the expansion techniques of Section 3, we find that where b n,ǫ (and b n appearing in (4.11) below) are given as in (3.23) (respectively (3.11)), but now based on (4.10) In the limit ǫ → 0, this yields (4.11) Observing that, for every pair r, s ∈ {0, . . . , N − 1} such that r + s ∈ {N − 1, . . . , 2N − 2}, is a quasi-primary field with respect to T 0 , we then conclude that, for i + j ∈ {0, . . . , N − 1}, Using that Λ 2,2 r,s = Λ 2,2 s,r , this can be written as where the last term is present only if N −1+i+j+n 2 is integer. Let us illustrate our findings by summarising the nontrivial star relations for the third-order Galilean algebra (W 3 ) 3 G : The six generating fields T 0 , T 1 , T 2 , W 0 , W 1 , W 2 satisfy (4.6)-(4.7) with N = 3 as well as where are quasi-primary. Renormalisation We now consider (W 3 ) N G in the special case where Correspondingly, the inverse of the matrix A in (3.20) is given by so (for N > 2) Let us also introduce the renormalised generators In terms of these, the nontrivial star relations are given by (i + j ∈ {0, . . . , N − 1}) and The central parameter c has thus been absorbed by a renormalisation of the algebra generators. 
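As an aside, the structural fact used in (3.8)-(3.11) and again in (3.20), namely that a unit lower-triangular Toeplitz matrix inverts to a matrix of the same form, is easy to confirm numerically; the following snippet is a quick check and not part of the derivation.

```python
# Quick numerical check: the inverse of a unit lower-triangular Toeplitz matrix
# is again unit lower-triangular Toeplitz (constant sub-diagonals, 1's on the diagonal).
import numpy as np

N = 5
a = np.random.randn(N - 1)               # entries a_1, ..., a_{N-1} of the first column
A = np.eye(N)
for n in range(1, N):
    A += a[n - 1] * np.eye(N, k=-n)      # constant n-th sub-diagonal equal to a_n

B = np.linalg.inv(A)
for n in range(N):
    d = np.diag(B, k=-n)
    assert np.allclose(d, d[0])          # each sub-diagonal of the inverse is constant
assert np.allclose(np.diag(B), 1.0)      # unit diagonal
print(np.round(B, 3))
```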
A similar absorption is also possible in the Galilean Sugawara construction of Section 3, with where k i = k i , i = 1, . . . , N − 1, for some k ∈ C × . The renormalised Galilean Virasoro generators are then given by while the nontrivial star relations read (i + j ∈ {0, . . . , N − 1}) Discussion In our continued exploration [13,14] of Galilean contractions, we have presented a generalisation of the contraction prescription to allow for inputs of any number of OPAs or vertex algebras. This has resulted in hierarchies of higher-order Galilean conformal algebras, including Virasoro, affine Kac-Moody and W 3 algebras. Asymmetric Galilean N = 1 superconformal algebras, corresponding to an N = (1, 0) supersymmetry, can be obtained [27,28,29,30] from a Galilean contraction of the tensor product, SVir ⊗ Vir, of an N = 1 superconformal algebra, SVir, and the Virasoro algebra. As we hope to discuss in detail elsewhere, this extends to contractions of a conformal symmetry algebra with any subalgebra thereof. For example, one readily generalises our contraction prescription to the asymmetric tensor product W 3 ⊗ Vir, where one contracts the Virasoro subalgebra of W 3 with a separate Virasoro algebra. This yields an OPA generated by fields T 0 , T 1 , W , with nonzero star relations (i + j ∈ {0, 1}) There is significant freedom in such contractions, leading to a variety of inequivalent Galilean algebras. Other avenues for future research include representation theory and free-field realisations. The representation theory of the Galilean Virasoro algebra, also known as the W (2, 2) algebra, has already been studied in some detail [31,32,33,34,35,36]. In general, though, the representation theory of Galilean algebras remains largely undeveloped and is entirely unexplored in the case of the higher-order algebras introduced in the present note. Free-field realisations [37,38,39,17,40,41,42,43,44,45] have been central to many developments in and applications of conformal field theory, and it seems natural to expect that free fields will play a similar role when Galilean conformal symmetries are present. This includes the representation theory of the Galilean algebras alluded to above. Although realisations of the Galilean Virasoro algebra and some of its superconformal extensions have been considered [46,36,30], a systematic approach and general results are still lacking.
3,334.8
2019-01-18T00:00:00.000
[ "Mathematics", "Physics" ]
Plasma Oscillatory Pressure Sintering of Mo-9Si-8B Alloy with ZrB2 Addition Oscillatory pressure sintering is a novel crystal refinement technology. The doping of different concentrations of ZrB2 under oscillatory sintering technology (9 Hz) is discussed here, focusing on its macroscopic mechanics and oxidation resistance. In particular, doping 2.5 wt% ZrB2 can effectively increase the hardness of the alloy, slightly increase the fracture toughness of the alloy and have an outstanding effect on the oxidation resistance of the alloy at 1300 °C, achieving the effect of reducing mass loss by 80.3%. Introduction The excellent high-temperature properties of Mo-Si-B alloys have attracted much attention in the past three decades, and they have a great prospect in replacing nickel-based alloys and the static rings of high-pressure turbine guide [1][2][3]. Molybdenum is a typical refractory metal with a high melting point of about 2870 K, but its oxidation resistance is poor, so it is difficult to use it as a high-temperature structural material alone [4]. However, adding a silicon element to form silicide phase MoSi 2 can form a protective silicon glass oxide film at high temperature, which gives the material system excellent high-temperature oxidation resistance [5]. In addition, adding 1% boron to form a borosilicate layer can also greatly improve the oxidation resistance of molybdenum at 800 K-1500 K [6]. In particular, the Mo-Si-B alloy contains the microstructure of α-Mo phase (Mo solid solution), Mo 3 Si and Mo 5 SiB 2 (T 2 ) compound phases, and its melting point can reach 2270 K [7]. In its microstructure, α-Mo phase is regarded as a ductile structure, and its volume fraction and morphology play an important role in the room temperature fracture toughness, but it is not conducive to improving the oxidation resistance and high-temperature creep resistance. Mo 3 Si phase and Mo 5 SiB 2 phases, as brittle structures, play the role of a strengthening phase, which can improve the oxidation resistance and high-temperature creep resistance of materials, but is not conducive to enhancing the fatigue properties and fracture toughness [8,9]. The Si:B ratio of 9:8 can keep the balance between the α-Mo phase and the intermetallic phase to achieve the best mechanical properties and oxidation resistance [10]. Yan Jianhui [11] proposed the method of strengthening and toughening the bimodal grain size Mo-12Si-8.5B alloy by adding nano-ZrO 2 (Y 2 O 3 ) particles. However, the development of Mo-Si-B alloy still has the following problems: (1) Mo-Si-B alloys prepared by a powder metallurgy process have high porosity and low density, and usually there are many unclosed pores, which leads to the loss of mechanical toughness. (2) During the preparation process, oxygen impurities segregate to the boundary to reduce the bonding force of the grain boundary, causing brittleness in the alloy. In order to solve the above problems, many studies have been carried out. However, most of the research has not changed the problem of high porosity in traditional powder metallurgy [12]. The oscillatory sintering method recently proposed by Xie Zhipeng [13,14] can be used to solve the defects of traditional powder metallurgy for ceramic materials. Great progress has been made in the preparation of ceramic materials with high density, fine grains, high strength and high reliability. 
Dynamic pressure can make the powder particles slip and rearrange, and can also make the agglomerates fully depolymerize, compress the pores and increase the density. Therefore, this method also has great research value for solving the porosity problem of Mo-Si-B alloy. What needs to be added in particular is that, in our previous study [15], the oscillatory frequency of 9 Hz resulted in a gain in alloy grain refinement and an improvement of mechanical properties. Furthermore, due to its excellent high-temperature stability, ZrB 2 has been widely used in SiC ceramic materials and zircaloy-4 alloys to improve its hightemperature oxidation resistance by increasing the viscosity and stability of the SiO 2 layer [16,17]. Specially, in Guojun Zhang's research, although Zr or ZrB 2 were added, the density of the alloy could only be maintained between 94% and 95%, and its high porosity was closely related to the limitations of its hot-pressing process. The porosity had a major impact on the mechanical properties of the alloy. In our previous research [15], we achieved up to 97.78% relative density of the Mo-Si-B alloy through oscillatory sintering technology. In this work, an oscillatory sintering process was used with a frequency of 9 Hz for obtaining high-density alloy. Moreover, the addition of ZrB 2 in the Mo-Si-B alloy was designed for better property. Therefore, a detailed investigation was performed to explore the mechanical mechanism and characterize the oxidation behavior of a Mo-Si-B alloy via oscillatory pressure sintering with the addition of ZrB 2 . Our goal was to obtain a better performing Mo-Si-B alloy with ZrB 2 added using oscillatory sintering technology. Materials and Experimental Procedure The samples with a nominal composition of Mo-9Si-8B (at.%) were prepared from Mo, Si and B of 99.9%, 99.5% and 99.5% purities, respectively (the following is abbreviated as MSB). Moreover, their particle sizes were ≤5 µm, 2~3 µm and ≤5 µm. The ZrB 2 powders used had a particle size of less than 50 nm with a purity of 99.9%. The mixed powders were placed into a planetary ball with a speed of 300 rmp and a powder-to-ball weight ratio of 1:10 for 6 h to obtain a powder mixture. Then, referring to the experimental procedure of B. Li [7,18], the powders were placed into graphite mould and compacted in a vacuum environment at 1200 • C for 1 h so as to eliminate the internal stress of the powder and promote the interfacial reaction. The last step was to adopt the method of oscillatory sintering: a constant axial pressure of 40 MPa, a temperature of 1600 • C, an oscillatory pressure of 5 MPa and a sintering time of 6h were applied, as shown in Figure 1. In particular, the oscillatory sintering frequency was 9 Hz. In addition, the mass fraction of ZrB 2 in the MSB alloy was designed as four components: 0 wt%, 0.5 wt%, 1.5 wt% and 2.5 wt%, respectively, as shown in Table 1. The samples were determined by the Archimedes method, and the porosity was determined by mercury porosimetry. The samples were cut into Φ10 mm × 5 mm sizes. The hardness and fracture toughness were measured by the Vickers indentation method. The Vickers hardness test uses a force of 20 kg with a loading duration of 15 s. The phase was determined by X-ray diffraction (XRD), the corrosion solution was Keller's etchant (an aqueous solution of 10 vol.% potassium ferricyanide and 10 vol.% sodium hydroxide) and the polishing method was vibration polishing. 
In order to determine the oxidation resistance, cyclic oxidation experiments at 1300 °C for 15 h were performed.
Figure 1. Schematic diagram of sintering mechanism in this study [19]. "Reproduced with permission from Guo Zhenping, Journal of Alloys and Compounds; published by Elsevier, [2021]".
XRD and Microstructural Features of Alloy via Plasma Oscillation Sintering
Figure 2 shows the X-ray diffraction patterns of alloys with different Zr contents. It can be seen from the Figure that all microstructures contained three phases: an α-Mo phase, a Mo3Si phase and a T2 phase. It is a remarkable fact that an m-ZrO2 diffraction peak was observed in the Mo-9Si-8B-2.5 wt% alloy. However, no ZrB2 phase was found. This indicates that ZrB2 was decomposed to Zr, and the m-ZrO2 phase was formed. The microstructures of four alloys are given in Figure 3. The polishing of the grain showed light and dark color differences under the action of corrosion. According to the effect of corrosive liquid and our previous research [19], it was pointed out that the α-Mo phase was the light phase, the Mo3Si phase was the grey phase and the T2 phase was the dark phase. In addition, as depicted in Figure 3, it can be clearly seen that as the content of ZrB2 increased, the degree of refinement of the structure was higher.
In order to accurately assess the grain size, we used the Archimedes section method as our measurement: n = N²/Lt², where n is the number of grains per unit area, N is the number of grains counted, and Lt is the length of the test line. As shown in Figure 4, the size of the α-Mo phase dropped from 2.6 µm to 1.3 µm, which was close to the general size reduction. This fully verified that the addition of ZrB2 helped heterogeneous nucleation during the sintering process, and that this was more conducive to the formation of grain boundaries, reducing grain growth during the sintering process. As for the 0.6 µm ultrafine microstructure reported in a publication from Guojun Zhang [20], we conjecture that this was largely related to the particle size of the initial powder. Under the unified process, the addition of ZrB2 seems to contribute to crystal fineness.
Porosity and Density Evaluation
As shown in Figure 5, the density of the alloy was identified. The theoretical density calculation formula refers to the method used in previous studies [21], where ω α-Mo, ω Mo3Si, ω Mo5SiB2 and ω ZrB2 are the proportions of each phase, and ρ α-Mo, ρ Mo3Si, ρ Mo5SiB2 and ρ ZrB2 are the densities of each phase.
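Since the explicit formula from [21] is not reproduced in the text, the sketch below assumes a standard inverse rule of mixtures over the four phases; the weight fractions and phase densities are placeholders, not values from this work.

```python
# Hedged sketch: theoretical density from an inverse rule of mixtures over the phases,
# and relative density from the measured (Archimedes) value. All numbers are placeholders.
phase_density = {"alpha-Mo": 10.2, "Mo3Si": 8.9, "Mo5SiB2": 8.8, "ZrB2": 6.1}        # g/cm^3, nominal
weight_fraction = {"alpha-Mo": 0.55, "Mo3Si": 0.25, "Mo5SiB2": 0.175, "ZrB2": 0.025}  # placeholders

# inverse rule of mixtures for weight fractions: 1/rho_th = sum(w_i / rho_i)
rho_theoretical = 1.0 / sum(w / phase_density[p] for p, w in weight_fraction.items())

rho_measured = 9.4          # g/cm^3, placeholder Archimedes result
relative_density = 100.0 * rho_measured / rho_theoretical
print(f"theoretical density ~ {rho_theoretical:.2f} g/cm^3, relative density ~ {relative_density:.1f}%")
```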
The calculated results are given in Figure 6. It can be concluded from the results that the density of the alloy fluctuates with increases in doping concentration, but the relative density increases from 97.78% to 98.75%. This was due to the decrease in the theoretical density value. However, the density results confirmed that the densities of the four alloys were maintained at relatively high levels. At the same time, when adding Zr at the cost of α-Mo, the density of the alloy decreases. The relatively high density is, on the one hand, the effect of oscillatory sintering on breaking agglomerates, and on the other hand, thanks to the effect of 24 h high-energy ball milling, the powder has high surface energy.
A porosity analysis was also conducted. It can be seen from Figure 6 that the change of porosity was consistent with the change of density predicted in Figure 5. The apparent porosities of the alloys in this article were all lower than 1%, indicating that the 9 Hz oscillatory sintering frequency was effective in suppressing the porosity.
Fracture Toughness and Hardness Evaluation
The hardness test was performed under the condition of a 20 kg load for 15 s. It can be clearly seen from Figure 7 that as the amount of doping increased, its contribution to the hardness of the alloy gradually increased. In particular, the hardness of the alloy doped with 2.5 wt% ZrB2 was increased by up to 14.9%. This was for the following two reasons. First, the addition of ZrB2 had a refinement effect on the alloy structure, which increased the strength of the alloy; its corner was a "soft" α-Mo. Secondly, the phase grain size was reduced by nearly half, which can be considered according to the Hall-Petch relationship. On the other hand, it was also due to the increase in the density of the alloy.
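The Hall-Petch argument can be made concrete with a rough estimate: halving the grain size increases the grain-boundary strengthening term by a factor of √2. The friction stress and Hall-Petch coefficient below are hypothetical placeholders, not fitted values from this work.

```python
# Rough Hall-Petch illustration: sigma_y = sigma_0 + k_y / sqrt(d).
# sigma_0 and k_y are hypothetical; only the d^(-1/2) scaling matters here.
import math

sigma_0 = 400.0   # MPa, hypothetical friction stress
k_y = 10.0        # MPa*mm^0.5, hypothetical Hall-Petch coefficient
for d_um in (2.6, 1.3):                  # alpha-Mo grain sizes reported in Figure 4
    d_mm = d_um * 1.0e-3
    sigma_y = sigma_0 + k_y / math.sqrt(d_mm)
    print(f"d = {d_um} um  ->  sigma_y ~ {sigma_y:.0f} MPa")
```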
The fracture toughness of the alloy was measured by increasing the experimental loading force (load 30 kg), which is consistent with our previous article [15]. The fracture toughness values are shown in Table 2. This was consistent with our previous research conditions. It was found that ZrB2 had a certain effect on the fracture toughness of the alloy, which was increased by 7.6%. We suspected that doping ZrB2 helped to improve the intergranular bonding force, purify the grain boundaries and help retard crack propagation. The fine grain effect made the crack propagation tortuous, consumed more energy and improved the fracture toughness.
Oxidation Behavior of Mo-Si-B Alloy at 1300 °C
Cyclic oxidation was performed at 1300 °C, which was similar to the process of repeatedly starting and stopping the engine. At the same time, in order to explore the effect of the application of shock pressure and the ZrB2 trace element on the oxidation resistance of the alloy, the alloys sintered at frequencies of 0 Hz, 3 Hz, 6 Hz and 9 Hz were added here for comparison. The experimental results are plotted in Figure 8. Firstly, it can be concluded that under different oscillation frequencies, the oxidation resistance of alloys at high oscillation frequencies (9 Hz) was better than that of low-oscillation-frequency sintered alloys. This was because in our previous research high-oscillation-frequency alloys showed a fine structure [15]. Moreover, the defects were greatly suppressed. It was generally believed that the fine structure can form a borosilicate protective layer faster, and that fewer defects can reduce the intrusion of oxygen. Secondly, under the same 9 Hz oscillatory frequency sintering, the alloys with a small amount of ZrB2 showed a higher level of oxidation resistance, and as the amount of doping increased, the oxidation resistance was stronger. The mass loss of the Mo-Si-B-2.5ZrB2 alloy after 15 h oxidation was −28 mg/cm², while the mass loss of the Mo-Si-B-9Hz alloy was −141.9 mg/cm² (mass loss reduced by 80.3%) and the mass loss of the Mo-Si-B-0Hz alloy without oscillatory pressure was −271 mg/cm² (mass loss reduced by 89.7%), which shows that the obviously refined microstructure (especially the refinement of the α-Mo phase) can improve the oxidation resistance of the Mo-Si-B alloy. The Mo-Si-B alloy doped with 2.5 wt% ZrB2 not only further refined the structure, but also improved the protective ability of the oxide film.
By fitting the dependence of ∆W/A on time t to the power law ∆W/A = K·t^n, the oxidation kinetics of different alloys are analyzed [22,23]. In the equation, ∆W/A is the weight loss per unit area, t is the oxidation time, K is the power law rate constant, and n is the power law exponent. The results for the n value of different alloys showed that: (i) the n values of Mo-Si-B-0Hz, Mo-Si-B-3Hz, Mo-Si-B-6Hz and Mo-Si-B-9Hz were 0.96, 0.82, 0.72 and 0.69, respectively, suggesting that the oxidation behaviors do not follow linear, parabolic or cubic laws. However, it can be concluded that the oxidation rate tended to slow down. (ii) The n values of Mo-Si-B-0.5ZrB2, Mo-Si-B-1.5ZrB2 and Mo-Si-B-2.5ZrB2 were 0.59, 0.37 and 0.34. The oxidation kinetics of doped alloys at 9 Hz oscillatory frequency were closer to parabolic (0.59) and cubic (0.34). This suggests that doped ZrB2 contributes greatly to the oxidation resistance of the alloy. We suspected that the oxide generated at 1300 °C could effectively act as a barrier to hinder oxygen diffusion, reduce the oxidation rate and improve the protection.
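To make the fitting procedure explicit, a minimal sketch of estimating n and K by a least-squares fit in log-log space is given below; the time and mass-loss values are hypothetical placeholders, not the measured data of this work.

```python
# Hedged sketch: estimate n and K in  dW/A = K * t**n  from cyclic-oxidation data
# by linear regression in log-log space. The data points are placeholders.
import numpy as np

t = np.array([1.0, 3.0, 5.0, 10.0, 15.0])       # oxidation time, h (hypothetical)
dW_A = np.array([4.0, 9.5, 14.0, 22.0, 28.0])   # |mass loss| per area, mg/cm^2 (hypothetical)

slope, intercept = np.polyfit(np.log(t), np.log(dW_A), 1)
n, K = slope, np.exp(intercept)
print(f"n = {n:.2f}, K = {K:.2f} mg/cm^2/h^n")
# n ~ 1   -> near-linear kinetics (poor protection)
# n ~ 0.5 -> parabolic kinetics; n ~ 0.33 -> cubic kinetics (better protection)
```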
By fitting the dependence of ΔW/A on the time t, the oxidation kinetics of the different alloys were analyzed [22,23] using the power law ΔW/A = K·t^n, where ΔW/A is the weight loss per unit area, t is the oxidation time, K is the power-law rate constant, and n is the power-law exponent. The fitted n values showed the following: (i) the n values of Mo-Si-B-0Hz, Mo-Si-B-3Hz, Mo-Si-B-6Hz and Mo-Si-B-9Hz were 0.96, 0.82, 0.72 and 0.69, respectively, suggesting that their oxidation behaviors follow neither linear, parabolic nor cubic laws, although the oxidation rate clearly tended to slow down with increasing frequency. (ii) The n values of Mo-Si-B-0.5ZrB2, Mo-Si-B-1.5ZrB2 and Mo-Si-B-2.5ZrB2 were 0.59, 0.37 and 0.34, respectively; the oxidation kinetics of the doped alloys sintered at the 9 Hz oscillatory frequency were thus closer to parabolic (0.59) and cubic (0.34) behavior. This suggests that the doped ZrB2 contributes greatly to lowering the oxidation rate of the alloy. We suspect that the oxide generated at 1300 °C effectively hinders oxygen diffusion along its path, reducing the oxidation rate and improving the protection.

In order to better explore the relationship between the cyclic oxidation time and the change of the oxide layer, we further observed the oxide cross-sections and analyzed their compositions. The distribution of the outermost borosilicate layer, the intermediate layer and the matrix can be clearly identified in all alloys. The oxidation cross-sections of the alloys at 1300 °C are shown in Figure 9.

The thickness of the oxide layer of each alloy was measured at 60 min and 900 min, respectively. The oxidation cross-sections showed different morphologies at the two time scales: as time progressed, the depth of oxygen intrusion became greater. The MSB_0ZrB2 alloy in particular showed a more complicated protective layer after 1 h, with a thickness reaching 52.3 μm. Moreover, a very small separation appeared between the borosilicate layer and the intermediate layer, as shown in Figure 9a. When the oxidation time reached 900 min, this separation had developed into the exfoliation shown in Figure 9e. The middle molybdenum oxide layer in Figure 9k shows a loose, porous structure which cannot protect the substrate and which accelerates the intrusion of oxygen, thereby reducing the adhesion between the borosilicate layer and the substrate layer. The oxide layer of the MSB_0ZrB2 alloy therefore experienced relatively severe oxidation in the initial hour and showed greater mass loss, which can be verified in Figure 9. The oxide layers of the ZrB2-doped alloys were thinner than that of the MSB_0ZrB2 alloy, and the protective layer generated was relatively uniform, covering the surface of the substrate completely. There was also no peeling of the oxide layer caused by uneven cyclic oxidative stress within 900 min. The thickness of the borosilicate layer was calculated as an average over 10 points for each sample, and the statistics are shown in Table 3.
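To illustrate how the power-law exponent n quoted earlier in this subsection can be extracted, the sketch below performs a log-log least-squares fit of ΔW/A = K·t^n. The data points are synthetic placeholders generated for demonstration, not the measured oxidation curves of these alloys.

```python
# Minimal sketch of a power-law fit dW/A = K * t**n on a log-log scale.
# The (t, loss) points below are synthetic placeholders, not measured data.
import math

def fit_power_law(times_h, losses):
    """Least-squares fit of log(loss) = log(K) + n*log(t); returns (K, n)."""
    xs = [math.log(t) for t in times_h]
    ys = [math.log(w) for w in losses]
    x_mean, y_mean = sum(xs) / len(xs), sum(ys) / len(ys)
    n = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    k = math.exp(y_mean - n * x_mean)
    return k, n

if __name__ == "__main__":
    times = [1, 3, 5, 10, 15]               # oxidation time, h
    losses = [7.5, 12.8, 16.4, 22.9, 27.9]  # |mass loss| per area, mg/cm^2
    K, n = fit_power_law(times, losses)
    print(f"K = {K:.2f}, n = {n:.2f}")      # n close to 0.5 indicates near-parabolic kinetics
```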
It can be clearly seen that the thickness of the oxide layer of the MSB_0ZrB2 alloy reached 85.5 μm after 900 min, almost double that of the MSB_2.5ZrB2 alloy doped with 2.5 wt% ZrB2. This shows that, in the undoped alloy, oxygen invades deeper into the substrate, which also explains its severe oxidative mass loss. The thinner oxide layer of the doped alloy may be attributed to the additional B introduced with ZrB2, which improves the flow of the protective layer and speeds up the coverage of the borosilicate layer. It is worth noting that, through EDS analysis, ZrO2 and ZrSiO4 were also found in the cross-section as oxides within the borosilicate layer, which is consistent with the results reported in [24,25]. In short, the 2.5 wt%-doped alloy prepared by the oscillatory sintering process shows excellent performance and can effectively block the intrusion of oxygen; the Zr oxides formed also hinder the entry of oxygen. The addition of B increases the fluidity of the oxide layer and promotes its flow, so that the thickness of the oxide layer is greatly reduced and a better protective effect is obtained.

Conclusions

In this paper, the macroscopic mechanical properties and the oxidation resistance of ZrB2-doped Mo-Si-B alloys were discussed; all alloys were fabricated via plasma oscillatory sintering technology (oscillatory frequency: 9 Hz). The doped alloys showed better performance when prepared with this grain-refinement technology, which is worthy of further study. The following conclusions were drawn:

(1) On the basis of the fine-grain technology (9 Hz oscillatory frequency), the doping of ZrB2 further refines the structure, reducing the grain size to half that of the undoped alloy. At the same time, the apparent porosities of the doped alloys were all controlled below 0.7%, reflecting their high density.

(2) Doping with ZrB2 improves the hardness and fracture toughness of the alloy, and both continue to increase with increasing doping amount. This is the effect not only of fine-grain strengthening but also of ZrB2 purifying the grain boundaries and improving the intercrystalline bonding force.

(3) The undoped oscillatory-sintered alloys (0, 3, 6 and 9 Hz) and the ZrB2-doped alloys (0.5, 1.5 and 2.5 wt%, all sintered at 9 Hz) were subjected to cyclic oxidation experiments at 1300 °C. The experimental results show that the oxidation resistance improved as the oscillatory frequency increased, and that the mass loss of the ZrB2-doped alloy was much lower than that of the undoped alloy (mass loss reduced by at least 80.3%). This indicates that the combination of oscillatory sintering technology (9 Hz) and ZrB2 doping can effectively block oxygen invasion and greatly improve the oxidation resistance.

Data Availability Statement: No data were generated or analyzed in the presented research.

Conflicts of Interest: The authors declare no conflict of interest.
A triple product formula for plane partitions derived from biorthogonal polynomials

A new triple product formula for plane partitions with bounded size of parts is derived from a combinatorial interpretation of biorthogonal polynomials in terms of lattice paths. Biorthogonal polynomials which generalize the little q-Laguerre polynomials are introduced to derive a new triple product formula which recovers the classical generating function in a triple product by MacMahon and generalizes the trace-type generating functions in double products by Stanley and Gansner.

Résumé. A new formula for plane partitions, given as a triple product, is obtained from a combinatorial interpretation of biorthogonal polynomials in terms of paths on the square lattice. Biorthogonal polynomials which generalize the little q-Laguerre polynomials are introduced to obtain a new formula which generalizes the generating function in a triple product established by MacMahon and the trace generating functions in double products established by Stanley and Gansner.

Introduction

A plane partition π of a nonnegative integer N is a two-dimensional array π = (π_{i,j})_{i,j=1,2,3,...} of nonnegative integers such that \sum_{i,j=1}^{\infty} π_{i,j} = N and π_{i,j} ≥ max{π_{i+1,j}, π_{i,j+1}} for every (i, j) ∈ Z^2_{≥1}. (Throughout the paper we write Z_{≥k} for the set of integers at least k.) A plane partition π distributes N among its parts π_{i,j} so that each row and each column are non-increasing, and gives a two-dimensional analogue of an (integer) partition. MacMahon studied plane partitions in depth and found the following generating function in a triple product [9, Section IX]:

\sum_{\pi \in P(r,c,n)} q^{|\pi|} = \prod_{i=1}^{r} \prod_{j=1}^{c} \prod_{k=1}^{n} \frac{1 - q^{i+j+k-1}}{1 - q^{i+j+k-2}}    (1)

with |π| = \sum_{i,j=1}^{\infty} π_{i,j}, where P(r, c, n) denotes the set of plane partitions of at most r rows and at most c columns whose parts are bounded above by n. Namely π ∈ P(r, c, n) if and only if π_{r+i,j} = π_{i,c+j} = 0 for every (i, j) ∈ Z^2_{≥1} and π_{1,1} ≤ n.

Let P(r, c) = ∪_{n≥0} P(r, c, n), the set of plane partitions of at most r rows and at most c columns. MacMahon also showed the following generating function in a double product,

\sum_{\pi \in P(r,c)} q^{|\pi|} = \prod_{i=1}^{r} \prod_{j=1}^{c} \frac{1}{1 - q^{i+j-1}}    (2)

that is obtained from (1) by n → ∞. Stanley introduced the trace tr(π) = \sum_{i=1}^{\infty} π_{i,i} of plane partitions and generalized (2) as [11,12]

\sum_{\pi \in P(r,c)} q^{|\pi|} a^{tr(\pi)} = \prod_{i=1}^{r} \prod_{j=1}^{c} \frac{1}{1 - a q^{i+j-1}}    (3)

Gansner later refined (3) as [3,4]

\sum_{\pi \in P(r,c)} \prod_{\ell \in Z} q_\ell^{tr_\ell(\pi)} = \prod_{i=1}^{r} \prod_{j=1}^{c} \frac{1}{1 - q_{1-i} q_{2-i} \cdots q_{j-1}}    (4)

with the ℓ-traces tr_ℓ(π) = \sum_{j−i=ℓ} π_{i,j} for ℓ ∈ Z. (Gansner obtained more general results for (reverse) plane partitions of arbitrary shape.) Gansner's generating function (4) in a double product recovers Stanley's (3) by q_ℓ = q for all ℓ ∈ Z except for q_0 = aq, which further recovers MacMahon's (2) by a = 1. We thus have a series of double product formulae (2), (3) and (4) for the set P(r, c) of plane partitions with unbounded size of parts. Is there an analogous series of triple product formulae for the set P(r, c, n) of plane partitions with bounded size of parts? We find in this paper such a series of triple product formulae, which involves MacMahon's generating function (1) and generalizations of the trace-type generating functions (3) and (4), with the help of biorthogonal polynomials.
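As a concrete check of the boxed generating function (1) in the form reconstructed above, the following sketch enumerates by brute force all plane partitions that fit in an r × c box with parts at most n and compares their size generating polynomial with the triple product expanded as a truncated power series. It is illustrative only; the product form it tests is the one written above.

```python
# Brute-force check of MacMahon's box formula (1) for small r, c, n.
from itertools import product as cartesian

def plane_partition_poly(r, c, n):
    """Coefficient list of sum over pi in P(r,c,n) of q^|pi|."""
    deg = r * c * n
    coeffs = [0] * (deg + 1)
    for entries in cartesian(range(n + 1), repeat=r * c):
        pi = [entries[i * c:(i + 1) * c] for i in range(r)]
        rows_ok = all(pi[i][j] >= pi[i][j + 1] for i in range(r) for j in range(c - 1))
        cols_ok = all(pi[i][j] >= pi[i + 1][j] for i in range(r - 1) for j in range(c))
        if rows_ok and cols_ok:
            coeffs[sum(entries)] += 1
    return coeffs

def mul(a, b, deg):
    """Polynomial product truncated at degree deg."""
    out = [0] * (deg + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= deg:
                    out[i + j] += ai * bj
    return out

def macmahon_product_poly(r, c, n):
    """Triple product in (1), expanded as a power series up to degree r*c*n."""
    deg = r * c * n
    series = [1] + [0] * deg
    for i in range(1, r + 1):
        for j in range(1, c + 1):
            for k in range(1, n + 1):
                e_num, e_den = i + j + k - 1, i + j + k - 2
                num = [0] * (deg + 1)
                num[0] = 1
                if e_num <= deg:
                    num[e_num] = -1
                # geometric series for 1 / (1 - q^e_den), truncated at degree deg
                den_inv = [1 if t % e_den == 0 else 0 for t in range(deg + 1)]
                series = mul(mul(series, num, deg), den_inv, deg)
    return series

if __name__ == "__main__":
    r, c, n = 2, 2, 2
    assert plane_partition_poly(r, c, n) == macmahon_product_poly(r, c, n)
    print("MacMahon box formula verified for (r, c, n) =", (r, c, n))
```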
This paper is organized as follows.In Section 2 we explain basics of biorthogonal polynomials and show a combinatorial interpretation of (general) biorthogonal polynomials in terms of lattice paths.A determinant of moments for biorthogonal polynomials will admit a combinatorial expression by nonintersecting lattice paths of Gessel-Viennot [5] type.In Section 3 we introduce specific biorthogonal polynomials, which we call the generalized little q-Laguerre polynomials, and examine lattice path combinatorics of the generalized little q-Laguerre polynomials based on the general results developed in Section 2. The results in Section 3 are used in Section 4 to derive a triple product formula for plane partitions which generalizes Gansner's generating function (4) (Theorem 9).The triple product formula will reduce to another triple product formula, which will generalize Stanley's generating function (3), and recover MacMahon's generating function (1) by specialization of parameters. Lattice path combinatorics of biorthogonal polynomials Let K be a field.Let F : K[x ±1 , y ±1 ] → K be a linear functional defined on the space of Laurent polynomials in x and y over K.The linearity of F means that F[aP (x, y) + bQ(x, y)] = aF[P (x, y)] + bF[Q(x, y)] for any constants a and b and any Laurent polynomials P (x, y) and Q(x, y).The linear functional F is thus uniquely determined by the moments We define determinants of moments where ∆ (r,c) 0 = 1.We assume throughout the paper that the determinant ∆ (r,c) n does not vanish.Let (r, c) ∈ Z 2 and n ∈ Z ≥0 .We define a (monic) biorthogonal polynomial P (r,c) n (x) ∈ K[x] with the leading term x n by the orthogonality where h (r,c) n is some nonvanishing constant, called the normalization constant, and δ j,n the Kronecker delta.The biorthogonal polynomial uniquely exists for F. (Write down (7) in a linear system of the coefficients of P (r,c) n (x) and solve it.)The monicity and the orthogonality (7) induce the determinant expression of the biorthogonal polynomial and hence We remark that the biorthogonal polynomials P (r,c) n (x) satisfy the biorthogonality relation with polynomials which may be different from P (r,c) n (x).In the case where x = y the biorthogonal polynomials reduce to ordinary orthogonal polynomials ,c) n (x) which are self-orthogonal.See, e.g., [2] for details on orthogonal polynomials.The next proposition shows biorthogonal analogues of the Christoffel and Geronimus transformations for orthogonal polynomials, see, e.g., [13]. Proposition 1 (cf.[10]) The biorthogonal polynomials satisfy the adjacent relations Proof: Expand xP (r,c+1) (x), 0 ≤ ≤ n, respectively and then equate the coefficients by using the monicity and the orthogonality (7). 2 In the rest of this section we show a combinatorial interpretation of biorthogonal polynomials in terms of lattice paths.Let us view a two-dimensional integral lattice in the first quadrant, Z 2 ≥0 , as a square lattice.We depict the square lattice in matrix-like coordinates where the south and east neighbors of the lattice point (i, j) ∈ Z 2 ≥0 are (i + 1, j) and (i, j + 1) respectively.For any lattice points S ∈ Z 2 ≥0 and T ∈ Z 2 ≥0 a lattice path P going from S to T is a path on the square lattice which travels between S and T with north steps (−1, 0) and east steps (0, 1). Figure 1 shows a lattice path going from (4, 0) to (0, 6).When S = T we conventionally consider the empty lattice path of no steps at S = T . 
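Since the combinatorial interpretation that follows is phrased entirely in terms of such north/east lattice paths, a small sketch of how all paths from (r, 0) to (0, c) can be enumerated may be helpful. It only uses the definition just given (north steps (−1, 0) and east steps (0, 1)); the check against the binomial coefficient C(r+c, r) is a standard fact not stated in the text.

```python
# Enumerate all lattice paths from (r, 0) to (0, c) built from
# north steps (-1, 0) and east steps (0, 1), as defined above.
from itertools import combinations
from math import comb

def lattice_paths(r, c):
    """Return all paths from (r, 0) to (0, c) as tuples of visited points."""
    paths = []
    for north_positions in combinations(range(r + c), r):  # which of the r+c steps go north
        point, path = (r, 0), [(r, 0)]
        for step in range(r + c):
            di, dj = (-1, 0) if step in north_positions else (0, 1)
            point = (point[0] + di, point[1] + dj)
            path.append(point)
        paths.append(tuple(path))
    return paths

if __name__ == "__main__":
    r, c = 4, 6                          # Figure 1 shows one such path from (4, 0) to (0, 6)
    paths = lattice_paths(r, c)
    print(len(paths), comb(r + c, r))    # both print 210
```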
Let α i,j , (i − 1, j) ∈ Z 2 ≥0 , be arbitrary constants.As shown in Figure 1 we label the edges of the square lattice by α i,j or 1; the vertical edge between the lattice points (i, j) and (i − 1, j) by α i,j , and every horizontal edge by 1.The weight w(P ) of a lattice path P is defined to be the product of the labels of all the edges passed by P .For example, the lattice path in Figure 1 has the weight w(P ) = α 4,2 α 3,4 α 2,4 α 1,5 .The weight of any empty lattice path is assume to be 1. The next theorem gives the base of the combinatorial interpretation of biorthogonal polynomials. Theorem 2 Assume that where a are coefficients of the adjacent relations (12).For each (r, c) where the sum ranges over all the lattice paths P going from (r, 0) to (0, c). We can confirm Theorem 2 by the adjacent relations (12).For example, let us consider the case where (r, c) = (2, 2).We expand the monomial x r = x 2 in a linear combination of (x) by using (12) as follows.By (12a) with ( 14), By (12b) with ( 14), where Q(x) is a linear combination of P (0,2) 1 (x) and P (0,2) 2 (x).In the last equation the coefficient of P (0,2) 0 (x) is equal to the right-hand sum of (15).Thus For this expansion of x 2 we multiply y c = y 2 to the both sides and apply the linear functional F. From the orthogonality (7) we then obtain that gives (15) since h (0,2) 0 = f 0,2 .See [6] for the complete proof of Theorem 2. Theorem 2 provides a combinatorial interpretation of moments of biorthogonal polynomials in terms of lattice paths.In view of Gessel-Viennot's method [5], [1,Chapter 31], that naturally leads to a combinatorial interpretation of determinants of moments in terms of non-intersecting lattice paths.For (r, c, n) ∈ Z 3 ≥0 we define LP(r, c, n) to be the set of n-tuples (P 0 , . . ., P n−1 ) of lattice paths such that (i) P k goes from (r +k, 0) to (0, c+k); (ii) P 0 , . . ., P n−1 are non-intersecting, namely P j ∩P k = ∅ if j = k.Figure 2 shows an example of such an n-tuple (P 0 , . . ., P n−1 ) ∈ LP(r, c, n) when (r, c, n) = (4, 5, 3). 3 Generalized little q-Laguerre polynomials In this section we apply the combinatorial interpretation of (general) biorthogonal polynomials in the previous section to a specific instance of biorthogonal polynomials which we call the generalized little q-Laguerre polynomials.In what follows we adopt the following notations.For any sequence x , ∈ Z, where [x] n n+1 = 1.Let a, p and q , ∈ Z, be indeterminates.We define the generalized little q-Laguerre polynomial of degree n by L n (x; a; p 1 , . . ., p n−1 ; q 1 , . . ., q n−1 ) = n i=0 The name comes from the (monic) little q-Laguerre polynomial (cf.[7, §14.20]) that is obtained from (21) with specialized parameters p = q = q for all ∈ Z. (For the standard notations for q-analysis, such as (a; q) n and 2 φ 1 , see, e.g., [7, Ch. 1].)Let Theorem 4 Let K = Q(a, p 0 , p ±1 , p ±2 , . . ., q 0 , q ±1 , q ±2 , . . .).The generalized little q-Laguerre polynomials satisfy the orthogonality (7) with ,c) n (x) and the normalization constants where the linear functional F has the moments f i,j = F[x i y j ] given by where f 0,j = 1. We can prove the orthogonality by means of a generalization of the q-Chu-Vandermonde identity for 2 φ 1 , see [6] for details.Proposition 1 and Theorem 4 immediately yield the following. 
Corollary 5 The generalized little q-Laguerre polynomials satisfy the adjacent relations (12) with ,c) n (x) and the coefficients The general results in Section 2 gives us the following combinatorial interpretation of the the generalized little q-Laguerre polynomials.See [6] for the proofs of Lemma 6 and Theorems 7 and 8 mentioned below. The labels from the coefficients (26) of adjacent relations give rise to the following weight. Theorem 7 For each (r, c) ∈ Z 2 ≥0 the moment f r,c given by (25) admits the combinatorial expression where the sum ranges over all the lattice paths going from (r, 0) to (0, c).The analogue of Corollary 3 for the generalized little q-Laguerre polynomials is the following.,c) n = det 0≤i,j<n (f r+i,c+j ) of the moments f i,j given by (25) satisfies where d k = D 0 (λ(P k )). Triple product formulae for plane partitions It is customary to depict a plane partition π = (π i,j ) i,j=1,2,3,... in a three-dimensional (3D) Young diagram in which π i,j (unit) cubes are stacked over the positions (i, j) ∈ Z 2 ≥1 .For example, the plane partition is depicted as the 3D Young diagram shown in Figure 3.For each k ∈ Z ≥1 we define a partition λ k (π) so that the Young diagram of λ k (π) is equal to the cross-section at level k of the 3D Young diagram of π.For example, the plane partition (31), or the 3D Young diagram in Figure 3, gives rise to the partitions λ 1 (π) = (5,5,4,2), and λ k (π) = ∅ = (0, 0, 0, . . . ) for k ≥ 4, see Figure 4. We will write λ k,i (π) for the i-th part of the partition λ k (π). There exists a classical bijection between P(r, c, n) and LP(r, c, n) which connects plane partitions with non-intersecting lattice paths.For completeness we here describe the bijection.From a plane partition π ∈ P(r, c, n) an n-tuple (P 0 , . . ., P n−1 ) ∈ LP(r, c, n) of non-intersecting lattice paths corresponding to π is constructed as follows. (ii) For each 0 ≤ k < n translate the lattice path P k by (k, k) (so that P k goes from (r + k, k) to (k, c + k)).We write P k for the obtained lattice path. (iii) For each 0 ≤ k < n add k consecutive east and north steps to the initial and terminal points of P k respectively (so that P k goes from (r + k, 0) to (0, c + k)).The obtained lattice path is P k . See Figure 5 that demonstrates the construction.The constructive bijection can be formulated by We hence have By means of the bijection, that admits (34), we can translate Theorem 8 for non-intersecting lattice paths into the following theorem for plane partitions, that is the main theorem of this paper. where π 1,1 denotes the (1, 1)-part of a plane partition π, and Proof: By use of the bijection between P(r, c, n) and LP(r, c, n) and (34) we can equivalently translate the formula (30) in Theorem 8 into where The proof thus amounts to the evaluation of the determinant ∆ (r,c) n of moments for the generalized little q-Laguerre polynomials discussed in Section 3. One possible way to evaluate the determinant is to utilize one of the product formulae for determinants by Krattenthaler [8].We here show an alternative way based on biorthogonal polynomials.From (9) we have for (general) biorthogonal polynomials.Substituting the normalization constants (24) for the generalized little q-Laguerre polynomials we readily know ). Substituting ( 38) for (36a) we soon get the triple product formula (35a). 
The triple product formula (35) for plane partitions reduces to another triple product formula, (39), for the generating function \sum_{π ∈ P(r,c,n)} q^{|π|} a^{tr(π)} ω_n(π), in which the factors (q^{n−k+1}; q)_{D_0(λ_k(π))} and (aq^{n−k+1}; q)_{D_0(λ_k(π))} appear, under the specialized parameters a ← aq and p_ℓ = q_ℓ = q for all ℓ. Furthermore (39) reduces to MacMahon's triple product formula (1) with a = 1, since ω_n(π)|_{a=1} = 1. We have thus obtained a series of triple product formulae (1), (39) and (35) for the set P(r, c, n) of plane partitions with bounded size of parts. This series of triple product formulae is totally analogous to the series of the generating functions (2), (3) and (4) in double products for the set P(r, c) of plane partitions with unbounded size of parts. Indeed the flow of reductions from (35) via (39) to (1) for the triple product formulae is performed by the same specializations of parameters as the flow of reductions from (4) via (3) to (2) for the double product ones, where q_0 = a and q_{−ℓ} = p_ℓ for all ℓ ∈ Z_{≥1} in (4). Moreover, as MacMahon's generating function (1) in a triple product recovers his generating function (2) in a double product by n → ∞, the triple product formulae (35) and (39) respectively recover Gansner's and Stanley's generating functions (4) and (3) in double products.
CLASRM: A Lightweight and Secure Certificateless Aggregate Signature Scheme with Revocation Mechanism for 5G-Enabled Vehicular Networks

The rapid deployment of 5G technology has further strengthened the large-scale interconnection between sensing devices and systems and promoted the rapid development of smart cities and intelligent transportation systems. 5G-enabled vehicular networks take advantage of cellular vehicle-to-everything (C-V2X) technology to achieve the connection between moving vehicles, between vehicles and infrastructure, and between vehicles and the cloud, which can reduce the possibility of traffic jams and accidents, improve transportation efficiency, and realize automatic driving. Besides, 5G-enabled vehicular networks also provide infotainment services and industry application services. High-intensity data transmission, however, brings a serious burden of resource overhead, and there are hidden security and privacy dangers in the communication process of vehicular networks. Some current vehicular network authentication schemes adopt public key infrastructure-based (PKI-based) and identity-based authentication methods to achieve conditional privacy preservation. Still, these schemes are too expensive and cannot address the problems of costly certificate management or risky key escrow. Some schemes use computationally complex bilinear pairing operations that result in low efficiency and do not consider the revocation of malicious nodes, so they cannot effectively prevent further malicious attacks. This paper proposes a lightweight certificateless aggregate signature (CLAS) scheme with a revocation mechanism suitable for 5G-enabled vehicular networks in response to the above problems. Our proposed scheme uses aggregate signature technology to aggregate multiple signatures into a single short signature, thus reducing the communication overhead and the storage overhead of road side units (RSUs). Furthermore, our proposed scheme utilizes elliptic curve cryptography (ECC) to reduce verification time and computational overhead. Moreover, in order to prevent malicious users from attacking by sending invalid signatures, our proposed scheme uses binary search to identify invalid signatures and introduces a cuckoo filter to revoke malicious users and prevent reattack. Finally, formal proof and experimental analysis show that our proposed scheme has greater advantages with respect to security and efficiency compared with previous schemes.

Introduction

The large-scale commercial deployment of 5G technology has brought richer application scenarios, promoted the rapid development of smart cities and intelligent transportation systems, and provided more convenience for our lives and work. As an emerging communication technology, 5G enables higher data transmission rates and lower latency and supports direct device-to-device (D2D) communication. Simultaneously, as a crucial part of the smart city sensing system, the rapid development of intelligent sensing technology has also led to a continuous stream of innovative applications based on the Internet of Vehicles (IoVs). Traditional vehicular ad hoc networks (VANETs) primarily make use of the dedicated short range communication (DSRC) standard for vehicle-to-vehicle (V2V) communication and vehicle-to-infrastructure (V2I) communication. However, some studies have shown that the DSRC standard has a high probability of collision [1][2][3], especially at higher vehicle density.
Moreover, the DSRC standard also has defects in scalability, mobility, and latency, so that it cannot be well suited for delay-sensitive vehicle-to-everything (V2X) applications. Intelligent transportation systems and autonomous driving technologies put forward more stringent requirements on system performance such as communication rate, delay, and reliability of the IoVs. Therefore, C-V2X based on 5G communication technology that can provide low-latency and high-reliability V2X communication capabilities is proposed [4]. Multiple vehicles with intelligent sensing devices make use of C-V2X communication technology to jointly build 5G-enabled vehicular networks to provide a variety of V2X services and applications. For example, vehicles equipped with on-board units (OBUs) can communicate with adjacent vehicles and can also communicate with the RSUs to share information related to road safety and industrial applications. Then, vehicles and RSUs use the emergency and beacon information to make decisions in a timely manner to achieve intelligent driving assistance and reduce the possibility of traffic jams and accidents. In addition, vehicles can also collect sensing information in real time and send it to the cloud to provide services such as environmental monitoring and accident reporting, which realize vehicle-to-network (V2N) communication. Therefore, in order to provide high-quality applications and services, it is necessary to ensure the security and reliable connection of the IoVs. In 5G-enabled vehicular networks, due to the openness and vulnerability of wireless channels, attackers can intercept, forge, and modify information through malicious means and even inject false information to cause vehicles to change lane or accelerate, which will cause unpredictable consequences [5]. Therefore, in 5G-enabled vehicular networks, the primary problem is how to ensure security of message transmission and exchange. It is not only necessary to authenticate identity of each message sender but also to check the integrity of the message, etc. Secondly, attackers can also obtain sensitive information such as the vehicles' trajectory through analysing the information sent by vehicles and then carry out some criminal acts, thereby reducing the enthusiasm of vehicles to join the IoVs [6]. Therefore, privacy leakage of vehicles is also a problem that must be solved. A pseudonym can be used to replace the vehicle's real identity for communication to ensure privacy and unlinkability. Moreover, a trusted third party that stores the vehicle's real identity is also needed to trace the anonymous vehicle and revoke its identity. Some conditional privacy-preserving authentication schemes have been surveyed [7][8][9][10]. Among them, two authentication schemes based on PKI have been proposed [7,8] to meet security and privacy requirements. However, due to the large number of certificates that needs to be stored and the limited storage capacity of vehicles or RSUs, it is challenging to meet the requirements of PKI-based authentication mechanisms. Besides, in order to alleviate the problem of public key certificate management in PKI-based authentication schemes, many identity-based authentication schemes for VANETs have been proposed [9,10] to realize conditional privacy preservation. However, once the private key generated by the private key generation centre is leaked, the problem of risky key escrow will appear. 
Fortunately, the certificateless signature (CLS) scheme can solve the above two problems well while retaining the advantages of the identity-based authentication mechanism [11][12][13]. There are also many resource-constrained devices in 5Genabled vehicular networks. Large-scale communication between vehicles will place a severe burden on these devices. Therefore, the communication, computing, storage, and other overhead in IoVs are also worthy of our attention. Using aggregate signature or batch verification can reduce overhead so as to improve authentication efficiency and overcome the delay-sensitive problem [14]. Some schemes using bilinear pairing operations have been proposed [15][16][17], but these schemes incur huge computational overhead and low efficiency of message signing and verification. Therefore, these schemes cannot meet the applications requirements of low latency in 5G-enabled vehicular networks. Recently, Cui et al. [18] proposed a message authentication framework based on reputation score for delay-sensitive 5G-enabled vehicular networks, in which vehicles with poor reputation value cannot communicate with other vehicles. Moreover, Cui et al. [18] proposed an authentication scheme based on ECC, which supports batch authentication to reduce computational overhead. Taha and Shen [19] proposed a vehicular clustering algorithm based on speed, location, and signal strength and proposed a group authentication scheme for 5G scenarios to reduce the computational overhead of handover authentication. The above two schemes are both efficient and feasible, but they do not solve the problems of how to reduce communication overhead and how to revoke malicious users. Therefore, in 5G-enabled vehicular networks, in order to achieve efficient communication between large-scale vehicles and to meet the requirements of security and privacy preservation, this paper proposes a CLAS scheme with revocation mechanism for 5G-enabled vehicular networks. The main contributions of this paper are summarized as below. (1) In order to reduce the computational overhead, we propose a CLAS scheme based on ECC without using computationally complex bilinear pairing operations and map to point hash functions. When many vehicles send a large number of messages with signatures to the RSUs, there is huge communication overhead and storage overhead. To solve this problem, we utilize the aggregate signature technology to aggregate multiple signatures into a single short signature and verify it once (2) In order to prevent malicious vehicles from influencing the aggregation verification and interfering with the normal communication of vehicles by inserting invalid signatures into the aggregation signatures, we make use of binary search to identify invalid signatures and take advantage of a cuckoo filter to construct a revocation mechanism to prevent malicious vehicles from attacking again. In order to prevent information injection attacks that often appear in 2 Wireless Communications and Mobile Computing 5G-enabled vehicular networks, the method with a random vector is used to ensure security in the signature aggregation phase (3) Formal proof and security analysis can prove that the proposed CLAS scheme can meet the security and privacy preservation requirements in efficient communication. 
In addition, experimental results indicate that the proposed CLAS scheme has a particular improvement in computing and communication efficiency compared with the previous schemes The organization of this paper is summarized as follows. We review related work in Section 2. We introduce background knowledge in Section 3. Section 4 describes the proposed CLAS scheme in detail. In Section 5, we give the formal proof and security requirements analysis of the proposed CLAS scheme. In Section 6, we compare our proposed scheme with other schemes in detail in terms of performance. Section 7 concludes this paper. Related Work This section will introduce the authentication schemes in traditional VANETs and the security protection methods applied to 5G-enabled vehicular networks, respectively. At present, a variety of traditional PKI-based authentication schemes have been proposed [7,8]. However, as the number of users increases, a large number of public key certificates will lead to a gradual increase in storage and communication overhead. In 1984, Shamir [20] proposed the identity-based public key cryptography (ID-PKC), which solved the problems existing in the use of certificates in traditional PKI-based schemes and reduced the overhead of certificate management. Although many identity-based authentication schemes have been proposed [20], it still cannot solve the key escrow problem. In these schemes, it is assumed that all users must completely trust the key generation centre (KGC), but this assumption is too ideal to be adopted practically in many applications. Al-Riyami and Paterson [21] proposed an authentication scheme based on the certificateless public key cryptography (CL-PKC) in 2003, in which the user's private key is a combination of the partial private key generated by the KGC and the secret value generated by the user. After that, many CLS schemes and security models based on CL-PKC have been proposed [22][23][24]. Huang et al. [24] proved that the CLS scheme proposed in [21] could not resist the public key replacement attack and further proposed an improved CLS scheme. Subsequently, Yum and Lee [22] introduced a general CLS framework. However, the above solutions still have problems in terms of efficiency and are not suitable for the low-latency requirements of 5G-enabled vehicular networks. Boneh et al. [14] first proposed the concept of aggregate signature in 2003. The aggregation signature technology can aggregate n signatures of n messages from n users into a single short signature to reduce the signature length and the authentication burden of RSUs. The validity of the aggregate signature is guaranteed by verifying the validity of each sig-nature involved in the aggregate signature. Batch verification technology is also considered to be an effective method to improve verification efficiency. With this technique, multiple signatures from different signers of different messages can be verified at once. Subsequently, many CLAS schemes applied to VANETs have been proposed. In 2018, Cui et al. [25] proposed a CLAS scheme based on ECC, which has the advantages of certificateless public key cryptography and aggregate signature. This scheme can achieve trade-off between privacy preservation and traceability, and the security of the proposed scheme is proved by the random oracle model (ROM). Gayathri et al. [11] and Bayat et al. [12] proposed some effective CLS schemes for VANETs with batch verification. Unfortunately, Li and Zhang [13] found that the scheme proposed by Bayat et al. 
was insecure in their security model and proposed an improved scheme. In 2019, Kamil and Ogundoyin [26] proved that the CLS and CLAS schemes proposed by Cui et al. [25] were not secure against Adv II adversary in the ROM and proposed an improved CLAS scheme for VANETs. Subsequently, Zhao et al. [27] proved that the scheme proposed by Kamil and Ogundoyin [26] could not resist Adv I and Adv II adversaries and proposed an improved CLAS scheme based on ECC. However, Thumbur et al. [28] pointed out that the construction of the CLS scheme [27] was incorrect in 2020. Zhong et al. [29] also proposed a privacy-preserving authentication scheme that realized full aggregation in VANETs, in which the length of the aggregate signature was kept constant, reducing communication and storage overhead. However, Kamil and Ogundoyin [30] proved that the scheme [29] was not secure in the standard security model by designing two specific attacks in 2020. Then, they modified the signature, verification, and aggregation verification algorithms [29] to make it more secure and effective. In 2020, Ren et al. [15] and Ali et al. [16], respectively, designed a CLS scheme with batch verification for V2I secure communication in VANETs using blockchain technology. Ali et al. [16] also proposed an efficient CLAS scheme based on bilinear pairing, which took advantage of the immutability and openness of blockchain and could verify whether the identities of all vehicles in VANETs was legitimate. Recently, Mei et al. [17] proposed an effective CLAS scheme with conditional privacy preservation, which used full aggregation technology to reduce bandwidth resources and computational overhead. However, since these schemes [15-17, 29, 30] are all implemented based on bilinear pairing, the process of signing and verifying has a lot of overhead. With the arrival of the 5G era, the research on security and privacy preservation methods in 5G-enabled vehicular networks is gradually started [2,3,18,19,31,32]. Eiza et al. [3] proposed a novel system model for 5G-enabled vehicular networks, which provided a reliable and secure real-time video reporting service, and also proposed a realtime video reporting service protocol that met security and privacy preservation requirements. Similarly, Zhang et al. [2] also proposed an authentication scheme based on edge computing, which used edge computing vehicles to realize communication and verification between vehicles without the participation of RSUs. Recently, Cui et al. [31] proposed 3 Wireless Communications and Mobile Computing a reliable and efficient content sharing scheme for 5Genabled vehicular networks. Wang et al. [32] proposed a hybrid D2D message authentication scheme for 5Genabled vehicular networks. Preliminaries This section first introduces the theoretical basis of our proposed scheme. ECC and cuckoo filter are important components of the proposed CLAS scheme, in which ECC can ensure the efficient performance and security of the system, while the cuckoo filter has an efficient search feature to find malicious user quickly. After that, the system model and the authentication process are described. Then, the basic framework and execution process of the general CLAS scheme are given. Besides, the threat model elaborates the assumption conditions, attack model, and adversary model. Finally, the design goals are summarized. 3.1. ECC. 
ECC is widely used in the design of cryptographic protocols and security schemes because of its high computational efficiency and communication efficiency. Suppose that a finite field F_p is defined over a large prime p and that an elliptic curve E defined over the field F_p is the set of solutions (x, y) to the equation E: y^2 = x^3 + ax + b (mod p), where a, b ∈ F_p, (x, y) ∈ F_p × F_p, and (4a^3 + 27b^2) mod p ≠ 0. The elliptic curve E is then the set of such solutions together with a point at infinity O, where O is the identity element, and the points of E form a finite Abelian group. Here, p and E are fixed and publicly known. Furthermore, suppose that P ∈ E is a fixed and publicly known point and that G_p is an additive group whose order is a large prime q and whose generator is P. Therefore, G_p is a finite cyclic subgroup of the elliptic curve E. A brief description of the two hard problems of ECC is as follows.

(1) Elliptic curve discrete logarithm problem (ECDLP): given two random points P, Q ∈ G_p, where x ∈ [0, q − 1] is unknown and Q = xP is known, it is hard to compute x in polynomial time with nonnegligible probability.

(2) Computational Diffie-Hellman problem (CDHP): given two random points aP, bP ∈ G_p, where a, b ∈ [0, q − 1] are unknown, it is hard to compute Q = abP, Q ∈ G_p, in polynomial time with nonnegligible probability.

The security of ECC is based on the difficulty of the ECDLP and the CDHP.

3.2. Cuckoo Filter. A cuckoo filter is a compact variant of a cuckoo hash table and a probabilistic data structure for approximate set membership tests. It stores only a fingerprint for each inserted item, supports dynamic addition and deletion of items, and provides higher lookup performance than standard Bloom filters without incurring higher overhead in space and performance. Therefore, the cuckoo filter is suitable for applications that store many items and aim for low false-positive rates [33]. The cuckoo filter uses two hash functions h1(x) and h2(x) to insert a new item x into a hash table. If one of x's two candidate locations is empty, the algorithm simply puts x there. If both are full, the algorithm randomly selects one of them, kicks out the existing item, and puts x there; the kicked-out item is then relocated to its own alternate location. The insertion operation of the cuckoo filter is shown in Figure 1.

3.3. System Model. As shown in Figure 2, the system model of 5G-enabled vehicular networks mainly consists of four entities: a trusted authority (TA), an application server (AS), RSUs (i.e., 5G-RSUs or 5G-BSs), and vehicles, which communicate with each other via C-V2X. The system model can be divided into two layers, in which the upper network is composed of the TA and the AS and the lower network is composed of RSUs and vehicles equipped with OBUs.

TA: The TA includes a KGC and a trace authority (TRA). The TA uses secure wired channels to communicate with other facilities and is mainly responsible for initializing system parameters for each registered vehicle and RSU in 5G-enabled vehicular networks. The TA can also conditionally trace and revoke malicious vehicles to meet the requirements of security and privacy preservation. The KGC is responsible for generating a partial private key for each vehicle, while the TRA is responsible for generating a pseudonym for each vehicle and can trace the vehicle's real identity through the pseudo identity of the vehicle.
TA's executive agency is generally the government's traffic management centre (TMC) AS: AS is an application server with strong computing and storage capabilities. AS uses secure wired channels to communicate with other facilities and is mainly responsible for providing large-scale computing and storage services for 5G-enabled vehicular networks to collect and analyse traffic-related data RSUs: RSUs are located on both sides of the road and generally include 5G base stations and road side units. RSUs use V2I communication mode to receive and transmit vehicles' information. RSUs can also communicate with TA and AS using secure wired channels. Therefore, RSUs have the ability to store and forward data and cooperate with vehicles for data analysis. Moreover, RSUs also check the integrity and authenticity of the messages to fulfil security requirements Vehicles: vehicles are the main carrier for collecting driving data and sensing information in vehicular networks. Vehicles can use V2V communication mode to exchange information with adjacent vehicles and can also use V2N communication mode to communicate with TA and AS When vehicles join vehicular networks, TA initializes system parameters so that each vehicle can be assigned a pseudo identity and a corresponding public and private key pair. The vehicle must sign each message before sending it. After receiving the message signature pair, the RSU needs to verify the signature to ensure the authenticity of the 4 Wireless Communications and Mobile Computing signature and the integrity of the message. When the RSU receives a large number of traffic-related messages, the messages are aggregated, and the messages and signatures are sent to AS for verification and analysis. Finally, AS feeds back the results of authentication and analysis to the TA for further processing. System Framework. Generally, the CLAS scheme consist of the following algorithms [11,25]. According to the over-view of our proposed scheme, the process of each algorithm is roughly described as follows: (1) System initialization: according to the security parameter λ input by the system, TA generates an elliptic curve E, master secret keys x and s, system public keys K pub and T pub , and system parameters params, respectively. TA broadcasts system parameters params and keeps the master secret keys x and s Wireless Communications and Mobile Computing (2) Pseudo identity generation: vehicle V i sends its real identity RID i and its partial pseudo identity PID i,1 to TRA. 
TRA verifies the authenticity of the vehicle's real identity RID i and generates pseudonym PID i and then sends it to KGC and vehicle V i (3) Partial private key generation: KGC takes system parameters params, master secret key x, and a vehicle's pseudo identity PID i as input, returns vehicle's partial private key ppk i , and sends it to the vehicle V i over a secure channel (4) Vehicle key generation: vehicle V i takes its pseudo identity PID i , partial private key ppk i , state information ∇, and system parameters params as input and outputs a private key vsk i , a public key vpk i , a full public key PK i , and a full private key sk i (5) Individual signature generation: this algorithm is performed by each vehicle V i that takes state information ∇, its pseudo identity PID i and a message M i as input and responds with a signature σ i as output (6) Individual signature verification: this is an algorithm performed by a verifier, such as an RSU, that uses system parameters params, pseudo identity PID i , message M i , and signature σ i to verify the validity of the signature σ i (7) Signature aggregation: this is an algorithm performed by an aggregate signature generator, such as an RSU, that aggregates n signatures σ i of n messages M i from n users V i into a single short aggregate signature σ and send it to AS (8) Aggregate signature verification: this algorithm is performed by AS for verifying the validity of the aggregate signature σ. It takes system parameters p arams, n pseudo identities fPID 1 , PID 2 , ‥‥PID n g, n messages fM 1 , M 2 , ‥‥M n g, and the aggregate signature σ as input and outputs true if the aggregate signature σ is valid and false otherwise Based on the above algorithm process, the execution flow of our proposed scheme is shown in Figure 3. Threat Model. The proposed CLAS scheme has the following assumptions: (1) TA and AS are fully trusted, independent, and reliable entities with sufficient storage and computation capabilities (2) Each RSU is an honest-but-curious entity with slightly less storage and computation capabilities than TA and AS and provides good network coverage and ultrafast information transmission speed (3) Each vehicle is an untrusted entity, equipped with an OBU with limited storage and computation capabilities and equipped with a tamper proof device (TPD) to protect absolute data security In order to prove that our proposed CLAS scheme is not existentially forgeable against adaptive chosen-message attack in ROM, we define two types of adversaries which are similar to these schemes [10,12,13,34]. Type 1. A 1 adversary is an external adversary who has the ability to launch a public key replacement attack but cannot obtain the master secret key or the partial private key Type 2. A 2 adversary represents a malicious-but-passive internal attacker who can access the master secret key and the partial private key but cannot replace any user's public key We consider two games played by a challenger C and an adversary A ∈ fA 1 , A 2 g to define the security of our proposed scheme. The adversary A can query the following oracles. Create Vehicle ðIDÞ. When receiving a query from A, C takes a vehicle's identity ID as input, calculates vpk i , and sends it to A Partial Private Key ðIDÞ. When receiving the partial private key query for a vehicle whose identity is ID from A, C sends ppk i to A Public Key ðIDÞ. When A requests C for the public key of a vehicle whose identity is ID, C calculates PK i and sends it to A Secret Key ðIDÞ. 
Given a vehicle whose identity is ID, the oracle sends vsk i to A Replace Public KeyðID, PK i * Þ. When receiving this query from A, C replaces the public key PK i with PK i * Sign ðID, M i Þ. When receiving a message M i from a vehicle whose identity is ID, C calculates a certificateless signature σ i and sends it to A 3.6. Security and Privacy Requirements. Considering that 5G-enabled vehicular networks must meet the requirements of security, efficiency, and privacy protection, the proposed CLAS scheme needs to satisfy the security and privacy requirements described as follows: Message integrity and authentication: in 5G-enabled vehicular networks, when an RSU receives a signature and message sent by a vehicle, it must verify the authenticity and integrity of the message to ensure the legitimacy of the vehicle and to ensure that the message has not been tampered with, impersonated, or forged by a malicious attacker Privacy preserving: during the authentication process for 5G-enabled vehicular networks, vehicles are not allowed to communicate using their real identities and must use pseudonyms Traceability and revocability: when vehicles communicate with pseudonyms, they are likely to be attacked by malicious vehicles. Therefore, TRA must have the ability to obtain the real identity of the malicious vehicle in order to trace its malicious act, as well as put in place certain mechanisms for management, such as revoking the malicious vehicle Unlinkability: an attacker must not be able to infer a vehicle from multiple messages sent by the same vehicle by cross-linking Resistance to various attacks: 5G-enabled vehicular networks are vulnerable, so the proposed CLAS scheme must have the ability to resist various general attacks, such as 6 Wireless Communications and Mobile Computing impersonation attacks, replay attacks, modification attacks, and information injection attacks The Proposed CLAS Scheme In this section, we propose an efficient and secure CLAS scheme, which is implemented without bilinear pairing and map to point hash operations. Moreover, when the aggregate signature is verified to be invalid, the proposed CLAS scheme uses binary search to identify invalid signatures and introduces a cuckoo filter to revoke malicious vehicles to prevent the attack again. In addition, the proposed CLAS scheme also uses a random vector to resist information injection attacks. This scheme includes eleven phases: system initialization, pseudo identity generation, vehicle registration, partial private key generation, vehicle key generation, individual signature generation, individual signature verification, signature aggregation, aggregate signature verification, invalid signature identification, and malicious vehicle revocation. The main symbols and description used in this scheme are shown in Table 1. The detailed implement process of each phase of the proposed CLAS scheme is described as follows. (i) System initialization: in this phase, KGC and TRA initialize system parameters for RSUs and vehicles (1) According to the security parameter λ, KGC selects two large prime numbers p, q, respectively, and generates an elliptic curve E : order q from P. TRA also selects a point P on the elliptic curve E as its random generator and generates a group G with order q from P (3) KGC selects a random number x ∈ Z * q from the finite field as its master private key, which is used to extract the partial private key ppk i . 
Then, it calculates the corresponding public key K pub = x · P, where x is only known by KGC (4) TRA selects a random number s ∈ Z * q as its master private key for traceability and calculates the system public key T pub = s · P, where s is only known by TRA (5) KGC and TRA choose four secure hash functions: (6) KGC and TRA keep the master secrets key x and s and issue the system parameters: Any vehicle that has successfully registered with the TA can access the system parameters via a secure channel and store them in its TPD. Similarly, any RSU can also access the system parameters after successful registration. (ii) Pseudo identity generation: in this phase, TRA works with vehicles to generate pseudo identities for vehicles to conditionally preserve their privacy, which allows them to send messages anonymously (1) The vehicle V i with its real identity RID i picks a random number ξ i ∈ Z * q to calculate the its partial pseudo identity PID i,1 = ξ i · P, and The vehicle V i then sends fPID i,1 , A i g securely to TRA. (2) When TRA receives the tuple fPID i,1 , A i g from the vehicle V i , it first checks whether is valid or not. If the identity of the vehicle V i fails, the TRA will discard the tuple; otherwise, it will calculate and generate the vehicle's pseudo identity PID i = fPID i,1 , P ID i,2 , VP i g, where VP i is the valid period of the vehicle's pseudo identity. Finally, TRA sends PID i to KGC and the vehicle V i through a secure channel (iii) Vehicle registration: after receiving the pseudo identity PID i , the vehicle V i stores it in its TPD and then communicates with other entities using the pseudo identity PID i in 5G-enabled vehicular networks. To preserve vehicle's privacy and trace malicious vehicle, TRA stores both the real identity RID i and the pseudo identity PID i of the vehicle V i . Once a vehicle or RSU reports a malicious vehicle, TRA can obtain the real identity of the vehicle from the vehicle's pseudo identity according to the master private key, and TRA inserts the malicious vehicle's pseudo identity PID i into the negative cuckoo filter (RPID-cuckoo) to revoke it (iv) Partial private key generation: in this phase, KGC generates a partial private key based on the received pseudo identity of the vehicle. The preload method is also used to store the pseudo identities of the vehicle and the partial private keys of the vehicle and to reload when them needs to be updated (1) After receiving the pseudo identity PID i of the vehicle V i , KGC checks the validity of the VP i in PID i and queries the RPID-cuckoo filter to ensure that the pseudo identity of the vehicle has not been revoked by the TRA (2) If the pseudo identity has not expired and the vehicle has not been revoked, KGC chooses a random number u i ∈ Z * q to calculate U i = u i · P, h 2i = h 2 ðPID i kK pub k paramsÞ and creates a partial private key ppk i = u i + xh 2i ðmod qÞ for the vehicle V i (v) Vehicle key generation: in this phase, the vehicle generates a partial public/private key pair and creates its full public/private key pair (1) After receiving the tuple ðU i , ppk i Þ from KGC, the vehicle V i calculates h 2i = h 2 ðPID i kK pub kparamsÞ and checks the validity of the partial private key pp k i by calculating whether the equation holds or not. 
If it holds, the vehicle V i stores the partial private key ppk i in its TPD; otherwise, the vehicle V i discards the partial private key ppk i (2) The vehicle V i chooses two random numbers a i , b i ∈ Z * q and calculates h 3i = h 3 ðPID i k∇kparamsÞ, θ i = a i h 3i , B i = b i · P, and C i = θ i · P, respectively (1) The vehicle V i randomly selects a pseudo identity PID i from its TPD and picks a latest timestamp t i to prevent replay attacks and ensure the timeliness of sensing information collection (2) The vehicle V i picks a random number r i ∈ Z * q , com- holds or not. If it holds, the verifier accepts the signaturemessage tuple msg (viii) Signature aggregation: in this phase, each RSU acts as an aggregate signature generator. When receiving multiple signature-message tuples fM 1 kPID 1 kvpk 1 kσ 1 kt 1 g, fM 2 kPID 2 kvpk 2 kσ 2 kt 2 g, …, fM n kPID n kvpk n kσ n kt n g from n vehicles fV 1 , V 2 , ‥‥ V n g, the aggregate signature generator aggregates multiple certificateless signatures into a single short signature, which can both reduce the communication overhead and make full use of the computational resources of the RSU. In this phase, the adversary can launch an injection attack by tampering with several valid signatures [29]. In order to prevent information injection attacks in the aggregation signature phase, we use the random vector to resist this attack (1) The aggregate signature generator randomly generates a vector v = fv 1 , v 2 , v 3 ⋯ , v n g that is used to resist information injection attacks, where each v i is a random exponent in the range ½1, 2 t and t is a very small integer (2) The aggregate signature generator calculates If it holds, then AS accepts the certificateless aggregate signature; otherwise, AS performs the following phases All of the above phases are shown in Figure 4. (x) Invalid signature identification: in this phase, when an attacker inserts an invalid signature into the aggregate signature in order to interfere with the verification resulting in invalid aggregate signature verification, AS identifies invalid signatures from the aggregate signature, requests TRA to revoke the malicious vehicles and broadcast them, and accepts the remaining valid signatures in order to reduce the computational overhead caused by repeated verification and prevent malicious vehicles from attacking. 
In the process of invalid signature identification, binary search is used jointly by AS and RSU (1) The RSU sorts the previously received multiple signatures, finds the middle position, divides the multiple signatures into two sets and then performs 9 Wireless Communications and Mobile Computing aggregate signature on them, respectively, and sends the two aggregate signatures to AS for verification (2) If AS verifies that either of the two aggregate signatures is invalid, the above procedures 1 and 2 are repeated for the invalid aggregate signatures until an invalid signature is found (3) AS sends multiple pseudo identities corresponding to invalid signatures fPID i , t i g to TRA to trace the real identity RID i of malicious vehicles and revoke them while sending also multiple pseudo identities corresponding to valid signatures fPID i , t i g to TRA and receiving these valid signatures The overall invalid signature identification steps are shown in Algorithm 1 (xi) Malicious vehicle revocation: in this phase, after AS identifies invalid and valid signatures from the aggregate signature and sends them to TRA, TRA traces the real identity RID i of the malicious vehicle through the pseudo identity PID i corresponding to the invalid signature and then maps out all preloaded pseudo identities of the vehicle V i . Finally, TRA makes use of the cuckoo filter to revoke the malicious vehicles. Moreover, when the fingerprint of each vehicle is known, in order to improve the efficiency of signature verification, a cuckoo filter is used to assist in verifying the signature. At the same time, in order to reduce false positives, two cuckoo filters are used to store relative fingerprint information: the positive filter (PID-cuckoo) is used to store fingerprints with valid signatures, and the negative filter (RPID-cuckoo) is used to store fingerprints with invalid signatures Table 2 Both case 1 and case 2 explicitly state whether the pseudo identity PID i is revoked or not. There is a certain probability that the result of the query is case 3. Case 3 indicates that AS has not yet verified the pseudo identity PID i , or the verification result has not been updated to the cuckoo filter in time. Therefore, the signature verification of the vehi-cle needs to wait for the next round of cuckoo filter update. However, if the number of queries exceeds the preset number of rounds of the system and the query still fails, the message receiver will check whether the message M i is valid or not according to equation (7). Case 4 occurs because the cuckoo filter has a certain false-positive rate. Therefore, the message receiver needs to send a reconfirmation message to TRA via RSU. The process of the message receiver querying the cuckoo filter to verify the pseudo identity PID i is shown in Algorithm 3 Security Proof and Analysis In this section, we explain the correctness of the verification process in our proposed scheme and prove the unforgeability of the signature in the proposed CLAS scheme. Finally, we analyse how the proposed CLAS scheme meets the requirements of security and privacy protection. (2) Aggregate signature verification: 5.2. Security Proof. In this section, we provide formal security proof for the proposed CLAS scheme for 5G-enabled vehicular networks. As mentioned earlier, in order to prove that our proposed scheme is existentially unforgeable against an adaptive chosen message attack under the ROM, we define two types of adversaries which are similar to [10,12,13,34] and consider two games. 
5. Security Proof and Analysis. In this section, we explain the correctness of the verification process in our proposed scheme and prove the unforgeability of the signature in the proposed CLAS scheme. Finally, we analyse how the proposed CLAS scheme meets the requirements of security and privacy protection. (2) Aggregate signature verification: […] 5.2. Security Proof. In this section, we provide a formal security proof for the proposed CLAS scheme for 5G-enabled vehicular networks. As mentioned earlier, in order to prove that our proposed scheme is existentially unforgeable against an adaptive chosen message attack under the ROM, we define two types of adversaries, similar to [10,12,13,34], and consider two games. Theorem 1. In the ROM, if there is a polynomial-time Type 1 adversary A_1 who can forge a valid signature of our scheme in the attack model of Game 1 after making at most q_h queries to the random oracles (h_i Query), q_c Create Vehicle(ID) queries, and q_s Sign(ID, M_i) queries, that is, who wins the game with a nonnegligible probability Pr[Succ(A_1)], then there must be a polynomial-time challenger C that can solve the ECDLP with a nonnegligible advantage ε. Proof. In the ROM, it is assumed that there is a probabilistic polynomial-time adversary A_1 who has the ability to forge the signature-message tuple msg = {M_i‖PID_i‖vpk_i‖σ_i‖t_i} of the user ID_τ. Given a random instance (P, Q = s·P) of the ECDLP with the task of computing s, a polynomial-time challenger C calls A_1 as its subroutine and solves the ECDLP with a nonnegligible advantage in polynomial time. A_1 performs the following queries: Setup(ID). C inputs the security parameter λ in the system initialization phase, then sets K_pub = Q and sends the system parameters params = {P, p, q, E, G, h_2, h_3, h_4, K_pub, T_pub} to A_1. In this process, C constructs and maintains five hash lists L_h2i, L_h3i, L_h4i, L_u, and L_pk, all of which are initialized to empty. Create Vehicle(ID). C maintains a list L_u = {(ID, U_i, vpk_i, ppk_i, vsk_i, h_2i)}. When A_1 makes a query on ID, if ID is in L_u, then C returns the corresponding tuple (ID, […]). Partial Private Key(ID). When C receives a partial private key query from A_1 for ID, if ID ≠ ID_τ, C checks whether ID is already in L_u. If ID is in L_u, C sends ppk_i to A_1; otherwise, C runs Create Vehicle(ID) to obtain ppk_i and sends it to A_1. In addition, if ID = ID_τ, C stops the game. Public Key(ID). When C receives a public key query from A_1 for ID, C checks whether ID exists in L_u. If it exists, C sends PK_i = (U_i, vpk_i) to A_1; otherwise, C executes Create Vehicle(ID) to obtain the tuple (U_i, vpk_i) and sends it to A_1. Replace Public Key(ID, PK_i*). C maintains a list L_pk = {(ID, u_i, U_i, vsk_i, vpk_i)}. When A_1 makes a public key replacement query with (ID, PK_i*), where U_i* = u_i*·P, vpk_i* = vsk_i*·P, and PK_i* = (U_i*, vpk_i*), C sets U_i = U_i*, vpk_i = vpk_i*, vsk_i = vsk_i*, and ppk_i = ⊥, and inserts (ID, u_i*, […]) into L_pk. Sign(ID, M_i). When A_1 makes a signature query with (ID, M_i), C performs the following steps: (1) If ID is in L_pk, C picks three random numbers x, y, […] (2) If ID is not in L_pk, then, because C knows the private key of the vehicle with identity ID, C acts according to the procedure of the scheme. Forgery. In the end, A_1 outputs a forged but valid certificateless signature σ_i = (R_i, S_i) on (ID, M_i) which passes the signature verification phase. If ID ≠ ID_τ, C fails and stops. According to the forking lemma [35], A_1 can obtain another forged valid signature σ_i* = (R_i*, S_i*) by simulating the game again with the same random tape but a different choice of h_4i in polynomial time. We can then obtain s; that is, C successfully solves the ECDLP. We have […], and from these two linear equations we can derive the value of s.
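The extraction step at the end of the proof can be made concrete with a generic Schnorr-style example. This is not the exact signing equation of the scheme (which is not reproduced in this excerpt); it only shows the standard forking-lemma arithmetic: two valid forgeries sharing the same randomness but using different hash outputs give two linear equations in the unknown secret, which are then solved modulo the group order.

```python
# Toy parameters: the group is modeled additively as Z_q, so "s * P" is just s mod q.
q = 7919                      # small prime standing in for the group order
s = 1234                      # the ECDLP secret the challenger wants to extract
r = 4321                      # randomness reused across the two forked executions

# Generic Schnorr-style responses S = r + h * s (mod q) with two different oracle outputs.
h1, h2 = 111, 222
S1 = (r + h1 * s) % q
S2 = (r + h2 * s) % q

# Subtracting the two equations eliminates r; dividing by (h1 - h2) recovers the secret.
recovered = (S1 - S2) * pow(h1 - h2, -1, q) % q
print(recovered == s)         # True
```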
Probabilistic Analysis. We analyse the advantage with which C successfully obtains s from (P, Q = s·P) and thereby solves the ECDLP. In the process of executing a Create Vehicle(ID) query, the random oracle assignment h_2(ID‖K_pub‖params) causes an inconsistency with probability at most q_h/q. Therefore, the probability of a successful simulation across the q_c queries is at least (1 − q_h/q)^{q_c} ≥ 1 − q_h·q_c/q, and the probability of a successful simulation across the q_h queries is at least (1 − q_h/q)^{q_h} ≥ 1 − q_h²/q. In addition, ID = ID_τ occurs with probability 1/q_c. Therefore, the overall probability of a successful simulation is Pr[Succ(A_1)] ≥ (1 − q_h·q_c/q)(1 − q_h²/q)(1/q_c)·ε. It should be noted that the time complexity t + O(q_c + q_s)·S of C is determined by the exponentiations executed in the Create Vehicle(ID) and Sign(ID, M_i) queries, where S is the time of a scalar multiplication operation. Hence C can successfully obtain s from (P, Q = s·P) with advantage (1 − q_h·q_c/q)(1 − q_h²/q)(1/q_c)·ε and time complexity t + O(q_c + q_s)·S, which contradicts the ECDLP assumption. Therefore, the proposed scheme resists the forgery attack of the Type 1 adversary A_1 under the ROM. Theorem 2. In the ROM, if there is a polynomial-time Type 2 adversary A_2 who can forge a valid signature of our scheme in the attack model of Game 2 after making at most q_h queries to the random oracles (h_i Query), q_c Create Vehicle(ID) queries, and q_s Sign(ID, M_i) queries, that is, who wins the game with a nonnegligible probability Pr[Succ(A_2)], then there must be a polynomial-time challenger C that can solve the ECDLP with a nonnegligible advantage ε. Proof. In the ROM, it is assumed that there is a probabilistic polynomial-time adversary A_2 who has the ability to forge the signature-message tuple msg = {M_i‖PID_i‖vpk_i‖σ_i‖t_i} of the user ID_τ. Given a random instance (P, Q = t·P) of the ECDLP with the task of computing t, a polynomial-time challenger C calls A_2 as its subroutine and solves the ECDLP with a nonnegligible advantage in polynomial time. A_2 performs the following queries: Setup(ID). C inputs the security parameter λ in the system initialization phase, picks a random number w ∈ Z*_q, sets K_pub = w·P, and then sends the system parameters params = {P, p, q, E, G, h_2, h_3, h_4, K_pub, T_pub} to A_2. In this process, C constructs and maintains five hash lists L_h2i, L_h3i, L_h4i, L_u, and L_pk, all of which are initialized to empty. Create Vehicle(ID). C maintains a list L_u = {(ID, U_i, vpk_i, ppk_i, vsk_i, h_2i)}. When A_2 makes a query on ID, if ID is in L_u, then C returns the corresponding tuple (ID, […]). Otherwise, if ID = ID_τ, C selects two random numbers x, y ∈ Z*_q and then calculates U_i = x·P, vpk_i = Q, ppk_i = x + w·h_2i, vsk_i = ⊥, and programs h_2i = h_2(ID‖K_pub‖params) ← −y·x^{−1} (mod q). If ID ≠ ID_τ, C selects three random numbers x, y, z ∈ Z*_q and then calculates […]. h_2i Query. C maintains a list L_h2i = {(ID, K_pub, params, h_2i)}. When C receives a query (ID, K_pub, params) from A_2, if ID is already in L_h2i, it sends h_2i to A_2; otherwise, C executes Create Vehicle(ID) to calculate h_2i = h_2(ID‖K_pub‖params) and then sends it to A_2. h_3i Query. C maintains a list L_h3i = {(ID, params, ∇, h_3i)}. When C receives a query (ID, params, ∇) from A_2, if L_h3i contains (ID, params, ∇, h_3i), then C sends h_3i to A_2; otherwise, C chooses a random number h_3i ∈ Z*_q, sets h_3(ID‖∇‖params) = h_3i, sends h_3i to A_2, and inserts (ID, params, ∇, h_3i) into L_h3i. h_4i Query. C maintains a list L_h4i = {(ID, M_i, vpk_i, R_i, ∇, t_i, h_4i)}. When A_2 makes a query on ID, if ID is in L_h4i, then C sends h_4i to A_2.
Otherwise, C selects a random number h_4i ∈ Z*_q, sets h_4i = h_4(M_i‖ID‖vpk_i‖R_i‖∇‖t_i), sends h_4i to A_2, and inserts (ID, […]) into L_h4i. Partial Private Key(ID). When C receives a partial private key query from A_2 for ID, C checks whether ID is already in L_u. If ID is in L_u, C sends ppk_i to A_2; otherwise, C runs Create Vehicle(ID) to obtain ppk_i and sends it to A_2. Public Key(ID). When C receives a public key query from A_2 for ID, C checks whether ID exists in L_u. If it exists, C sends PK_i = (U_i, vpk_i) to A_2; otherwise, C executes Create Vehicle(ID) to obtain the tuple (U_i, vpk_i) and sends it to A_2. Secret Key(ID). When C receives a private key query from A_2 for ID, if ID ≠ ID_τ, C checks whether L_u contains ID. If ID is in L_u, C sends vsk_i to A_2; otherwise, C executes Create Vehicle(ID) to obtain vsk_i and sends it to A_2. In addition, if ID = ID_τ, C stops the game. Sign(ID, M_i). When A_2 makes a signature query with (ID, M_i), C performs the following steps: (1) If ID is in L_pk, C picks three random numbers x, y, z ∈ Z*_q such that S_i = x, programs h_4i = h_4(M_i‖ID‖vpk_i‖R_i‖∇‖t_i) ← x mod q, and sets R_i = x·P + y·K_pub − x·vpk_i; C then inserts (ID, M_i, vpk_i, R_i, ∇, t_i, h_4i) into L_h4i and finally sends the signature σ_i = (R_i, S_i) to A_2. (2) If ID is not in L_pk, then, because C knows the private key of the vehicle with identity ID, C acts according to the procedure of the scheme. Forgery. In the end, A_2 outputs a forged but valid certificateless aggregate signature σ = (R, S) on the tuple (ID, M_i) which passes the signature verification phase. If ID ≠ ID_τ, C fails and stops. According to the forking lemma [35], A_2 can obtain another valid signature σ* = (R*, S*) by simulating the game again with the same random tape but a different choice of h_4i in polynomial time. We can then obtain t; that is, C successfully solves the ECDLP. We have […], and from these two equations we can derive the value of t. Probabilistic Analysis. We analyse the advantage with which C successfully obtains t from (P, Q = t·P) and thereby solves the ECDLP. In the process of executing a Create Vehicle(ID) query, the random oracle assignment h_4i = h_4(M_i‖ID‖vpk_i‖R_i‖∇‖t_i) causes an inconsistency with probability at most q_h/q. Therefore, the probability of a successful simulation across the q_c queries is at least (1 − q_h/q)^{q_c} ≥ 1 − q_h·q_c/q, and the probability of a successful simulation across the q_h queries is at least (1 − q_h/q)^{q_h} ≥ 1 − q_h²/q. In addition, ID = ID_τ occurs with probability 1/q_c. Therefore, the overall probability of a successful simulation is Pr[Succ(A_2)] ≥ (1 − q_h·q_c/q)(1 − q_h²/q)(1/q_c)·ε. It should be noted that the time complexity t + O(q_c + q_s)·S of C is determined by the exponentiations executed in the Create Vehicle(ID) and Sign(ID, M_i) queries, where S is the time of a scalar multiplication operation. Hence C can successfully obtain t from (P, Q = t·P) with advantage (1 − q_h·q_c/q)(1 − q_h²/q)(1/q_c)·ε and time complexity t + O(q_c + q_s)·S, which contradicts the ECDLP assumption. Therefore, the proposed scheme resists the forgery attack of the Type 2 adversary A_2 under the ROM. 5.3. Security Analysis. Section 3.6 gave the security and privacy requirements that 5G-enabled vehicular networks should satisfy.
In this section, we analyse how the proposed CLAS scheme meets the above security requirements, based on Theorems 1 and 2. (i) Message integrity and authentication: according to Theorems 1 and 2, no polynomial-time adversary is able to forge a valid message. When a message msg is received from a vehicle, the RSU checks the validity and integrity of the message by verifying the equation S_i·P = R_i + h_4i·(vpk_i + h_2i·K_pub), which prevents the message from being tampered with or forged by malicious vehicles. As a result, no malicious adversary can construct h_2i = h_2(PID_i‖K_pub‖params) and h_4i = h_4(M_i‖PID_i‖vpk_i‖R_i‖∇‖t_i) to forge a signature σ_i = (R_i, S_i). Therefore, our proposed scheme meets the security requirements of message integrity and authentication. (ii) Privacy preserving: in 5G-enabled vehicular networks, each vehicle communicates under a pseudonym to hide its real identity RID_i, and the pseudo identity of any vehicle involves the TRA's master private key s and a random number. If an attacker wants to learn the real identity of a vehicle, it can only do so through a pseudo identity. However, given T_pub and PID_{i,1} = ξ_i·P, it is hard to calculate ξ_i·T_pub. Besides, TRA keeps the master private key s secret. Therefore, our proposed scheme meets the requirement of privacy preserving. (iii) Traceability and revocability: in our scheme, TRA can trace the identity of a malicious vehicle by recovering its real identity RID_i, and only TRA can extract the real identity RID_i from PID_i, because computing ξ_i·T_pub requires its master private key. When a malicious action takes place, TRA can effectively trace and revoke the malicious vehicle and insert the relevant information of the malicious vehicle into the RPID-cuckoo filter for revocation. Therefore, our proposed scheme meets the requirements of traceability and revocability. (iv) Unlinkability: in our scheme, a different pseudo identity is used for each message sent by the vehicle, and the corresponding private key is used to sign the message, where the number ξ_i in the pseudo identity is random. Moreover, there is no relationship between a vehicle's new pseudo identity and its old pseudo identity. Therefore, an attacker cannot link two messages to the same vehicle, so our scheme meets the requirement of unlinkability. (v) Resistance to replay attacks: in our scheme, the timestamp t_i is used to resist replay attacks. After receiving the message and signature, the receiver checks the freshness of the timestamp and whether the validity of the pseudo identity has expired. If the valid time is exceeded, the message is discarded. Therefore, our scheme is resistant to replay attacks. (vi) Resistance to modification attacks: if an attacker modifies a message and sends it to others, the verifier can easily determine that the message has been modified by verifying the equation S_i·P = R_i + h_4i·(vpk_i + h_2i·K_pub). Therefore, our scheme is resistant to modification attacks. (vii) Resistance to impersonation attacks: according to Theorems 1 and 2, our scheme is existentially unforgeable against an adaptive chosen message attack under the ROM, so the adversary does not have the ability to launch an impersonation attack by forging a valid signature. An impersonation attack is easily detected by verifying the equation S_i·P = R_i + h_4i·(vpk_i + h_2i·K_pub). Therefore, our scheme is resistant to impersonation attacks.
(viii) Resistance to information injection attacks: in our scheme, the value checked by the equation S·P = R + C + Σ_{i=1}^{n} v_i·h_4i·h_2i·K_pub can be changed by using a random vector […]. Comparison of the schemes (Scheme: Sign, Verify): Ali et al. [16]: 74.13%, 77.56%; Ren et al. [15]: 87.06%, 84.22%; Zhong et al. [29]: 96.07%, 94.65%; Xu et al. [37]: 95.36%, 94.65%; Gayathri et al. [36]: 50%, 39.39%; Gayathri et al. [11]: 50%, 39.39%; Ming and Cheng [38]: 66.67%, 24.80%. Element sizes: the size of elements in group G_1 is 128 bytes, the size of elements in group G is 40 bytes, and the size of elements in Z*_q is 20 bytes. 6.1. Computational Overhead Analysis. We adopt the same evaluation method as He et al. [10] to analyse the computational overhead of the proposed scheme. The basic configuration is an Intel i7-4770 processor with a 3.40 GHz clock frequency and 4 GB of memory, with the MIRACL library running on the Windows 7 operating system; MIRACL is a well-known cryptographic library. The bilinear pairing at the 80-bit security level is constructed as e : G_1 × G_1 → G_2, where G_1 is an additive group whose order is a 160-bit Solinas prime, defined on a super-singular elliptic curve E : y² = x³ mod p over a 512-bit prime modulus with embedding degree 2. The ECC at the 80-bit security level is created on the nonsingular elliptic curve E : y² = x³ + ax + b mod p, where G is an additive group of order q, p and q are 160-bit primes, and a, b ∈ Z*_p. The execution times of the basic cryptographic operations in the scheme are shown in Table 3. We compare the computational overhead of our scheme with the schemes [11,15,16,29,36-38] in three phases: the individual signature generation phase, the individual signature verification phase, and the aggregate signature verification phase. The schemes [15,16,29,37] are based on bilinear pairings, while the schemes [11,36,38] and our scheme are constructed using ECC. Since a general one-way hash operation T_h incurs very little overhead in the signing and verification process, we do not count the running time of this operation. The specific analysis of the computation overhead is shown in Table 4, and the computation overhead of the individual signature generation and verification phases is shown in Figure 5. In Zhong et al.'s scheme [29], a signature requires four pairing-based scalar multiplications, two pairing-based scalar point additions, and one map-to-point hash operation; therefore, the computational cost of the signature generation phase of that scheme is T_mtp + 2T_pa-bp + 4T_sm-bp = 11.2562 ms. In the signature verification phase, the operations performed include three bilinear pairing operations, two pairing-based scalar multiplications, one pairing-based scalar point addition, and two map-to-point hash operations; thus, the computation cost is 2T_mtp + 2T_sm-bp + T_pa-bp + 3T_bp = 24.8701 ms. The aggregate signature verification phase of Zhong et al.'s scheme [29] requires three bilinear pairing operations, 2n pairing-based scalar multiplications, (2n − 1) pairing-based scalar point additions, and n map-to-point hash operations, where n is the number of sensing messages transmitted by the sender within a period of time. In this phase, the computation cost is n·T_mtp + 2n·T_sm-bp + (2n − 1)·T_pa-bp + 3T_bp = 7.8382n + 12.6259 ms.
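The per-phase totals quoted in this overhead analysis (including the proposed scheme's ECC-based costs reported in the next paragraph) can be reproduced from the individual operation times. The values below are not copied from Table 3, which is not included in this excerpt; they are inferred by solving the quoted totals for the unknown operation times, so they should be read as assumed inputs for the sketch.

```python
# Operation times in ms, inferred from the quoted totals (assumed, not taken from Table 3).
T_bp     = 4.2110   # bilinear pairing
T_sm_bp  = 1.7090   # scalar multiplication in the pairing setting
T_pa_bp  = 0.0071   # point addition in the pairing setting
T_mtp    = 4.4060   # map-to-point hash
T_sm_ecc = 0.4420   # ECC scalar multiplication
T_pa_ecc = 0.0018   # ECC point addition

def zhong_costs(n):
    """Sign / verify / aggregate-verify costs (ms) of the pairing-based scheme [29]."""
    sign      = T_mtp + 2 * T_pa_bp + 4 * T_sm_bp
    verify    = 2 * T_mtp + 2 * T_sm_bp + T_pa_bp + 3 * T_bp
    agg_verif = n * T_mtp + 2 * n * T_sm_bp + (2 * n - 1) * T_pa_bp + 3 * T_bp
    return sign, verify, agg_verif

def proposed_costs():
    """Sign / verify / aggregate-verify costs (ms) of the ECC-based proposed scheme."""
    sign      = T_sm_ecc                      # the text counts only the scalar multiplication here
    verify    = 3 * T_sm_ecc + 2 * T_pa_ecc
    agg_verif = 2 * T_sm_ecc + 2 * T_pa_ecc
    return sign, verify, agg_verif

print(zhong_costs(n=1))    # ≈ (11.2562, 24.8701, 20.4641); the per-message term grows as 7.8382*n
print(proposed_costs())    # ≈ (0.442, 1.3296, 0.8876)
```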
In the proposed scheme, the signature generation phase requires one ECC-based scalar multiplication and one ECC-based scalar point addition; the computational cost of this phase is T_sm-ecc = 0.442 ms. In the signature verification phase, the operations performed include three ECC-based scalar multiplications and two ECC-based scalar point additions, so the computational cost of this phase is 3T_sm-ecc + 2T_pa-ecc = 1.3296 ms. In the aggregate signature verification phase, the operations performed include two ECC-based scalar multiplications and two ECC-based scalar point additions, so the computation cost of this phase is 2T_sm-ecc + 2T_pa-ecc = 0.8876 ms. 6.2. Communication Overhead Analysis. Driven by 5G, the IoV needs to support the interconnection between smart devices and systems. It will provide access to users everywhere at any time and place high-intensity data transmission pressure on vehicular networks; it is therefore crucial to reduce the communication overhead. For fairness, we use the variables in Table 6 to compare the schemes at equivalent security levels. At the 80-bit security level, let the sizes of the elements in G and G_1 be 40 bytes and 128 bytes, where G and G_1 are an additive group on a nonsingular elliptic curve and an additive group on a super-singular curve, respectively. The signature sizes of the schemes [15,29,36,37] are compared with that of the proposed scheme in Table 7. The aggregate signature generated by the proposed scheme consists of one Z*_q element and one G element based on the elliptic curve; therefore, it takes only 100 bytes to send the aggregate signature. It should be noted that the size of the aggregate signature bears no relation to the number of messages in the schemes of Ren et al. [15], Zhong et al. [29], and Gayathri et al. [36], or in our scheme. Although these schemes also have relatively low communication costs, the aggregate signature lengths of Ali et al.'s scheme [16] and Zhong et al.'s scheme [29] are 128 bytes and 256 bytes, respectively, which far exceed the communication cost required by our scheme to send an aggregate signature. Because the proposed scheme has a lower transmission cost than the other schemes, it can meet the low-delay requirements of 5G-enabled vehicular networks. Conclusion. This paper proposes a lightweight CLAS scheme with a revocation mechanism for 5G-enabled vehicular networks. The proposed scheme uses binary search to identify invalid signatures and introduces a cuckoo filter to revoke malicious users and prevent repeated attacks. The security analysis shows that the proposed scheme is secure under the ROM and meets all of the security and privacy requirements. Moreover, our CLAS scheme uses ECC to construct the concrete algorithm, which avoids the computationally expensive bilinear pairing operations and map-to-point hash functions, thus improving the computational efficiency. Experimental comparison with other related schemes shows that our CLAS scheme has significant advantages in both computational overhead and communication overhead. Data Availability. The proposed scheme and its analysis need only theoretical and experimental support; no additional data set is provided in this paper.
14,810.6
2022-04-12T00:00:00.000
[ "Computer Science", "Engineering" ]
Tangles are decided by weighted vertex sets We show that, given a $ k $-tangle $ \tau $ in a graph $ G $, there always exists a weight function $ w: V(G)\to\mathbb{N} $ such that a separation $ (A,B) $ of $ G $ of order $<k $ lies in $ \tau $ if and only if $ w(A)<w(B) $, where $ w(U):=\sum_{u\in U}w(u) $ for $ U\subseteq V(G) $. February 5, 2019 We show that, given a k-tangle τ in a graph G, there always exists a weight Tangles in graphs have played a central role in graph minor theory ever since their introduction by Robertson and Seymour in [3]. Informally, a tangle in a graph G is an orientation of all low-order separations of G satisfying certain consistency assumptions. Tangles capture highly connected substructures in G in the sense that every such substructure defines a tangle in G by orienting each low-order separation of G towards the side containing most or all of that substructure. In view of this, if some tangle of G contains the separation (A, B), we think of A and B as the 'small' and the 'big' side of (A, B) in that tangle, respectively; our main result will confirm this intuition. Formally, a separation of a graph G = (V, E) is a pair (A, B) with A ∪ B = V such that G contains no edge between A B and B A, and the order of a separation (A, B) is the size |A ∩ B| of its separator A ∩ B. Furthermore, for an integer k, a ktangle in G is a set consisting of exactly one of (A, B) and (B, A) for every separation (A, B) of G of order < k, with the additional property that no three 'small' sides of separations in τ cover G, that is, that there are no ( As a concrete example, if G contains an n × n-grid for n k, then the vertex set of that grid defines a k-tangle τ in G by letting (A, B) ∈ τ for a separation of order < k if and only if B is the unique side of (A, B) containing, say, 90% of the vertices of that grid. In this way, the vertex set of the n × n-grid 'defines τ by majority vote'. In [1] Diestel raised the question whether all tangles in graphs arise in the above fashion, that is, whether all graph tangles are decided by majority vote: given a k-tangle τ in a graph G, is there always a set X of vertices such that a separation (A, B) of order < k lies in τ if and only if |A ∩ X| < |B ∩ X|? A partial answer to this was given in [2], where Elbracht showed that such a set X always exists if G is (k − 1)-connected and has at least 4(k − 1) vertices. The general problem appears to be hard. In this paper, we consider a fractional version of Diestel's question and answer it affirmatively, making precise the notion that B is the 'big' side of a separation (A, B) ∈ τ : given a k-tangle τ in G, rather than finding a vertex set X which decides τ by majority vote, we find a weight function w : V → N on the vertices such that for all separations (A, B) of order < k we have (A, B) ∈ τ if and only if the vertices in B have higher total weight than those in A. The existence of a vertex set X as in Diestel's original question would then imply that there is such a weight function with values in {0, 1}. For a graph G there is a partial order on the separations of G given by letting (A, B) ≤ (C, D) if and only if A ⊆ C and B ⊇ D. One of the main ingredients for the proof of Theorem 1 is the following observation about those separations in a tangle τ that are maximal in τ with respect to this partial order. It says, roughly, that they divide each other's separators so that, on average, those separators lie more on the big side of the separation than on the small side, according to the tangle. 
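Before the linear-programming step, the "decided by majority vote" notion itself is easy to state in code. The sketch below is only an illustration of the question raised above, on a made-up toy example: it orients each given separation toward the side of larger total weight (a set X corresponds to a 0/1-valued weight function) and checks the informal consistency condition that no three "small" sides cover the vertex set; the full tangle axioms on subgraphs, and the choice of which separations to consider, are left out.

```python
from itertools import combinations_with_replacement

def orient_by_weight(separations, w):
    """Orient each separation (A, B) toward the side of larger total weight w."""
    tau = []
    for A, B in separations:
        wA, wB = sum(w.get(v, 0) for v in A), sum(w.get(v, 0) for v in B)
        if wA == wB:
            return None                      # the weight function does not decide this separation
        tau.append((A, B) if wA < wB else (B, A))
    return tau

def small_sides_cover(V, tau):
    """Check whether some three 'small' sides of tau cover all vertices (informal consistency test)."""
    small = [set(A) for A, _ in tau]
    return any(a | b | c == set(V) for a, b, c in combinations_with_replacement(small, 3))

# Toy example: X = {3, 4, 5} plays the role of the highly connected substructure.
V = range(6)
X = {3, 4, 5}
w = {v: (1 if v in X else 0) for v in V}
seps = [({0, 1, 2}, {2, 3, 4, 5}), ({0, 3}, {1, 2, 3, 4, 5})]   # separations (A, B) with A ∪ B = V
tau = orient_by_weight(seps, w)
print(tau)                       # each separation is oriented toward the side containing most of X
print(small_sides_cover(V, tau)) # False: the small sides do not cover the vertex set
```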
Additionally we shall use a result from linear programming. For a vector x ∈ R n we use the usual shorthand notation x ≥ 0 to indicate that all entries of x are non-negative, and similarly write x > 0 if all entries of x are strictly greater than zero. Lemma 3 ([4]). Let K ∈ R n×n be a skew-symmetric matrix, i.e. K T = −K. Then there exists a vector x ∈ R n such that Kx ≥ 0 and x ≥ 0 and x + Kx > 0. We are now ready to prove Theorem 1. Proof of Theorem 1. Let a finite graph G = (V, E) and a k-tangle τ in G be given. Since G is finite it suffices to find a weight function w : V → R ≥0 such that a separation (A, B) of order < k lies in τ precisely if w(A) < w(B); this can then be turned into such a function with values in N. For this it is enough to find a function w : V → R ≥0 such that w(A) < w(B) for all maximal elements (A, B) of τ : for if w(A) < w(B) and (C, D) ≤ (A, B) then w(C) ≤ w(A) < w(B) ≤ w(D). So let us show that such a weight function w exists. To this end let (A 1 , B 1 ), . . . , (A n , B n ) be the maximal elements of τ and set Observe that, by Lemma 2, we have m ij + m ji > 0 for all i = j and hence the matrix M + M T has positive entries everywhere but on its diagonal (where it has zeros). We further define Then K is skew-symmetric, that is, K T = −K. Let x = (x 1 , . . . , x n ) T be the vector obtained by applying Lemma 3 to K. We define a weight function w : V → R by Note that w has its image in R ≥0 and observe further that, for Y ⊆ V , we have With this, for i ≤ n, we have where (M x) i denotes the i-th coordinate of M x. Thus w is the desired weight function if we can show that M x > 0, that is, if all entries of M x are positive. From x + Kx > 0 we know that at least one entry of x is positive. Let us first consider the case that x has two or more positive entries. Then K x > 0 since K has positive values everywhere but on the diagonal, and hence since Kx ≥ 0. Therefore, in this case, w is the desired weight function. Consider now the case that exactly one entry of x, say x i , is positive, and that x is zero in all other coordinates. Then for j = i we have (M x) j ≥ (K x) j > 0 and thus w(B j ) − w(A j ) = (M x) j > 0. However (M x) i = 0 and thus w(A i ) = w(B i ), so w is not yet as claimed. To finish the proof it remains to modify w such that w(A i ) < w(B i ) while ensuring that we still have w(A j ) < w(B j ) for j = i. This can be achieved by picking a sufficiently small ε > 0 such that w(A j ) + ε < w(B j ) for all j = i, picking any v ∈ B i A i , and increasing the value of w(v) by ε. We conclude with the remark that Theorem 1 and its proof extend to tangles in hypergraphs without any changes. Even more generally, the following version of Theorem 1, which is formulated in the language of [1], can be established with exactly the same proof as well: Theorem 4 then applies to tangles in graphs or hypergraphs by taking for U the universe of separations of a (hyper-)graph and for P the given k-tangle. (See [1] for more on the relation between graph tangles and profiles.) Theorem 4 holds with the same proof as Theorem 1, since Lemma 2 holds in this setting too, using the definition of profile rather than the tangle axioms.
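Lemma 3 is an existence statement, but for a concrete skew-symmetric matrix such a vector can also be searched for with a small linear program. The sketch below (using scipy, maximizing the smallest entry of x + Kx subject to Kx ≥ 0, x ≥ 0, and a normalization Σx_i ≤ 1) is only an illustration of the lemma, not part of the proof above; the normalization is an added assumption that keeps the program bounded.

```python
import numpy as np
from scipy.optimize import linprog

def find_vector(K, tol=1e-9):
    """Search for x >= 0 with K x >= 0 and x + K x > 0 componentwise (K skew-symmetric).

    Formulated as an LP: maximize t subject to K x >= 0, (I + K) x >= t, sum(x) <= 1, x, t >= 0.
    """
    n = K.shape[0]
    c = np.zeros(n + 1); c[-1] = -1.0            # linprog minimizes, so minimize -t
    A_ub = np.zeros((2 * n + 1, n + 1))
    A_ub[:n, :n] = -K                            # encodes K x >= 0
    A_ub[n:2 * n, :n] = -(np.eye(n) + K)         # encodes (I + K) x >= t * 1
    A_ub[n:2 * n, -1] = 1.0
    A_ub[-1, :n] = 1.0                           # normalization: sum(x) <= 1
    b_ub = np.zeros(2 * n + 1); b_ub[-1] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    if not res.success or res.x[-1] <= tol:
        return None
    return res.x[:n]

K = np.array([[0.0, 1.0], [-1.0, 0.0]])          # a small skew-symmetric example
x = find_vector(K)
print(x, K @ x, x + K @ x)                       # x ≈ [0, 1]: K x ≈ [1, 0] >= 0 and x + K x ≈ [1, 1] > 0
```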
2,049.6
2018-11-16T00:00:00.000
[ "Mathematics" ]
A new approach to stochastic relativistic fluid dynamics from information flow We present a new general formalism for introducing thermal fluctuations in relativistic hydrodynamics, which incorporates recent developments on the causality and stability of relativistic hydrodynamic theories. Our approach is based on the information current, which measures the net amount of information carried by perturbations around equilibrium in a relativistic many-body system. The resulting noise correlators are guaranteed to be observer-independent for thermodynamically stable models. We obtain an effective action within our formalism and discuss its properties. Introduction The matter formed during high-energy heavy-ion collisions is modeled as an expanding viscous relativistic fluid known as the quark-gluon plasma (QGP).While this modeling has had great success describing data, it is known that any dissipative system near equilibrium exhibits spontaneous thermal fluctuations, which is the physics content behind the fluctuationdissipation theorem [1,2].Therefore, self-consistent hydrodynamic simulations of the QGP should also investigate these stochastic fluctuations. In this proceedings, we report on a new approach for including stochastic fluctuations in relativistic systems developed in [3,4].This formalism is guaranteed to be causal and stable against fluctuations around the equilibrium state in a Lorentz-invariant manner.Our construction relies on the information current [5], a quantity that measures how much information is contained in a given perturbation around the equilibrium state.This information current can be used to obtain the equilibrium probability distribution for the system to be in a given state.We also construct an effective action for the fluctuating hydrodynamic system and show how the noise can be obtained from a symmetry of this action. The information current The free energy describes the thermal state of a system at a given time.To define this in relativity, we must foliate the spacetime into a set of spacelike hypersurfaces Σ τ with pastdirected timelike unit normal vectors n µ .The free energy depends on the thermodynamic variables and how we choose to foliate spacetime.On a given hypersurface, the free energy variation from equilibrium, δΩ, is given by where δs µ is the variation of the entropy current from equilibrium, α I are the thermodynamic conjugates to the conserved quantities evaluated at equilibrium, and δJ µI are the variations of the conserved currents from equilibrium.The quantity in parentheses is the information current The probability distribution for the system to be in a given macroscopic state around equilibrium is then given by where δφ A is a vector containing the hydrodynamic fields of the system.Note that this probability distribution seemingly depends on the choice of foliation.However, physical results should be observer-independent, so ensuring that this dependence on the foliation drops out when calculating the correlation functions for observable quantities is important. While the utility of the information current should already be apparent through its connection to thermodynamics, it also has deep connections to causality and stability.In particular, it has been shown in [5] that a relativistic system will be (linearly) causal and stable against fluctuations if: 2. The bound n µ E µ = 0 is saturated if and only if δφ A = 0, hence equilibrium is unique. 
As long as we ensure that the information current satisfies these conditions, our fluctuating theory will be linearly causal and stable.To take advantage of these properties, we construct a theory of fluctuations directly from the information current. Fluctuations from information flow The linearized dynamics of a relativistic hydrodynamic system can be extracted from the fact that ∂ µ E µ = −σ, where σ is the entropy production.Using this, we can express the equations of motion in terms of the hydrodynamic variables as where , and ξ A is a Gaussian stochastic vector with zero mean.This approach is particularly well-suited for Israel-Stewart-like hydrodynamic models [6] as these are constructed from the entropy current, conserved quantities, and entropy production; the same ingredients are used here.Using this equation of motion, we can write the momentum space correlation function of the hydrodynamic variables as These correlation functions should match those obtained from the equilibrium probability distribution of Eq. ( 3), which can be used to determine the form of the noise correlator ξ A (x)ξ B (x ′ ) .In particular, it is found that for thermodynamically stable systems.The full details of this derivation are provided in [3], as well as proof that the corresponding noise correlators do not depend on the choice of foliation. Fluctuations in relativistic diffusion We can apply this to the Israel-Stewart theory of a conserved current in the Landau hydrodynamic frame.Such a system is defined by the conserved current J µ = nu µ + J µ , where n is some density, u µ is the fluid velocity (with u µ u µ = −1), and J µ is the dissipative part of the conserved current (where J µ u µ = 0).The corresponding entropy current is given by where s is the equilibrium entropy density, µ is the chemical potential associated with the density n, and β J is some new transport coefficient.The entropy production should be positive definite, so we take it to be a quadratic form σ = 1 κT J λ J λ , where κ is the charge conductivity.The information current is then given by where χ = ∂n/∂µ.It can be verified that the equations of motion from Eq. ( 4) are the linearized versions of the conservation law and the Israel-Stewart relaxation equation for J µ .From Eq. ( 6), it follows that the conservation law does not fluctuate, while the relaxation equation has a stochastic source, ξ µ ⊥ , with noise correlator Here, ∆ µν = g µν + u µ u ν is the projector orthogonal to u µ .From this, the symmetrized momentum space correlator for the density n can be easily obtained.One finds which reduces to the standard first-order result in the suitable limit [3]. Actions for fluctuating hydrodynamics Stochastic partial differential equations can be written as a path integral over the dynamical variables and some set of auxiliary variables [7].For systems obeying Eq. ( 4), we find that the effective Lagrangian is Here, δ φA are the auxiliary variables.The first term corresponds to the non-fluctuating equation of motion, while the second term gives the fluctuations. Above, we have inserted the fluctuation-dissipation relation of Eq. 
( 6), but detailed balance provides a means to implement the fluctuation-dissipation theorem at the level of the action.Let Θ denote a transformation under time reversal and parity.Then, employing a similar derivation to that used in [8], we find that the action should transform as L Θ = L + iδφ A E µ AB ∂ µ δφ B under a transformation involving time reversal and parity.The appropriate symmetry for a system with an effective action of the form Eq. ( 11) is This symmetry can be used to determine the noise correlators, implementing the fluctuationdissipation theorem.In principle, nothing about the derivation to obtain this result required the equations of motion to be linear.Hence, we expect this property to be valid in nonlinear systems [9].Another approach for constructing effective actions for Israel-Stewart theories was recently presented in [10]. Conclusions In [3], a formalism for stochastic fluctuations was constructed using the information current.This approach incorporates causality and stability conditions built into the information current to ensure that noise correlators are independent of the choice of spacetime foliation.This new framework is easily applied to linearized Israel-Stewart-like theories, as demonstrated by the example involving a fluctuating conserved current.Using the Martin-Siggia-Rose approach, it is possible to derive an effective action for any fluctuating system with an information current and non-negative entropy production [4].Demanding that the path integral be consistent with the principle of detailed balance, we found a symmetry of the effective action involving time reversal and parity that can implement the fluctuation-dissipation theorem.This symmetry is derived without any assumptions regarding linearity, indicating that it should be possible to generalize the results presented here to nonlinear systems.
1,832.6
2023-12-12T00:00:00.000
[ "Physics" ]
The design and development of an automatic transmission solenoid tester for wheeled vehicles Solenoids are the most critical components in automatic transmissions. They are used to control the shift points, clutch locking, or pressure regulation of automatic transmissions. Since the number, type, and order of the solenoids all differ when they are used in different vendor’s automatic transmissions, making accurate normal/abnormal decisions for solenoids is very difficult, as it can lower the maintenance quality, to waste labor and material cost, and even reduce driving safety. This article proposes an “abnormal” inspecting method (i.e. for abnormality) for solenoids with high inspect ability and develops a learnable automatic transmission solenoid tester. This tester can perform solenoid testing on multiple channels at the same time. The test result statistics for all channel solenoids tested are generated automatically. It also provides visibility for users to view the difference comparisons of testing curves of temperature, pressure, voltage, current, and resistance on a graphical screen. The curve visibility function will be helpful for the solenoid diagnosis of abnormal or fault reasons. Introduction In comparison with tracked vehicles, wheeled vehicles have such advantages as fast speed, high mobility, long running distance, low price, and convenient maintenance, but with low cross-country power and large turning radius as their weaknesses. Tracked vehicles are mostly military, for example, armored cars and tanks. Wheeled vehicles are the foremost type of civilian vehicle (e.g. cars and trucks). The automatic transmission (AT) is a key component of wheeled vehicles; it automatically changes the gear ratio while running, acting as the AT for the gearshift or pressure adjustment. 1,2 At present, the AT vehicles with automatic shifting function use electronically controlled automatic transmissions (ECAT). 2 This kind of AT can use different sensors to inform the driving computer of the working condition of engine; the driving computer then sends approach is to dismount all the solenoids from the vehicle one by one for inspection, so that the professional maintenance personnel can identify the faulty or abnormal solenoid according to experience for subsequent renewing and remedial operation. 7,8 However, different numbers, types, and orders of solenoids are used in the AT of different brands. Therefore, it is difficult for maintenance personnel to judge whether or not a solenoid is faulty, resulting in severe difficulties in maintenance and repair, wasting manpower and material resources, and even compromising driving safety. At present, a variety of solenoid testers have been developed for promoting inspection efficiency. [9][10][11][12][13][14][15] However, the existing test systems mainly consider the difference between the test hydraulic pressure curve and normal hydraulic pressure curve as the basis of judging whether or not the solenoid is normal; the anomaly inspection capability is obviously insufficient as only one channel solenoid can be tested each time, and the inspection efficiency is confined. In fact, an AT solenoid can be divided into an electromagnetic structure and a mechanical valve body structure, and the failures include electric structure failure and mechanical structure failure. 
16,17 The following items can be inspected by measuring the current consumed by a solenoid: (1) impedance of coil, for checking the aging demagnetized state; (2) whether the action position of the magnetic core is fixed within the rated time; and (3) whether the coil is broken or the core is deactivated. The solenoid inlet/outlet pressure is measured to check (1) whether the valve seat closure is faulty or dirty, and (2) whether the spring of valve seat is too aged for full opening or complete closing. According to the control mode of the driving computer, the solenoids of AT can be classified into ON/ OFF type and duty cycle type: 16,18,19 1. ON/OFF type solenoid: when the solenoid is ON, the needle valve is turned on and the line pressure is relieved directly. When the solenoid is OFF, the needle valve is locked tight, the line is closed, and the pressure cannot be relieved. Generally, the needle valve turn-on stroke is fixed; the voltage control signal is shown in Figure 1(a); this kind of solenoid is often used for shift control. 2. Duty cycle solenoid: the solenoid can change the needle valve turn-on stroke according to the time ratio of ON, so the amount of discharged oil is variable; the voltage control signal is shown in Figure 1(b). This type of control is known as pulse width modulation, and as PWM solenoid; its control mode generally uses fixed frequency. This kind of solenoid is often used for regulating the line pressure. For the problems and limitations related to the present solenoid testers, this article proposes an innovative solenoid anomaly detection method by simultaneously considering both possible electric structure failure and mechanical structure failure of solenoids. Based on this method, we designed a multichannel solenoid inspection system with learning ability 20,21 which can inspect an ON/OFF solenoid and PWM solenoid as the solution to the aforesaid problems and limitations. In terms of this solenoid anomaly detection method, the voltage can be supplied according to the voltage control curve, in order to measure the curve variation of the consumed current, in coordination with the oil pressure supply to measure the inlet and outlet pressures to work out the difference between the test hydraulic pressure curve and the standard hydraulic pressure curve, as well as the difference between the test current curve and standard current curve at the same time, greatly enhancing the anomaly recognition capability of the solenoid. In terms of the design of the solenoid inspection system, a universal inspection system with learning ability is proposed, which can establish the standard curves of a solenoid of unknown model by learning for the test operation of the solenoids of the model. In addition, this system constructs four solenoid testing channels; as four solenoids can be tested at the same time, the testing efficiency is greatly increased. The test result statistics for all channel solenoids tested are generated automatically. The user can perform combinatory analysis of different test curves with the flexible display function of the system, to ascertain abnormal conditions and possible failure causes of the solenoid. Solenoid anomaly inspection method The procedure of the AT solenoid anomaly inspection method proposed in this article is shown in Figure 2. This inspection method comprises the following steps: 1. 
For a solenoid of model A, the standard outlet pressure P s and standard temperature T s of oil input for the solenoid are preset, the voltage control curve V c (t) of solenoid test is defined, and t is the time; this is the test configuration of subsequent inspection operation; 2. A normal solenoid a 0 of model A is used, the actuating environment of this solenoid is controlled according to the P s , T s , and V c (t) of the preset test configuration in Step 1; to test the solenoid a 0 , a standard hydraulic pressure curve P s (t) and standard current curve C s (t) are generated. The range of allowable error of standard hydraulic pressure curve E pl (t) ; E ph (t) and the range of allowable error of standard current curve E cl (t) ; E ch (t) are defined, where is the tolerable error percentage, P max and P min are the maximum and minimum values of pressure sensing, respectively, and C max and C min are the maximum and minimum values of current sensing, respectively. 3. According to the P s , T s , and V c (t) of the test configuration in Step 1, the oil is imported into the test solenoid a i of Model A, i . 0, and the actuating environment of this solenoid is regulated to test the solenoid a i ; a test hydraulic pressure curve P t (t) and a test current curve C t (t) are generated; 4. The P t (t) and C t (t) resulting from the test solenoid a i are compared with P s (t) and C s (t), respectively, to obtain the difference DP(t) = P t (t) 2 P s (t), "t, and the difference DC(t) = C t (t) 2 C s (t), "t; 5. When P t (t) and C t (t) fall within the range of allowable error of P s (t) and C s (t), respectively (i.e.|DP(t)| ł DE p and |DC(t)| ł DE c , "t), the tested solenoid a i is identified as ''normal''; when P t (t) or C t (t) does not fall within the range of allowable error of P s (t) and C s (t) (i.e. dt,|DP(t)| .DE p or |DC(t)| .DE c ), the tested solenoid is identified as ''abnormal.'' 6. If there are other tested solenoids a i of model A, return to Step 3. The example of this inspection process is shown in Figure 3. Figure 3(a) shows the standard hydraulic pressure curve P s (t) and standard current curve C s (t) of normal solenoid a 0 established by performing Step 2, and the ranges of allowable error E pl (t) ; E ph (t) and E cl (t) ; E ch (t) thereof (gray regions). Figure 3(b) shows the test hydraulic pressure curve P t (t) and test current curve C t (t) of the tested solenoid a i generated by performing Step 3. Figure 3(c) shows the comparison diagram of P t (t) and C t (t) in Step 4 and P s (t) and C s (t). It is observed that the curve P t (t) and curve C t (t) have exceeded the ranges of allowable error of curve P s (t) and curve C s (t), respectively, so this tested solenoid a i is identified as ''abnormal'' in Step 5. In addition, according to the anomaly inspection procedure shown in Figure 2, the solenoid test method proposed in this article will have learning function, because for the other unknown solenoids of model B, as long as the procedures of Steps 1 and 2 are completed, the test configuration of solenoids of model B can be established in the system and the required P s (t), C s (t), E pl (t) ; E ph (t) and E cl (t) ; E ch (t) can be tested. Afterward, the anomaly inspection operation can be performed for the tested solenoid of model B. System architecture In this work, to design the solenoid inspection system, a test architecture which can inspect four solenoids simultaneously is established by simulating the solenoid operating environment in the AT. 
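A compact sketch of the comparison in Steps 4 and 5 of the inspection method is given below. It assumes, as the text suggests, that the tolerance bands are the tolerable error percentage (written alpha in the sketch) applied to the pressure and current sensing spans; the curves are represented as equally sampled arrays, and the function reports whether any sample of the test curves leaves the band around the standard curves. The test-bench architecture described next supplies the curves consumed by such a check.

```python
import numpy as np

def inspect_solenoid(P_t, C_t, P_s, C_s, alpha, P_max, P_min, C_max, C_min):
    """Classify a tested solenoid as 'normal' or 'abnormal' (Steps 4-5 of the method).

    P_t, C_t -- test pressure/current curves (arrays sampled at the same instants)
    P_s, C_s -- standard pressure/current curves of the same model
    alpha    -- tolerable error percentage (e.g. 0.08 for 8%)
    The tolerance bands are assumed to be alpha times the sensing spans.
    """
    dE_p = alpha * (P_max - P_min)          # allowable pressure deviation (assumed form)
    dE_c = alpha * (C_max - C_min)          # allowable current deviation (assumed form)
    dP = np.abs(np.asarray(P_t) - np.asarray(P_s))
    dC = np.abs(np.asarray(C_t) - np.asarray(C_s))
    return "normal" if (dP <= dE_p).all() and (dC <= dE_c).all() else "abnormal"

# Toy usage with a flat standard curve and a test curve that overshoots briefly.
t = np.linspace(0.0, 3.0, 16)
P_s = np.full_like(t, 16.0); C_s = np.full_like(t, 0.8)
P_t = P_s + np.where(t > 2.0, 2.5, 0.0)        # pressure spike after t = 2 s
C_t = C_s.copy()
print(inspect_solenoid(P_t, C_t, P_s, C_s, 0.08, 20.0, 0.0, 2.0, 0.0))   # abnormal
```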
This test architecture can use one oil hydraulic storage tank to simultaneously actuate the four solenoids of the hydraulic channels; the system automatically monitors the temperature and pressure of the oil tank and the voltage signals delivered to the solenoids of the various channels, and it establishes the hydraulic pressure curve and current curve required for inspection by detecting the channel pressure and consumed current of each solenoid. The architecture of this solenoid inspection system is shown in Figure 4. The functions of the various constitutional units are described below: 1. Hydraulic tank: stores the oil used to actuate the solenoids. 2. Hydraulic driver: an oil hydraulic pump connected to the hydraulic tank. This hydraulic pump pressurizes the oil stored in the hydraulic tank and delivers it to the tested solenoids through the oil delivery pipe. 3. Hydraulic controller: receives command signals from the monitoring host to control the hydraulic driver to deliver oil and actuate the tested solenoid. 4. Supply pressure detector: detects the supply pressure of the oil pressurized and delivered by the hydraulic pump to the delivery pipe; the detected pressure signal is transmitted by the signal transform unit to the monitoring host for processing. 5. Temperature driver: a preheating pump for heating the oil in the hydraulic tank. 6. Temperature controller: a controller for the heating action of the preheating pump; it can receive command signals from the monitoring host, or the user can operate the control manually. 7. Temperature detector: detects the oil temperature in the hydraulic tank; the detected temperature signal is transmitted by the signal transform unit to the monitoring host for processing. 8. Level detector: detects the oil level in the hydraulic tank; the detected level signal is transmitted by the signal transform unit to the monitoring host for processing. 9. Tested solenoid (sample): the tested solenoid to be inspected, or the normal solenoid used for establishing standard conditions. 10. Solenoid valve seat: molds of different specifications can be installed to provide sockets for different models of tested solenoids, so that each tested solenoid is positioned and stably connected to the delivery line and the oil can be delivered smoothly to the solenoid. This valve seat provides the solenoids of four channels, where the first channel is used to test the solenoid and to establish the standard conditions. Hardware design. To develop the hardware of the solenoid inspection system, the hardware system design for the solenoid test bench is completed first. Figure 5 shows the hardware system P&ID (Process & Instrument Diagram) of the solenoid test bench, including a heat-insulating oil tank, a hydraulic control device, a solenoid valve seat, and a signal control device. The design structure is described below. 1. The hydraulic tank is connected to a hydraulic pump P-01; there is a filter F between the hydraulic tank and P-01 to filter the oil of the hydraulic tank. P-01 pressurizes the oil stored in the hydraulic tank and delivers it to the hydraulic control device through the oil delivery pipe.
The outlet pressure transmitter PT-05 can detect the oil pressure in the delivery pipe; the detected pressure is displayed on the screen of monitoring host by pressure indicating controller PIC-05; it is fed back to control the pressure of P-01. In addition, the hydraulic tank is connected to a back pressure control valve BPV for reducing the input pressure of P-01 so that the oil can return to the hydraulic tank through the back pressure control valve BPV. 2. The hydraulic tank is connected to a preheating pump HT-01 for heating the oil in the hydraulic tank. There is a temperature transmitter TT-01 in the hydraulic tank for detecting the temperature of the oil; the detected temperature is displayed on the screen of the monitoring host by a temperature indicating controller TIC-01; it is fed back to control the temperature actuator TA-01 to actuate the heating action of HT-01. In the P&ID diagram of Figure 5, the framed elements show that the signal can be transmitted through the signal transform unit to the monitoring host and displayed, or the monitoring host performs the feedback control action. For example, the pressure indicators PI-01;PI-04 and current indicators AI-01;AI-04 can transmit the hydraulic pressure signals and current signals of various channel solenoids to the monitoring host and display them. The indicating controllers PIC-05, TIC-01, and VIC-01 not only transmit the outlet pressure, oil tank temperature, and outlet voltage signal to the monitoring host, but also actuate feedback control for P-01, HT-01, and DC Power according to the obtained signal values, in order to automatically control the oil pressure, temperature, and voltage. Based on the system architecture in Figure 4 and the hardware design P&ID in Figure 5, the developed solenoid inspection system entity is shown in Figure 6. Automatic monitoring software development In this work, LabVIEW graphical language 23,24 is used as the development tool for the automatic monitoring software of the solenoid inspection system. The modular layer structure of the developed software system is shown in Figure 7. This software system architecture comprises such main constitutional modules as a voltage control curve editing unit, a standard condition multiple tests unit, a solenoid grease cleaning unit, a solenoid test setting unit, a repeated solenoid testing unit, a test result statistics unit, and a flexible curves display unit. This software system can provide a solenoid testing procedure with learning function, so that it can build the standard curves of unknown models' solenoids by learning, and then be used for the test operation of the solenoids of the model. The complete testing process is shown in Figure 8. The execution steps of this testing process are described below: 1. Identify the model of solenoid: for the solenoid to be inspected, the first step is to identify the solenoid as a solenoid of a known or unknown model. 2. Edit voltage control curve: for a solenoid of an unknown model, the first step is to edit its voltage control curve. The voltage control curves are divided into ON/OFF type and PWM type. Install tested solenoid Adequate hydraulic oil shall be put in the oil tank of the solenoid test bench before the solenoid inspection; the solenoid to be inspected is then loaded in the corresponding mold, and the mold is installed on the solenoid test bench. For example, the 2-3 shift solenoid of GMC THM 4L80-E automatic transmission 25 ( Figure 10(a)) is used as tested object. Figure 10(b) shows the mold. 
Figure 10(c) shows this solenoid has been loaded in mold and installed on the test bench. Afterward, the heater of test system is actuated till the oil is heated to standard temperature T s ; the automatic monitoring software of this solenoid inspection system can be started to perform the aforesaid solenoid inspection procedure. Edit voltage control curve Figure 11(b) shows the voltage control curve edit screen of PWM solenoids; the user can enter the PWM signal frequency F, start the duty ratio (start r d ), end duty ratio (end r d ), duty ratio variation value (Dr d ), and the number of signals of each r d in the voltage control table on the left of this screen. As shown in the control table, the PWM signal frequency F defined in Line 1 is 100 Hz (10 ms per signal), the r d changes from 10% to 90%, Dr d is 1%, and the number of signals of each r d is 10, meaning each r d takes 100 ms (0.1 s). The F defined in Line 2 is also 100 Hz, the r d changes from 90% to 10%, Dr d is 21%, and the number of signals of each r d is also 10. Therefore, the PWM control voltage of the two lines will use signal frequency F = 100 Hz; the r d increases from 10% to 90%, and then decreases from 90% to 10%, and each r d takes 10 signals, 0.1 s. In the same way, the system can perform a validation check of the PWM voltage signal; the valid voltage control curve can be automatically displayed in the curve diagram on the right of the screen according to the selected drawing object. The edited PWM voltage control table can also be saved for PWM solenoid test. Building standard conditions of a solenoid To build the standard conditions of a solenoid, the user can set up the model parameters as in the screen shown in Figure 12. For example, the model of solenoid entered in this screen is ''4L80E-23Shift,'' and file ''D:\Solenoid Test Data\4L80E OnOff.vct'' is selected to define the voltage control curve V c (t); the test parameters are then entered, including voltage type (ON/ OFF), standard temperature T s (90°C), standard pressure P s (16 kg/cm 2 ), sampling rate Dt (0.2 s), tolerable error percentage a (8%), tolerable error of standard pressure e p (0.2 kg/cm 2 ), PID parameter P-Value (3), I-Value (120), D-Value (120), and number of repeated tests N s (4); the bracketed values are set values. The ''Edit control curve'' button in Figure 12 can be clicked to display the voltage control curve edit screen shown in Figure 11 for the user to define the required voltage control table. When the user has completed the test parameters setting, the ''Standard condition test'' button can be clicked to enter the solenoid test screen, as shown in Figure 13. When the user clicks the ''Start pressure control and standard condition test'' button in this screen, the system first controls the outlet pressure of the hydraulic pump according to the PID parameters and e p , so that the pressure reaches and remains at standard pressure P s ; 26,27 the tests for standard conditions of solenoid model are then repeated; the number of tests is N s (4) to generate N s hydraulic pressure curves P si (t) and N s current curves C si (t), i = 1 ... N s . Figure 13 shows the screen after 4 repeated tests for standard conditions of the model. Figure 14 shows the test scene of a normal solenoid; the oil is ejected when the solenoid is turned on and the oil is locked when solenoid is turned off. 
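The PWM control table described above can be expanded into a concrete voltage-versus-time sequence. The sketch below is a hypothetical helper (not the LabVIEW implementation): it takes rows of (frequency, start duty ratio, end duty ratio, duty step, signals per ratio) and returns on/off voltage samples, reproducing the example in which the duty ratio sweeps from 10% to 90% and back at 100 Hz with ten signals per ratio; the 12 V on-level and the sampling density are assumed values.

```python
def pwm_curve(rows, v_on=12.0, samples_per_signal=100):
    """Expand PWM control-table rows into a list of (time, voltage) samples.

    rows -- iterable of (freq_hz, start_duty_pct, end_duty_pct, step_pct, signals_per_duty)
    Assumes a simple on-then-off shape within each PWM period.
    """
    samples, t = [], 0.0
    for freq, start, end, step, n_signals in rows:
        period = 1.0 / freq
        dt = period / samples_per_signal
        duties = range(start, end + (1 if step > 0 else -1), step)
        for duty in duties:
            for _ in range(n_signals):                   # e.g. 10 signals per duty ratio
                for k in range(samples_per_signal):
                    v = v_on if (k / samples_per_signal) * 100 < duty else 0.0
                    samples.append((t, v))
                    t += dt
    return samples

# The table from the text: 100 Hz, duty 10% -> 90% then 90% -> 10%, step 1%, 10 signals each.
table = [(100, 10, 90, 1, 10), (100, 90, 10, -1, 10)]
curve = pwm_curve(table)
print(len(curve), curve[-1][0])    # number of samples and total duration (about 16.2 s)
```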
After N s tests for standard solenoid conditions, the system calculates the average of the N s hydraulic pressure curves P si (t) and the N s current curves C si (t) (i = 1 to N s ) as the standard pressure curve P s (t) and standard current curve C s (t) of the solenoid of this model, expressed as equations (1) and (2), respectively:

P_s(t) = (1/N_s) Σ_{i=1}^{N_s} P_si(t)   (1)

C_s(t) = (1/N_s) Σ_{i=1}^{N_s} C_si(t)   (2)

Finally, the system uses the solenoid model, voltage type, Dt, T s , P s , a, e p , P-value, I-value and D-value, as well as the generated P s (t) and C s (t) curves, as the standard test conditions of this solenoid model; the model name (4L80E-23Shift) is automatically taken as the file name and saved in a standard condition file (e.g. D:\Solenoid Test Data\4L80E-23Shift.sdc) for testing the solenoids of this model. Repeated tests for solenoids To perform the test operation of tested solenoids, the user can set up the standard conditions of the solenoids in the solenoid test setting screen, as shown in Figure 15. First of all, the user must load the standard conditions of the model of the tested solenoids; for example, when the [D:\Solenoid Test Data\4L80E-23Shift.sdc] standard condition file is selected and read, the system loads the standard conditions of the model ''4L80E-23Shift'' from this file and displays the standard condition parameters (solenoid model, voltage type, Dt, T s , P s , a, e p , P-value, I-value and D-value) on the screen. Afterward, the number of repeated tests N t for the tested solenoids is set and the channels to be tested are selected in this screen; here, N t is set to 5, meaning each solenoid will be tested five times, and the results of each test are counted in order to judge the success or failure of the solenoid test. This system provides at most four test channels for the solenoid test operation; the user can set up a channel to perform a solenoid test by clicking the ''Channel 1,'' ''Channel 2,'' ''Channel 3'' or ''Channel 4'' button on the screen; the Test or Cancel state can be switched by clicking any button. All four channel buttons are set to the Test state in this screen, so the system will repeatedly test the solenoids in these four channels. After the user sets up the test channels and the number of repetitions, the ''Start solenoid test process'' button can be clicked to enter the solenoid repeated testing screen, as shown in Figure 16. In this system, the repeated test process for tested solenoids is similar to the repeated test procedure for building standard solenoid conditions; both of them must adjust the hydraulic pump outlet pressure to the standard pressure value P s with an error lower than e p (0.2 kg/cm 2 ) before the repeated test operation of the solenoids is performed. There are four working charts for Channels 1-4; each chart displays the standard pressure curve P s (t) and standard current curve C s (t) (white curves). The white dotted lines above and below the two curves represent the range of tolerable error. The test pressure curve P t (t) (yellow curve) and test current curve C t (t) (blue curve) of the channel solenoid advance with the test process until the round of testing ends, and then the next round of testing is started; N t rounds of testing will be performed. On this screen, the pressure curve in the working chart of Channel 3 has turned red, meaning the pressure test for the Channel 3 solenoid has failed because the test pressure curve has exceeded the tolerable range of pressure.
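As a concrete illustration of equations (1) and (2), the standard curves of a model are sample-wise averages of the repeated test curves. The sketch below assumes the curves are stored as NumPy arrays sampled every Dt seconds; the .npz file is only a stand-in for the system's .sdc standard condition file, whose format is not documented here.

```python
import numpy as np

def build_standard_curves(pressure_runs, current_runs):
    """pressure_runs, current_runs: arrays of shape (N_s, T), one row per repeated test."""
    P_s = np.mean(np.asarray(pressure_runs), axis=0)   # standard pressure curve, equation (1)
    C_s = np.mean(np.asarray(current_runs), axis=0)    # standard current curve, equation (2)
    return P_s, C_s

# toy data: N_s = 4 repeated tests, 151 samples per curve
rng = np.random.default_rng(0)
P_s, C_s = build_standard_curves(16.0 + 0.1 * rng.standard_normal((4, 151)),
                                 1.0 + 0.01 * rng.standard_normal((4, 151)))
np.savez("4L80E-23Shift_standard.npz", P_s=P_s, C_s=C_s)   # stand-in for the .sdc file
```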
In fact, in the overall solenoid test process, as long as any test curve exceeds the tolerable range, the curve immediately turns red. When the repeated test procedure for solenoids is completed, the system automatically stores the test records of all the test channels. In each test record, N t test pressure curves, N t test current curves, N t outlet pressure curves and N t temperature curves are stored for the solenoid of each channel for subsequent test result statistics and analysis. Test result statistics and curve analysis In order to perform the statistics and analysis of each channel solenoid test result, the pressure curve of the i-th test is represented by P ti (t), and the current curve of the i-th test is represented by C ti (t). The pressure failure rate, current failure rate, and total failure rate of each test can then be calculated. The pressure failure rate of the i-th test is represented by F pi , the current failure rate by F ci , and the total failure rate by F Ti ; d pi (t) represents the decision on whether the t-th measured pressure falls within the range of tolerable error in the i-th test, and d ci (t) represents the decision on whether the t-th measured current falls within the range of tolerable error in the i-th test. The pressure and current failure rates are defined from these decisions as the ratio of the number of out-of-tolerance samples to the total number of samples in the i-th test, and the total failure rate F Ti combines the two. The success or failure of the test result can be determined by judging whether or not the total failure rate of each test is zero; that is, if F Ti = 0, the i-th test result R ti is ''Success,'' and on the contrary, if F Ti > 0, R ti is ''Fail.'' Therefore, the overall test result R t of the solenoid is ''Success'' only if the total failure rate of all N t tests is zero; otherwise, it is ''Fail.'' R ti and R t are defined as equations (8) and (9), respectively:

R_ti = ''Success'' if F_Ti = 0, ''Fail'' if F_Ti > 0   (8)

R_t = ''Success'' if F_Ti = 0 for all i = 1, ..., N_t, ''Fail'' otherwise   (9)

For example, after the repeated test procedure for solenoids in Figure 16 is completed, the system displays the complete test result on the solenoid test result statistics screen in Figure 17. This screen contains the four statistical tables of the solenoid test results of Channels 1-4. According to these statistical tables, the test results of Channel 1, Channel 2, and Channel 4 are ''Success,'' and the test result of Channel 3 is ''Fail.'' Each statistical table contains the statistics of five tests (each test has a record), and each record contains four fields: pressure failure rate, current failure rate, total failure rate, and test result. The pressure failure rate of the first test for the Channel 3 solenoid is 43/151, meaning 151 pressure values are extracted from the test pressure curve and 43 pressure values are identified as fail values because they exceed the tolerable range of error of the standard pressure curve. As long as any test fails, the solenoid test is identified as ''Fail.'' In order to assist the user in further reviewing the solenoid test failure condition and discussing the possible failure cause, this system provides a flexible display function for test curves, allowing the user to perform combinatory analysis of different test curves. For example, when the ''Display test curve'' button is clicked in the lower part of the screen in Figure 17, the system enters the testing curves display screen shown in Figure 18. The right part of this screen is a tab-switched page display area; there are four channel tabs, and the corresponding page can be displayed by clicking any tab; each page contains a channel test chart and a voltage graph.
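The failure-rate bookkeeping described above can be summarised in a few lines. The sketch below treats a sample as a fail value when it deviates from the standard curve by more than the tolerance fraction a; this symmetric band and the use of NumPy arrays are assumptions of the sketch, while the Success/Fail decisions follow equations (8) and (9).

```python
import numpy as np

def out_of_tolerance(test_curve, standard_curve, tol_fraction):
    """d(t): True where a sample falls outside the tolerable error band."""
    band = tol_fraction * np.abs(standard_curve)
    return np.abs(test_curve - standard_curve) > band

def single_test_result(P_t, C_t, P_s, C_s, alpha=0.08):
    n_p = out_of_tolerance(P_t, P_s, alpha).sum()      # e.g. 43 of 151 pressure samples
    n_c = out_of_tolerance(C_t, C_s, alpha).sum()
    return "Success" if (n_p + n_c) == 0 else "Fail"   # R_ti, equation (8)

def overall_result(per_test_results):
    # R_t, equation (9): the solenoid passes only if every repetition passes
    return "Success" if all(r == "Success" for r in per_test_results) else "Fail"
```

Under these assumptions a solenoid is reported as faulty as soon as a single sample of a single repetition leaves the band, which matches the 43/151 example above.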
The left part of the screen provides the control buttons for the user to select the display object; the topmost button is used for selecting i-th test to display, i = 1, ..., N t , while the lower part provides the switching button for 10 curves, including channel pressure, pressure range (upper and lower bounds of pressure), set pressure, channel current, current range, set current, impedance, outlet pressure, standard pressure and oil temperature, which can be clicked to switch ''Display'' or ''Hide.'' If the button is switched to the ''Display'' state, the corresponding curve is immediately displayed in the channel test chart. On the contrary, if the ''Hide'' state is switched to, the corresponding curve will disappear from the channel test chart. As shown in Figure 18(a), the Channel 3 test chart and voltage graph page of the 3-th test are selected in the screen, and the five control buttons for channel pressure, pressure range, set pressure, channel current, and current range are set as ''Display'' status, so the test pressure curve (yellow curve) and test current curve (blue curve) obtained by the 3rd test for Channel 3 solenoid will be displayed. The curves of the pressure range (dotted gray curves) are on both sides of the curve of set pressure (gray curve). The lower voltage graph displays the set voltage curve (gray curve) and actual voltage curve (red curve). According to the comparison and analysis of the aforesaid curves, the channel pressure curve has exceeded the upper bound of set pressure, so the solenoid test fails. According to the comparison of outlet voltage curve, the normal activated voltage is 10 V (red vertical dash line), and the activated voltage of test solenoid is about 11 V (green vertical dash line), higher than normal activated voltage. In order to know whether the simultaneous test for multiple channel solenoids results in insufficient hydraulic pump outlet pressure, the outlet pressure curve can be added to the test chart to observe whether the curve descends obviously when multiple solenoids are actuated simultaneously, to relieve pressure. Figure 18(b) shows the screen with an additional orange outlet pressure curve, as this curve keeps fixed pressure value, with no obvious descent. Therefore, this system can stably control the outlet pressure of the hydraulic pump, which is not influenced by simultaneous actuation of multiple channel solenoids. Solenoid grease clean procedure To perform the solenoid grease cleaning operation, the control parameters can be set up on the screen, as shown in Figure 19, such as the working voltage of the power supply, working pressure of the hydraulic pump, frequency of pulse voltage, PID parameters, and working time. In addition, the user can select the channels for grease cleaning; for example, the user only sets ''Channel 1'' and ''Channel 3'' as Clean status in this screen, so the system will shake the oil sludge out of the two channel solenoids by vibration on 1000 Hz working pulse frequency; the cleaning operation time is 3 min, and the system will control the outlet pressure in the preset 16 V. Experimental result analysis This study uses the solenoids of THM 4L80-E automatic transmission 25 for an actual inspection experiment. The solenoids of THM 4L80-E automatic transmission include four models, which are 1-2 shift solenoid, 2-3 shift solenoid, pressure control solenoid (PCS) and torque converter clutch (TCC) solenoid. The first two are ON/OFF solenoids, and the last two are PWM solenoids. 
The solenoid test of each model follows the inspection process in Figure 8; the standard conditions are built before the repeated tests are implemented, and the repeated tests are divided into two stages: when the preliminary test (Stage 1 test) result is Fail, the solenoid grease cleaning operation must be performed before the Stage 2 test. In order to analyze the solenoid test results, a solenoid test result classification table is defined according to whether the test pressure curve and test current curve are higher than, within, or lower than the tolerable range of error, as shown in Figure 20. PH, PN and PL indicate that the test pressure curve is higher than, within, and lower than its tolerable range of error, respectively. CH, CN and CL indicate that the test current curve is higher than, within, and lower than its tolerable range of error, respectively. This solenoid test uses four solenoids of each of the four models used in the THM 4L80-E automatic transmission, which are installed in Channels 1-4 for the synchronous test. The test is repeated 20 times (N t = 20) for each model. In the statistics of the test results, the numbers of failures of the Stage 1 and Stage 2 tests are represented by N f1 and N f2 , respectively; R 1t and R 2t represent the results of the Stage 1 and Stage 2 tests, respectively (S = success, F = fail). Finally, RC 1 and RC 2 classify the results of the Stage 1 and Stage 2 tests, respectively, according to the classification table of Figure 20. The test result analysis sheets of the 1-2 shift solenoids and the 2-3 shift solenoids are shown in Tables 1 and 2, respectively. According to Table 1, the 1-2 shift solenoid of Channel 1 is a normal solenoid; the solenoid of Channel 2 fails in the Stage 1 test (PH), but after the grease cleaning operation, the Stage 2 test result becomes Success, perhaps because grease on the solenoid or valve seat induced poor closure and normal status is recovered after the grease removal. The solenoid of Channel 3 fails in all 20 tests of both stages (PL, CH) and is identified as a faulty solenoid. The solenoid of Channel 4 has a normal pressure test result in the two-stage test (PN), but the current test result is Fail (CL), so it is identified as a solenoid with an electromagnetic structure failure. According to Table 2, the 2-3 shift solenoids of Channels 1 and 2 can be improved to normal solenoids by the grease cleaning operation. The solenoids of Channels 3 and 4 have an electromagnetic structure failure problem (CH) and fail to be improved by the grease removal operation. For the PWM solenoid test, P s = 16 kg/cm 2 , T s = 90 °C, e p = 0.2 kg/cm 2 , Dt = 0.2 s, and a = 10%. In addition, the operating frequency of the PWM voltage signal is defined as F = 614 Hz; the duty ratio r d increases from 1% to 99% and then decreases from 99% to 1%, each r d takes 10 signals (10/614 s), and the variation of the test pressure curve P ti (t) and test current curve C ti (t) resulting from the ascending and descending of r d is reviewed. Table 3 shows the PCS solenoid test result analysis sheet. It is observed that the solenoids of Channels 1 and 2 have an electromagnetic structure failure problem; the solenoid of Channel 3 fails in the Stage 1 test (PL), but the Stage 2 test result changes to Success after the grease removal operation. The Channel 4 solenoid is a normal solenoid. Table 4 shows the TCC solenoid test result analysis sheet; it is observed that the solenoids of Channels 1 and 3 have an electromagnetic structure failure problem (CH); the solenoids of Channels 2 and 4 fail in the Stage 1 test (PH), but succeed in the Stage 2 test after the grease cleaning operation.
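The result classification of Figure 20 can likewise be expressed as a small helper. Treating a curve as "high" as soon as any sample exceeds the upper bound (and "low" correspondingly) is an assumption of this sketch; the paper only states that each curve is compared against its tolerable range.

```python
import numpy as np

def classify_curve(curve, standard, tol_fraction, prefix):
    upper = standard + tol_fraction * np.abs(standard)
    lower = standard - tol_fraction * np.abs(standard)
    if np.any(curve > upper):
        return prefix + "H"        # e.g. PH or CH
    if np.any(curve < lower):
        return prefix + "L"        # e.g. PL or CL
    return prefix + "N"            # within the tolerable range

def classify_test(P_t, C_t, P_s, C_s, alpha=0.08):
    """Return the (pressure, current) classification pair, e.g. ("PH", "CN")."""
    return classify_curve(P_t, P_s, alpha, "P"), classify_curve(C_t, C_s, alpha, "C")
```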
Discussion on the characteristics The solenoid inspection method proposed in this article and the developed test system have the following characteristics: 1. Learning function: for an AT solenoid of unknown model, as long as the standard conditions of the solenoid model are built, the operation can be performed by vibration of high frequency impulse voltage to eliminate the abnormal condition of poor closure of the valve seat resulting from sediment incrustation of the solenoid oil column. According to the experimental results, many solenoids with too high pressure curve (PH) can be improved and recovered to normal solenoids after the oil sludge removal operation. 5. Multiple test channels are provided: this system can stably control the outlet pressure of hydraulic pump for performance testing and the grease cleaning operation for multiple (1 to 4) test solenoids; the inspection efficiency is increased. 6. Automatic repeated tests: as a part of AT solenoids is not completely continuously faulty, sometimes there is only one fault after several actuations; this system can automatically perform multiple rounds of repeated tests for solenoids according to the N t setting value, to detect the failure frequency in multiple tasks. 7. Test curve display function: for the test curves recorded in the repeated test process of multiple channel solenoids, the user can perform combinatory analysis of different curves with the flexible display function of system to ascertain the abnormal condition of the solenoid and diagnose the possible failure causes. Conclusion For the problems and limitations of the present AT solenoid testers, an innovative solenoid anomaly inspection method is proposed by simultaneously considering both the possible electric structure failure and mechanical structure failure of solenoids. This method can measure the curve variation of consumed current and outlet pressure of the solenoid according to the voltage control curve to compare the test pressure curve with the standard pressure curve, and to compare the test current curve with the standard current curve; the solenoid anomaly detection capability can be greatly enhanced. According to this solenoid anomaly inspection method, a multichannel solenoid inspection system with learning ability is developed herein. This system allows the user to test a normal solenoid of an unknown model to establish the standard conditions of the solenoid model; multiple solenoids of the same model can then be tested repeatedly. In this study, according to the proposed solenoid inspection procedure, the multichannel synchronous inspection experiment is performed for the four kinds of solenoids (two ON/OFF solenoids and two PWM solenoids) used in the GMC THM 4L80-E automatic transmission. According to the experimental results, the normal hydraulic pressure curve (PN) and abnormal current curve (CH or CL) may occur in the test. Therefore, this system is provided with measurement and comparison of the current curve; the solenoid anomaly detection capability can be greatly enhanced. In addition, the solenoids with poor valve seat closure resulting from oil column sediment incrustation (failure types are PH, CN) can be improved and recovered to normal solenoids effectively by the grease cleaning operation. 
The solenoid inspection system developed in this article may be applied in solenoid fault inspection in the maintenance aspect, and applied in quality control in the manufacture aspect, which will enhance the performance of the fault maintenance and the part manufacturing of the AT. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
9,142
2020-05-01T00:00:00.000
[ "Engineering" ]
Identification of the Interface in a Binary Complex Plasma Using Machine Learning A binary complex plasma consists of two different types of dust particles in an ionized gas. Due to the spinodal decomposition and force imbalance, particles of different masses and diameters are typically phase separated, resulting in an interface. Both external excitation and internal instability may cause the interface to move with time. Support vector machine (SVM) is a supervised machine learning method that can be very effective for multi-class classification. We applied an SVM classification method based on image brightness to locate the interface in a binary complex plasma. Taking the scaled mean and variance as features, three areas, namely small particles, big particles and plasma without dust particles, were distinguished, leading to the identification of the interface between small and big particles. A complex plasma is a weakly ionized gas containing small solid particles [1,2].The particles are highly charged by collecting ions and electrons, and interact strongly with each other.This system allows experimental studies of various physical processes occurring in liquids and solids at the kinetic level [3], such as plasma crystals [4,5], acoustic waves [6], and turbulence [7,8].A complex plasma consisting of two differently sized microparticle types is known as binary complex plasma.Under certain conditions, two types of particles can be mixed and form a glassy system [9].Other phenomena, such as phase separation [10,11] and lane formation [12], can also be studied in such systems.Under microgravity conditions, phase separation can occur due to the imbalance of forces despite the criterion of spinodal decomposition not being fulfilled [13].The phase separated system then allows carrying out dedicated experiments such as wave transmission [14], and interaction of spheres with differently sized particles [15]. In complex plasmas, the particle radius usually ranges from a few to hundreds of microns.Then, the particles can be illuminated by a laser and directly recorded by a video camera [16].The recorded image sequences can be further analyzed by using tracking algorithms [17,18] and the trajectories of individual particles can be obtained.This provides a basis to study the dynamics and interactions in a multi-particle system.However, under certain conditions, a large region of interest needs to be recorded with a high recording rate.As a result, the spatial resolution has to be sacrificed with currently affordable recording technology.Therefore, advanced image recognition techniques are desirable in the research of complex plasmas. 
In recent years, machine learning has been widely applied to image recognition [19][20][21], such as face recognition [22], and handwriting recognition [23,24].Machine learning methods include many different algorithms [25], such as decision trees [26], neural networks [27], Bayesian networks [28], k-Nearest Neighbor (kNN) [29] and support vector machine (SVM) [30].Among these algorithms, the SVM method is one of the common supervised learning models for classification and regression problems [30].Given a training set, the aim of SVM is to find the "maximum-margin hyperplane" that divides the group of points and maximizes the distance between the hyperplane and the nearest point from either group [30].Once the hyperplane has been established by SVM training algorithm, we can classify samples not in the training set.The SVM method has been applied in solving various practical problems, such as text and hypertext categorization [31], classification of images [32] and other scientific research projects [33,34]. We applied the SVM method to achieve automatic identification of the interface in a binary complex plasma based on the brightness of the recorded images.The experiments were performed in the PK-3 Plus Laboratory on board the International Space Station (ISS).Technical details of the setup can be found in Reference [35].An argon plasma was produced by a capacitively coupled radio-frequency (rf) generator in push-pull mode at 13.56 MHz.We prepared a binary complex plasma by injecting two types of particles.The first type was melamine formaldehyde (MF) particles of a diameter of 2.55 µm with a mass m b = 1.34 × 10 −14 kg, while the second type was SiO 2 particles of a diameter of 1.55 µm with a mass m s = 3.6 × 10 −15 kg.Using the quadrant view (QV) camera [35], a cross-section of the left half of the particle cloud (illuminated by a laser sheet) was recorded with a frame rate of 50 frames-per-second (fps). 
Due to the gradient of plasma potential close to the chamber wall, both particle types were confined in the bulk plasma region, forming a three-dimensional (3D) cloud with a cylindrical symmetry.Figure 1a shows the left half of the cross section of the particle cloud.The cloud of small particles, big particles, and microparticle-free plasma can be easily distinguished with the naked eye.The two particle types were phase-separated due to the following reasons: First, the disparity of particle size (∆d/d ≈ 0.5) was much larger than the critical value of spinodal decomposition [10].Second, both particle types were subjected to two forces under microgravity conditions, namely the ion drag force (directed outwards from the center of the plasma chamber) and the electric-field force (directed inwards to the center of the plasma chamber).The total force acting on the two particle types had a subtle difference depending on the particle diameter [13].The synergistic effects of spinodal decomposition and force difference led to the instantaneous phase separation.Particularly the second effect drove the small particles into the inner part of the particle cloud and left big particles outside [14].To identify the interface of the particle cloud, we applied the SVM method to distinguish two particle types and the background microparticle-free plasma, which defined the three possible classes.First, we prepared the training sets and defined the features.Three areas (representing three classes) were selected in one frame of the experimental video (marked by rectangles in Figure 1a).Class 1 is the small particles area, class 2 is the big particles area, and class 3 is the background plasma without dust particles.The areas are far from the interface to avoid ambiguity, and their class can be easily identified with the naked eye.To define features, we randomly selected 4 × 4 pixel tiles from the selected area.Figure 1b-d shows a part of the tile collections.Here, each tile was one sample.The mean and the variance of each sample were selected as features.As the variance was much bigger than the mean, we rescaled both features so that their magnitudes were comparable: where i stands for sample i, j = 1 represents variance and j = 2 represents mean.x j min and x j max are the minimum and maximum of the j feature of all samples of all classes.We labeled each sample as 1, 2, or 3 based on the area it belonged to and repeated the process for a few frames.All samples were randomly divided into two sets, namely the training set and the test set.Here, we selected 216 samples in each class for training.Table 1 shows the values of the features and labels of a few samples.Next, we applied one of the support vector machine methods, namely the support vector classification (SVC), to the training set.The algorithm was provided by scikit-learn API [36,37].The SVC method implemented the "one-against-one" approach for multi-class classification.The "one-against-one" approach involves constructing a machine for each pair of classes.Thus, three classifiers were constructed, each constructed by two different classes of training points, i.e., "big" or "small particles", "big" or "particle-free", and "small" or "particle-free".When applied to a sample, each classifier gave one vote to the winning class, and the sample was labeled with the class having most votes [38].The parameter C in the scikit-learn API was set to 5, which is the penalty parameter of the error term.A larger values of C allows for fewer incorrect 
classifications.The linear kernel type was used in the algorithm and the remaining parameters were the default parameters.More detailed instructions can be seen in scikit-learn API [36,37]. The results of the SVC classification are shown in Figure 2. Dark blue dots represent samples in the background microparticle-free plasma, light blue dots represent samples in the cloud of big particles, and dark red dots represent samples in the cloud of small particles.The classification lines are indicated by the blue and red dotted lines.When the sample is located below the blue line in Figure 2, the corresponding pixels represent the background microparticle-free plasma.When the sample is above the blue line and above the red line, the corresponding pixels belong to the cloud of big particles.When the sample is below the red line, the corresponding pixels belong to the cloud of small particles.To evaluate the accuracy of the classification, we selected 1000 samples (with knowledge of their class) from the test set, which was set aside before the training.The accuracy was defined as the percentage of correctly classified samples out of all the selected test samples.Here, we studied the dependence of the accuracy of the SVM model on the size of the training set and the spatial resolution.As we can see in Figure 3a, with a small set of training samples, the accuracy depended on which samples we selected.On the one hand, if the randomly selected training samples represented the overall properties of the class, the resulting accuracy was high.On the other hand, if the selected samples represented only a part of the properties, the accuracy was low.This randomness led to a relatively low accuracy with big standard deviations.However, the accuracy improved quickly as the number of training samples increased.When the number of training samples exceeded 40, the accuracy approached 100%.For the classification, the spatial resolution depended on the size of each tile.Bigger tiles include more information in each tile but lead to lower spatial resolution.In Figure 3b, we show the dependence of the accuracy of the classification on the side length of each tile.With a considerable size of the training set, the accuracy already exceeded 90% when each tile included only four (2 × 2) pixels.This shows that the mean value of the pixel brightness alone played a significant role in classification.The accuracy rose further with the tile size.It exceeded 98% when there were more than sixteen (4 × 4) pixels in each tile.Comparing the results using features with (blue lines) and without (orange lines) scaling in Figure 3, we found that the scaling allowed for a reduction of the size of the training set and increased the spatial resolution of the classification.Finally, we applied the trained algorithm to distinguish the two particle types and microparticle-free plasma in another frame of the recorded video.The original image and the results after classification are shown in Figure 4a,b, respectively.As we compare these two panels, we see that the small particles (white area), big particles (gray area), and the background microparticle-free plasma (black area) are clearly distinguished.An interface can be drawn between the small and big particles (highlighted by the red line).Here, the red line is obtained by calculating each of the demarcation points of the horizontal pixels, while the green dotted line is obtained by connecting the innermost big particle cloud pixels.The discrepancy may be caused by the presence of a 
third type of particles with intermediate size in the experiment run.In summary, we applied the SVM method to achieve automatic identification of the interface in a binary complex plasma.The experiments were performed in a binary complex plasma under microgravity conditions on board the ISS, where the particle size cannot be directly deduced in the recorded images by the QV camera.The results show that this method can effectively distinguish small and big particles and the background microparticle-free plasma using the scaled mean and variance of the pixel brightness with low demand for the training samples. Figure 1 . Figure 1.(a) Single image extracted from experiment recordings and the selected areas for training (highlighted by the rectangles).(b) Examples of samples of the small particle cloud corresponding to class 1 (highlighted by the blue solid rectangle).(c) Examples of samples of the big particle cloud corresponding to class 2 (highlighted by the yellow dashed rectangle).(d) Examples of samples of the background plasma without dust particles corresponding to class 3 (highlighted by the orange dotted rectangle).We randomly selected four by four pixel tiles from each area and calculated the mean and variance of the pixel brightness in each grid area as features for the SVM method.The grid size defines the spatial resolution. Figure 2 . Figure 2. The results of the SVC classification.Dark blue dots represent samples in the background microparticle-free plasma, light blue dots represent samples of the big particle cloud, and dark red dots represent samples of the small particle cloud.The classification lines are indicated by the blue and red dotted lines. Figure 3 . Figure 3. Dependence of the accuracy of the classification on: the number of samples in the training set (a); and the tile side length (b).The blue line denotes the accuracy with scaling, while the orange line denotes the accuracy without scaling.In (a), the tile side length is set to 4 and, in (b), the number of training samples is set to 216. Figure 4 . Figure 4. Original image selected from the experiment recording (a); and the results obtained by the SVC classifier (b).Area of small particles (white), big particles (gray), and the background microparticle-free plasma (black) are clearly distinguished.The red and green curves indicate the interface detected with two different methods (see text). Author Contributions: H.H. performed the analysis and wrote the original draft.M.S. designed the experiments and edited the manuscript.C.-R.D. conceived the idea and oversaw the project.Funding: The research was funded by the National Natural Science Foundation of China (NSFC), Grant No. 11405030.The PK-3 Plus project was funded by the space agency of the Deutsches Zentrum für Luft-und Raumfahrt e.V. with funds from the Federal Ministry for Economy and Technology according to a resolution of the Deutscher Bundestag under grant number 50WM1203. Table 1 . A few samples of the scaled training set.
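Taken together, the pipeline described above (random 4 x 4 pixel tiles, variance and mean as features, min-max scaling over all samples, and a linear-kernel SVC with C = 5) fits in a short script. This is a hedged sketch: the synthetic arrays stand in for the three training areas of a QV camera frame, and the scaling formula is the standard (x - x_min)/(x_max - x_min) form implied by the text rather than the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

def tile_features(region, tile=4, n_tiles=216, rng=None):
    """Sample random tile x tile patches and return [variance, mean] per patch."""
    rng = rng or np.random.default_rng(0)
    h, w = region.shape
    feats = []
    for _ in range(n_tiles):
        y, x = rng.integers(0, h - tile), rng.integers(0, w - tile)
        patch = region[y:y + tile, x:x + tile].astype(float)
        feats.append([patch.var(), patch.mean()])
    return np.array(feats)

rng = np.random.default_rng(1)
# synthetic stand-ins for the three training areas
areas = {1: rng.normal(120, 30, (100, 100)),   # class 1: small particles
         2: rng.normal(80, 15, (100, 100)),    # class 2: big particles
         3: rng.normal(20, 5, (100, 100))}     # class 3: particle-free plasma

X = np.vstack([tile_features(a, rng=rng) for a in areas.values()])
y = np.repeat(list(areas.keys()), 216)

# min-max scaling over all samples of all classes, as described in the text
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

clf = SVC(C=5, kernel="linear")   # multi-class handled internally "one-against-one"
clf.fit(X_scaled, y)
print("training accuracy:", (clf.predict(X_scaled) == y).mean())
```

Applying clf.predict to the scaled features of every tile of a full frame then yields the three-class map from which the interface between small and big particles is drawn.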
3,190.6
2019-03-01T00:00:00.000
[ "Physics", "Computer Science" ]
Design of Plasmonic Coupler with Germanium Spacer Layer for Quantum Well Infrared Photodetectors A design of a plasmonic coupler, composed of metal hole arrays and a germanium spacer layer, integrated with a quantum well infrared photodetector structure is presented. Insertion of a germanium spacer layer in this hybrid structure enhances absorption and the z-component of an electric field in the quantum well absorption region under both substrate-side and air-side illumination configurations. By changing thickness of the germanium spacer layer, the plasmonic resonance wavelengths can be adjusted with peak quantum well response. This plasmonic coupler is believed to be promising to improve performance of quantum well infrared photodetectors. Introduction Quantum well infrared photodetectors (QWIP) have received much attention for applications in night vision, environmental monitoring, and astronomy research [1]. The relatively mature GaAs technology makes QWIP a competitive candidate for long wavelength infrared (LWIR) detectors. However, the major drawback of QWIP is that it cannot detect light at normal incidence and it is only sensitive to an incoming electric field component normal to the QW surface [2] thus necessitates an additional optical coupling scheme to induce a preferred electric field component from normal incident light. Two-dimensional (2D) periodic or random gratings are widely used as an efficient coupling scheme for QWIP focal plane arrays (FPAs) [2,3]. However, realization of such optical gratings involves growth of a thick top grating layer and complex fabrication steps. Photonic crystal structures can also be used to enhance normal incidence absorption and thus detectivity of the QWIP, but suffers from sophisticated fabrication processes [4]. Recently, plasmonic optical couplers have gathered much attention for efficiently coupling the normal incident light to QWIP [5,6]. Integrating a plasmonic coupler, consisting of a simple 2D perforated metal hole arrays (MHA), with the QWIP structure, the performance of the photodetectors can be enhanced [7]. As the commonly used plasmonic metal such as gold (Au) acts as a perfect conductor in the LWIR (8-12 m) range, the surface plasmons excited at the metal/dielectric (semiconductor) interface by 2D MHA at this wavelength range are commonly known as spoof surface plasmons (SSP) [8]. The main challenge involved with the plasmonic coupler integrated with a QWIP structure, referred as a hybrid structure, is to place the active QW region within the SSP decaying field for efficient coupling. This restricts the use of a plasmonic coupler only in a thin QWIP structure [9]. However, for the LWIR range, SSP are loosely bound at the metal/semiconductor interface and the evanescent field penetrates up to few microns into the semiconductor in a typical QWIP structure [10]. In this hybrid structure, a heavily doped top contact layer of the QWIP has low permittivity in the LWIR range [5] and forms an antiguiding structure, resulting enhanced coupling of SSP into substrate radiation and thereby reducing their coupling to the absorber and thus the detectivity of photodetectors [10]. Similar substrate radiation losses have also been observed in a plasmonic coupled type-II superlattice (T2SL) structure due to a low refractive index of a T2SL layer [11]. In this work, a new plasmonic coupler composed of a 2D MHA and germanium (Ge) spacer layer integrated with a LWIR QWIP structure is presented. 
Insertion of a high refractive index Ge spacer layer between 2D MHA and a heavily doped top contact layer can confine the SSP to the metal/semiconductor interface and enhances absorption as well as induces a strong z-component of the electric field ( E z ) in the QW absorber region. Ge has been found to have good compatibility with n-doped GaAs as it is commonly used as an ohmic contact material and it can be easily deposited by e-beam evaporation [12]. Theoretical Model and Method The QWIP layer structure considered for this study is shown in Fig. 1a. The layer structure from bottom to top consists of GaAs substrate, a 0.5-m n-doped GaAs bottom contact layer with N d = 1 ×10 18 cm −3 , the multiple quantum well (MQW) absorber consisting of 50 periods of 6.5-nmthick GaAs wells with doping, N d = 1 ×10 18 cm −3 , and 9.5 nm undoped Al 0.25 Ga 0.75 As barriers and atop 0.2 m n-doped GaAs top contact layer with N d = 1 ×10 18 cm −3 . A numerical model is developed based on the finite difference method (FDM) using MATLAB in order to solve self-consistent Schrodinger-Poisson equations. The absorption coeffiecient spectrum of the MQW absorber corresponding to bound-to-bound inter-subband transition of photo-carriers is depicted in Fig. 1b, showing peak absorption at = 9.7 m. Perforated 2D Au hole arrays or MHA are integrated on top of the QWIP structure. The period of the MHA with a square lattice structure, diameter of hole, and thickness of Au film are defined as p, d, and t Au , respectively. A Ge spacer layer of thickness t Ge is inserted in between MHA and an n + -GaAs top contact layer. The relative permittivity of top and bottom n + -GaAs contact layers is calculated as where p is the plasma frequency defined by where n e , e, ε GaAs , and m * e are free carrier density ( N d =1×10 18 cm −3 ), electron charge, high frequency permittivity of undoped GaAs, and GaAs electron effective mass, respectively. The relative permittivity of the MQW absorber layer is calculated from composition and doping average of the constituent well and barrier materials. The relative permittivity of undoped GaAs, Ge, and Au is taken from ref. [13]. The relative permittivity versus wavelength for different materials used in the simulation is shown in Fig. 1c. The numerical simulations were performed using Lumerical software package based on the finite difference time domain (FDTD) method. The periodic boundary conditions were adopted in x and y directions and the perfect matched layer (PML) boundary conditions were imposed at top and bottom boundaries along z-direction. Two hybrid structures have been studied and compared under substrateside and air-side illumination configurations. One is a QWIP structure integrated with MHA coupler and the other is the same MHA coupler with a Ge spacer layer inserted between MHA and the top contact layer of QWIP. For convenience, these two plasmonic couplers are referred to as MHA and MHA+Ge coupler, respectively. Substrate-Side Illumination (SSI) SSI configuration is compatible with conventional 2D FPAs which are flipped and light is incident through the substrate. In this configuration, incoming light is considered to be incident from the GaAs substrate-side. The SSP at the metal/semiconductor interface are excited directly from GaAs m * e the incident light, and the MHA can be optimized for SSP coupling without concerning about the transmission through the MHA. 
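The free-carrier reduction of the n+-GaAs permittivity invoked above can be reproduced with a simple Drude estimate. The specific form used below, eps(w) = eps_GaAs (1 - wp^2/w^2) with wp^2 = n_e e^2 / (eps_0 eps_GaAs m*_e), is an assumption: it is a standard lossless Drude model built from the quantities listed in the text, not necessarily the authors' exact expression, and the material constants are approximate.

```python
import numpy as np

e = 1.602e-19          # C
eps0 = 8.854e-12       # F/m
m0 = 9.109e-31         # kg
eps_gaas = 10.9        # high-frequency permittivity of undoped GaAs (approximate)
m_eff = 0.063 * m0     # GaAs electron effective mass (approximate)
Nd = 1e18 * 1e6        # doping of 1e18 cm^-3 expressed in m^-3

def eps_doped_gaas(wavelength_um):
    w = 2 * np.pi * 3e8 / (wavelength_um * 1e-6)     # angular frequency of the light
    wp2 = Nd * e**2 / (eps0 * eps_gaas * m_eff)      # squared plasma frequency
    return eps_gaas * (1 - wp2 / w**2)               # assumed lossless Drude form

print(f"eps(10 um) ~ {eps_doped_gaas(10.0):.1f}")    # reduced permittivity in the LWIR
```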
For the square lattice geometry of MHA on dielectric, the plasmonic resonant wavelength ( ij ) for normal incident light and field penetration depth ( ij ) into the dielectric can be written as [14] and where p is the period of MHA and i, j correspond to the orders of the wavevector, and m and d are relative permittivities of the metal and dielectric (i.e., air or semiconductor), respectively. Figure 2a presents the spectral absorption (=1-T-R, where T and R are the transmission and reflection, respectively) for MHA and MHA+Ge couplers corresponds to p = 3 m, d = p/2, t Au = 100 nm, and t Ge = 100 nm. The Au film is considered sufficiently thicker than the skin depth ( ∼ 30 nm at = 10 m) to avoid direct transmission. The absorption of the QWIP layers without MHA, called reference, is also plotted for comparison. Both MHA and MHA + Ge couplers have absorption peaks nearly at 10 m and 7 m corresponding to the first-and second-order resonance wavelengths, 01 and 11 ( ∼ 01 ∕ √ 2 ), respectively. The little difference in the resonance wavelengths Δ 01 = 0.017 m and Δ 11 = 0.016 m between these two plasmonic couplers is due to difference in relative permittivities of n + -GaAs and Ge layers. For this given period, 01 matches well with the peak absorption of the QWIP structure in the study. For the MHA+Ge coupler, the absorption at 01 is found to be nearly 3 folds higher than that for the MHA coupler. This can be explained on the basis of SSP coupling strength at MHA/ n + -GaAs and MHA/Ge spacer layer interfaces in the hybrid structure. In the LWIR range, as shown in Fig. 1c, the relative permittivity of n + -GaAs top contact layer is reduced due to free carrier contribution and a similar reduction in permittivity is also observed for the MQW absorber layer due to a combined effect of a lower refractive index of the Al 0.25 Ga 0.75 As barrier layer and free carrier contribution from doped QWs. These lower indexed layers confined between MHA coupler and higher refractive index GaAs substrate can act as a leakymode antenna [10] and couples SSP mode directly into the substrate, resulting in lower absorption in the QW region. However, by inserting a higher refractive index Ge spacer layer in between MHA and n + -GaAs top contact layer, MHA restores SSP mode and allows good coupling with the QW absorber layer through the evanescent field that results in the higher absorption. For 01 ≈ 10 m, 01 from Eq. (4) is ∼ 5.2 m for MHA+Ge coupler which is beyond the total QWIP layer thickness, indicating SSP mode coupling with an absorber layer. Also, the absorption enhancement at 01 for MHA and MHA+Ge coupler is found to be nearly 23 and 66 folds higher, respectively, with respect to the reference structure. Figure 2b presents the tuning of the plasmonic resonance wavelengths for MHA+Ge coupler by varying the periods of MHA. As predicted by Eq. (3), the linear dependence of the resonance wavelength with the period of MHA is obtained for different orders. This allows one to achieve perfect spectral overlapping between the plasmonic resonance and the peak QWIP absorption as well as to design multispectral QWIP. The symmetric arrangement of hole arrays in x and y directions makes this plasmonic coupler polarization insensitive as shown in the inset of Fig. 2b. The distribution of the E z field which is the preferred field component for the QW absorption for MHA and MHA+Ge couplers at 01 resonance is shown in Fig. 3. 
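The momentum-matching condition referred to above as Eq. (3) is, for a square lattice, commonly written as lambda_ij = (p / sqrt(i^2 + j^2)) sqrt(eps_m eps_d / (eps_m + eps_d)); this form reproduces the lambda_11 ~ lambda_01/sqrt(2) relation quoted in the text. The sketch below evaluates it with an order-of-magnitude Au permittivity and should be read as an assumption rather than the paper's exact equation.

```python
import numpy as np

def resonance_wavelength(p_um, i, j, eps_metal, eps_diel):
    """Square-lattice (i, j) resonance wavelength in the same units as the period p_um."""
    return p_um / np.hypot(i, j) * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))

eps_au = -5000 + 1500j     # Au in the LWIR, order of magnitude only
eps_gaas = 10.9            # undoped GaAs

for order in [(0, 1), (1, 1)]:
    lam = resonance_wavelength(3.0, *order, eps_au, eps_gaas)
    print(order, f"{lam.real:.2f} um")   # roughly 9.9 um and 7.0 um for p = 3 um
```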
It is found that for the MHA+Ge plasmonic coupler, the E z field in x-y plane at metal/dielectric interface is more strongly confined around opposite edges of a circular hole, indicating dipolar-like plasmonic resonance [15] and a strong E z field in x-z plane almost covers the entire MQW absorber region as compared to those for the MHA coupler. To quantify the enhancement of the E z field at different locations of the QWIP structure for both MHA and MHA+Ge couplers, the quantity F is defined as: where |E z | and |E 0 | are the averaged z-component of the induced electric field and electric field of the normal incident light, respectively. The integration is performed over the entire x-y plane located at a distance (s) from the top n + -GaAs contact layer. As plotted in Fig. 4a, for the MHA+Ge coupler, the averaged |E z | 2 is nearly 15 times stronger than |E 0 | 2 at the center of the MQW active region and at the same location F is 5.5 folds stronger than that for the MHA coupler. For the QWIP, photocurrent is proportional to the averaged |E z | 2 across the entire MQW active region [16]. Therefore, in order to evaluate optical performance of the QWIP integrated with a plasmonic coupler, the coupling efficiency is defined as: where |E z | and |E 0 | have been defined previously. The integration is performed over the entire MQW active region. It is found from Fig. 4b that is nearly four folds stronger for MHA+Ge coupler than that for the MHA coupler at 01 resonance. Hence, an improvement in performance of the QWIP with MHA+Ge coupler is expected. The tuning of the plasmonic resonance wavelength is also studied by changing the thickness of the Ge spacer layer. As shown in Fig. 5, both 01 and 11 are redshifted with an increase of Ge spacer layer thickness. The redshift is found to be smaller for t Ge ≤ 150 nm and relatively larger for the thicker Ge spacer layer. The resonance wavelength, 01 , is shifted from 9.98 to 10.3 m as the thickness of Ge spacer layer is increased from 50 to 300 nm. However, as the thickness of spacer layer increases, the separation between plasmonic coupler and active region also increases and as a result E z field intensity in Fig. 2 a Absorption spectra for MHA and MHA+Ge plasmonic coupler integrated with QWIP structure at normal incidence under SSI configuration, corresponding to p = 3 m, d = p/2, t Au = 100 nm, and t Ge = 100 nm. Inset shows the period corresponding to the two resonances. The dashed line is the result for the reference structure without plasmonic coupler. b Absorption spectra for the MHA+Ge coupler with different periodicities (p). Inset shows the polarization dependent absorption spectra for p = 3 m the active region decreases. The shift in the resonance wavelength is attributed to change of effective dielectric constant of the dielectric layers. Since the plasmonic field penetrates through the entire QWIP structure including high refractive index Ge spacer layer, the plasmonic resonance strongly depends on the effective dielectric constant of all the dielectric layers. Therefore, with increase in Ge spacer thickness increases the effective dielectric constant of dielectric layers and hence the resonance wavelengths. This allows one to adjust the plasmonic resonance with peak QWIP response without changing MHA lithography masks. 
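Numerically, the enhancement factor F and the coupling efficiency described above reduce to ratios of averaged field intensities. The sketch below evaluates them on toy arrays standing in for sampled FDTD fields; the discrete averaging is an assumption consistent with the verbal definitions (|E_z|^2 integrated over an x-y plane or over the MQW region, normalised by |E_0|^2).

```python
import numpy as np

def enhancement_factor(Ez_plane, E0=1.0):
    """F: averaged |Ez|^2 over one x-y plane, normalised by the incident |E0|^2."""
    return np.mean(np.abs(Ez_plane) ** 2) / abs(E0) ** 2

def coupling_efficiency(Ez_mqw, E0=1.0):
    """Coupling efficiency: the same ratio evaluated over the whole MQW active region."""
    return np.mean(np.abs(Ez_mqw) ** 2) / abs(E0) ** 2

# toy field data in place of exported simulation results
rng = np.random.default_rng(2)
print(enhancement_factor(rng.normal(0.0, 3.9, (60, 60))))
print(coupling_efficiency(rng.normal(0.0, 2.0, (60, 60, 25))))
```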
Air-Side Illumination (ASI) ASI configuration is suitable for single pixel detector characterizations, where light is incident from the air-side (i.e., top side) and hence the critical substrate removal step [17] is not required. In this configuration, the excitation of SSP occurs at both air/metal and metal/semiconductor interfaces. The SSP at the metal/semiconductor interface which is of interest is excited by evanescent waveguide mode through 2D MHA [18] and the corresponding resonance wavelength lies within detection spectrum of the QWIP. Figure 6 presents the absorption spectra for MHA and MHA+Ge couplers in ASI configuration where all the design parameters are kept the same as used in SSI configuration. The resonance wavelengths, 01 = 10.03 m and 11 = 7.07 m for the MHA+Ge coupler, are nearly the same as those obtained in the SSI configuration, suggesting SSP excitation at the metal/semiconductor interface. However, the absorption is found to be lower for both MHA and MHA+Ge couplers as compared to the SSI configuration. This is attributed to the evanescent decay of field intensity at the metal/semiconductor interface. At the air/metal interface, 01 and 11 are found to be 3.04 m and 2.15 m, respectively, which are beyond the detection spectrum of the QWIP. In this configuration, the MHA+Ge coupler is also found to be effective and enhances the absorption by nearly two folds at 01 than that for the MHA coupler as shown in Fig. 6. Also, the MHA+Ge coupler induces 2.4 times stronger E z field intensity at the center of the MQW absorber region as compared to the MHA coupler as shown in the inset of Fig. 6. It is also found that the absorption increases with decrease in the thickness of MHA or Au film as shown in Fig. 7. This may be because evanescent field intensity Fig. 4 a Field enhancement factor, F over x-y plane as a function of distance (s) away from the top n + -GaAs contact layer for MHA and MHA+Ge couplers at the peak resonance wavelength, 01 . b Coupling efficiency, over entire MQW active region versus wavelength for MHA and MHA+Ge couplers, corresponding to p = 3 m, d = p/2, t Au = 100 nm, and t Ge = 100 nm. The shaded region in (a) shows MQW active region Absorption spectra for MHA and MHA+Ge plasmonic couplers integrated with QWIP structure at normal incidence under ASI configuration, corresponding to p = 3 m, d = p/2, t Au = 50 nm, and t Ge = 100 nm. Inset shows the quantity F over x-y plane as a function of distance (s) away from the top n + -GaAs contact layer for these two couplers at = 01 . The shaded region shows MQW active region increases through thinner MHA and excites SSP at the metal/semiconductor interface. Conclusion A design of plasmonic coupler composed of 2D MHA and Ge spacer layer for QWIP is presented. Insertion of Ge spacer layer between MHA and QWIP structure enhances absorption and E z field intensity in the QW absorption region. For the MHA+Ge coupler, the absorption is found to be 3 folds higher than that for the conventional MHA coupler at the resonance wavelength, 01 ≈ 10 m under SSI configuration, while the averaged |E z | 2 is 15 times stronger than |E 0 | 2 at the center of the QW absorber region. However, for ASI configuration, the absorption and field enhancements are limited due to decay of evanescent field intensity at metal/semiconductor interface and these can be maximized with thinner Au film or MHA. 
The tuning of plasmonic resonance wavelength by changing the thickness of Ge spacer layer allows one to adjust the plasmonic resonance with peak QWIP response without changing MHA lithography masks. This plasmonic coupler is believed to be promising to improve performance of LWIR QWIP FPAs.
4,110.4
2022-11-18T00:00:00.000
[ "Physics" ]
Real-Time Underwater Wireless Optical Communication System Based on LEDs and Estimation of Maximum Communication Distance This paper presents a real-time underwater wireless optical communication (UWOC) system. The transmitter of our UWOC system is equipped with four blue LEDs, and we have implemented pre-emphasis technology to extend the modulation bandwidth of these LEDs. At the receiver end, a 3 mm diameter APD is utilized. Both the transmitter and receiver are housed in watertight chassis and are submerged in a water pool to conduct real-time underwater experiments. Through these experiments, we have obtained impressive results. The data rate achieved by our system reaches up to 135 Mbps, with a BER of 5.9 × 10−3, at a distance of 10 m. Additionally, we have developed a convenient method for measuring the underwater attenuation coefficient, using which we have found the attenuation coefficient of the water in experiments to be 0.289 dB/m. Furthermore, we propose a technique to estimate the maximum communication distance of an on–off keying UWOC system with intersymbol interference, based on the Q factor. By applying this method, we conclude that under the same water quality conditions, our system can achieve a maximum communication distance of 25.4 m at 80 Mbps. Overall, our research showcases the successful implementation of a real-time UWOC system, along with novel methods for measuring the underwater attenuation coefficient and estimating the maximum communication distance. Introduction In recent years, underwater wireless optical communication (UWOC) has gained significant research interest due to its higher speed and moderate distance capabilities compared to acoustic communication and radio frequency (RF) technologies [1,2].UWOC can be categorized into two types based on the light sources used: laser diode (LD) based and light-emitting diode (LED) based.LD-based UWOC offers a higher data rate and smaller beam divergence angle, resulting in an extended communication distance.However, the smaller beam divergence angle makes alignment more challenging.On the other hand, LED-based UWOC has a lower data rate and larger beam divergence angle, which leads to higher channel attenuation and limited communication distance.However, it is easier to align.In many scenarios, a data rate of tens of Mbps is sufficient to meet application requirements, making LED-based UWOC suitable [1]. 
Over the years, various advancements have been made in UWOC technology.For instance, in 2010, AquaOptical II was developed, enabling underwater communications over 50 m at a low signal-to-noise ratio [3].In 2013, the adoption of discrete multitone (DMT) modulation technology allowed error-free underwater communication of 58 Mbps [4].In 2018, a field programmable gate array (FPGA)-based underwater communication system was implemented, enabling real-time communication at a distance of 10 m with a data rate of 25 Mbps [5].The same year witnessed the utilization of the photomultiplier tube (PMT) in UWOC systems [6].In 2019, offline high-speed underwater communication experiments exceeding Gbps were successfully completed [7].In 2020, an FPGA-based UWOC system capable of full-duplex real-time communication was developed [8].Additionally, the silicon photomultiplier (SiPM) found its application in UWOC systems in the same year [9].In 2021, a prototype for underwater video transmission based on UWOC was realized [10].Furthermore, in 2022, a UWOC system based on FPGA and quadrature amplitude modulation (QAM)-orthogonal frequency-division multiplexing (OFDM) technology was developed [11].Finally, in 2023, a three-stage cascaded T-bridge equalizer was designed to expand the 3 dB bandwidth of the LED [12].More details regarding these LED-based UWOC systems can be found in Table 1.We can notice UWOC systems can be categorized into real-time and offline systems.Contrasting with real-time systems, offline UWOC systems rely on MATLAB for signal modulation, arbitrary wave generators for waveform output, oscilloscopes for sampling, and MATLAB for demodulation.Presently, real-time UWOC systems have inferior signal processing capabilities compared to offline systems, leading to a noticeable discrepancy in the data rate between the two types.Realtime systems are closer to practical deployment, while offline systems present an intriguing future for UWOC.Where the W E is the unit of electrical power, the W O is the unit of optical power. With regards to the modulation format, offline systems often employ complex modulation formats to achieve higher communication data rates [13][14][15].On the other hand, real-time systems prefer simpler modulation formats such as on-off keying (OOK) and frequency-shift keying (FSK) [16,17].The OOK signal has a relatively wide spectrum, allowing power to be maintained above the −3 dB bandwidth.Thus, the amplitude-frequency responses of the channel within this range affect the system's data rate.Although serious intersymbol interference (ISI) may occur in this system, a high data rate, usually over three times the value of the −3 dB bandwidth, can be obtained as long as the noise is sufficiently small [18].It is important to note that data rate is measured in bps and bandwidth is measured in Hz. This paper presents a real-time LED-based UWOC system.The transmitter consists of four LEDs, and pre-emphasis technology is utilized to extend the LED's bandwidth to 40.3 MHz.At the receiver end, a 3 mm diameter large area avalanche photodiode (APD) is employed to obtain a large field of view (FOV) angle.The transmitter and receiver are enclosed in watertight chassis and immersed in a 10 m water pool to conduct real-time UWOC experiments, achieving a maximum data rate of 135 Mbps. 
Furthermore, this paper introduces a convenient method for measuring the attenuation coefficient and proposes a method to infer the maximum communication distance based on eye height and receiver noise. In UWOC research, a practical challenge lies in obtaining a sufficiently long-distance water pool for testing. Additionally, due to the utilization of LEDs as the light source with a large beam angle, the use of reflective mirrors to redirect a collimated beam, as commonly performed in LD UWOC, is not feasible. Therefore, it becomes crucial to theoretically infer the maximum possible communication distance. Our paper presents a method to estimate this distance, providing valuable insights for UWOC researchers. By considering the eye height at 80 Mbps and receiver noise, it is possible to estimate that the maximum communication distance at this data rate is 25.4 m. The subsequent sections of this paper are organized as follows: the second section describes the design of the transmitter and receiver, along with their performance verification in the air. The third section elaborates on the underwater experiments and presents a measurement method for the water attenuation coefficient. In the fourth section, the underwater experiments are discussed and a method for calculating the maximum communication distance based on the Q-factor is proposed. Lastly, the fifth section concludes the paper. Transmitter Design In this experiment, we aim to achieve a data rate in the range of tens of Mbps while maximizing the communication distance. To optimize the light intensity, we have chosen high-power blue LEDs and incorporated spot LED lenses. Specifically, we are using the GD CS8PM1.14 LEDs from OSRAM. These LEDs have a peak wavelength of 451 nm and a half-power angle of 80°. At a forward current of 350 mA, they emit an optical power of 641 mW, which increases to 2.5 times the optical power at the maximum forward current of 1 A. However, due to the large junction area, the modulation bandwidth of the GD CS8PM1.14 is limited to approximately 6.94 MHz. To overcome this limitation and extend the bandwidth, we have implemented a 2nd order pre-emphasis circuit [18,19]. The pre-emphasis and drive circuit can be seen in Figure 1. In this circuit, R 1 , R 2 , C 1 , and C 2 are used to pre-emphasize the driving signal by applying different gains to different frequency components of the signal. Figure 1. The proposed LED driver with a 2nd order pre-emphasis circuit (Ref. [18], Figure 1 and Ref. [19], Figure 10). Receiver Design The alignment of a transmitter and receiver submerged in water is more complex compared to the alignment in air, as it lacks the support of an underwater tripod. To address this, we have increased the field of view angle of the receiver. This was achieved by incorporating the Hamamatsu S8664-30K photodetector, which features a large photosensitive area with a 3 mm diameter. Consequently, this results in a larger junction capacitance of 22 pF and a nominal bandwidth of 140 MHz. In the amplifier circuit, we utilize the LTC6268-10 from Analog Devices.
The non-inverting amplifier topology, shown in Figure 2, is adopted because its input impedance is more than ten times the 50 Ω resistance. As a result, no loading effect is formed, and the topology can be treated as a cascade of two single-stage systems. We can calculate the −3 dB bandwidth of each stage separately and then determine the overall bandwidth using Equation (1) [20], where f_1 is the bandwidth of System 1, f_2 is the bandwidth of System 2, and f_-3dB is the bandwidth of the cascaded system. The APD junction capacitance and the 50 Ω resistance form a low-pass filter, whose bandwidth can be obtained from (2),

f_1 = 1 / (2π R_s C_apd),    (2)

where R_s = 50 Ω and C_apd = 22 pF, so the bandwidth is 145 MHz. Considering that the gain-bandwidth product (GBP) of the LTC6268-10 is 4 GHz [21] and the gain of the non-inverting amplifier is set to 90, we can deduce that the bandwidth of the non-inverting amplifier is 44.4 MHz. By referring to Equation (1), the overall bandwidth of the receiver is determined to be 38.5 MHz.

Performance Verification of Transceiver Module

After completing the pre-emphasis circuit, we conducted several tests to evaluate the performance of both the transmitter and the receiver. These tests included examining the frequency response, analyzing the eye diagram, measuring the receiver's output voltage noise in a dark environment, and assessing the system's bit error rate (BER).

To assess the frequency response, we employed the test block diagram shown in Figure 3a. The bandwidth of the system was determined to be 40.3 MHz, as depicted in Figure 3b. For the measurement of the eye diagram, we utilized the block diagram illustrated in Figure 4a. The experimental setup, presented in Figure 4b, involved placing the transmitter and receiver 2.2 m apart without using any lens or optical filter. The resulting eye diagram is shown in Figure 4c.
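As a rough cross-check of the receiver bandwidth budget described above, the sketch below combines the APD RC pole with the amplifier's GBP-limited pole. Equation (1) from [20] is not reproduced in the text, so the snippet simply treats both stages as single-pole low-pass filters and solves for the cascaded −3 dB point numerically; this yields roughly 41 MHz, in the same range as, though not identical to, the quoted 38.5 MHz (the exact value depends on the combination rule used in [20]).

```python
import numpy as np
from scipy.optimize import brentq

R_s = 50.0        # ohm, load resistance seen by the APD
C_apd = 22e-12    # F, APD junction capacitance
GBP = 4e9         # Hz, gain-bandwidth product of the LTC6268-10
gain = 90         # closed-loop gain of the non-inverting stage

f1 = 1 / (2 * np.pi * R_s * C_apd)   # ~145 MHz, APD/50-ohm low-pass pole
f2 = GBP / gain                      # ~44.4 MHz, amplifier bandwidth

# Assumed model: two cascaded first-order low-pass stages; find the frequency
# where the combined magnitude response falls to 1/sqrt(2).
def mag(f):
    return 1.0 / np.sqrt((1 + (f / f1) ** 2) * (1 + (f / f2) ** 2))

f_3db = brentq(lambda f: mag(f) - 1 / np.sqrt(2), 1e6, f2)
print(f"f1 = {f1/1e6:.1f} MHz, f2 = {f2/1e6:.1f} MHz, cascade -3 dB ~ {f_3db/1e6:.1f} MHz")
```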
To conduct the BER test, as depicted in Figure 5, we positioned the transmitter and receiver 2.2 m apart, again without a lens or optical filter. Indoor lighting was turned on during this test. Our bit error rate tester (BERT) lacked a signal amplitude adjustment function, so the amplitude of the pseudo-random binary sequence (PRBS) signal was adjusted with a self-made clock and data recovery (CDR) module. In the receiver, another CDR was employed to recover the data and clock from the received signal. Consequently, we achieved a BER of 0 at a data rate of 80 Mbps over a duration of ten minutes.

To measure the receiver's output voltage noise, we followed this procedure: we turned off the indoor lighting, sealed the receiver with a C-mount cover, and placed it in a black bag, leaving only the power line and output signal line connected to an oscilloscope. The standard deviation of the receiver's output voltage noise was determined to be 1.335 mV.
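To illustrate why an error-free ten-minute run is plausible at this noise level, the following toy simulation models an OOK link with additive Gaussian noise only. The 0.6 V signal swing is an assumed value for illustration (the in-air eye height at 2.2 m is not reported), and ISI is ignored, so this is a sanity check rather than a model of the actual hardware test.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 1_000_000
bits = rng.integers(0, 2, n_bits)          # stand-in for the PRBS pattern

eye = 0.6                                   # V, assumed received signal swing
sigma = 1.335e-3                            # V, measured dark output noise std
rx = bits * eye + rng.normal(0.0, sigma, n_bits)

decided = (rx > eye / 2).astype(int)        # hard decision at the eye centre
ber = np.mean(decided != bits)
print(f"simulated BER = {ber:.2e}")         # effectively 0 at this SNR
```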
Lenses for LEDs and APD

The transmitter boards employ the F12985 LED lens from LEDiL, featuring a nominal beam angle of 4.9°. Figure 6 illustrates the arrangement of the four LEDs on each transmitter board. The receiver, on the other hand, is equipped with a lens that has a diameter of 120 mm and a focal length of 160 mm. In each chassis, three transmitter boards and one receiver board are incorporated as shown in the diagram. The chassis also features a hollow aluminum tube on top, which facilitates the passage of the power line and signal line for external power supply and testing. To ensure waterproofing, all screw holes and gaps on the chassis are sealed with waterproof glue. Additionally, an optical window made of a transparent acrylic plate is positioned at the front of the chassis.

Figure 7 displays the measurements of the beam angle conducted in a 30 m garage. In this test, all 12 LEDs were illuminated, resulting in a light spot with a diameter of approximately 2.6 m, as depicted in Figure 7b. Consequently, we determined the beam angle to be approximately 4.96°, which closely aligns with the lens's nominal value. This relatively large beam angle proves advantageous for achieving accurate alignment between the transmitter and receiver.
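The beam-angle figure quoted above follows directly from simple geometry; a minimal check, assuming the measured spot edge corresponds to the edge of the (roughly conical) beam:

```python
import math

distance = 30.0        # m, garage test distance
spot_diameter = 2.6    # m, measured light spot diameter

# Full divergence angle from the half-spot radius at the given distance.
full_angle = 2 * math.degrees(math.atan((spot_diameter / 2) / distance))
print(f"Estimated beam angle: {full_angle:.2f} degrees")   # ~4.96 degrees
```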
Transmitter and Receiver Design Summary

The device models and specific parameters of the transmitter and receiver are shown in Table 2.

Measurement of Underwater Attenuation Coefficient

The experiment took place in an outdoor environment, using a black inflatable polyvinyl chloride (PVC) lawn pool filled with water. Because the experiment spanned several days, dust and leaves fell into the water, resulting in lower optical transmittance compared to clear water. Two chassis were utilized in the experiment, one acting as the transmitter and the other as the receiver.

Figure 8 illustrates the block diagram of the experiment. A BERT sends a PRBS21 signal, which is adjusted in amplitude through a self-made CDR and fed into a transmitter board. In this specific experiment, we employed four LEDs to provide sufficient illumination for the received signal at a rate of 80 Mbps. The LED bias current was set to 1 A, and the alternating current (AC) component was 0.67 A. The output signal from the receiver is connected to an oscilloscope in order to observe the eye diagram. Additionally, the signal is also connected to another CDR and the BERT's error detector to measure the BER.

The system in Figure 6 integrates the transmitter and receiver. In our experiment, we employed two of these systems to facilitate one-way communication. It is worth noting that the transmitter and receiver in the opposite direction are identical. During the one-way communication experiment, only the transmitter is working in one system, and only the receiver is working in the other system.
To determine the attenuation coefficient of the water, we initially positioned the two chassis on the ground beside the pool, maintaining a distance of 10 m between them, which precisely matched the separation between the two chassis within the pool. Subsequently, we measured the eye height of the receiver's output signal. It is important to note that angle adjustment can be a challenging task. The pitch angle is adjusted by placing multiple layers of small sheets underneath the front or rear of the chassis, while the horizontal direction is adjusted by rotating the chassis. In order to determine the optimal pitch angle and horizontal direction, we used an oscilloscope to observe the eye diagram of the receiver. We considered the adjustment successful when the eye diagram displayed the maximum opening, indicating that the optimal pitch angle or horizontal angle had been achieved. The observed maximum eye height was 600 mV. Next, we immersed the two chassis in the pool with a separation of 10 m, while ensuring that the water level was approximately 10 cm above the top of the chassis. Again, we employed the same alignment method used previously to align the transmitter and receiver. The maximum eye height underwater was measured at 308 mV.

The light emitted from the LEDs towards the receiver undergoes three types of attenuation: geometric attenuation, dielectric attenuation, and optical attenuation. Geometric attenuation is caused by the divergence of the light beam, while dielectric attenuation results from absorption, scattering, and other factors related to the air or water. Optical attenuation occurs due to the presence of optical lenses and windows along the optical path. According to [22], in a wireless optical communication system, the received power is

P_r = ((m + 1) A / (2π d^2)) cos^m(φ) T_s(ψ) g(ψ) cos(ψ) P_t,    (3)

where P_r is the received optical power at the photodetector, m is the mode number of the light source, A is the physical detector area, d is the distance between the light source and the photodetector, φ is the transmitter's emergence angle, ψ is the receiver's incidence angle, T_s(ψ) is the signal transmission of the filter, g(ψ) is the concentrator gain, and P_t is the transmitted optical power of the light source. Essentially, (3) describes both geometric and optical attenuation. Assuming m, A, d, φ = ψ = 0, T_s(0), and g(0) are given constants, P_r is directly proportional to P_t. Simplifying further and including the dielectric attenuation of the medium, (4) is obtained:

P_ra = K_g K_o exp(−c_a d) P_t,    (4)
where K_g and K_o are scale factors for geometric attenuation and optical attenuation, P_ra is the received power in the air, and c_a is the atmospheric attenuation coefficient [1,16]. In the air, the eye height observed at the receiver is

H_a = M R G (P_ra1 − P_ra0) = M R G K_g K_o exp(−c_a d) (P_t1 − P_t0),    (5)

where H_a is the eye height in the air, R is the responsivity of the photodetector, M is the avalanche gain of the APD, G is the transimpedance gain of the transimpedance amplifier, P_ra1 is the received optical power for '1's in the air, P_ra0 is the received optical power for '0's in the air, P_t1 is the transmitted optical power for '1's, and P_t0 is the transmitted optical power for '0's. Since the attenuation in 10 m of air is negligible, exp(−c_a d) can be approximated as 1. Then

H_a = M R G K_g K_o (P_t1 − P_t0).    (6)

Similarly, the eye height in water is

H_w = M R G K_g K_o exp(−c_w d) (P_t1 − P_t0),    (7)

where H_w is the eye height in water and c_w is the attenuation coefficient in water. When the transmitter and receiver are precisely aligned, K_g and K_o are nearly identical to those in the air. Dividing (7) by (6) gives

H_w / H_a = exp(−c_w d),    (8)

so

c_w = (1/d) ln(H_a / H_w).    (9)

Substituting d = 10 m, H_a = 600 mV, and H_w = 308 mV into (9), we find that c_w = 0.0667/m, or equivalently c_w,dB = 0.289 dB/m. It is worth noting that selecting a lower data rate can mitigate the influence of ISI when measuring the eye height; however, such a measure was unnecessary in this case.

The attenuation coefficient of pure water ranges from approximately 0.04/m to 0.05/m. However, the attenuation coefficient of tap water varies significantly depending on the impurity content and is typically well above 0.05/m (0.217 dB/m) [1,2,23]. Taking into account the water source used in our experiment, as well as the fact that it was exposed to outdoor conditions for several days, the measured attenuation coefficient falls within the expected range for tap water. Therefore, we consider the measurement result reliable.

Underwater Wireless Optical Communication Experiments and Results

We conducted an experiment in an outdoor pool to measure the BER and eye height for underwater communication at different rates. The results are summarized in Table 3. In this field, the commonly used BER standard is 3.8 × 10^−3, as error correction coding can reduce it to 1 × 10^−9, a widely adopted BER standard in optical communication. However, achieving precise control of the BER at exactly 3.8 × 10^−3 in our experiment proved to be quite challenging. As a result, we report the nearest measured BER of 5.9 × 10^−3. It is important to note that our system has a limited bandwidth of 40.3 MHz. As the data rate increases, we observed a more significant impact from ISI and noticeable jitter. This effect is independent of the water attenuation. Consequently, ISI prevents us from achieving higher data rates.
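Returning briefly to the attenuation measurement, the coefficient reported above follows from Equation (9) with the two measured eye heights; the short sketch below reproduces the arithmetic and the conversion to dB/m.

```python
import math

d = 10.0          # m, link length used both in air and under water
H_air = 0.600     # V, maximum eye height measured in air
H_water = 0.308   # V, maximum eye height measured under water

c_w = math.log(H_air / H_water) / d          # Eq. (9), in 1/m
c_w_dB = 10 * math.log10(math.e) * c_w       # convert 1/m to dB/m

print(f"c_w = {c_w:.4f} /m  ({c_w_dB:.3f} dB/m)")   # ~0.0667 /m, ~0.289 dB/m
```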
Since the experiment took place during the daytime, we encountered interference from ambient light, which affected the receiver. As a result, the standard deviation of the receiver output noise increased to 3.2 mV. Background light plays a crucial role in UWOC, and its impact on UWOC systems is influenced by various factors. These factors include the illumination of the background light, the field of view angle of the receiver (which depends on the size of the photodetector's photosensitive surface and the focal length of the focusing lens), as well as the optical aperture of the receiver's lens, among others. In order to comprehensively assess the background light situation, we measured the output noise of the receiver without emitting any optical signals. This measurement reflects the combined influence of the aforementioned factors. To isolate the contribution of the optical receiver's own noise, we also conducted measurements in a dark environment, as shown in Table 2.

Estimation of Maximum Communication Distance

Due to the size of the pool, we cannot experiment with channel lengths longer than 10 m. However, we can estimate the maximum communication distance theoretically. To evaluate the communication distance at a data rate of 80 Mbps, we use the Q-factor, which is directly related to the BER and decreases as the communication distance increases.

Since the bandwidth of our system is only 40.3 MHz and the data rate is 80 Mbps, there is a serious ISI problem. Therefore, we follow the method outlined in [24] to evaluate the maximum communication distance. As shown in Figure 9, due to ISI, the '0's and '1's split into several rails, and additive white Gaussian noise is superimposed on each '0' and '1' rail. The distance from the bottom rail to the top rail is normalized to 1.0. The mean values of these rails are defined as μ_0,1, μ_0,2, …, μ_0,n0 for the '0' rails and μ_1,1, μ_1,2, …, μ_1,n1 for the '1' rails. The corresponding standard deviations are σ_0,1, σ_0,2, …, σ_0,n0 and σ_1,1, σ_1,2, …, σ_1,n1, respectively. The probabilities of occurrence of each '0' rail and '1' rail are denoted p_0,j and p_1,j, respectively, which satisfy Σ_j p_0,j = Σ_j p_1,j = 1 (see [24], Figure 1).
According to [24-27], the BER at decision level D is

P_e(D) = Σ_j P_0,j(D) + Σ_j P_1,j(D),

where P_e(D) is the BER for decision level D and P_0,j(D), P_1,j(D) are defined as

P_0,j(D) = (p_0,j / 4) erfc((D − μ_0,j) / (√2 σ_0,j)),
P_1,j(D) = (p_1,j / 4) erfc((μ_1,j − D) / (√2 σ_1,j)).

Assume σ_0,j = σ_1,j = σ, and define Q as the inner eye opening divided by the total noise, Q = (min_j μ_1,j − max_j μ_0,j) / (2σ). At the optimal decision level, the BER can then be approximated as

P_e ≈ (1/2) erfc(Q / √2).    (21)

At the receiver, the Q observed on an oscilloscope is

Q = H / (2σ) = M R G (P_r1 − P_r0) / (2σ),    (22)

where H is the measured eye height and P_r1, P_r0 are the received optical powers for '1's and '0's. Taking water attenuation into consideration, (3) can be rewritten as

P_r = ((m + 1) A / (2π d^2)) cos^m(φ) T_s(ψ) g(ψ) cos(ψ) exp(−c_w d) P_t.    (23)

When φ = ψ = 0 and m, A, T_s(0), g(0) are given, P_r is proportional to P_t as

P_r = k P_t exp(−c_w d) / d^2,    (24)

where

k = ((m + 1) / (2π)) A T_s(0) g(0).    (25)

By substituting (24) into (22), we obtain

Q = M R G k (P_t1 − P_t0) exp(−c_w d) / (2σ d^2).    (26)

Assuming P_e(D) = 3.8 × 10^−3, according to (21), Q = 2.67. We denote this specific Q value as Q_min; it corresponds to the maximum communication distance d_max, i.e.,

Q_min = M R G k (P_t1 − P_t0) exp(−c_w d_max) / (2σ d_max^2).    (27)

When conducting a UWOC experiment with a channel length of d_1, we obtain a Q-factor denoted as Q_1. Dividing (27) by (26) evaluated at d_1, Equation (27) can be rewritten as

Q_min = Q_1 (d_1 / d_max)^2 exp(−c_w (d_max − d_1)).    (28)

In our experiments, c_w = 0.0667/m, d_1 = 10 m, Q_1 = 308 / (3.2 × 2) = 48.125, and Q_min = 2.67. By substituting these values into (28) and solving the equation, we find that d_max = 25.4 m. Consequently, we can conclude that, under a bit error rate standard of 3.8 × 10^−3, the maximum communication distance for water with an attenuation coefficient of 0.0667/m at a data rate of 80 Mbps is 25.4 m. Many researchers do not have a sufficiently long water pool, so estimating the maximum communication distance from an experimental result with a limited transmission distance is a common problem. These researchers can benefit from Formula (28), which is not found in the previous literature.

Figure 3. Frequency response measurement and result. (a) Block diagram for frequency response measurement. (b) Frequency response of our system.
Figure 4. Eye diagram measurement. (a) Block diagram for eye diagram measurement. (b) Experiment setup for eye diagram measurement. (c) Eye diagram of our system.
Figure 5. Block diagram for the BER test.
Figure 6. Chassis for the transmitter and receiver.
Figure 8. Underwater experiments. (a) Block diagram for underwater experiments. (b) Field experiments setup.
Table 1. Comparison of the UWOC systems based on LED.
Table 2. Transmitter and receiver design summary.
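The distance estimate can be reproduced numerically. The sketch below derives Q_min from the target BER via Equation (21), computes Q_1 from the measured eye height and noise, and solves Equation (28) for d_max with a standard root finder.

```python
import math
from scipy.optimize import brentq
from scipy.special import erfcinv

# Values taken from the experiment described above.
BER_target = 3.8e-3
c_w = 0.0667        # 1/m, measured water attenuation coefficient
d1 = 10.0           # m, experimental link length
eye_height = 0.308  # V, eye height measured at d1
sigma = 3.2e-3      # V, receiver output noise std (daytime)

# Q needed for the target BER, from BER = 0.5 * erfc(Q / sqrt(2))
Q_min = math.sqrt(2) * erfcinv(2 * BER_target)    # ~2.67
# Q measured at d1, Eq. (22): Q1 = eye height / (2 * sigma)
Q1 = eye_height / (2 * sigma)                     # ~48.1

# Eq. (28): Q_min = Q1 * (d1/d)^2 * exp(-c_w * (d - d1)); solve for d
f = lambda d: Q1 * (d1 / d) ** 2 * math.exp(-c_w * (d - d1)) - Q_min
d_max = brentq(f, d1, 200.0)
print(f"Q_min = {Q_min:.2f}, Q1 = {Q1:.1f}, d_max ~ {d_max:.1f} m")   # ~25.4 m
```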
7,296.8
2023-09-01T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Scalable Gaussian Processes on Discrete Domains Kernel methods on discrete domains have shown great promise for many challenging data types, for instance, biological sequence data and molecular structure data. Scalable kernel methods like Support Vector Machines may offer good predictive performances but do not intrinsically provide uncertainty estimates. In contrast, probabilistic kernel methods like Gaussian Processes offer uncertainty estimates in addition to good predictive performance but fall short in terms of scalability. While the scalability of Gaussian processes can be improved using sparse inducing point approximations, the selection of these inducing points remains challenging. We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing. We find that simulated annealing, which can select inducing points that are not in the training set, can perform competitively with support vector machines and full Gaussian processes on synthetic data, as well as on challenging real-world DNA sequence data. I. INTRODUCTION U NCERTAINTY quantification is an increasingly important feature of machine learning models. This is particularly crucial in applications such as in biomedicine [1]- [3], where prediction errors may have serious repercussions. Consider a wet lab biologist seeking to find a DNA sequence which can be targeted by a drug (for instance, using CRISPR-cas9 [4]). They have reduced the problem to some number of candidate sequences but to further narrow the selection requires painstaking experiments. If they had a framework that could incorporate their prior knowledge of DNA sequence similarity as well as the results from previous experiments, they could optimally select the best next experiment to perform, thereby saving vast amounts of time and resources. Such a framework would need to perform well under various data sizes as well as provide calibrated uncertainty estimates in order to make an informed decision. Many problems, like this one, are discrete, involve large datasets, and require well-calibrated uncertainty estimates. Kernel methods have shown performances that are competitive with deep learning models in such application domains [5], while probabilistic modeling provides a unified framework for prediction and calibrated uncertainty estimates [6]. One class of probabilistic kernel methods that have proven to be useful in various regression and classification settings are Gaussian Processes (GPs) [7]. They are data efficient, non-parametric, and have tractable posterior distributions. Moreover, one can use any kind of likelihood for the generating process, for example, a Bernoulli likelihood in the case of a classification problem. The main challenge of scaling GPs to large datasets lies in the computational complexity of inference which is cubic in the number of observations. Inducing point methods are the main class of approaches for circumventing this limitation [8]- [11]. These methods aim to use some m n inducing points to reduce the inference complexity to O(nm 2 ). Having reduced the computational complexity of inference, the remaining challenge is to choose the set of inducing points that best approximates the full model [8]. When the domain is continuous, the locations of the inducing points can be optimized using the gradient of the log marginal likelihood [12]. 
Unfortunately, this gradient-based optimization scheme is not feasible in discrete domains, where the log marginal likelihood is no longer differentiable with respect to the inducing point locations.

Figure 1. Inducing points for the sparse Gaussian Process are chosen from the data points, but also from the rest of the domain. The choice of inducing points is optimized with respect to the log marginal likelihood. A discrete kernel function is chosen to construct the sparse approximation of the GP's covariance matrix. The GP can then be used to predict latent function values and uncertainties on the input domain.

In this work, we explore different techniques for choosing inducing points over discrete domains by combining discrete optimization with sparse GP approximations. In our experiments, we show that our sparse GP framework has comparable performance to full (i.e., not sparsified) GP models as well as Support Vector Machines on biological sequence data. We make the following contributions: • We present the first empirical assessment of a range of different inducing point selection techniques for Gaussian Processes on discrete domains. • We discuss and evaluate the tradeoffs of these different techniques, for example, in terms of computational complexity and their ability to choose inducing points from outside the training set. • We assess the performance of the models on synthetic data and several challenging real-world datasets.

In the following sections, we describe the main components of our framework, beginning with sparse GPs, continuing to discrete inducing point selection methods, and concluding with the spectrum string kernel. Each inducing point method corresponds to a different sparse string GP in our framework. For a high-level overview of the framework, see Figure 1. Finally, we present experiments comparing each inducing point selection technique in our framework using both Gaussian and Bernoulli likelihoods, that is, binary labels, on synthetic toy data, UCI splicing data [13], and the DREAM5 dataset [14].

II. SPARSE GAUSSIAN PROCESS APPROXIMATIONS

Consider a supervised learning problem in which the goal is to estimate a latent function f : X → R given observed inputs x := (x_1, . . . , x_n) and corresponding outputs y := (y_1, . . . , y_n). For the biologist example in the previous section, f could map DNA sequences to drug targetability scores. We assume that our observations are corrupted by additive noise η, thus y = f(x) + η, where we have overloaded the notation of the function to be broadcast elementwise. Following a long line of previous work [7], we treat the function f as an unobserved random variable with a Gaussian Process prior, specifically a GP prior with a zero mean function and a covariance kernel k(·, ·):

f ∼ GP(0, k(·, ·)).

It follows that the prior on the function outputs, f := f(x), is given by N(0, K_xx), where K_xx denotes the Gram matrix (also known as the kernel matrix) with [K_xx]_ij = k(x_i, x_j). In general, the predictive distribution cannot be solved in closed form, but in the special case of Gaussian noise, where η ∼ N(0, σ²), the predictive distribution can be computed as

p(f_* | y) = N( K_*x (K_xx + σ²I)^{-1} y,  K_** − K_*x (K_xx + σ²I)^{-1} K_x* ),

where the test inputs and outputs are denoted x_* and f_* respectively and K_*· = K_·* is shorthand for K_{x_*,·}. While this closed-form predictive distribution is appealing and has found numerous applications, scaling it to large datasets is fundamentally limited by the inversion of the n × n matrix (K_xx + σ²I), which requires O(n³) operations. This motivates the use of so-called "inducing point" methods, which provide a framework for trading model quality for tractability.
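Before turning to the inducing point construction, the closed-form predictive above can be written down in a few lines. This is the standard textbook recipe using a Cholesky factorization, not code from the paper; the kernel matrices are assumed to be supplied by whatever kernel is in use (for example, a string kernel).

```python
import numpy as np

def gp_predict(K_xx, K_sx, K_ss, y, noise_var):
    """Closed-form GP regression predictive with zero prior mean.

    K_xx : (n, n) kernel matrix of the training inputs
    K_sx : (m, n) cross-kernel between test and training inputs
    K_ss : (m, m) kernel matrix of the test inputs
    y    : (n,)  observed outputs
    """
    n = K_xx.shape[0]
    L = np.linalg.cholesky(K_xx + noise_var * np.eye(n))   # O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # (K_xx + s^2 I)^{-1} y
    mean = K_sx @ alpha                                    # predictive mean
    v = np.linalg.solve(L, K_sx.T)
    cov = K_ss - v.T @ v                                   # predictive covariance
    return mean, cov
```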
We assume that there exists a set of m inducing points (z_1, . . . , z_m) =: z, z_i ∈ X, with outputs u := f(z) which are distributed N(0, K_zz) according to the prior. Now, we make the modeling assumption that f and f_* are conditionally independent given u, that is, p(f, f_*, u) = p(f | u) p(f_* | u) p(u). Under this assumption, we can again solve the inference problem in closed form, with the m inducing outputs u taking the place of the full set of training outputs in the predictive equations. Note that we have reduced the cubic part of inference from O(n³) to O(m³), where we can choose m, the number of inducing points. Overall, the inference procedure has complexity O(nm²) [8]-[11].

Inducing point methods provide a framework for dramatically decreasing the computational complexity of inference, but we are still left with the problem of choosing the set of inducing points that achieves the best possible approximation with limited resources (namely m inducing points). This inducing point selection can be cast as an optimization problem in which we are trying to maximize the log marginal likelihood log p(y | z) (Eq. 1) with respect to the locations z. Standard methods for solving the inducing point selection problem focus on continuous inputs and overlook the case of discrete ones. In the following section, we tackle this problem on discrete domains using effective and well-tested discrete optimization methods.

III. DISCRETE OPTIMIZATION TECHNIQUES

The problem we are trying to solve using discrete optimization is arg max_z log p(y | z) (Eq. 1). In the following sections, we approach this problem using two classical techniques from discrete optimization and one submodular data summarization model: greedy selection, simulated annealing, and determinantal point processes.

A. GREEDY SELECTION

Greedy inducing point selection dates back to early works on sparse Gaussian Processes [9], [15], [16]. The algorithm is initialized with an empty set of inducing points and at each iteration greedily selects the next observation in the data that maximizes the marginal likelihood p(y | z) (Eq. 1). Thus, the set of inducing points is a mere subset of the original data. This approach is justified by the fact that the marginal likelihood is strictly monotonic in the number of inducing points. Hence, adding a new inducing point is always guaranteed to increase the objective. This technique is conceptually simple and easy to implement, but it comes with the major drawback that the inducing points can only be selected from the training set. Especially in high-dimensional discrete spaces, the training set might only span a small fraction of the total space, so this can be a strong limitation. Natural extensions of this method include selecting several inducing points instead of just one at every iteration, swapping inducing points between the training set and the inducing point set [17], and optimizing a variational lower bound on the likelihood rather than the likelihood itself [9]. Since we mainly include this method as a baseline in our experiments, we leave the exploration of these extensions to future work.

B. SIMULATED ANNEALING

Simulated annealing is a sampling-based approach which starts with an initial guess S_0 := {z_1, . . . , z_m} and a loss function L(·) to be optimized. At each iteration the algorithm perturbs an element of the set and decides whether or not to accept this new perturbation as the next state. To make this decision, an energy term is computed from the current iterate S_{t−1} and the proposal Ŝ, E_t = L(S_{t−1}) − L(Ŝ).
The new proposal is then accepted with probability

min{1, exp(−E_t / T_t)},

where T_t is known as the temperature parameter and is usually chosen with an exponential decay rate in t. Since we are working on discrete string domains, we define a perturbation to be a change of one or more characters in a given string. Determining the number of characters to change requires careful fine-tuning. In our experiments we chose the most conservative setting of perturbing just a single character at each step. The loss function is again naturally defined as L(z) := log p(y | z). Crucially, this perturbation approach allows the simulated annealing to explore inducing points from the entire input space, and not only from the training set. This makes it more flexible than greedy selection or the determinantal point process described in the following.

C. DETERMINANTAL POINT PROCESSES

A Determinantal Point Process (DPP) with kernel k is a distribution over subsets of observations [18]. Given a subset of points z ⊆ x with |z| = m as above, its probability is defined as

P(z) = det(K_zz) / det(K_xx + I),

where I is the identity matrix. Intuitively, the determinant of the Gram matrix K_zz represents the volume of the parallelepiped spanned by the features in z. Therefore, subsets of high probability have a large volume in feature space, which in turn implies diversity. In their recent work, [19] showed analytically that with O(log N) inducing points sampled from a DPP, the sparse GP is close to the full GP in KL-divergence. While this result only holds for squared-exponential kernels, we consider it to be a theoretical motivation for using DPPs for inducing point sampling. While the normalization constant, det(K_xx + I), is notably available in closed form, this is not relevant for our purposes since it still requires O(n³) operations to compute. Instead, we use fast MCMC-based sampling methods for our DPPs [20]. Note that this could in principle be extended to sampling from the whole input space, but it would make the normalization constant intractable and require further approximations. For simplicity, we thus resort to just sampling inducing points from the training set in this work (similar to the greedy approach above).

IV. STRING KERNELS

While our framework is fully general, in this work we focus our experiments on biological sequences, which are an important real-world discrete data domain. One key aspect of many biological sequences is translation invariance. In this section, we describe n-gram-based string kernels which are designed to exhibit this property and thus explicitly incorporate our biological prior knowledge. In cases where full translation invariance is not a desired property, a practitioner can choose from the vast literature on kernel methods to select a more appropriate prior for the GP. Specifically, in our work we use the spectrum kernel [21], which was designed for protein sequences and has also been successfully applied to other types of biological sequences [22]. There are existing applications which use string kernels in Gaussian Processes, but they use small datasets (n ≈ 280) where full Gaussian Process inference is viable [23]. In this work, we enable the extension of these methods to larger data sets through the use of sparse GP approximations.

Given an alphabet A, we denote the input domain of all strings of finite length as X = A*. The n-th order spectrum kernel is defined over this domain as

k_n(x, x') = ⟨Φ_n(x), Φ_n(x')⟩ = Σ_{a ∈ A^n} Φ_n^a(x) Φ_n^a(x'),

where the feature map entry Φ_n^a(x) is the number of times that the string a ∈ A^n appears as a substring in x. This is essentially a bag-of-n-grams model.
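A minimal counting-based sketch of this kernel is given below; as the next paragraph notes, the kernel can be evaluated this way without ever materializing the |A|^n-dimensional feature map.

```python
from collections import Counter

def spectrum_kernel(x, y, n=3):
    """n-th order spectrum kernel: dot product of n-gram count vectors."""
    cx = Counter(x[i:i + n] for i in range(len(x) - n + 1))
    cy = Counter(y[i:i + n] for i in range(len(y) - n + 1))
    # Only n-grams present in both strings contribute to the sum.
    return sum(cx[g] * cy[g] for g in cx.keys() & cy.keys())

# Example: two short DNA strings sharing the trimers CGT, GTA and TAC.
print(spectrum_kernel("ACGTACG", "CGTAC", n=3))   # -> 3
```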
While the set A^n might be prohibitively large, thus making the feature maps Φ_n(x) prohibitively high dimensional, it can easily be seen that we can compute k_n(·, ·) without having to represent Φ_n(x) explicitly. For two strings of arbitrary length, x ∈ A^{l(x)} and x' ∈ A^{l(x')}, the kernel can be rewritten as

k_n(x, x') = Σ_{i=1}^{l(x)−n+1} Σ_{j=1}^{l(x')−n+1} 1[ x_{i:i+n} = x'_{j:j+n} ],

where x_{i:i+n} denotes the substring of length n starting at position i. Computing this kernel naïvely has complexity O(l²), where without loss of generality l := l(x) ≥ l(x'). This can be further improved using suffix trees, resulting in a complexity of O(kl) with k < l [21].

These three components, namely a GP prior represented by the choice of the kernel function, an inducing point GP approximation, and finally a method for selecting inducing points from a discrete input space, provide a unified framework for supervised learning over discrete input spaces using GPs. This framework not only has good predictive performance but also provides superior uncertainty estimates, which we demonstrate in the following experiments. The main challenge within this framework remains the selection of inducing points from the discrete domains, which is why we will thoroughly assess the different techniques and their tradeoffs in the following.

V. EXPERIMENTS

We compared different inducing point optimization methods for sparse GPs on toy datasets in regression and classification settings. We then validated our framework's performance on two real-world DNA sequence datasets from the UCI repository [24] and the DREAM5 dataset [14]. We used support vector machines (SVMs) [25] with post hoc uncertainty calibration [26] as a competitive benchmark method for the classification tasks. We find that our sparse string GP framework performs well when compared to full string GPs in these diverse settings. Moreover, the inducing points selected by the algorithm align well with the natural intuition for inducing points on continuous domains. Our framework offers comparable predictive performance with SVMs but yields superior uncertainty calibration.

A. IMPLEMENTATION

If not otherwise noted, all GPs and SVMs use a spectrum kernel as implemented in Shogun [27], [28]. For fitting the GPs, we used the GPy framework [10]. For fitting the SVMs, we used the sklearn package [29]. Note that in the regression experiments, the GP posterior can be computed in closed form (as described in Sec. II), since the Gaussian likelihood is conjugate with the Gaussian prior. However, in the classification examples, we need to use a non-conjugate Bernoulli likelihood, and thus have to resort to approximate inference. In our experiments, we use expectation propagation for the approximate inference [7], [30], since it is fast and yields good performance. We use it with the GPy standard parameters. Alternatively, one could use Markov Chain Monte Carlo (MCMC) inference to get an even better (that is, asymptotically exact) approximation to the posterior. However, this would be computationally much more expensive and would likely outweigh all the computational benefits of our sparse approximation, which is why we have not tried this approach. Since the SVM does not natively output probabilities, we have to calibrate it in order to turn the SVM predictions into probabilities. The Calibrated SVM uses a technique called Platt scaling [26]. It performs a logistic regression on the SVM outputs and calibrates it using a cross-validation on the training data.
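For readers unfamiliar with Platt scaling, the snippet below shows the idea with sklearn's built-in calibration wrapper. It uses generic numeric toy data as a stand-in for the string-kernel SVM actually used in the paper, so it illustrates the calibration technique rather than the paper's exact pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

# Toy numeric data stands in for the string-kernel features used in the paper.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Platt scaling: a logistic regression fit on the SVM outputs via cross-validation.
clf = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=5)
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]   # calibrated class probabilities in [0, 1]
```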
B. PERFORMANCE EVALUATION

In order to assess the predictive performance of our GP models on regression and classification tasks, we use the mean squared error (MSE) and area under the precision-recall curve (AUPRC), respectively. We use calibration curves [31], [32] to assess uncertainty calibration. Moreover, we report the mean absolute deviation (AD) of the calibration curves from the diagonal. Note that a perfectly calibrated classifier would lie directly on the diagonal of the plot and hence yield an AD of zero.

C. INDUCING POINT OPTIMIZATION FOR REGRESSION AND CLASSIFICATION

We developed a simple toy experiment as a controlled setting for our initial comparisons. It is composed of 1000 strings of a 4-character alphabet (inspired by the DNA bases 'A', 'C', 'T', 'G'), each of which has length 100. We first generated a set of 100 strings of length 5, which we call the library. Each example in the toy dataset contains some copies of a particular element of the library, distributed uniformly at random through the sequence. The other characters are selected uniformly at random from the alphabet. The label for each example is the number of elements of the particular library sequence it contains. We think of this as a discrete dataset with 100 clusters of strings, corresponding to the elements of the library. This toy dataset challenges the inducing point methods to effectively summarize the data for the prediction task. We use 100 inducing points for all the experiments. While in general one should perform Bayesian model selection via the log marginal likelihood [7] to select the optimal kernel hyperparameter k, in this case we know the optimal value should be k = 5, since this is the size of the strings in the library.

Table 1: Performance comparison of different inducing point optimization methods and a full GP on a toy data regression task. Means and their standard errors are computed over 100 runs of the experiment. Note that the full GP is included as a gold standard, but is not considered in the actual comparison because it is not a scalable method.

In Tables 1 and 2, we compare a full GP model, sparse GPs with different inducing point selection methods, and SVMs. For the inducing point methods we used randomly chosen inducing points (random), greedy selection (greedy), simulated annealing (SA), and DPP (DPP) methods. For the greedy selection, we experimented with performing it on random subsets to improve performance. Thus, greedy_10percent corresponds to performing a greedy selection on a uniformly random selection of 10% of the training data. For the DPP, we experimented with different numbers of MCMC steps. Thus dpp_10percent corresponds to taking 10% of the recommended nm steps until complete mixing.

Table 3: Comparison of training and inference time complexity for full GPs and the different sparse GPs. n is the number of training points, m the number of inducing points, s is the subset size for the greedy subset selection, and k the number of iterations for the simulated annealing.

Our results confirm the intuition that the sparse GP approximations cannot match the performance of the full GP, neither with respect to log-likelihood nor with respect to predictive performance. However, our results also demonstrate that inducing point selection is crucial in improving the performance of the sparse GPs. With careful selection of inducing points, either greedily or via simulated annealing, the sparse models can approach the performance of the full model.
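As a reference for how the best-performing selection method operates, here is a compact sketch of the simulated annealing procedure from Section III-B. The function log_marginal_likelihood is a placeholder for log p(y | z) computed under the sparse GP (not implemented here), and the schedule parameters are illustrative rather than the values used in the experiments.

```python
import math
import random

def anneal_inducing_points(z0, alphabet, log_marginal_likelihood,
                           n_iters=5000, T0=1.0, decay=0.999, seed=0):
    """Simulated annealing over a set of inducing strings (sketch of Sec. III-B).

    z0: initial list of inducing strings; alphabet: e.g. list("ACGT").
    log_marginal_likelihood: assumed callable mapping a list of strings to
    log p(y | z); here we minimize its negative as the annealing loss.
    """
    rng = random.Random(seed)
    z = list(z0)
    cur = -log_marginal_likelihood(z)
    best, best_loss, T = list(z), cur, T0
    for _ in range(n_iters):
        cand = list(z)
        i = rng.randrange(len(cand))                     # pick one inducing string
        s = list(cand[i])
        s[rng.randrange(len(s))] = rng.choice(alphabet)  # perturb a single character
        cand[i] = "".join(s)
        cand_loss = -log_marginal_likelihood(cand)
        E = cand_loss - cur                              # positive if the proposal is worse
        if E < 0 or rng.random() < math.exp(-E / T):     # Metropolis acceptance rule
            z, cur = cand, cand_loss
            if cur < best_loss:
                best, best_loss = list(z), cur
        T *= decay                                       # exponential temperature decay
    return best
```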
Moreover, there is a clear tradeoff between runtime and performance of the methods. The methods with the longest runtime, particularly the simulated annealing, achieve the best results among the sparse GP models, while for instance the DPP model offers a much more attractive runtime but slightly lower performance. It therefore depends on the practitioner's priorities whether runtime or performance is the more important selection criterion. This also suggests that the ability of the simulated annealing to select inducing points from outside the training set can be beneficial in this application.

The inducing points chosen in both regression and classification settings follow a natural intuition. In the regression task, the model has to count the number of library elements equally well across all parts of the space. In the classification task, a more precise count close to the decision boundary is crucial for minimizing classification errors. Figure 2 clearly demonstrates this behavior. This experiment shows that our sparse GP framework approaches the predictive performance of a full GP, while also outperforming baseline SVM methods. We also find that inducing point selection in discrete string space follows our general intuition for inducing point selection in continuous spaces.

There is a natural tradeoff between the fast but suboptimal performance of random inducing point selection and the slow but superior performance of greedy selection. We explored this tradeoff by restricting the greedy inducing point selection over the entire dataset to a randomly sampled subset of the data. We then varied the size of this random subset. In the case when the random subset is the same size as the original dataset, one recovers the greedy algorithm. See Table 3 for a summary of the time complexities of all methods discussed. The results are depicted in Figure 3. There is a clear tradeoff between performance and runtime, both of which increase as the subset size increases. However, the runtime grows linearly with the subset size (as expected), while the likelihood converges. We expect that finding the optimal subset size will highly depend on the application, both in terms of specific properties of the dataset and the computational resources available to the practitioner.

D. REAL WORLD DNA SEQUENCE DATA

To validate our models on real-world data, we performed classification on the UCI splicing dataset [13]. Moreover, to demonstrate the scalability of these methods, we performed regression on the DREAM5 dataset [14]. As with our experiments on synthetic data, we aim to compare the predictive performance and uncertainty calibration against SVMs for classification. The splicing dataset contains 3,190 sequences of 60 nucleotides each which have to be classified into splicing and non-splicing sites. We applied our methods to the pTH2427 transcription factor of the DREAM5 dataset, which contains a total of 32,896 short sequences of length 8. On the DREAM5 dataset, we performed a 70-30% train-test split. We compared a kernel SVM against a full GP and our sparse GPs with inducing points selected greedily and by simulated annealing. Note that the splicing data as well as the DREAM5 data are too large for feasible full GP inference. To provide a fair comparison, we report inference times of our sparse GP in comparison with the full GP on a randomly selected subset of the splicing data in Table 6.

Table 4: Comparison between greedy and DPP methods on the DREAM5 dataset. The kernel matrix for the DPP was normalized as k(x, y)/√(k(x, x) k(y, y)). The greedy method was run on uniform random subsets of size 20 of the training data.

Table 5: Overview comparison of the related models (SVM [25], Full GP [23], [33], Variational GP [34], Greedy GP [17], DPP-GP [19], and SA-GP (ours)) in terms of uncertainty estimation, scalability, applicability to discrete domains, and inducing point selection.
It can be seen that our sparse GP speeds up inference by more than one order of magnitude while still yielding comparable predictive performance (cf. Tab. 3). The sparse GPs use 50 inducing points on the splicing data. The order of the spectrum kernel was chosen to be k = 3 by all GPs through log marginal likelihood optimization. The performance of the methods in terms of area under the precision-recall curve (AUPRC) and calibration is measured by a 2000/1190 train-test split on the splicing data. Results are reported in Figure 4. For the DREAM5 dataset (Tab. 4) we used a kernel of size k = 3. This was motivated both by biological considerations of the size of a codon and also by the fact that the sequences are short, only 8 characters. We found that when set to the right random subset size (in this case 20), greedy selection could outperform the DPP in both runtime and performance. If we compare the calibration of the different methods on the classification task, it can be seen that the various sparse GP models and the calibrated SVM are comparable in terms of calibration and predictive performance (Fig. 4). The calibration ranking among the GPs is analogous to the one for the log-likelihoods, that is, the sparse GP with inducing points optimized by simulated annealing ranks second, and the one with greedily selected points third.

These experiments show that our framework yields comparable performance with full GP inference as well as kernel SVMs on real-world DNA sequence classification tasks. Moreover, it scales to larger datasets where full GP inference is computationally infeasible. It also shows that greedy selection can perform better than the theoretically motivated DPP sampling, while simulated annealing generally performs best, possibly due to its more flexible inducing point selection from outside the training set.

VI. RELATED WORK

a: Sparse Gaussian processes

This work builds upon the rich literature on inducing point methods for Gaussian Processes (see [8] and references therein). Recent work in this domain has utilized variational approximations [34] and certain geometrical structures [11]. Furthermore, it has been proposed to use spherical harmonic features [35], orthogonal inducing points [36], and doubly sparse GPs [37]. Unfortunately, all these advances are limited to continuous input spaces, so we are forced to resort to more conventional inducing point methods in this work.

b: Gaussian processes on discrete domains

Many kernels have been devised to work well on discrete domains, for instance, on strings [21] or graphs [38]. These have been used successfully in combination with SVMs or similar linear models for problems in biology [5], [22], chemistry [39], [40], and natural language processing [41]. Using discrete kernels in GPs is a relatively unexplored area, possibly due to the difficulties in hyper-parameter optimization and inducing point selection. Discrete kernels have been used on graphs [42] and strings [43] (also for biological problems [23]), but so far only on relatively small datasets with full GPs. In parallel work, [33] study GPs on strings for Bayesian optimization, but their problems are small and they do not use inducing points or any other scalability approach.
c: Discrete inducing point selection While the greedy inducing point selection approach has already been proposed in early work on GP regression [16], [44], [45], these works have not particularly assessed its performance on discrete GPs. To the best of our knowledge, the only work that previously studied discrete sparse GPs are [17], who also use a greedy approach, although with greedy swapping between the inducing point set and the training set, instead of greedy forward selection. We are the first to additionally study DPPs for discrete inducing point selection, as well as simulated annealing, thus enabling the use of inducing points that are not included in the training set. For an overview comparison of our proposed simulated annealing approach with the other related models in terms of uncertainty estimation, scalability, applicability to discrete domains, and selection of the inducing points, see Table 5. VII. CONCLUSION AND FUTURE WORK In this work, we explore different inducing point selection techniques for sparse Gaussian Processes on discrete domains. We found empirically that our proposed method using simulated annealing gives the best overall predictive performance and uncertainty estimates. This method also yields favorable runtimes on larger datasets and more non-standard likelihoods. Moreover, we showed that our models perform competitively with SVMs on toy data as well as real-world DNA sequence data in terms of predictive performance, while offering better calibrated uncertainty estimates in some settings. There are many directions for future work. First, developing a closer integration between discrete optimization and the marginal likelihood of the GP would improve both the approximation quality as well as the runtime of the inducing point algorithm. An orthogonal direction is a fully Bayesian treatment of the string kernel hyperparameter k, namely treating k as a random variable with a prior and performing inference on it. Finally, we would like to see existing GP software packages extend their abstractions to discrete kernels with hyperparameters that are not differentiable. In conclusion, we would advise practitioners to use our framework on discrete problems where datasets are too large for full Gaussian Process models but uncertainty estimates are still desirable. Furthermore, in cases where likelihoods other than Gaussian or Bernoulli are required, standard regression and classification techniques are inapplicable, whereas our framework provides a principled and flexible solution.
6,844.6
2018-10-24T00:00:00.000
[ "Computer Science", "Mathematics" ]
Potential of Silver Nanoparticles Synthesized from Ficus sycomorus Linn Against Multidrug Resistant Shigella species Isolated from Clinical Specimens The plant Ficus sycomorus is known traditionally for its medicinal properties, while Shigella species are known for their resistance to orthodox medicine; hence the synthesis of silver nanoparticles from Ficus sycomorus. This study was carried out to investigate the anti-shigellosis potential of silver nanoparticles synthesized from Ficus sycomorus Linn stem bark aqueous extract against multidrug-resistant (MDR) Shigella species isolated from clinical specimens collected from patients attending Yobe State Specialist Hospital Damaturu, Nigeria. A total of 400 diarrhoeagenic stool specimens were screened for Shigella species, and the antibiotic susceptibility patterns of the isolates were determined using standard methods. Phytochemical constituents of the Ficus sycomorus extract were used to synthesize silver nanoparticles using a green synthesis approach. The nanoparticles were analysed for transmittance, functional groups, sizes and shapes using UV-Vis spectroscopy, FTIR and scanning electron microscopy (SEM), and were tested for antibacterial activity against the MDR Shigella isolates. There was no significant difference in Shigella recovery in relation to patient gender (P > 0.05). The 0-10 years age group was the most affected, 40% (36), followed by the >30 years group (21). Shigella isolates were found to be sensitive to ciprofloxacin (92%), augmentin (87%), cefuroxime (85%) and streptomycin (83.5%), while resistance was most frequent against nalidixic acid (48%) and tetracycline (27%). Phytochemicals detected included saponins, flavonoids, alkaloids, cardiac glycosides and tannins. UV-Vis spectra showed broad peaks around 460 nm, FTIR showed C-H stretching of alkanes and hydroxyl group vibrations, and SEM showed nanoparticles with a wide range of shapes and sizes. The anti-Shigella activity of the silver nanoparticles, with zones of inhibition between 10 mm and 30 mm, was higher than that of the crude aqueous extract and the AgNO3 solution against the MDR Shigella species, indicating enhanced activity. The high prevalence of shigellosis among children in this study indicates that improved hygiene is needed for children in the area and that detailed examination is required for the treatment of diarrhoea in adults. Ciprofloxacin and amoxicillin-clavulanate remain useful, while nalidixic acid should be used only where culture and sensitivity results are available. Enhanced traditional medicine should be given priority because of its potential. This study has demonstrated the feasibility of green-synthesized silver nanoparticles from F. sycomorus as a potent anti-shigellosis agent to combat the global burden of the disease. This is the first study of stem bark aqueous extracts of F. sycomorus against Shigella species in the area.

Introduction

Shigellosis is a major public health problem responsible for high morbidity and mortality among children aged less than 5 years in tropical countries. It is an acute diarrhoeal disease caused by Shigella species, Gram-negative bacteria belonging to the family Enterobacteriaceae, with four species: Shigella dysenteriae (serogroup A), Shigella flexneri (serogroup B), Shigella boydii (serogroup C) and Shigella sonnei (serogroup D) [1]. Shigella infection appears to be more frequent among adult populations, via direct faeco-oral transmission either through accidental ingestion of stool-contaminated food or through direct oral-anal contact [2].
Shigella has essentially no animal reservoir, which makes in-vivo studies and vaccine development difficult. In developed countries it is a paediatric disease, but it is widespread among both children and adults in developing countries. The disease may also spread widely in wartime and during natural disasters [3]. Its global distribution is associated with over-crowding and poor hygienic conditions. It can be spread by flies, fingers, food and faeces, and it forms part of the gay bowel syndrome. Shigella sonnei predominates in the north of the USA and Shigella flexneri in the south; in India, Shigella flexneri has been the predominant species, followed by S. dysenteriae and S. sonnei in temperate regions [4]. Predisposing factors increasing the risk of shigellosis in Nigeria and other developing countries include feeding habits, literacy, occupation and hygiene, among others [5]. The high cost of antibiotics, the presence of counterfeit drugs readily hawked in the city, and the expiration and improper storage of drugs have contributed to improper drug usage, leading to multiple drug resistance. Antibiotic treatment is mostly recommended for younger or older patients, malnourished children, patients infected with HIV, food handlers, health care workers and children in day care centres. The resistance mechanisms therefore depend on which specific pathways are inhibited by the drugs and on the alternative ways available for those pathways that the organisms can modify in order to survive [6]. Despite the need to combat antibiotic-resistant strains, relatively few Shigella-infecting bacteriophages have been described, which poses therapeutic challenges [7]. Medicinal plants have a great positive impact on the treatment of gastroenteritis and other infectious diseases caused by bacteria [8]. Nowadays, they are widely used in conventional as well as alternative medical practice, not only in developing countries like Nigeria but also in developed countries as complementary medicine [9]. Ficus sycomorus Linn (Moraceae), Farin Baure in Hausa, Ibbi in Fulfulde, Sycamore Fig in English, is a large, semi-deciduous spreading savanna tree, up to 21 (max. 46) m, occasionally buttressed. Its bark slash is pale pink with heavy latex flow. Leaves are broadly ovate and obtuse with a rough surface [10]. In vitro antimicrobial screening of the methanolic stem bark extract of F. sycomorus revealed that the extract exhibited varying activity against Enterococcus faecalis, E. coli, S. typhi, Shigella dysenteriae and Candida albicans [11]. Silver and its compounds have been used since ancient times for the treatment of bacterial and wound infections, especially in patients with severe burns [12]. Silver nanoparticles are particles of silver between 1 nm and 100 nm in size. The nanoparticles are mostly spherical, but diamond-shaped, octagonal and thin-sheet forms are also common [13]. Biosynthesis of nanoparticles is an important area of nanotechnology that is economical and eco-friendly. Silver nanoparticles are promising as antibacterial agents against both Gram-positive and Gram-negative bacteria [14]. They have attracted interest due to their small size and unusual physical, chemical and biological properties [15]. They have potent antimicrobial and antioxidant activities and potential biomedical and industrial applications [16]. It has been reported that they have advantages as drug carriers [17]. In terms of antimicrobial activity, silver nanoparticles initially attach to the surface of the bacterial membrane and then penetrate into the bacteria.
After penetration, they inactivate microbial enzymes and generate H2O2, causing bacterial death. The green synthesis of silver nanoparticles suggests their use in medical devices as an antimicrobial coating [18]. The antimicrobial activity of AgNPs may also be due to either (i) the formation of pores in the cell membrane, which ultimately leads to leakage of cellular content, or (ii) silver ions penetrating through ion channels without damaging the cell membrane, instead denaturing the ribosome and inhibiting the expression of enzymes and thiol-containing proteins essential for the production of ATP, resulting in cell death, as argued by [19]. AgNPs synthesized from various plants, including Ficus sycomorus, have shown efficient antimicrobial activity against pathogenic bacteria [20]. Damaturu is the capital of Yobe State, North-Eastern Nigeria, an area affected by insurgency; there are many internally displaced persons (IDP) camps across the city, and the Specialist Hospital has been the hospital of choice for both IDPs and residents because of the state government's free drugs initiative, subsidies on other services rendered by the hospital, and Victim Support Funds. The health facility has been overwhelmed with diarrhoea, dysentery and other diseases associated with poverty, war, internal displacement, poor sanitation, poor personal hygiene and shortage of water supplies. This necessitated a prospective study to determine the prevalence and antimicrobial profiles of Shigella species isolated from the diarrhoeal stool of patients presenting for care at Yobe State Specialist Hospital Damaturu, and the potential of using silver-nanoparticle-enhanced Ficus sycomorus to combat the multidrug-resistant strains isolated between April 2019 and October 2019. This should provide information for designing treatment guidelines for shigellosis appropriate to the area. In addition, the study adds to the existing epidemiological literature on the resistance patterns of Shigella isolates of public health importance and on the feasibility of using and enhancing natural products against shigellosis in the area. Study Area and Population The study was carried out in Yobe State Specialist Hospital, Damaturu, North-East Nigeria, located at approximately 12°00'N, 11°30'E, with an area of 45,502 km² and an estimated population of 2,757,000. Specimen Collection A total of 400 faecal specimens were collected non-invasively from diarrhoea/dysentery patients of all ages and sexes in the study area. Sterilized sample containers were provided for the collection of stool samples. Isolation, Identification and Characterization of Shigella species The samples were processed the same day for the isolation of Shigella species. Faecal specimens were processed according to published methods [21]. One loopful of faecal sample was streaked on MacConkey-Lactose Agar (MLA) and Xylose-Lysine Deoxycholate Agar (XLD) and incubated at 37°C for 24 hours. MLA plates showing convex, colourless colonies and XLD plates showing translucent or red colonies were considered for further identification. Suspected colonies were re-streaked on other selective media, i.e. Hektoen Enteric Agar (HEA), Salmonella-Shigella Agar (SSA) and Deoxycholate Citrate Agar (DCA), as described by [22]. The culture plates were examined for typical morphological characteristics of Shigella species.
Biochemical Characterization of Shigella species Colonies showing a characteristic appearance on the selective media were sub-cultured on Kligler Iron Agar (KIA) and Triple Sugar Iron agar (TSI). Oxidase, urease, indole, citrate and motility tests were conducted for further identification as described by [22]. Green Synthesis and Characterization of Nano-scale Silver Particles from Stem Bark Extracts of F. sycomorus Nano-scale silver particles were biosynthesized using the aqueous stem bark extract of F. sycomorus as the reducing agent [25]. With constant stirring, 50 ml of AgNO3 solution (1 mM) was added drop-wise to the aqueous extract of F. sycomorus stem bark at 50-60°C for the reduction of Ag+. The solution was incubated in the dark at 37°C until use. A control solution (without extract) was incubated under the same conditions [26]. UV-vis Spectral Analysis This was carried out on the nanoparticles of the stem bark extract of F. sycomorus by measuring the optical density (OD) using a UV-2401 spectrophotometer (India). Measurements were performed between 200 and 800 nm with a resolution of 1 nm and a scanning speed of 300 nm/min. The reduction of Ag+ was monitored by measuring the UV-vis spectrum of a 1 ml aliquot of the sample in 2 ml de-ionized water in a quartz cell. Silver nitrate (1 mM) was used as a blank to adjust the baseline [14]. Fourier Transform Infra-Red (FT-IR) Analysis This analysis was carried out on the powdered AgNP sample using an IRAffinity-1S spectrometer (Buck Scientific-530) and a Perkin-Elmer spectrophotometer. The AgNP solution was centrifuged at 10,000 rpm for 20 minutes. The solid residue obtained was dried at room temperature, and the resulting powder was used for the FTIR measurement. Scanning Electron Microscopy Scanning Electron Microscopy (SEM) was performed with a Phenom Pro-X 800-07334 operated at 25 kV. SEM images were taken by coating a drop of the silver nanoparticles of F. sycomorus onto a carbon-coated copper grid and allowing it to evaporate, while held in a sample holder, before scanning [27]. Test for Antibacterial Efficacy of the Extracts This test was performed using the disc diffusion method [24]. A McFarland 0.5 standard inoculum was prepared [28]. Filter paper discs (Whatman No. 1) of 6 mm were prepared and sterilized. The discs were impregnated with 100 µl of AgNPs from a 100 mg/ml dilution reconstituted in a minimum amount of solvent, applied to culture plates previously seeded with a 10^6 cfu/ml culture of Shigella, and incubated at 37°C for 18 hours. The same procedure was repeated for the crude extract and AgNO3 [29]. After the incubation period, the zone of inhibition was measured as an indicator of antibacterial activity, comparing the stem bark crude extract, its AgNPs and AgNO3. Characteristics of Shigella Isolates from Stool Samples The age distribution data revealed that Shigella species were isolated from 85 cases, with the highest frequency, 40% (34), recorded in the age group 0-10, followed by 22.2% (19) in the age group 11-20 and 14% in the next age group. Most of the isolates, 52% (44), were from male patients. The frequency of Shigella isolation in this study (22.4%) is comparable to the 19.72% reported in Iran, but slightly higher than the 13.5% reported in Lagos, Nigeria [30,31]. Isolation frequencies of 11.6% in diarrhoea patients from Bangladesh, 8.56% from Addis Ababa, Ethiopia, and 8.0% in Maiduguri, Nigeria have been reported by other studies [32,33,34]. The value obtained in this study was, however, lower than that reported in Kano, Nigeria [35].
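A minimal sketch (not part of the study) of how the reported gender comparison could be tested with a chi-square test of association is given below. The 2x2 counts are illustrative only: the study reports 44 of 85 isolates from males, but the male/female split of all 400 specimens is not given, so the totals used here are assumed placeholders.
```python
# Illustrative chi-square test of association between patient gender and
# Shigella recovery. Counts are ASSUMED placeholders, not the study's raw data.
from scipy.stats import chi2_contingency

#           isolated, not isolated
table = [[44, 156],   # male (assumed 200 male specimens)
         [41, 159]]   # female (assumed 200 female specimens)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
# p > 0.05 would indicate no significant association between gender and recovery.
```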
The high prevalence of Shigella in the present study indicates a low level of hygiene among the patients. Shigellosis was found to be more prevalent among children under 11 years of age, contrary to the cases reported by Andualem in Addis Ababa, Ethiopia, where the highest prevalence was in the 15-35 years age group, and to the claim that Shigella is more common among MSM (men who have sex with men) in California [2]. There was a slight difference between the numbers of male and female patients, but it was reported as not statistically significant (P = 0.026 at the P < 0.05 level), suggesting no statistical difference in the recovery rate of Shigella between male and female respondents. Antibiotic Susceptibility Pattern of Shigella species Isolated from Stool Samples Although shigellosis can be self-limiting with oral rehydration and supportive care, antibiotics are needed to reduce the severity of the infection, the duration of illness and the shedding of the etiologic agent. The choice of antimicrobial in developing countries is determined by the availability of the drugs, their cost, and the pattern of resistance in the area. The highest overall resistance recorded in this study was to Nalidixic acid (48.2%), followed by Tetracycline (27.1%) and Ampicillin (24.7%). Ciprofloxacin (90.6%), on the other hand, showed the highest sensitivity among the isolates, followed by Amoxicillin-Clavulanate (Augmentin) (87.1%) and Cefuroxime (84.7%). No antibiotic showed total resistance, and only limited total sensitivity was observed among those used in this study. This is contrary to reports in some previous studies of overall resistance to ampicillin of 100% and total sensitivity to ciprofloxacin and ofloxacin; it is, however, worrisome that resistance to two or more antibiotics occurred at a very high rate, as the findings revealed that 95.1% of Shigella spp. isolated from patients showed an MDR phenotype [33,34,30]. Resistance to Ampicillin, a common antibiotic previously used for bloody diarrhoea in this area, was low at 24.7%, as reported in a previous study [34]. However, high resistance to Ampicillin (78.0%) was reported by the National Antimicrobial Resistance Monitoring System in the United States and in other studies [20]. The high resistance to Nalidixic acid (48.2%) in this study broadly agrees with the value (31%) reported in Iran [30]. Worldwide, over the last decade, there has been an increased proportion of isolates resistant to drugs that were previously effective against Shigella, such as Ampicillin, Chloramphenicol, Cotrimoxazole and Tetracycline. These antibiotics should therefore no longer be considered appropriate empirical therapy without culture and sensitivity testing [30]. Phytochemical Constituents of F. sycomorus Stem Bark Extracts The phytochemical constituents identified in the methanol and aqueous stem bark extracts (Table 3) include carbohydrates, saponins, cardiac glycosides, flavonoids, tannins and anthraquinones. Steroids were absent in both extracts. This agrees with previous findings [36][37][38][39]. In another study using root bark extract of the same plant, reducing sugar was also found [40]. Phytochemical analysis of Ficus sycomorus has also revealed the presence of alkaloids, tannins, saponins, flavonoids and steroids in aqueous extracts of both the leaves and the fruits [41]. These classes of compounds are known to be biologically active and are associated with the antimicrobial activities of Ficus sycomorus [42,43]. Alkaloids have been associated with medicinal applications in plants, among which is their toxicity against cells of foreign organisms.
Synthesis and Characteristics of the Silver Nanoparticles The clear, colourless silver nitrate solution changed to a clear deep brown immediately and remained deep brown over time. The change in colour to deep brown with time is due to the excitation of surface plasmon resonance and the collective oscillation of conduction electrons within the metal nanoparticles. Surface plasmon resonance enables the scattering and absorption of light at a particular frequency, giving the particles their colour [44]. The UV-vis absorbance of the AgNPs from the F. sycomorus aqueous stem bark extract showed a broad surface plasmon resonance peak around 460 nm (Figure 1). The highest peak in this study falls within the same range as those observed by previous scholars at 450, 451, 457 and 460 nm [45,27,46,47]. FT-IR Characteristics of the Synthesized Silver Nanoparticles The AgNPs obtained from the stem bark extract of Ficus sycomorus showed prominent absorbance bands around 3379.45, 2941.74 and 1633.62 cm⁻¹, neglecting the fingerprint region (Figure 2). The bands denote stretching vibrations arising from compounds such as flavonoids and other phyto-compounds present in the crude extract [23]. The broad, medium absorbance around 3379.45 cm⁻¹ resembles the N-H stretch of an amine, which can be specific to the stretching vibration of a primary amine [48]. Weak C-H stretching of an aldehyde or of the hydroxyl group of an alkane was observed around 2941.69 cm⁻¹, and the band observed around 1633.62 cm⁻¹ in the FTIR spectrogram indicates C=C stretching; bands at 1631-1633 cm⁻¹ indicate the C=C stretch of alkenes or the C=O stretch of amides found in alkaloids, falling within the reported range [46]. A similar set of IR bands was observed for Ag nanoparticles of the same plant, with a minor shift in the absorption bands. The minor shift may be due to the interaction of the leaf extract with the AgNPs, which changed the original transmittance level of the extract. It can be concluded that the water-soluble compounds in the crude extract are responsible for capping and efficient stabilization of the nanoparticles. Electron Microscopy The morphology of the synthesized AgNPs was determined by SEM (Phenom Pro-X 800-07334), which showed different morphologies. The resulting nanoparticles included uniform spherical particles in the nanometre size range. SEM images also showed the presence of nano-triangles and other morphologies in the same sample. Other nanoparticles of a similar range of sizes and shapes have been reported using aqueous extracts from Buchholzia coreacea, Sargassum insisifolium and various other plant species [47][48][49]. Nanoparticles from Ficus sycomorus leaf and latex extracts have been explored [14] and characterized as ellipsoidal, spherical and irregular in shape using TEM and EDX. Anti-Shigella Activities of the AgNPs This study showed that the silver nanoparticles synthesized from the F. sycomorus stem bark extract exhibited enhanced potency, with large zones of inhibition between 10 mm and 30 mm, compared with the activities of both the crude stem bark extract and AgNO3. The bacterial isolates were multidrug-resistant strains of Shigella species showing resistance to two or more commercially available antibiotics. The zones of inhibition produced ranged from less than 10 mm to 24 mm for the F.
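As a minimal sketch (not part of the study), the surface plasmon resonance peak can be located programmatically from the exported UV-vis scan. The two-column file name and format below are hypothetical assumptions.
```python
# Illustrative SPR peak detection from a UV-vis scan exported as a hypothetical
# CSV file with columns (wavelength_nm, absorbance), covering the 200-800 nm scan.
import numpy as np

data = np.loadtxt("uvvis_agnp.csv", delimiter=",", skiprows=1)
wavelength, absorbance = data[:, 0], data[:, 1]

# Restrict to the visible region where the AgNP plasmon band is expected.
mask = (wavelength >= 350) & (wavelength <= 600)
peak_idx = np.argmax(absorbance[mask])
peak_wl = wavelength[mask][peak_idx]

print(f"SPR peak at ~{peak_wl:.0f} nm, A = {absorbance[mask][peak_idx]:.3f}")
# A broad maximum near 460 nm would be consistent with the spectrum in Figure 1.
```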
sycomorus crude extract, with the highest percentage (77.6%) of inhibition zones below 10 mm; from less than 10 mm to 29 mm for silver nitrate, with the highest percentage (60.3%) of zones between 10 mm and 14 mm; and from above 10 mm to greater than 29 mm for the silver nanoparticles synthesized from the stem bark extract of Ficus sycomorus, with the highest proportion (46%) of zones between 15 mm and 19 mm, although inhibition zones as large as 30 mm were also observed. These results fall within the same range as those reported by previous studies [43,47,50,51,46,14,52]. Zones of inhibition of 15-17 mm on E. coli for F. sycomorus fruit and leaf extracts, zones of 7-19 mm for AgNPs biosynthesized from the leaf and latex of Ficus sycomorus against some bacterial strains, and a 32 mm inhibition zone against Shigella sonnei and Salmonella typhi using NPs synthesized from Bacillus spp. have been reported [43,14,48]. NPs synthesized from colanut exhibited zones of inhibition between 12 mm and 15 mm [47,46]. Ciprofloxacin- and Streptomycin-resistant bacteria have been reported to be sensitive to silver nanoparticles synthesized from Kapparillaka leaves (16-21 mm) [53], and to those from Ficus sycomorus leaves (12-24 mm) and Nelumbor nucifera leaves (around 16 mm) [29,51]. However, the present study has shown more potent activity of AgNPs against multidrug-resistant Shigella, thereby demonstrating the efficacy of the particles, which can be applied in biomedical applications to combat drug-resistant bacterial infections in this era of emerging and re-emerging infectious diseases. Conclusion It was found that the isolation of Shigella is most frequent among children (40%) and slightly more frequent among males (52%) than females, although the difference was statistically insignificant at the 95% confidence level. The Shigella species isolated showed considerable resistance to Nalidixic acid and sensitivity to Augmentin and Ciprofloxacin, which supports the continued use of the latter drugs in the treatment of shigellosis owing to their low resistance rates. Nalidixic acid can be excluded from the empirical drugs used for treating shigellosis in this area because of its high rate of resistance. The silver nanoparticles produced showed a maximum absorption band around 460 nm; FT-IR showed the N-H stretch of an amine, a weak C-H stretch of an aldehyde or of the hydroxyl group of alkanes, and the C=C stretch of alkenes or C=O stretch of amides; and SEM showed various morphologies. The anti-Shigella activity of the silver nanoparticles produced from the stem bark extract of F. sycomorus showed commendable zones of inhibition against the MDR Shigella species compared with its crude extracts and silver nitrate solutions. Biosynthesized AgNPs are very promising as an anti-shigellosis agent, suggesting the possibility of enhancing local herbs using nano-biotechnology models that are not only environmentally friendly but also economical.
4,884.6
2020-08-26T00:00:00.000
[ "Medicine", "Environmental Science", "Materials Science" ]
Investigating the effects of wind loading on three dimensional tree models using numerical simulation with implications for urban design In this study, the effects of wind on an Eastern Red Cedar were investigated using numerical simulations. Two different tree models were proposed, each with varying bole lengths and canopy diameters. A total of 18 cases were examined, including different canopy diameters, bole lengths, and wind velocities. Using computational fluid dynamics (CFD) methods, the drag force, deformation, and stress of the tree models were calculated under different wind velocities and geometric parameters. A one-way fluid–structure interaction (FSI) method was used to solve the deformation of the tree. Additionally, velocity and pressure distributions around the tree were obtained. The results indicate that wind velocity and the geometric parameters of the tree have a significant impact on deformation, drag force, and stress. As wind velocity increases from 15 to 25 m/s, the force on the tree increases substantially. The results also show that the diameter of the canopy has a bigger effect on stress and strain than the bole length. This study provides insights into tree behavior under wind loading for urban planning and design, informing optimal tree selection and placement for windbreak effectiveness and comfortable environments. Trees also mitigate soil erosion. Soil erosion induces loss of soil nutrients and water, water pollution, and global change 28 . Soil degradation is one of the most serious environmental issues in the world. In semiarid Mediterranean regions, the dry climate leads to a low level of plant cover, which, in turn, leads to poor soil structure development 29 . In China, there are about 3.3 million km² of desertified lands caused by wind erosion 30,31 . In combination with field tests, numerical models can give greater insight into the wind-induced drag acting on trees under different scenarios. Recent improvements in computational fluid dynamics (CFD), together with experimental and field studies, have addressed the complex and dynamic wind–tree interaction in order to estimate the force that a tree can withstand in a given area and to evaluate the tree's stability and the hazard of tree failure. Since the 1950s, researchers have examined the optimization of vegetative windbreaks and found that efficiency is determined by numerous contributing factors. Windbreak height is the major controlling factor, and the length of a windbreak ought to be at least ten times its height 32 . Moreover, windbreaks perpendicular to the approaching wind were found to be more effective 33 . A windbreak's width (number of rows) can also affect its effectiveness and is additionally critical for shielding 34 . Last but not least, the geometry and density of trees are key factors 35 . Because of their wide range of agricultural applications, dense canopies have been the focus of many recent transport process studies [36][37][38][39] . Dense canopies are characterized by trees whose heights are much greater than the spacing between individual plants. Pietri et al. 40 conducted experiments on dense and sparse canopies to investigate the effect of canopy density on turbulence characteristics within and above canopies. In addition to dense tree canopies, recent research has focused on windbreaks and forest clearings 41,42 . In sparse canopies, the horizontal spacing between plants is greater than the plant height.
These types of canopies are useful for erosion control and shelter. Many studies have been carried out to better understand windbreaks and their effects on atmospheric surface-layer flow fields. Speckart and Pardyjak 43 developed and implemented models for mean and fluctuating velocities around a windbreak in a simple, empirically based CFD code. Mayaud et al. 44 investigated the effects of a single tree, a grass clump, and a shrub on turbulent wind flow and discovered that wind velocity can be reduced by up to 70% in the lee of vegetation. Leenders et al. 45 investigated wind velocity patterns and wind-induced soil erosion in the vicinity of five different types of vegetation. Their findings revealed that wind velocity was reduced close to the soil surface for shrubs but increased around the trunk for trees. Numerical studies of sparse canopies have used different methods, including Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy Simulation (LES) solvers. Numerous studies have also been carried out on numerical wind flow predictions. Fully resolved geometric models were regarded as inefficient for numerical flow computations because complete descriptions of the geometries of twigs and leaves in tree crowns require a significant amount of computer time. To investigate how the urban canopy layer contributes to the development of a nighttime urban boundary layer, Uno et al. 46 devised a second-order turbulence model. Hiraoka 47 used ensemble-averaging and spatial-averaging approaches based on the eddy-viscosity concept to simulate flows in plant and urban canopies. In order to properly depict the crown penetration characteristics, Shaw and Schumann 48 suggested adding appropriate source terms to the momentum equations used in flow simulations of the spatial sections of tree crowns. The newly introduced drag terms resulted from the combined effects of the drag coefficient, leaf density, and the combined velocities. A 2-D numerical model was also used by Wilson and Flesch 49 to simulate flow fields in forests. The predicted wind velocity profiles were in good agreement with the in-field results. Wei et al. 50 studied the effect of meteorological parameters in winter and summer on human thermal comfort in different landscapes of an urban park in China. They examined several factors and concluded that in summer the most comfortable type of landscape space is woodland. Wang et al. 51 showed a correlation between green space and improvements in adult health. In a similar study, Liu et al. 52 investigated the impact of environmental parameters such as trees on the mental and physical health of residents. The drag force, deformation, and stress of a three-dimensional T-shaped flexible beam were investigated numerically and experimentally by Malazi et al. 53 . They used a two-way fluid-structure interaction (FSI) numerical method for all simulations. A system coupling was employed to connect the fluid and solid domains. Furthermore, an open channel with a high-quality camera was used for the measurement of the deformation of the T-shaped flexible beam. Good agreement was obtained between the numerical and experimental methods. In this study, a systematic investigation of windbreaks using CFD simulations has been conducted, and the Eastern Red Cedar was selected as a model tree to evaluate various parameters. The study begins by introducing the numerical simulation methods, followed by a discussion of the analysis results and the selection of optimal design parameters.
Finally, conclusions are presented, including a discussion of the novelty and limitations of the study. Additionally, the study provides valuable insights into the behavior of trees under wind loading and the key parameters that influence the performance of trees as windbreaks. This can inform the selection and placement of trees in urban areas to optimize their effectiveness in reducing wind velocities and creating more comfortable environments for residents. Furthermore, the examination of tree deformation and of the stress transmitted to the soil can assist in selecting appropriate ground for trees with different bole lengths and crown diameters, which is crucial for selecting soil in parks. Materials and methods Governing equations and numerical methods. Ansys Workbench system coupling was applied for the solution of the one-way fluid-structure interaction. First, the fluid domain is calculated in the Ansys Fluent part, and the solid domain is computed in the Ansys Mechanical part. A coupling system then connects the two parts. The forces obtained on the fluid side are transferred to the solid side, and the displacement, stress, and strain of the solid part are then calculated. In this study, the realizable k-ε turbulence model was used for solving the fluid domain, and the static structural method was employed for solving the solid domain. Details of the models are explained below. Computational fluid dynamics (CFD). The turbulent flow simulation in the three-dimensional computational fluid domain was implemented using the realizable k-ε turbulence model 53,54 . The continuity and momentum equations can be written as ∂(ρ u_i)/∂x_i = 0 and ∂(ρ u_i u_j)/∂x_j = -∂P/∂x_i + ∂/∂x_j[(μ + μ_t)(∂u_i/∂x_j + ∂u_j/∂x_i)] + S_i, where ρ is the density, u_i and u_j represent the average velocity components of the fluid, P is the pressure, S_i is the source term for the momentum equation, μ represents the dynamic viscosity, and μ_t represents the eddy viscosity, calculated as μ_t = ρ C_μ k²/ε. The transport equations for k and ε in the realizable k-ε model can be written as ∂(ρk)/∂t + ∂(ρk u_j)/∂x_j = ∂/∂x_j[(μ + μ_t/σ_k) ∂k/∂x_j] + G_k + G_b - ρε - Y_M and ∂(ρε)/∂t + ∂(ρε u_j)/∂x_j = ∂/∂x_j[(μ + μ_t/σ_ε) ∂ε/∂x_j] + ρ C_1 S ε - ρ C_2 ε²/(k + √(νε)) + C_1ε (ε/k) C_3ε G_b, where k represents the turbulent kinetic energy and ε represents its rate of dissipation. G_k is the generation of turbulent kinetic energy due to the mean velocity gradients, G_b is the generation of turbulent kinetic energy due to buoyancy, and Y_M is the contribution of the fluctuating dilatation to the overall dissipation rate. The model constants for the realizable k-ε turbulence model are C_1ε = 1.44, C_2 = 1.9, σ_k = 1.0 and σ_ε = 1.2. Computational model and physical conditions. Eastern Red Cedar has been selected as the tree model. This wood type has very low shrinkage. The species is lightweight, moderately soft, low in strength when used as a beam or post, and low in shock resistance. The heartwood is very resistant to decay 55 . The present study numerically investigates two three-dimensional models of an Eastern Red Cedar tree at various geometric parameters (Fig. 1). The canopy diameters of model 1 and model 2 are 2.432 m and 1.216 m, respectively; the simulation cases are listed in Table 1. The characteristics of the air, trees, and soil are given in Table 2. Figure 2 shows the typical computational domain and boundary conditions. A single tree has been modeled in this analysis. L_c denotes the crown length of the tree (3 m). The height, length, and width of the domain are defined as 6 L_c, 30 L_c, and 10 L_c. The velocity inlet boundary condition is located 10 L_c upstream of the tree, and the pressure outlet boundary condition is located 20 L_c downstream of the tree. The free-slip boundary condition is selected for the upper part of the model.
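A minimal sketch (not from the paper) of the case matrix and domain extents described above is given below. The wind velocity set {15, 20, 25} m/s is inferred from the 18 cases (2 canopy diameters x 3 bole lengths x 3 speeds) and the values quoted in the text, and should be treated as an assumption.
```python
# Illustrative enumeration of the simulation matrix and domain dimensions.
from itertools import product

L_c = 3.0  # crown length (m)
domain = {"height": 6 * L_c, "length": 30 * L_c, "width": 10 * L_c}  # 18 x 90 x 30 m
inlet_upstream, outlet_downstream = 10 * L_c, 20 * L_c               # 30 m and 60 m

canopy_diameters = [2.432, 1.216]   # model 1, model 2 (m)
bole_lengths = [0.5, 1.0, 1.5]      # m
wind_speeds = [15.0, 20.0, 25.0]    # m/s (assumed set)

cases = list(product(canopy_diameters, bole_lengths, wind_speeds))
print(f"domain (m): {domain}, inlet at {inlet_upstream} m, outlet at {outlet_downstream} m")
print(f"number of cases: {len(cases)}")  # expected: 18
```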
Computational structural dynamics (CSD). Tetrahedral and prism (triangle-base) meshes were used in the computational fluid domain, with a high-quality mesh near the walls, and a tetrahedral mesh was applied in the computational solid domain. As presented in Fig. 3, nearly 13 million elements are used to solve the computational fluid domain, and 6 million elements are applied to solve the solid computational domain. Simulation methodology and ethical considerations. This study was based solely on computer simulation and did not involve the use of real Red Cedar plants or any experimentation with plants. Table 1 lists the numerical simulations run in the present study. Results and discussion Drag force study. A drag force acts on a body when it is placed in a fluid flow 53 . The drag force on a tree during wind blowing can be calculated using Eq. (6), F_D = F_D,pressure + F_D,viscous = ∮ p (n · e_d) dS + ∮ τ_w (t · e_d) dS, where F_D,pressure is the pressure drag, F_D,viscous is the viscous drag, p is the pressure, and τ_w is the wall shear stress. Once the drag force has been computed, the drag coefficient can be calculated using Eq. (7), C_D = F_Drag / (0.5 ρ U² A), where C_D is the drag coefficient, F_Drag is the total drag force, ρ is the density of the fluid, U is the velocity of the fluid, and A is the characteristic area of the body (tree frontal area). Table 3 compares the drag coefficients obtained in this study with those of other studies at a wind velocity of 20 m/s. Eastern Red Cedar models 1 and 2 presented drag coefficients very close to those of real trees. Figure 4 shows the variation of the total force with wind velocity for the tree models at different bole lengths (0.5 m, 1.0 m, and 1.5 m). Total forces increase with increasing wind velocity for models 1 and 2. The total force increases by nearly 180% for model 1 when the wind velocity changes from 15 to 25 m/s, and by nearly 200% for model 2 over the same range. The total force reaches its maximum value when the bole length is 1.5 m for both models. The total force of model 1 is nearly 70% greater than that of model 2 when the wind velocity is 25 m/s. It can be concluded that the canopy diameter has a strong influence on the total force: as the diameter doubles, the total force increases by roughly the same ratio. Moreover, the effect of bole length is comparatively negligible. A comparison of the two models reveals that the effect of canopy diameter on the total force becomes even more significant as the canopy diameter decreases. Deformation, stress and strain study. The total deformation of a three-dimensional flexible solid structure can be computed numerically using Eq. (8), U_total = √(U_x² + U_y² + U_z²), where U_x, U_y, and U_z are the component deformations in the x, y, and z directions, respectively. In Fig. 5, the total deformations of models 1 and 2 with different bole lengths are compared. It can be seen that bole length and canopy diameter have a significant effect on the total deformation. Following the same trend, with an increase in wind velocity this deformation difference between bole lengths increases considerably. The trend is the same for both models, although the total deformation of model 2 is lower than that of model 1 for all bole lengths. The deformation of tree models 1 and 2 increased with increasing wind velocity and bole length.
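The two post-processing relations just given, Eq. (7) for the drag coefficient and Eq. (8) for the total deformation, can be sketched directly in code. This is an illustrative helper only; the force value, deformation components and frontal-area estimate below are assumed placeholders, not results of the study.
```python
# Minimal sketch of Eq. (7) and Eq. (8) with placeholder inputs.
import math

def drag_coefficient(F_drag, U, A, rho=1.225):
    """Eq. (7): C_D = F / (0.5 * rho * U^2 * A); rho defaults to air near 15 C."""
    return F_drag / (0.5 * rho * U**2 * A)

def total_deformation(Ux, Uy, Uz):
    """Eq. (8): magnitude of the displacement vector."""
    return math.sqrt(Ux**2 + Uy**2 + Uz**2)

# Illustrative (assumed) numbers only:
A_frontal = 2.432 * 3.0  # rough frontal area from canopy diameter x crown length (m^2)
print(drag_coefficient(F_drag=1500.0, U=20.0, A=A_frontal))
print(total_deformation(0.12, 0.01, 0.03))
```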
Maximum deformation occurs when the wind velocity and bole length are at their maximum values for models 1 and 2. The Von Mises stress of a three-dimensional flexible solid structure can be computed numerically using Eq. (9), σ_v = √{[(σ_1 - σ_2)² + (σ_2 - σ_3)² + (σ_3 - σ_1)²]/2}, where σ_1, σ_2 and σ_3 are the stress states in the x, y, and z directions, respectively. Figure 6 depicts the values of the Von Mises maximum stress for different wind velocities and bole lengths. It is clear that the wind velocity has a significant effect on the maximum stress in models 1 and 2. As the velocity increases to 25 m/s, the maximum stress increases by as much as three times. Moreover, the bole length effect is more pronounced in model 1 than in model 2. The equivalent strain of a three-dimensional flexible solid structure can be computed numerically using Eq. (10), ε_eq = [1/(1 + ν')] √{[(ε_1 - ε_2)² + (ε_2 - ε_3)² + (ε_3 - ε_1)²]/2}, where ε_1, ε_2 and ε_3 are the principal strains in the x, y, and z directions, respectively, and ν' represents the effective Poisson's ratio. Figure 7 depicts the equivalent strain with respect to wind velocity for different bole lengths. The strain is directly proportional to the bole length and wind velocity for both models. As the bole length increases, the strain increases as well. The increase also depends on the canopy diameter. Contour plots of the numerical study. The deformation of the tree models is illustrated for various bole lengths and wind velocities. Conclusions Two tree models with different canopy diameters were examined at wind velocities of 15 to 25 m/s, varying the bole length from 0.5 to 1.5 m. The realizable k-ε turbulence model was used to solve the fluid domain, and a one-way FSI method was employed to calculate both the fluid and solid domains together. The results of the total force, deformation, maximum stress, maximum strain, velocity distribution, and pressure distribution of both tree models were obtained at various geometric parameters and wind velocities. The results indicate that total force, deformation, stress, and strain increased with increasing wind velocity for both models. Additionally, deformation and stress were directly influenced by bole length, canopy diameter, and wind velocity when the trunk diameter was constant. It was also observed that a high total force can cause significant deformation and stress in a tree. Model 1 demonstrated a greater total force than model 2 due to its larger diameter. The greatest deformation occurred at a wind velocity of 25 m/s and a bole length of 1.5 m for both models 1 and 2. Overall, it can be concluded that the chosen model tree can effectively hinder wind force, but it is important to note that parameters associated with the physical properties of the tree, such as length and diameter, can affect this function. The results of this study can be applied to urban planning and design by providing valuable information on how trees react to wind loading and which factors determine their effectiveness as windbreaks. This knowledge can help to improve the selection and placement of trees in urban areas, making the environment more comfortable for residents by reducing wind velocities. Data availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
3,931.2
2023-05-04T00:00:00.000
[ "Environmental Science", "Engineering" ]
Multimode Nonlinear Fibre Optics: Theory and Applications This book presents a comprehensive account of the recent progress in optical fiber research. It consists of four sections with 20 chapters covering the topics of nonlinear and polarisation effects in optical fibers, photonic crystal fibers and new applications for optical fibers. Section 1 reviews nonlinear effects in optical fibers in terms of theoretical analysis, experiments and applications. Section 2 presents polarization mode dispersion, chromatic dispersion and polarization dependent losses in optical fibers, fiber birefringence effects and spun fibers. Section 3 and 4 cover the topics of photonic crystal fibers and a new trend of optical fiber applications. Edited by three scientists with wide knowledge and experience in the field of fiber optics and photonics, the book brings together leading academics and practitioners in a comprehensive and incisive treatment of the subject. This is an essential point of reference for researchers working and teaching in optical fiber technologies, and for industrial users who need to be aware of current developments in optical fiber research areas. Introduction Optical fibres have been developed as an ideal medium for the delivery of optical pulses ever since their inception (Kao & Hockham, 1966). Much of that development has been focused on the transmission of low-energy pulses for communication purposes and thus fibres have been optimised for singlemode guidance with minimum propagation losses only limited by the intrinsic material absorption of silica glass of about 0.2dB/km in the near infrared part of the spectrum (Miya et al., 1979). The corresponding increase in accessible transmission length simultaneously started the interest in nonlinear fibre optics, for example with early work on the stimulated Raman effect (Stolen et al., 1972) and on optical solitons (Hasegawa & Tappert, 1973). Since the advent of fibre amplifiers (Mears et al., 1987), available fibre-coupled laser powers have been increasing dramatically and, in particular, fibre lasers now exceed kW levels in continuous wave (cw) operation (Jeong et al., 2004) and MW peak powers for pulses (Galvanauskas et al., 2007) in all-fibre systems. These developments are pushing the limits of current fibre technology, demanding fibres with larger mode areas and higher damage threshold. However, it is increasingly difficult to meet these requirements with fibres supporting one single optical mode and therefore often multiple modes are guided. Non-fibre-based laser systems are capable of delivering even larger peak powers, for example commercial Ti:sapphire fs lasers now reach the GW regime. Such extreme powers cannot be transmitted in conventional glass fibres at all without destroying them (Gaeta, 2000), but there is a range of applications for such pulses coupled into hollow-core capillaries, such as pulse compression (Sartania et al., 1997) and high-harmonic generation (Rundquist et al., 1998). For typical experimental parameters, these capillaries act as optical waveguides for a large number of spatial modes and modal interactions contribute significantly to the system dynamics. In order to design ever more efficient fibre lasers, to optimise pulse delivery and to control nonlinear applications in the high power regime, a thorough understanding of pulse propagation and nonlinear interactions in multimode fibres and waveguides is required. 
The conventional tools for modelling and investigating such systems are based on beam propagation methods (Okamoto, 2006). However, these are numerically expensive and provide little insight into the dependence of fundamental nonlinear processes on specific fibre properties, e.g., on transverse mode functions, dispersion and nonlinear mode coupling. For such an interpretation a multimode equivalent of the nonlinear Schrödinger equation, the standard and highly accurate method for describing singlemode nonlinear pulse propagation (Agrawal, 2001; Blow & Wood, 1989), is desirable. In this chapter, we discuss the basics of such a multimode generalised nonlinear Schrödinger equation (Poletti & Horak, 2008), its simplification to experimentally relevant situations and a few select applications. We start by introducing and discussing the theoretical framework for fibres with χ(3) nonlinearity in Sec. 2. The following sections are devoted to multimode nonlinear applications, presented in the order of increasing laser peak powers. A sample application in the multi-kW regime is supercontinuum generation, discussed in Sec. 3. Here we demonstrate how fibre mode symmetries and launching conditions affect intermodal power transfer and spectral broadening. For peak powers in the MW regime, self-focusing effects become significant and lead to strong mode coupling. The spatio-temporal evolution of pulses in this limit is the topic of Sec. 4. Finally, at GW peak power levels, optical pulses can only be delivered by propagation in gases. Still, intensities become so high that nonlinear effects related to ionisation must be taken into account. An extension of the multimode theory to include these extreme high power effects is presented in Sec. 5 and the significance of mode interaction is demonstrated by numerical examples pertaining to a recent experiment. Finally, we end this chapter with conclusions in Sec. 6. The multimode generalised nonlinear Schrödinger equation Pulse propagation in singlemode fibres is frequently modelled by a generalised nonlinear Schrödinger equation (NLSE) which describes the evolution of the electric field amplitude envelope of an optical pulse as it propagates along the fibre (Agrawal, 2001; Blow & Wood, 1989). This framework has been extremely successful in incorporating all linear and nonlinear effects usually encountered in fibres, such as second and higher order dispersion, Kerr and Raman nonlinearities and self-steepening, and its predictions have been corroborated by numerous experiments using conventional fibres, photonic crystal fibres and fibre tapers of different materials, as well as laser sources from the continuous wave regime down to few-cycle pulses. Perhaps the most prominent application of the NLSE is in the description of supercontinuum generation where all the linear and nonlinear dispersion effects come together to induce spectacular spectral broadening of light, often over very short propagation distances (Dudley et al., 2006). For very high power applications, as motivated above, a further extension of the NLSE is required to deal with the multimode aspects of large-mode area fibres. A very general multimode framework has been presented recently allowing for arbitrary mode numbers, polarisations, tight mode confinements and ultrashort pulses (Poletti & Horak, 2008).
Here we describe a slightly simplified version that is more easily tractable and still applicable to many realistic situations, e.g., the description of the high power applications discussed in the later sections. We start by considering a laser pulse propagating in a multimode fibre. The pulse can be written as the product of a carrier wave exp[i(β_0^(0) z − ω_0 t)], where ω_0 is the carrier angular frequency and β_0^(0) is its propagation constant in the fundamental fibre mode, and an envelope function E(x, t) in space and time. Note that throughout this chapter we adopt the notation that vectorial quantities are written in bold face and x = (x, y, z). For convenience, we assume E(x, t) to be complex-valued, so that it includes the envelope phase as well as the amplitude, and we consider the pulse evolution in a reference frame moving with the group velocity of the fundamental mode, so that in the absence of dispersion a pulse would stay centred at time t = 0 throughout its propagation. Finally, we use units such that |E(x, t)|² is the field intensity in W/m². The envelope function can then be expanded into a superposition of individual modes p = 0, 1, 2, ..., each represented by a discrete transverse fibre mode profile F_p(x, y) and a modal envelope A_p(z, t). Note that |A_p(z, t)|² gives the instantaneous power propagating in mode p in units of W, and that a simplified normalisation has been used compared to a more rigorous previous formulation (Poletti & Horak, 2008). The accuracy of this approximation improves as the fibre core size is increased and the core-cladding index contrast is decreased, leading to guided modes with an increasingly negligible longitudinal component of polarisation. The multimode generalised nonlinear Schrödinger equation (MM-NLSE) is then given by a set of coupled equations, Eq. (2), describing the dynamics of the mode envelopes. The following approximations have been applied here: (i) we have assumed that the Raman response and the pulse envelope functions vary slowly on the time scale of a single cycle of the carrier wave, so that we can neglect a rapidly oscillating term, and (ii) an additional term related to the frequency dependence of the mode functions has been omitted, assuming the variation of S^{K,R}_plmn is slow compared to the 1/ω_0 self-steepening term. In Eq. (2), the first line yields the effects of dispersion of mode p with coefficients β_n^(p) = ∂^n β^(p)/∂ω^n. Here we allow for complex values of the modal propagation constants β^(p), where the imaginary part describes mode- and wavelength-dependent losses; ℜ[..] denotes the real part only. The second line of (2) represents the effects of optical nonlinearity with a nonlinear refractive index n_2. The term proportional to ∂/∂t describes self-steepening, and the two terms within the sum describe the Kerr and Raman nonlinearities. The Raman term contributes with a fraction f_R to the overall nonlinearity (f_R = 0.18 for silica glass fibres) and contains the Raman mode overlap factors as well as a convolution of the time-dependent Raman response function h(t) with two mode amplitudes. The mode overlap factors S^K_plmn responsible for the instantaneous Kerr effect are given by the corresponding spatial overlap integrals of four mode functions. Numerically, the mode functions of all the modes involved in the nonlinear effects under consideration are first evaluated at ω_0 and a table of overlap integrals is calculated.
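A minimal sketch (illustrative only, not the chapter's code) of building such a table of Kerr-type overlap factors from scalar transverse mode profiles sampled on a grid is given below. The normalisation used here, with each profile normalised to unit integrated intensity, is an assumption; the exact definition in Poletti & Horak (2008) differs in prefactors.
```python
# Illustrative table of Kerr overlap factors S[p,l,m,n] for scalar mode profiles.
import numpy as np
from itertools import product

def overlap_table(modes, dx, dy):
    """modes: array of shape (N_modes, Nx, Ny) of real scalar mode profiles."""
    dA = dx * dy
    norms = np.sqrt(np.sum(modes**2, axis=(1, 2)) * dA)  # normalise to unit power
    F = modes / norms[:, None, None]
    N = F.shape[0]
    S = np.zeros((N, N, N, N))
    for p, l, m, n in product(range(N), repeat=4):
        S[p, l, m, n] = np.sum(F[p] * F[l] * F[m] * F[n]) * dA  # units of 1/m^2
    return S

# Example with two made-up Gaussian-like profiles on a coarse grid:
x = np.linspace(-5e-6, 5e-6, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
m0 = np.exp(-(X**2 + Y**2) / (1.5e-6)**2)
m1 = X * np.exp(-(X**2 + Y**2) / (1.5e-6)**2)
S = overlap_table(np.array([m0, m1]), x[1] - x[0], x[1] - x[0])
print(S[0, 0, 0, 0])  # roughly 1/A_eff of the fundamental-like mode
```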
The number of modes and overlap integrals can be greatly reduced based on mode symmetry arguments (Poletti & Horak, 2008); all the applications discussed in the following will employ such reduced sets of modes. Next, the dispersion curves for these modes are calculated. Finally, the system of equations (2) is integrated numerically using a standard symmetrised split-step Fourier method (Agrawal, 2001), where adaptive step size control is implemented by propagating the nonlinear operator using a Runge-Kutta-Fehlberg method (Press et al., 2006). In order to avoid numerical artifacts, we also found it necessary to further limit the maximum step size to a fraction of the shortest beat length between all the modes considered. The accuracy and convergence of the results is further checked by running multiple simulations with increasingly small longitudinal step sizes. The framework presented above still allows for modes of arbitrary polarisation. In most practical situations, however, one is interested in a subset of modes representing only a specific polarisation state which is determined by the pump laser. The two most common cases are briefly discussed in the following. Circular polarisation Under the weak guiding condition, modes fall into groups of LP_mn modes containing either two (m = 0) or four (m > 0) degenerate modes. Within each group, the modes can be combined into modes that are either σ+ or σ− circularly polarised at every point in the fibre. If the light launched into the fibre is, for example, σ+ polarised, the form of the overlap integrals (4) and (6) guarantees that no light is coupled into the σ− polarised modes during propagation, and those modes can therefore be eliminated entirely from the model. It is worth emphasising that this is an exact result within the weak guiding limit. Using the properties of circular polarisation vectors, the overlap integrals are then simplified accordingly, where the mode functions have been written as F_p = e_+ F_p for σ+ polarised modes with real-valued scalar mode functions F_p. Linear polarisation The situation is slightly more complicated in the case of linearly polarised pump light. In this case, nonlinear coupling between orthogonal polarisation modes is in principle allowed, leading to, for example, birefringent phase matching and vector modulation instability (Agrawal, 2001; Dupriez et al., 2007). However, for many practical situations where modes can be described as LP_mn modes, if linearly polarised light is launched into the fibre, nonlinear coupling to orthogonal polarisation states is effectively so small that most of the pulse energy remains in its original polarisation throughout the entire pulse propagation. This allows halving the number of modes to be considered in the model, with significant computational advantage, and a simpler definition of the overlap factors (4) and (6). There are several important practical situations where this approximation can be acceptable: (i) For degenerate modes (no birefringence), the overlap factor (6) for four-wave mixing (FWM) between modes of parallel polarisation is three times larger than that for orthogonal polarisation. Since the dispersion properties, and therefore the phase matching conditions, are the same, nonlinear gain is much higher for the same polarisation and thus will dominate the dynamics.
(ii) For few-moded fibres, power transfer to orthogonal modes by FWM can be negligible if either the phase matching condition cannot be fulfilled at all, or if the phase matching condition is achieved only for widely separated wavelength bands where the difference in group velocities limits the effective interaction length due to walk-off effects. In these situations one can therefore use an approximate theoretical description of pulse propagation by restricting the MM-NLSE to the LP_mn modes of the fibre with the same linear polarisation everywhere. Assuming real-valued x-polarised mode functions F_p = e_x F_p, the overlap integrals then reduce to a correspondingly simpler scalar form. A further simplification is also sometimes possible. If linearly polarised light is predominantly launched in an LP_0n mode, power transfer into LP_mn modes with m > 0 can only be initiated by spontaneous FWM processes. By contrast, other LP_0n modes of the same polarisation can be excited by stimulated processes, see Sec. 3.1. Thus, if the dominant processes within the pulse propagation are stimulated ones, e.g., in the regime of high powers and relatively short propagation distances, the study can be effectively restricted to LP_0n modes with the same polarisation. Supercontinuum generation in multimode fibres One of the first applications where the MM-NLSE presented in the previous section can provide deep insights is that of supercontinuum (SC) generation in multimode fibres. As already mentioned, the complex dynamics underlying SC generation in singlemode fibres is by now well understood. Octave-spanning SC in suitably designed fibres arises as a combination of various nonlinear phenomena, including soliton compression and fission, modulation instability, parametric processes, intrapulse Raman scattering, self phase modulation (SPM) and cross phase modulation (XPM) (Dudley et al., 2006). As the fibre diameter is increased though, as required for example to increase the SC power spectral density without destroying the fibre, the fibre starts to support multiple modes. Previous theoretical models were usually restricted to two polarisation modes of a birefringent fibre (Agrawal, 2001; Coen et al., 2002; Lehtonen et al., 2003; Martins et al., 2007) or included a maximum of two spatially distinct modes (Dudley et al., 2002; Lesvigne et al., 2007; Tonello et al., 2006). Using the full MM-NLSE, however, fibres with arbitrary modal contents can be studied, for which a rich new list of intermodal nonlinear phenomena emerges, causing the transfer of nonlinear phase and/or power between selected combinations of modes. In this section, using simulations of a specific few-moded fibre as an illustrative example, we will discuss how modal symmetries and launch conditions can have a drastic influence on intermodal power transfer dynamics. For pump peak powers in the range of tens to hundreds of kW, if the nonlinear length of the pump pulses is shorter than the walk-off length between the modes involved, significant power transfer into high-order modes with the appropriate symmetry can occur, which can be beneficial, for example, to further extend the SC spectrum to shorter wavelengths. Even if conditions for significant intermodal power transfer are not met, it is found that intermodal XPM can still play a significant role in the SC dynamics by broadening the spectrum of modes which would not otherwise present a significant spectral broadening if pumped on their own.
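Before turning to the specific fibre example, the symmetrised split-step Fourier integration mentioned in Sec. 2 can be illustrated with a minimal single-mode sketch. This is not the chapter's code: it includes only second-order dispersion and the Kerr term, omits the Raman, self-steepening and intermodal terms of Eq. (2), and all parameter values are placeholders.
```python
# Minimal single-mode symmetrised split-step Fourier sketch (illustrative only).
import numpy as np

def ssfm(A0, dt, dz, nz, beta2, gamma):
    """Propagate the envelope A0(t) over nz steps of size dz [m]."""
    n = A0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)                  # angular frequency grid
    half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))   # half linear step
    A = A0.astype(complex)
    for _ in range(nz):
        A = np.fft.ifft(half_disp * np.fft.fft(A))           # D/2
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)       # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))           # D/2
    return A

# Placeholder run: 100 fs sech pulse, anomalous dispersion, 10 cm of fibre.
t = np.linspace(-2e-12, 2e-12, 2**12)
A0 = np.sqrt(1e3) / np.cosh(t / 100e-15)                     # 1 kW peak power
A = ssfm(A0, dt=t[1] - t[0], dz=1e-4, nz=1000, beta2=-20e-27, gamma=0.01)
print(np.max(np.abs(A)**2))
```
In a multimode implementation the same linear/nonlinear splitting is applied to every modal envelope, with the nonlinear step coupling the modes through the overlap factors, and the step size limited to a fraction of the shortest intermodal beat length as described above.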
To discuss the intermodal nonlinear dynamics leading to SC generation we focus on a moderately multimoded holey fibre (HF) consisting of two rings of large circular air holes with pitch Λ = 2.7 μm and relative hole size d/Λ = 0.93, surrounding a solid core with a diameter of a few optical wavelengths (D = 2Λ − d = 2.9 μm), see Fig. 1. From 400 nm to 2000 nm the fibre supports 14 modes with effective areas ranging between 3.6 and 6.1 μm². To reduce the computational time, it is possible to combine these modes into 7 pairs of circularly polarised modes and, exploiting the forbidden power exchange between modes of opposite circular polarisation (see Sec. 2.1), to focus only on the 7 right-handed circularly polarised modes M1, M2, ..., M7 shown in Fig. 1. The group velocity dispersion (GVD) curves of these modes are significantly different from each other, with a first zero dispersion wavelength (ZDW) ranging from λ_7 = 550 nm for M7 to λ_1 = 860 nm for M1. Effect of modal symmetries and launch conditions on intermodal power transfer Equation (2) shows that the transfer of power between modes is mediated by FWM terms of the form S^K_plmn A_l A_m A_n^*, with driving indices l, m, n differing from p. If only a single mode l is initially excited with a narrow spectral line, the strongest power transfer to mode p, and therefore the first to be observed in the nonlinear process, is the one controlled by degenerate FWM terms of the form S^K_plln A_l A_l A_n^*. If both modes p and n are initially empty, power transfer starts with a spontaneous FWM process and is therefore slow. If one of the generated photons is however returned into the pump l by stimulated emission, the process becomes much faster and tends to dominate the nonlinear dynamics in the limit of high-power pulse propagation over short distances. Interestingly, these S^K_plll A_l A_l A_l^* processes produce automatic phase-locking of mode p to the pump mode l, similarly to what happens in non-phase matched second and third harmonic generation processes (Roppo et al., 2007). However, processes S^K_plll require (i) that modes p and l belong to the same symmetry class, and (ii) that they present a large overlap. For the HF under investigation these conditions are only fulfilled for the two LP_0n modes M1 and M6, and therefore one would expect significant power transfer only between them. This expected behaviour is indeed confirmed by the numerical simulation shown in Fig. 2(a), where a hyperbolic secant pump pulse with temporal profile A_p(0, t) = √P_0 sech(t/T_0), with T_0 = 100 fs (full width at half maximum 176 fs) and centred at λ_p = 850 nm, is launched into M1 only and propagated through 30 mm of the HF. Here the pulse peak power P_0 is set to 50 kW, corresponding to a 10 nJ pulse and, for mode M1, to a soliton of order N = 166. As one would expect from single mode SC theory (Dudley et al., 2006), besides SPM-induced spectral broadening, such a high-N pulse develops sidebands which grow spontaneously from noise through an initial modulation instability (MI) process. The characteristic distance of this phenomenon, L_MI ∼ 16 L_NL = 16λ/(3π n_2 S^K_1111 P_0) = 6.9 mm, correlates well with the simulation results. As expected, of all the other 6 modes only M6 is significantly amplified at wavelengths around λ_p, and subsequently develops a wide spectral expansion and an isolated peak at 360 nm.
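A minimal sketch (not from the chapter) of how the quoted nonlinear and MI length scales can be estimated is shown below. The values of n_2 and of the overlap factor S^K_1111 (taken as the inverse of an assumed effective area within the quoted 3.6-6.1 μm² range) are assumptions for illustration, so the result only reproduces the order of magnitude of the quoted 6.9 mm.
```python
# Illustrative estimate of L_NL = lambda/(3*pi*n2*S_1111*P0) and L_MI ~ 16*L_NL.
import math

lam = 850e-9          # pump wavelength (m)
P0 = 50e3             # peak power (W)
n2 = 2.6e-20          # nonlinear index of silica (m^2/W), assumed
A_eff = 4.5e-12       # effective area (m^2), assumed
S_1111 = 1.0 / A_eff  # Kerr overlap factor of the fundamental mode (1/m^2)

L_NL = lam / (3 * math.pi * n2 * S_1111 * P0)
L_MI = 16 * L_NL
print(f"L_NL ~ {L_NL*1e3:.2f} mm, L_MI ~ {L_MI*1e3:.1f} mm")
```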
Further analysis of spectrograms and phase matching conditions indicates that this peak is a dispersive wave in M6, phase matched to a soliton in M1 and slowly shifting to shorter wavelengths as the soliton red-shifts due to the effect of intrapulse Raman nonlinearity. Under these launching conditions the study can thus be restricted to the LP 0n modes of the fibre without loss of accuracy. Simulations also show that if either M2, M3, M4, M5 or M7 is selectively launched, no power is transferred to any of the other modes, and each of them evolves as in the single mode case. When two or more modes contain a significant amount of power, they can all act as pumps for weaker modes. Moreover, if these modes belong to different symmetry classes, additional FWM terms come into play, giving rise to a much richer phenomenology. As an example, Fig. 2(b) shows what happens when both M1 and M2 are simultaneously excited with a P0 = 50kW sech pulse. This pulse corresponds to an N = 27 soliton for M2, due to the much larger value of the GVD β2 of that mode at the pump wavelength. As a result, the SC generated in M2 has a more temporally coherent nature, as it originates from soliton compression and fission mechanisms (the fission length L_fiss = N · L_NL is around 16mm). Due to a shorter ZDW than M1, the final SC in M2 also extends to much shorter wavelengths than the one in M1 (400nm versus 550nm, respectively), which can be one of the benefits of using multimode fibres for SC generation. Moreover, in addition to M6, also M3 and M4 are amplified from noise, generating a complex output spectrum, where the final relative magnitude of different modes is a strong function of wavelength. This is reminiscent of early experimental results (Delmonte et al., 2006; Price et al., 2003).

Non-phase matched permanent intermodal power transfer

To understand the complex dynamics of intermodal power transfer it is useful to refer to the approximate analytical theory of cw pumped parametric processes, which neglects the effects of GVD and pulse walk-off but still provides a valid reference (Stolen & Bjorkholm, 1982). Within this framework, parametric gain leading to exponential signal amplification requires the propagation constant mismatch Δβ_plmn to be of the order of, or smaller than, the nonlinear contribution γP0. For multimode processes, an estimate of γ can be obtained by averaging all the intermodal nonlinearities γ_plmn = 3π n2 S^K_plmn / λ which contribute to SPM and XPM between the relevant modes. However, in most practical situations involving SC generation in highly nonlinear multimode fibres, Δβ_plmn ≫ γP0 for all the relevant FWM processes considered. Thus, no parametric gain is typically observed and each FWM term leads to an oscillatory power exchange between modes, as shown by the dynamic gain curves of high order modes when only M1 and M2 are initially pumped, reported in Fig. 3(a). The oscillation periods are given by the beat lengths L_b ∼ 2π/|Δβ|. For example, for the process leading to amplification of M6, Δβ_6111 = 4.1 · 10^5 m⁻¹, corresponding to a value of L_b = 15.3μm, in agreement with the simulation. For modes amplified by a cascade of intermodal FWM processes, such as M5 and M7 in the example, the signature of multiple beating frequencies can be clearly observed. Despite the non-phase matched nature of most FWM processes, simulations show that after long enough propagation some power is permanently transferred into the weaker modes.
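The two length scales that organise this discussion can be estimated directly. In the sketch below the phase mismatch is the value quoted above for the M6 amplification process, while the inverse-group-velocity difference used for the walk-off estimate is an assumed value chosen only to reproduce the order of magnitude of the walk-off lengths quoted next.

```python
import numpy as np

# Beat length of a non-phase-matched FWM process (Delta-beta value from the text).
dbeta_6111 = 4.1e5                   # [1/m]
L_b = 2 * np.pi / dbeta_6111
print(f"beat length L_b = {L_b*1e6:.1f} um")             # ~15.3 um

# Walk-off length L_W = T0 / |beta1_p - beta1_q| between two modes;
# the group-delay difference below is an assumed illustrative value.
T0 = 100e-15                         # pulse duration [s]
dbeta1 = 3.3e-11                     # assumed |beta1_p - beta1_q| [s/m]
print(f"walk-off length L_W = {T0/dbeta1*1e3:.1f} mm")   # ~3 mm
```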
This is shown, for example, in Fig. 3(b), which extends the propagation distance of M4 from 0.4mm to 4mm. A more detailed analysis excluding XPM and Raman effects found this behaviour to be uniquely caused by the temporal walk-off between the pulses involved. The typical length scale of this permanent power transfer is therefore of the order of the walk-off length of the pulses involved, given by L_W^pq = T0/|β1^(p) − β1^(q)| for modes p and q, where β1 denotes the inverse group velocity. For the example in Fig. 3(b), L_W^12 = 3mm, L_W^24 = 2.4mm and L_W^14 = 1.3mm, which correlate well with the simulation. In conclusion, nonlinear intermodal power transfer is governed by two length scales: a beat length leading to fast initial power oscillations and a walk-off length leading to permanent power transfer. In order to observe intermodal nonlinear effects in practice, the nonlinear length of the pump pulses must be shorter than the walk-off length, i.e., high peak powers are required. Otherwise, nominally multimode fibres can exhibit the same nonlinear behaviour as singlemode ones. Scaling a fixed fibre structure to larger core sizes allows for larger power throughput, but at the same time longer beat and walk-off lengths lead to much stronger mode coupling, and significant amounts of power can be transferred into higher order modes. In this case, as shown in Fig. 2, higher order modes may also serve to extend the SC spectrum to much shorter wavelengths.

Effect of intermodal cross phase modulation

Intermodal power transfer mediated by FWM terms, which can permanently exchange power between modes even in the absence of proper phase matching, is not the only intermodal nonlinear effect which can occur in a multimode fibre. Intermodal XPM can also play a role by significantly broadening the spectrum of a mode which would not undergo a significant spectral expansion if propagated on its own (Chaipiboonwong et al., 2007; Schreiber et al., 2005). To illustrate this phenomenon, we simulate the propagation of a pulse launched in M1 and/or M2 at 725nm, where M1 is in the normal dispersion region and M2 is in the anomalous region. In order to observe significant spectral expansion and intermodal effects within the distance where the pulses are temporally overlapped, we increase the input power up to a value of P0 = 500kW, close to the estimated fibre damage threshold. Figs. 4(a) and (b) show that when M1 is individually launched, only some SPM-based spectral expansion is visible, whereas if only M2 is launched, a wide MI-based SC develops. On the other hand, if the same input pulse is launched simultaneously in both modes as in Fig. 4(c), a much wider output spectrum is developed also in M1. Under these operating conditions the intermodal power transfer is negligible, as confirmed by nearly identical spectral results obtained when all S^K_plmn and S^R_plmn coefficients responsible for intermodal FWM are set to zero. Therefore, the increased spectral expansion in M1 must be generated by intermodal XPM effects alone. This is indeed confirmed by the simulation in Fig. 4(d), showing that when all intermodal XPM effects are artificially switched off, M1 and M2 produce spectra very similar to those of their individual propagation.

Self-focusing in optical fibres in a modal picture

For laser powers larger than those discussed in the previous section, reaching into the MW regime, the nonlinear refractive index induced in the glass by the laser may become strong enough to introduce significant spatial reshaping of the beam in the transverse direction.
The refractive index of a material is given by n = n0 + n2 I, including both the linear term n0 and the nonlinear term n2, where I is the position-dependent intensity of the laser. Thus, if the beam has a Gaussian-like transverse profile and the optical Kerr nonlinearity n2 is positive, as is the case in most commonly used transparent materials, the induced nonlinear refractive index is maximum at the centre of the beam and decreases towards the beam edges. Therefore, the induced index profile forms a focusing lens, acting back on the laser beam itself. This effect is known as self-focusing and has been studied extensively in bulk materials for nearly 50 years (Askaryan, 1962; Chiao et al., 1964). For input powers P below a critical power P_crit, self-focusing is finally overcome by the beam divergence. In the case of P > P_crit, however, the beam undergoes catastrophic collapse leading to permanent damage of the material (Gaeta, 2000). The critical power is given by P_crit = α λ²/(4π n0 n2), Eq. (9), where the numerical factor α depends slightly on the beam profile in a bulk material (Fibich & Gaeta, 2000). Numerically, self-focusing in bulk media is most commonly modelled by slowly-varying envelope models or, more accurately, by a nonlinear envelope equation (NEE) describing the dynamics of the transverse beam profile Φ(x, t) (Brabec & Krausz, 1997; Ranka & Gaeta, 1999), Eq. (10), which contains a dispersion term D_mat{Φ} similar to (3) describing the effect of material dispersion and the transverse Laplace operator ∇²⊥. The NEE incorporates many features similar to the MM-NLSE (2), e.g., higher order dispersion, Kerr nonlinearity and self-steepening terms. However, even in the presence of rotational symmetry, the envelope function Φ is a two-dimensional object (radial and temporal coordinate), in contrast to the MM-NLSE which only uses a finite number of one-dimensional (temporal) envelope functions to describe the same situation. If the number of modes is small, the MM-NLSE is thus computationally significantly more efficient, both in terms of reduced memory requirements and faster dynamics simulation. It is now well established that the same process of self-focusing occurs in optical waveguides and fibres and that the same power threshold for catastrophic collapse applies (Farrow et al., 2006; Gaeta, 2000). However, for powers below P_crit the observed light propagation behaviour is qualitatively different from that observed in bulk media, since here the light is additionally bound by total internal reflection at the core-cladding interface, which can lead to additional spatial and temporal interference and dispersion effects, such as periodic oscillations of the beam profile or catastrophic pulse collapse even when the launched peak power is below the critical value. In this section we will discuss these effects within the framework of the MM-NLSE, which leads to an easy understanding of fibre-based self-focusing within a modal picture (Milosevic et al., 2000). Such an interpretation is particularly useful in the context of high-power fibre lasers, which now achieve peak powers close to the critical power with pulse lengths approaching the nanosecond regime (Galvanauskas et al., 2007).

Continuous wave limit

We start our discussion with the case of cw propagation, which in practice is also a good approximation to the behaviour of long pulses (ps to ns regime) near the pulse peak, and use the MM-NLSE restricted to the linearly polarised LP 0n modes, as discussed in Sec. 2.2.
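A quick numerical check of this scaling is sketched below. The numerical factor α = 1.9 (close to the value for a Townes-like profile) is an assumption on our part, so the silica result differs somewhat from the 5.9MW value quoted in the next section; the point is the order of magnitude and the contrast between glass and gas.

```python
import numpy as np

# Critical power for self-focusing, P_crit = alpha * lambda^2 / (4 * pi * n0 * n2).
# alpha ~ 1.9 is an assumed beam-profile factor; the chapter's exact value may differ.
def p_crit(lam, n0, n2, alpha=1.9):
    return alpha * lam**2 / (4 * np.pi * n0 * n2)

print(f"silica, 1300 nm: {p_crit(1.3e-6, 1.45, 2.5e-20)/1e6:.1f} MW")
print(f"air,     800 nm: {p_crit(0.8e-6, 1.00, 5.0e-23)/1e9:.1f} GW")
```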
The MM-NLSE thus reduces to its cw form, Eq. (11), with S^K_plmn given by (8). Specifically, we assume propagation in a short piece of a step-index fibre with a pure silica core of 40μm diameter and a refractive index step of 0.02 between core and cladding. This fibre is similar to commercially available photonic crystal large-mode area fibres, but with the index step increased such that the fibre supports eight LP 0n modes. The zero-dispersion wavelength of this fibre is at 1.26μm, and we assume a pump laser operating at 1300nm wavelength. The critical power (9) for silica at this wavelength is P_crit = 5.9MW. Note that at this power level pulses up to approximately 100ps length can be transmitted through the fibre without fibre damage (Stuart et al., 1996). Figure 5 shows the dynamics of light propagation along this fibre when cw light is launched into the fundamental LP 01 mode with a power of 0.7 P_crit = 4.84MW. The curves in Fig. 5(a) show the power |A_p|² in the lowest order modes obtained by solving Eq. (11). Power from the fundamental mode is quickly transferred over sub-mm propagation distances into higher order modes by FWM processes, most prominently by induced FWM involving three pump photons as described by terms of the form ∂A_p/∂z ∝ i A_0² A_0*, see Sec. 3.1. However, because of the phase mismatch between the fundamental mode and the higher order modes, the initial FWM gain is reversed after a certain propagation distance (about 1mm for the chosen parameters) and power is coherently transferred back into the pump from the higher order modes. This process repeats subsequently, leading to a periodic exchange of power between modes. The phase mismatch increases with increasing mode order and thus the maximum transferred power decreases. In Fig. 5(b) we depict the corresponding 2D beam intensity |E(x, z)|², calculated by summing the modal contributions (1) and normalised to the maximum field |E(0, 0)|² at the fibre input. The field experiences significant periodic enhancement on the beam axis at positions where large fractions of the total power propagate inside higher order LP 0n modes. At these positions of enhanced intensity, the full width at half maximum (FWHM) of the beam profile is strongly reduced, as shown in Fig. 5(c). The intermodal FWM processes together with the modal phase mismatch are therefore responsible for periodic beam self-focusing and defocusing in a fibre. This complements the standard interpretation of self-focusing in a bulk medium using Gaussian beam propagation, which describes the same phenomenon as focusing by a Kerr-induced lensing effect, followed by beam divergence and subsequent total internal reflection at the core-cladding interface. We finally note that a stationary solution can be obtained for the cw MM-NLSE in which the modal amplitudes and phases are locked in such a way that no oscillations occur. In the bulk interpretation this corresponds to the situation where nonlinear focusing and diffraction are perfectly balanced, thereby generating a stationary spatial soliton.
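To make the cw dynamics concrete, the following toy model integrates a three-mode version of the cw equation, dA_p/dz = i Δβ_p A_p + i (3π n2/λ) Σ S_plmn A_l A_m A_n*, written with the nonlinear prefactor used in this chapter. The propagation-constant differences, effective areas and overlap integrals are assumed toy values, not the parameters of the 40μm-core fibre above, so the script only illustrates the qualitative mechanism of periodic FWM-mediated power exchange.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, n2 = 1.3e-6, 2.5e-20
k_nl = 3 * np.pi * n2 / lam              # Kerr prefactor as used in the text

M = 3                                    # number of co-polarised LP_0n modes kept
dbeta = np.array([0.0, 2.0e3, 7.0e3])    # assumed beta_p - beta_1 [1/m]
Aeff  = np.array([6e-10, 7e-10, 8e-10])  # assumed effective areas [m^2]

# crude separable model for the overlap integrals S_plmn (assumption, units 1/m^2)
S = 1.0 / (Aeff[:, None, None, None] * Aeff[None, :, None, None] *
           Aeff[None, None, :, None] * Aeff[None, None, None, :]) ** 0.25

def rhs(z, y):
    A = y[:M] + 1j * y[M:]
    dA = 1j * dbeta * A + 1j * k_nl * np.einsum('plmn,l,m,n->p', S, A, A, np.conj(A))
    return np.concatenate([dA.real, dA.imag])

P0 = 4.0e6                               # launched cw power [W], all in the lowest mode
A0 = np.zeros(M, dtype=complex); A0[0] = np.sqrt(P0)
sol = solve_ivp(rhs, (0.0, 5e-3), np.concatenate([A0.real, A0.imag]),
                max_step=2e-6, rtol=1e-8)

P_high = (sol.y[1:M] ** 2 + sol.y[M + 1:] ** 2).sum(axis=0)  # power in higher-order modes
print(f"max fraction of power in higher-order modes: {P_high.max() / P0:.3f}")
```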
It may seem that this modal description of self-focusing is only possible in multimode fibres but breaks down in singlemode fibres, for example in large-mode area photonic crystal fibres designed for endlessly single mode operation (Mortensen et al., 2003). However, in this case the role of the higher order bound modes of a multimode fibre is taken over by the cladding modes, and it is the FWM-induced power exchange between the guided mode of a singlemode fibre and its cladding modes which provides a modal interpretation of self-focusing. Using only a finite number of modes in the simulation of the MM-NLSE necessarily limits the transverse spatial resolution that can be achieved by this method. For example, the LP 0n mode function exhibits n maxima and n − 1 zeros along the radial direction within the fibre core region. With simulations using n different modes one can therefore expect a maximum resolution of the order of R/n, where R is the core radius. Simulations with pump powers approaching the critical power P_crit will thus require a larger number of modes in order to correctly describe the increasingly small minimum beam diameter. We investigate this behaviour in Fig. 6. Here we show the minimum beam diameter achieved during the first period of self-focusing and diffraction, i.e., at approximately 1mm of propagation for the parameters of Fig. 5, when the MM-NLSE is restricted to different numbers of modes. For clarity, the beam diameter is normalised to the diameter of the launched beam (LP 01 mode). We observe that simulations with 2, 3, and 6 modes are accurate up to pump powers of approximately 0.2 P_crit, 0.4 P_crit, and 0.8 P_crit, respectively, compared to simulations involving all 8 bound fibre modes of this sample fibre. For comparison, we also show the results of the NEE beam propagation method (10). This confirms the accuracy of the MM-NLSE with 8 modes up to 0.95 P_crit, corresponding to a nearly five-fold spatial compression of the beam. For the simulations shown in Fig. 6 we used the same 4th-5th order Runge-Kutta integration method with adaptive step size control (MATLAB R2010b by MathWorks, Inc.) for both the MM-NLSE and the NEE. Each data point required approximately 0.9s of CPU time on a standard desktop computer with the 8-mode MM-NLSE and <0.2s with 6 modes. In contrast, the corresponding NEE simulations with 1024 radial grid points required 101s, that is, two to three orders of magnitude slower than the MM-NLSE.

Short pulse propagation

Next, we consider the propagation of short pulses in the regime of peak powers close to the critical power, where in addition to transverse spatial effects the pulse may exhibit complex temporal dynamics related to intermodal and intramodal dispersion, self-steepening and nonlinear effects. As an example we consider sech-shaped pulses with a temporal FWHM of 100fs launched with a peak power of 0.8 P_crit into the fundamental mode of the multimode fibre considered above. The pump wavelength is again set to 1.3μm. The simulations discussed in the following used a 6-mode MM-NLSE with 2048 temporal grid points solved with a split-step Fourier method (Poletti & Horak, 2008). The initial dynamics of the pulse propagation are shown in Fig. 7. After 1mm of propagation, Fig. 7(a), a significant amount of power has been transferred from the fundamental mode into the higher order modes, leading to a transverse beam focusing to approximately 40% of the input beam width. The transverse beam size depends on the pulse power and thus varies along the pulse shape: the beam diameter is smallest near the temporal peak of the pulse, but remains unchanged in the trailing and leading edges where the power is low.
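The beam-width diagnostics quoted above are obtained by summing the modal contributions, E(r, z, t) = Σ_p A_p(z, t) F_p(r), and measuring the width of the resulting intensity profile. The sketch below does this for a hypothetical set of modal amplitudes; the Laguerre-Gauss stand-ins for the LP 0n profiles and the amplitude mix are assumptions for illustration only, not output of the actual simulation.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

R = 20e-6                                  # core radius [m]
r = np.linspace(0.0, 1.5 * R, 4000)
dr = r[1] - r[0]

def F(n, w=12e-6):
    """Stand-in LP_0n radial profile (Laguerre-Gauss), normalised to unit power."""
    c = np.zeros(n + 1); c[n] = 1.0
    prof = lagval(2 * (r / w) ** 2, c) * np.exp(-(r / w) ** 2)
    return prof / np.sqrt(np.sum(prof ** 2 * 2 * np.pi * r) * dr)

def beam_fwhm(A):
    """FWHM of |sum_p A_p F_p(r)|^2 (first radius where intensity drops below half)."""
    I = np.abs(sum(a * F(p) for p, a in enumerate(A))) ** 2
    below = np.where(I < I.max() / 2)[0]
    return 2 * r[below[0]]

fwhm_launch = beam_fwhm([1.0, 0.0, 0.0])   # all power in the lowest mode
fwhm_mix    = beam_fwhm([0.8, 0.5, 0.3])   # assumed in-phase modal mix at a focusing point
print(f"beam FWHM reduced to {fwhm_mix / fwhm_launch:.2f} of the launched width")
```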
Propagating further to 2mm, Fig. 7(b), most of the power has been converted back into the fundamental mode, similar to the cw case of Fig. 5. However, the transfer is not complete and is not uniform along the pulse. This is related to the walk-off of the higher order modes because of intermodal dispersion, as well as a slight dependence of the beam oscillation period on power. Therefore, the spatial FWHM of the beam at 2mm propagation length is below that of the fundamental mode in some parts of the pulse while it exceeds it in other parts. Continuing the propagation of Fig. 7, the spatial beam variations persist, but the deviations from a simple oscillation become more prominent. This is shown clearly in Fig. 8(a) in the beam properties after 7mm of propagation. At this point the initial sech-shaped temporal profile has steepened on the trailing edge and an ultrashort pulse peak is forming due to the interference of the modal contributions. In particular, the first higher order mode exhibits a similar power level as the fundamental mode. Simultaneously, the beam diameter is strongly reduced. At 7.4mm of propagation, Fig. 8(b), this peak has narrowed further and reaches the critical power for catastrophic collapse, while the beam diameter has reduced to 20% of that of the fundamental mode; the pulse thus exhibits simultaneous spatial and temporal collapse. For even longer propagation lengths the simulations show the pulse breaking up into many ultrashort high-intensity parts around this initial instability; however, the MM-NLSE with 6 modes becomes invalid at this point due to its limited spatial and temporal resolution. Simulations with the MM-NLSE restricted to the fundamental mode reveal only a very small amount of pulse reshaping due to self-steepening over this propagation distance (a shift of the pulse peak by about 10fs) and exhibit none of the complex dynamics seen in Fig. 8. We therefore conclude that the simultaneous spatial and temporal collapse of the pulse observed here is a pure multimode effect, driven by FWM-based power exchange together with modal dispersion and self-steepening, in agreement with investigations based on beam propagation methods (Zharova et al., 2006).

Multimode effects in gas-filled waveguides

As discussed above, the peak power that can be transmitted in optical fibres is limited by the critical power for self-focusing and catastrophic collapse to levels of a few MW. According to Eq. (9), for a fixed laser wavelength P_crit only depends on the material's linear and nonlinear refractive index. In general, the linear refractive index does not vary much across transparent media, between 1 for vacuum and ∼4 for some non-silica glasses (Price et al., 2007) and semiconductors, whereas the nonlinear index n2 can span many orders of magnitude. A common method for guiding extremely high power pulses is thus in hollow-core capillaries or fibres, where most of the light propagates in a gas. For example, n2 ≈ 5 × 10⁻²³ m²/W in air, compared to 2.5 × 10⁻²⁰ m²/W in silica glass, thus pushing P_crit into the GW regime. In contrast to solid-core fibres, gas-filled capillaries do not support strictly bound modes; all modes are intrinsically leaky, with losses scaling proportional to λ²/R³, where λ is the light wavelength and R is the radius of the capillary hole (Marcatili & Schmeltzer, 1964). Hence, the capillary hole must be sufficiently large in order to allow for transmission of light over long distances.
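The λ²/R³ scaling can be turned into a concrete loss estimate. The prefactor used below is the standard Marcatili & Schmeltzer expression for the lowest EH-type capillary mode, with the glass index taken as 1.45; both are assumptions rather than values stated in the text, but they reproduce the few-dB/m figure quoted next.

```python
import numpy as np

def capillary_loss_dB_per_m(lam, R, u=2.405, nu=1.45):
    """Leaky-mode power loss of a gas-filled glass capillary (Marcatili & Schmeltzer form)."""
    alpha_field = (u / (2 * np.pi)) ** 2 * lam ** 2 / R ** 3 \
                  * (nu ** 2 + 1) / (2 * np.sqrt(nu ** 2 - 1))
    return 2 * alpha_field * 10 / np.log(10)   # field -> power loss, nepers -> dB

# 800 nm light in a 75 um radius hole: roughly the ~3 dB/m quoted in the text
print(f"{capillary_loss_dB_per_m(0.8e-6, 75e-6):.1f} dB/m")
```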
For example, 800nm wavelength light propagating in the fundamental LP 01 mode of a silica glass capillary with a 75μm radius hole experiences losses of ∼3dB/m. For such a large hole compared to the laser wavelength the capillary is multimoded, and this is the situation we will consider in the following. It should be noted, however, that single-mode guidance in hollow-core fibres is in principle possible using bandgap effects in photonic crystal fibres (Knight et al., 1998; Petrovich et al., 2008). Using fs pulses at 800nm wavelength from commercial Ti:sapphire laser systems it is possible to reach peak powers large enough to observe nonlinear effects, and even self-focusing, in gases. Capillary guidance is used in this context for several high-power applications. One of these is pulse compression, where the nonlinearity of the gas in the capillary is exploited to spectrally broaden a pulse by self-phase modulation, which allows the pulse to be compressed after the capillary by purely dispersive means such as gratings or dispersive mirrors (Sartania et al., 1997). For intensities above ∼10¹³ W/cm², the electric field of the laser is large enough to start ionising the gaseous medium. The generated plasma provides a negative contribution to the refractive index, which can counteract the self-focusing effect of the neutral gas and lead to pulse filamentation (Couairon & Mysyrowicz, 2007). In another application, ionisation and recombination effects are used for high harmonic generation of XUV and soft X-ray radiation, processes whose efficiencies can be enhanced significantly by phase matching techniques in capillaries (Rundquist et al., 1998). In the following we will therefore discuss how the MM-NLSE can be extended to include these important effects and demonstrate a few sample effects related to the multimode nature of hollow capillaries typically used for such high-power applications.

Ionisation and plasma effects in the multimode nonlinear Schrödinger equation

The starting point for this derivation is the capability of high-intensity light to ionise the gas inside the capillary. Two effects contribute to the ionisation: (i) direct multiphoton ionisation, where several photons are absorbed simultaneously to eject one electron from its orbit, and (ii) tunneling ionisation, where the electric field of the laser is so strong that it deforms the electric potential of the nucleus and allows an electron to tunnel through the potential barrier. Tunneling ionisation occurs at higher field strengths than multiphoton ionisation, and is the dominant process for the effects we want to discuss here. The rate of tunneling ionisation W can be calculated using Keldysh theory (Popov, 2004). In this expression, κ² = I_p/I_H is the ratio of the ionisation potential I_p of the gas species to the ionisation potential of hydrogen, I_H = 13.6eV; W_0 = m_e e⁴/ħ³ = 4.13 × 10¹⁶ s⁻¹; and F(x, t) = ℰ(x, t)/(κ³ E_a) is the reduced electric field of the laser, with E_a = 5.14 × 10¹¹ V/m the atomic unit of field strength and ℰ(x, t) the real-valued electric field in units of V/m corresponding to E(x, t), Eq. (1). The dimensionless parameters C_κl and n* are specific to the gas and can be looked up in tables (Popov, 2004). For the case of argon, which we will use as our example here, we have I_p = 15.76eV, C_κl = 0.95, and n* = 0.929.
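The quantities just defined are easy to evaluate for a representative intensity. In the sketch below the peak intensity is an assumed example value in the regime where ionisation matters; the field is obtained from the standard relation I = ε0 c ℰ²/2 for a linearly polarised wave.

```python
import numpy as np

eps0, c = 8.854e-12, 3.0e8
E_a = 5.14e11                    # atomic unit of field strength [V/m]
I_p, I_H = 15.76, 13.6           # ionisation potentials of argon and hydrogen [eV]
kappa = np.sqrt(I_p / I_H)

I = 1e18                         # assumed peak intensity [W/m^2] (= 1e14 W/cm^2)
E_peak = np.sqrt(2 * I / (eps0 * c))   # peak real-valued field [V/m]
F = E_peak / (kappa ** 3 * E_a)        # reduced field entering the tunneling rate
print(f"peak field {E_peak:.2e} V/m, reduced field F = {F:.3f}")
```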
Given the modal amplitudes A_p(z, t) we can calculate the electric field E(x, t) and thus the ionisation rate W(x, t) at every point and time in the capillary. From this we obtain the fraction of neutral atoms r_0(x, t) and the fraction of ionised atoms r_1(x, t) = 1 − r_0(x, t) by solving the rate equation ∂r_0/∂t = −W r_0. The generated plasma modifies the refractive index of the gas by a plasma contribution −ω_p²/(2ω²) (valid for ω_p ≪ ω), where the plasma frequency is given by ω_p² = r_1 ρ e²/(ε_0 m_e). Here ρ is the gas density and e and m_e are the electron charge and mass, respectively. The MM-NLSE thus acquires a new nonlinear term, N_pl{A_p} in Eq. (16), which includes a self-steepening correction term and the projection of the modified laser field onto mode p via a spatial overlap integral. In addition to the effect of the plasma-induced refractive index, we also have to consider the loss of energy from the propagating laser pulse due to the ionisation process itself (Courtois et al., 2001). In the modal decomposition, this leads to a nonlinear loss term L_ion{A_p}, Eq. (17), in the propagation of the mode envelope A_p. The full MM-NLSE in the presence of gas ionisation by tunneling in the strong-field limit thus becomes Eq. (18) (Chapman et al., 2010), where the individual terms are given by (2), (3), (16) and (17).

Ultrashort pulse propagation in capillaries

In the following we present simulation results of the extended MM-NLSE (18) for a specific experimental situation (Froud et al., 2009). In particular, we consider a 7cm long capillary with a 75μm radius hole filled with argon at a pressure of 80mbar in the central 3cm of the capillary; the Ar pressure tapers down over 2cm to 0mbar at the input and output. Laser pulses of 40fs length at 780nm wavelength are launched with a Gaussian waist of 40μm centred into the capillary. For the simulations, 20 linearly polarised LP 0n modes are considered, as discussed in Sec. 2.2. Results from two sets of simulations with different launched pulse energies, 0.5mJ and 0.7mJ, respectively, are presented in Fig. 9. The distribution of Ar+ ions in the capillary is shown in Figs. 9(a) and (b). As expected, ionisation mainly occurs on axis where the laser intensity is maximum. Moreover, because the transverse beam size of the launched laser pulses is not ideally matched to the fundamental mode of the capillary, power is also coupled into the first higher order mode, which leads to mode beating and thus to a periodic ionisation pattern along the capillary length with a periodicity of ∼2cm, observed most clearly at lower powers, Fig. 9(a). At higher powers, the nonlinear ionisation processes become much stronger and a multitude of additional radial and longitudinal structures are found in the ionisation pattern, Fig. 9(b). In Fig. 9(c) the partial Ar+ pressures of (a) and (b) are averaged over the transverse cross section of the capillary. The distribution shown in this figure can be easily verified experimentally, as it is proportional to the intensity of the Ar+ ion fluorescence observed at 488nm (Chapman et al., 2010; Froud et al., 2009).

Fig. 9. Propagation of 40fs pulses at 780nm wavelength in a hollow-core capillary (length 7cm, hole radius 75μm) filled with argon with partial ionisation. (a), (b) Partial pressure of Ar+ ions (in dB of mbar) versus position z and radius r inside the capillary for launched pulse energies of 0.5mJ and 0.7mJ, respectively. (c) Ar+ pressure averaged over the capillary cross section versus z. (d) Corresponding integrated pulse energy versus z. The total gas pressure in the capillary centre is 80mbar.
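The ionisation bookkeeping described above can be sketched as follows. The ionisation-rate profile W(t) used here is an assumed toy pulse shape rather than the Keldysh rate, and the 80mbar argon density is evaluated at room temperature; the script only illustrates how the neutral fraction and the plasma index contribution are obtained from W.

```python
import numpy as np

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
kB, T = 1.381e-23, 300.0

p_gas = 80e2                      # 80 mbar in Pa
rho = p_gas / (kB * T)            # gas number density [1/m^3]
omega = 2 * np.pi * 3e8 / 780e-9  # laser angular frequency at 780 nm

t = np.linspace(-60e-15, 60e-15, 2000)
dt = t[1] - t[0]
W = 1e13 * np.exp(-(t / 20e-15) ** 2)     # assumed ionisation-rate profile [1/s]

r0 = np.empty_like(t); r0[0] = 1.0
for i in range(1, len(t)):                # integrate d(r0)/dt = -W(t) * r0
    r0[i] = r0[i - 1] * np.exp(-W[i] * dt)
r1 = 1.0 - r0                             # ionised fraction

omega_p2 = r1[-1] * rho * e ** 2 / (eps0 * m_e)   # plasma frequency squared
dn = -omega_p2 / (2 * omega ** 2)                 # index change for omega_p << omega
print(f"final ionised fraction r1 = {r1[-1]:.3f}, plasma index change dn = {dn:.2e}")
```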
Finally, in Fig. 9(d) the pulse energy summed over all modes is presented versus the propagation distance for these two simulations. The effect of propagation losses due to ionisation, described by the term L_ion{A_p} in Eq. (17), is clearly visible, with strong losses associated with the peaks of large ionisation in Fig. 9(c). Because of the highly nonlinear nature of tunneling ionisation, losses at slightly higher input energies (0.7mJ instead of 0.5mJ) are several times larger. The spatial and temporal distribution of ions generated by the propagating laser pulse acts back on the pulse through its (negative) refractive index contribution, according to the term N_pl{A_p} given in Eq. (16). Because of the strong localisation of the regions with high ionisation, different capillary modes are affected differently, resulting in strong intermodal scattering and mode-specific spectral broadening, as is demonstrated in Fig. 10. At a relatively low pulse energy of 0.3mJ, where ionisation is weak, a slight blue-shift of the spectral contribution of the excited LP 02 mode is observed, but no higher order mode excitation. Increasing the pulse energy to 0.5-0.7mJ, more and more light is scattered into higher order modes. Moreover, the spectrum first develops a small peak at the long-wavelength side of the pump (790-800nm) and then a very broad and high-intensity shoulder at short wavelengths. It is interesting to note that these short-wavelength parts of the spectrum are more pronounced in the higher order modes LP 02 and LP 03 of the capillary; in fact, they contain more power than the fundamental mode at these wavelengths for launched pulse energies above 0.6mJ. This finding has again been confirmed by experiments, where a strong position-dependence of the spectrum was observed in the far field beyond the capillary (Chapman et al., 2010). These selected results demonstrate clearly that mode interference and mode coupling, i.e., transverse spatial effects, play a significant role in the propagation of high-intensity laser pulses in regimes where ionisation becomes important. This also impacts other applications of such systems, for example the angular dependence of high harmonic generation as recently observed in a capillary-based XUV source (Praeger et al., 2007).

Conclusions and outlook

To summarise, we presented an analysis of nonlinear effects of short laser pulses propagating in multimode optical fibres. We developed a general theoretical framework which is based on the modal decomposition of the propagating light and takes the form of a multimode generalised nonlinear Schrödinger equation. This approach provides new insights into the significance of fibre properties, e.g., modal dispersion and mode overlaps, for nonlinear pulse propagation, and for moderately multimode fibres and waveguides it has been shown to be numerically significantly more efficient than beam propagation methods. We subsequently discussed several applications of the model covering laser peak powers in the kW (supercontinuum generation), MW (self-focusing effects) and GW regime (ionisation and plasma nonlinearities), highlighting the importance of multimode effects throughout. While we focused our discussion here on the high-power regime, we emphasise that there is also rapidly growing interest in the application of multimode fibres at low, W-level peak powers.
A fast emerging area of interest comes, for example, from optical telecommunications, where in an attempt to increase the fibre capacity researchers are now considering the use of several fibre modes, or several cores within a single fibre, as independent channels. Intermodal nonlinear effects are expected to pose an ultimate limit to the maximum information capacity of the link, which we believe could be estimated by simulations using our model. Various sensing and imaging applications can also benefit from multimode fibres. Moreover, new sources in the mid-IR spectral region are currently being developed for spectroscopy and sensing applications that require novel waveguides such as soft glass fibres or semiconductor-based waveguides and fibres, some of which are intrinsically multimoded at near-IR pump wavelengths. We therefore expect that the multimode nonlinear Schrödinger equation discussed in this work will provide a valuable tool in the analysis and investigation of many future photonics applications.
Two new interpretation of Plato's Protagoras As we know, one of the most important ideas of Protagoras is Epistemic Relativism that this theory is attributed to him during the history of philosophy, without any dispute; But in the new era commentators such as Dr. Qavam Safari and Cornford by further and more precise reading the conversations between Protagoras and Theaetetus have concluded to this belief that Plato has interpreted the Protagoras’ rule of “Humanism“, by assimilating it with a course which he calls it “secret” and also the theory of Theaetetus “knowledge = perceiving”, in a way that it leads to perfect and very sophisticated relativism; and then in an ahistorical effort, Plato has imposed this relativism to the Protagoras’ mind. Whether this claim is proved or remained only as a claim, it should be discussed; so we have attempted to address this important issue in the present article. INTRODUCTION Before entering the main argument, firstly it is necessary to get briefly familiar with personal life and important ideas of Protagoras, the great sophist of the history of philosophy. Protagoras About his personal life, there are some contents sporadically in historical books; including that he is born in Abdera, a Lonian Colony, located on Thrace, at about 480 BC. in Ancient Greece; And Xerxes, the Persian king, for hospitality of Protagoras' father, ordered to teach literacy to him. Perhaps his first visit to Athens might have been before 433 BC. that he was invited from Pericles to write the new constitution of recently established Pan-Hellenic city, "Thurii", in Southern Italy. He was about forty years in the Greek cities to educate youth, and was the first who took the training wage, and in later life due to writing a book about rejection of the gods, was sentenced to death in Athens, and all his books were burned in the public square, but he fled, and in the way of Sicily was killed in the Black Sea about 411 BC. (Kerferd. G.B, 1997, 229) Protagoras was the most famous Sophists. Apparently, Plato declared him as the first person, who called himself Sophist and also received tuition for his training (a349Prot,). In fact, he could be named as the first and greatest Sophist that the human being and his life is his main concern. He not only did not limit his teaches domain to a particular topic, but also he was an expert in the art of rhetoric and the techniques it relies on, and also in some important issues such as education, law, ethics and politics (Gomperz, 1997, vol 1, 457). Very few fragments from Protagoras have survived, though he is known to have written two major works: Antilogiae and Truth. The latter is cited by Plato, and was known alternatively as The Throws. It began with the "man the measure" (ἄνθρωπος μέτρον) pronouncement. The first, "Antilogiae" which perhaps has been about Gods. It seems that the fundamental basis of charge of impiety to him is located in this work (On the Gods), the book that started with these words: "Concerning the gods, I have no means of knowing whether they exist or not or of what sort they may be, because of the obscurity of the subject, and the brevity of human life." (Kerferd, 1997, 229). Probably the value of ritual religious, the subject of this book, has been as a part of civilized life, or perhaps it has described different forms of beliefs and worships anthropologically, common among the various ethnic groups. 
The book called "Great Logos" has also been mentioned, that might have been "Truth", which also there are other several titles (Guthrie, 1997, vol 11, 191). Protagoras begins his book "Truth" with these words: "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not". This sentence became the basis of Protagoras' main philosophical thesis that causes friction and reaction of many others and it was because of his humanistic point of view. His intellectual and epistemological approach has become a foundation that he offered his philosophical insights on the basis of its centrality. 1. Individualism We could pursue the conversations of Socrates in Dialogue by Theaetetus to understand that the word "man" in Protagoras' sentence (man is the measure of all things) is an individual word or a practical one. Because Plato, as the first and oldest philosopher who discussed about Protagoras' opinions, had supposedly said of Socrates' talking to Theaetetus in this dialogue that: "Those individual things are for me such as they appear to me and for you in turn such as they appear to you -you and I being "man"." (Theaetetus, 152) Among these and many similar statements in the Epistle of Theaetetus, it certainly suggests that the Protagoras meant "a man". So we would be led to a kind of individualism. (Copleston, 1990, Vol 106, 1). On the other hand, Protagoras believes that: No one can say contradictory to another's saying or consider his saying as a wrong expression; because the only judge of human's feelings and beliefs is human himself, and till he regards these as true affairs, these are true for him; because he believes that anyone who makes a percept, it's just according to what he has understood, so it's truth; because the truth is not apart from what the man understands and as individuals perceive differently, so one person thinks it's right while the other thinks it's wrong and the third person suspect what is right or wrong; therefore it is true and false, International Letters of Social and Humanistic Sciences Vol. 24 13 right and wrong together (Motahari, 1994, Vol 1, 57). Because the nature affects everyone in his judgments (Below, 1997, 81). On the basis of what was mentioned, the criterion of right and wrong, truth and untruth is mankind, not the kind of man. Because some interpretations such as: the criterion of understanding is revealing a thing to man; and no one can bring contradictory to another's saying; and the uppermost is that a thing could be true and false together, could not be applicable for mankind. Therefore Protagoras considers mankind as the basis of everything including good and evil, presence and absence, quality and quantity, and thought and action; indeed the most prominent intellectual and epistemological basis of his is individualism and his emotion-oriented approach is also based on individualism. Sensationalism The most important debate that is about Protagoras' famous expression, and is also proposed in Socratic interpretations and criticisms, is the theory of sensory perception contained in it. The basis of this theory is whether the intent of Protagoras by "thing" in his famous sentence, "man is the measure of all things…", was only objects of sense or not. Protagoras considers feeling and consequently senses as the only way of knowing and connection with the universe. 
He believes that the man has nothing just his senses to perceive affairs; because the reasoning is also based on sensory degrees, and sensory perception is different among individuals; so having no choice but to take authentic whoever feels whatever; while man knows that everyone perceives everything in his own way and those affairs which are feel, are not fixed and immutable, but rather unstable and convertible. (Foroughi, 1994, 16) Some even likened him to John Locke and said: Protagoras considered the feeling as the only means of awareness and knowledge and never believed in truth beyond the feeling, and said that there is no absolute truth, but it is whatever that occurs for certain people in certain circumstances (Will Durante, 1992,vol 2, 400). He even believed to gain geometry definitions from the tangible world which have no reasonable and former principles and also no perfect truth by their own nature (Gompertz, 1994, vol 1, 472). As all the human feelings have rooted in different causes, in Protagoras point of view, are of equal value and rating. Nothing is superior to the other. In fact he considers the cause of equality of all people's feelings in the way that: in usual situation, a person perceives objects in a different way from in unusual situation. Thus the way of feeling is various according to age, sleep and waking, health and disease, and dementia. So how we can rate the feelings? Could we consider some as honest and true, but the others as false and virtual? Or as all feelings are natural, so their causes are out of hands and all of them are true; so everything is true (Mahdavi, 1995, 27). Protagoras believes that wholesome feeling and temperament is related to lucid ideas and sagacity, because he just calls one as a wise who has the most common temperament. This person has the most common, powerful and perfect ideas (Guthrie, 1994, vol 2, 20). Finally as we, human beings, are with different feeling and wisdom, so all objects proportionally are different, thus all perceptions are true and real (Takrini, 1979, 15). So according to what we described, it becomes clear that as he believes in the theory of sensory perception, therefore his purpose by "things" is sensory phenomena which are understandable by feelings. 3. Relativism Protagoras' claims in the fields, individualism and sentimentalism led to cognitive relativism. If there have been no fixed and objective substance, in other words, a constant affair in objects, then in addition to flux and changes of objects, these also would have happened frequent exposure to a broad general relativity of understanding the outside world. Because on the basis of attitude, obtained perceptions would be numerous for each person and so would be valid for him. Finally it could be acceptable to vote for one of perceptions by the consensus of people in one way of understanding, but it is impossible to consider other claims as false and untrue. Because there is no fault and everyone is right, but understandings' a bunch of people is better for some specific interests and it must be accepted. The final result of this approach is the proliferation of knowledge and skepticism. As every perception is real even mutual perception. Therefore no cognition can claim that it is monopoly in decoding the face of the world. On the other hand, a part of rhetoric training was that the students were taught to prove both side of a problem with equal success. 
He wanted to teach his students in a way that they could praise and denounce everything, and could certainly defend such weak arguments to appear to be stronger (DK: c2, a21). And also he said in "Antilogiae": "To everything there are two opposing arguments." Address training was not limited to form and style, but it was also a discussion of the nature and essence of the object. How such trainings could not indoctrinate this belief that every truth is relative and no one has certain knowledge? Indeed, it seems to be minor and variable, not general and constant, because truth for everyone is what convinces him and it is entirely possible to convince anyone that black is white. It could be to believe in, but never to know what the judge said. (Guthrie, 1994, Vol 10, 98-99). Socrates in (Euthydemus: 286b-c) says: "Protagoras, and even thinkers before him" believe that Contradiction is impossible, and concludes that they seem impossible for someone to say the wrong words. Aristotle says (Metaphysics, 1007b18): Those who accept the words of Protagoras must accept the idea that "It could be about a single object, the conflicting statements that simultaneously, both are true" and "You can prove anything on any subject or reject it". A little later in (1009a6), after referring to the denial of law of contradiction, he says: "Protagoras word is cognate with this idea, and either or both theories are true or both false; because if all phenomena and beliefs are right, However, everything must be right and wrong, because a lot of people are having the opposite opinion. " (Guthrie, 1994, Vol 11, 46-43). So Protagoras denies any absolute and fixed knowledge and justifies all fields of knowledge with the same base and in the discussion of recognition considers the relativity and skepticism. In fact the citation of relativity of knowledge is a kind of skepticism; it would seem that knowledge is not absolute in this idea. Indeed, we absolutely do not perceive the truth but in proportion to their cognitive abilities and forces (sensation, imagination, memory, reasoning, etc.) know the facts. So regardless of our cognitive faculties is not clear that how and what absolutely is the truth. Truth in our opinion is what we consider it (Saneii Darreh Bidi, 2006, 343). Thus cognitive relativism follows a kind of skepticism. This kind of skepticism was not only among the sensible things of this world, but it has also spread to the gods and their existence was suspected. As we have seen, Philosophical ideas of relativism, sensationalism and individualism throughout the history of philosophy, without any dispute, has been placed among the ideas of Protagoras and no research has been done about its accuracy and inaccuracy. But in the current era, one of the most important foundations of the human sciences is the research about the accuracy of expressed opinions in this area; by the research Cornford and Dr. Qavam Safari have done, they have expressed some commentary and discussions about the rejection of validity of the votes assignment from Plato to Protagoras, which we will analyze it. What we would discuss about in the following is part of the research into the epistemic relativism. In the contemporary philosophy of west, an important part of recent versions of epistemic relativism, more or less, are renewed efforts which have usually done within the framework of the revival of Protagoras in new terms and replication to the Socrates 'bugs. Protagoras time ever, some scholars have advocated epistemic relativism. 
And against them, many thinkers have not accepted this theory due to the problems they raised about it. Here we first present the discussion between Socrates and Protagoras (as reported by Plato) in the Theaetetus dialogue about relativism. The reason for selecting the Theaetetus is that in this conversation the main topic of discussion is the review and critique of theories of knowledge, and it is here that the roots of the attribution of epistemic relativism to Protagoras can be found. After reviewing some of the conversation, some contemporary accounts in defence of Protagoras will be considered.

PLATO EXPOSED TO CORNFORD AND DR. QAVAM SAFARI'S CHARGE

As we mentioned in the introduction, Dr. Mahdi Qavam Safari in the article "Plato's ahistorical interpretation of the humanism rule of Protagoras", and Cornford in his book "Plato's Theory of Knowledge", believe that Plato interpreted the "humanism" rule of Protagoras, by combining it with his so-called "secret" doctrine and Theaetetus' theory that "knowledge = perceiving", in a way that leads to a perfect and very sophisticated epistemic relativism; and then, in an ahistorical effort, Plato imposed this relativism on Protagoras' mind. Plato and, after him, many scholars and historians of philosophy have alternatively attributed epistemic relativism and subjectivism, or both, to Protagoras, despite dramatic differences among them. Guthrie also believes that Plato disagrees widely with Protagoras' beliefs and therefore may have interpreted his views, despite the possibility of different interpretations, in a way that makes those ideas seem completely irrational. This is especially true in the sense that Plato is not a historian of philosophy, and even when he attempts to narrate philosophical thoughts of the time before him, his narration is clearly not quite historical; it is a story that a philosopher, such as Plato, narrates, and so it is more the writing of philosophy than the writing of the history of philosophy. This is perfectly normal, and we should not expect to read the history of philosophy in the dialogues of Plato or, for example, in the works of Aristotle and Hegel and even Jaspers. However, the point of the matter is that, contrary to the idea of the Greek historian Lesky, disagreement with some of Plato's interpretations of the views and beliefs of his predecessors does not mean that we distrust him or regard him as a "liar"; it only means offering an interpretation different from Plato's interpretation where other possible interpretations exist. (Guthrie, 1962, 189) Plato attempts to review and criticize the theories of knowledge in the dialogue Theaetetus. The first theory is the theory of knowledge based on "knowledge is perceiving". As mentioned earlier, according to Plato's approach of analysis, he believes that Theaetetus' theory of knowledge is ultimately united with Protagoras' rule of humanism and Heraclitus' theory of becoming. (Plato, 1961, Theaetetus, 160e) It is as if Plato believes that Theaetetus' theory of knowledge is based on the epistemological theory of Protagoras (man is the measure of all things) and the cognitive theory of Heraclitus (the becoming and flux of objects). Plato tries to draw a connection between these three theories before the reader; so as soon as he poses the theory of "knowledge = perceiving", he considers it a different expression of Protagoras' humanism rule.
For relevance and connection of Theaetetus' theory of knowledge with humanism rule, he begins the subject with the example of "the same wind" and its different perceiving from those two men, and says: Socrates That you have given, but one which Protagoras also used to give. Only, he has said the same thing in a different way. For he says somewhere that man is "the measure of all things, of the existence of the things that are and the non-existence of the things that are not." You have read that, I suppose? Theaetetus Yes, I have read it often. Socrates Well, is not this about what he means, that individual things are for me such as they appear to me, and for you in turn such as they appear to you -you and I being "man"? Theaetetus Yes, that is what he says. Socrates It is likely that a wise man is not talking nonsense; so let us follow after him. Is it not true that sometimes, when the same wind blows, one of us feels cold, and the other does not? or one feels slightly and the other exceedingly cold? Socrates Then in that case, shall we say that the wind is in itself cold or not cold or shall we accept Protagoras's saying that it is cold for him who feels cold and not for him who does not? Theaetetus Apparently we shall accept that. Socrates Then it also seems cold, or not, to each of the two? Theaetetus So the hot and cold and in cases such as those, "appearing" is as "perceiving". Those individual things are for anyone such as those appear to them; or assume that they are such a thing. (Ibid, 152c-151e) As Qavam Safari says, the main source of attributing Relativism to Protagoras is this section of the dialogue of Theaetetus which here, of course, was posed with some removed parts. In his point of view, for many readers throughout the history, Plato's eloquent writing and powerful expression causes to be agree with his reasoning with no doubt, as Theaetetus does. Although Plato has wrote this part of dialogue to clarify the meaning of the doctrine "knowledge = perceiving" and thus, "Humanism", but his own writing is not less ambiguous than Protagoras' words. He also has attributed some subjects to Protagoras as he interprets his words that we believe, besides not coordinating together, these are made by Plato, himself. He also believes that Plato could have incorporate the doctrine of Protagoras with another doctrine of Heraclitus and also Anaxagoras, to conclude that it is not easy to attribute Subjectivism and even Relativism to him. Some specialists in Greek philosophy have also believed that Plato's result is that much real that they simultaneously have attributed both Relativism and Subjectivism to Protagoras. For example Guthrie believes that: The best title to describe Protagoras' standpoint is "radical subjectivism", because this title includes perceiving as well as thoughts and beliefs, unlike the title, "positivism" and "phenomenalism", and equally applied on sensible matters such as hot and cold, and also the concepts of true and false. In Guthrie's opinion, according to Protagoras' radical subjectivism, "there is no reality beyond phenomena and unrelated to them; there is no difference between appearance and existence" (Guthrie, 1969, 186). Therefore, Qavam Safari considers ambiguity and even amphibology in some parts of Plato's writing about Theaetetus, and believes that his results in this section are not acceptable without scrutiny. 
According to these two cases, Cornford accuses Plato of an untrue, ahistorical interpretation of the theory of Protagoras in his analysis of this part of Plato's dialogue in the book "Plato's Theory of Knowledge": firstly, based on the text of the Theaetetus dialogue; and secondly, based on the report of Sextus Empyricus.

A - Analysis and review of Cornford's opinion, based on the text of Theaetetus' dialogue: He translates a part of the passage 152b, which is the main passage at issue: "Socrates: Then in that case, shall we say that the wind is in itself cold or not cold or shall we accept Protagoras's saying that it is cold for him who feels cold and not for him who does not?" (Cornford, 1960, 32) In fact this statement is a disjunctive conditional proposition. In other words, Socrates and Theaetetus are faced with a dilemma, a choice between two different rules which at least cannot both be accepted together. Cornford has no comment about the first alternative (shall we say that the wind is in itself cold or not cold?) nor about the point that the whole phrase is a disjunction. His discussion of the second alternative, the second part of the statement, is what reveals Protagoras' idea. He believes that the statement "or shall we accept Protagoras's saying that it is cold for him who feels cold and not for him who does not" admits a variety of interpretations, among which are two possible ones: 1. "The wind is in itself cold or not cold." Heat and cold are two features which can co-exist in one and the same natural object; I feel one of these features and you another. 2. "The wind is in itself neither cold nor not cold." The wind has none of the features which we feel separately through sense, and the wind itself is not graspable by sense. The wind is something outside us which makes us feel cold or warmth. Our sensory qualities, such as cold and warmth, do not exist independently alongside natural objects; only when the act of perceiving occurs do the sensory qualities of cold and warmth come about. According to Cornford, of these two theories it is more likely that Protagoras claims the first and simplest interpretation, according to which the wind is cold and also not cold. The second interpretation (the wind is in itself neither cold nor not cold) is a main characteristic of the theory of perceiving which is promptly introduced as the "secret doctrine". A feature of this interpretation is that this doctrine is not found in Protagoras' book. The first theory "has not left the treatment of Naïve Realism to common praxis; the same attitude and mentality which does not hesitate that features perceived by our senses do not exist in objects themselves". (Ibid, 33-34) For the best result, we bring Cornford's reading and the second reading of the text together, to easily discover the similarities and differences between them: - Cornford's reading of the text: "shall we say that ``the wind is in itself cold or not cold`` or shall we accept Protagoras's saying that ``it is in itself neither cold nor not cold``". - Second reading of the text: "shall we say that ``the wind is in itself cold or not cold`` or shall we accept Protagoras's saying that ``it is in itself cold and also not cold``".
As we see there is only one difference between the two readings: The secondary theorem in Cornford's reading superintends on the general sentence, "refusal of the unity of opposites". Cornford has no rational and justified reason for his interpretation, but another one has become justified and reasonable according to logic criteria and considering two terms: loyalty to discontinuity (dilemma facing Socrates and Theaetetus); and adherence to primary provisions in the words of Plato's text. So by the existence of logic reasoning in the review of Cornford's reading the text of Theaetetus' dialogue revealed that Cornford's reading is nor loyal to primary theorem, available in content of the item 152b, neither compatible with detached mode (separate or paradox) means dilemma in front of Theaetetus. But the second reading contains both case and is in front of Cornford's reading from this perspective. So, Cornford's reading does not coordinate with items 158e-159e from Theaetetus' dialogue. Earlier we saw that Cornford claimed: it is likely that Protagoras asserts the first and simpler interpretation among two theories, based on that wind is warm and cold together. The second interpretation is a main characteristic and feature of the theory of perceiving which would immediately be suggested as a "secret training". This interpretation requires that named training could not be found in Protagoras' book." (Ibid, 35). As Cornford says, Plato ascribes the expression "refusal of the unity of opposites", which according to the theory of perceiving means only interacting active objects with passive senses, to Protagoras. But Cornford's elicitation is contrary to the explicit text of Theaetetus' dialogue; since Plato, according to secret training of perceiving in the items 158e to 159e, presents explanation of the theory of perceiving by using the analogy of "healthy Socrates and ill Socrates" and tasting wine. By this presentation, Plato talks explicitly of the theory of historical Protagoras about opposites co-exist on the unit of wine itself and the advent of each of the two on the senses of healthy and sick Socrates. Therefore, by ignoring the conflict of his reading with the named items, Cornford has committed the mistake and represented a distorted face of Plato. But the question that arises in this regard is whether these words are Plato's own description or the quote of Protagoras' own words? As we saw, Cornford was willing to consider Theaetetus' interpretation justly as a quote, and this idea of his led to contradiction between his own interpretation with the original text. Guthrie also says that "it should be a part of Protagoras' own reasoning". (Guthrie, 1969, 171). But Dr. Qavam Safari believes that it cannot be a quote, because firstly Plato's own interpretation in Theaetetus prior to that, is "and he meant something about…"; and then to discussed explanation, Socrates immediately asks Theaetetus, "Does he mean that?". This interpretation is also repeated in Cratylus and it is clear that if it was a quote, it was inapposite to represent this statement. Secondly, in 152a 4, Socrates tells Theaetetus: you have read Protagoras' words without any doubt, and he answers: I have read it "often". So if those words were quoted, Socrates did not need to ask Theaetetus that "Does he mean this?" after explaining the "purpose" of Protagoras. 
Thirdly, if that explanation were in fact a quotation, it would already be settled that the "man" of the homo-mensura rule refers to individual human beings and that Protagoras had each individual person in view. Those who treat it as a quotation should therefore never go on to ask whether the "man" of Protagoras' rule is the particular man or man in general, or to suggest that he was unaware of the difference between them. Thus, in Qavam Safari's view, there is no evidence for treating those explanations as a quotation, and nothing in Plato's manner of writing obliges us to take them as Protagoras' own words. Protagoras' own statement is accordingly confined to the rule of "humanism" (homo mensura) and remains ambiguous; the explanations should be regarded as Plato's interpretation of Protagoras' words. If so, an important result follows: it is possible to offer another interpretation of Protagoras' words in place of Plato's, and Plato's interpretation is not the only possible one. This result bears directly on how Protagoras' words are to be understood, for, as we have seen, they can be understood in another way.

B - Analysis and review of Cornford's opinion on the basis of Sextus Empiricus' report: Sextus Empiricus, the Greek skeptic philosopher of the third century AD, discusses the theory of Protagoras in the Outlines of Pyrrhonism. In "Plato's Theory of Knowledge", Cornford quotes an important passage from Sextus, which we reproduce here: "Protagoras says that the grounds of all manifestations and appearances lie in matter itself. The object is therefore something independent and is capable of being all the things that appear to all men. Men apprehend different things at different times according to the changes in their circumstances and conditions: a person in a normal state perceives those aspects of an object which can appear to a normal man, and a person in an abnormal state perceives those which can appear to a man in that condition. The same rule holds for the different ages of life, for sleeping and waking, and for every other condition. Thus, according to Protagoras, man is confirmed as the criterion of what exists: whatever appears to a man exists in the way it appears to him, and whatever appears to no one does not exist." (Cornford, 1960, 35, quoting Sextus' Outlines of Pyrrhonism). According to this report, the main elements of Protagoras' thought include:

1 - Every thing contains within itself the grounds of all the manifestations and appearances that can appear to human beings.
2 - The states and qualities of external objects are able to "appear" to the human senses in every condition (the issue of the representation of the senses), whether normal (i.e. healthy) or abnormal (e.g. illness).
3 - The variation in what appears is subject to factors such as the state of the senses, the multiplicity and change of objects, and differences of time (the importance of the interaction of external things with the senses).
4 - Whether a thing is or is not, and how it is or is not, depends on its appearing or not appearing to the human senses; as a result, man is the criterion of the being and not-being of things.
5 - The principle of causality, and the admission that the senses are connected with reality through their co-operation with external objects, are likewise among Protagoras' basic assumptions.
Cornford believes that the historical Protagoras, like Heraclitus and Anaxagoras, held the doctrine of the co-existence and unity of opposites, whereas Plato presents Protagoras as a proponent of the denial of the unity of opposites (ibid., 33-34). In view of the points made above, however, there is no conflict between what follows from Sextus' report and Plato's interpretation of Protagoras' thought. Taking Sextus Empiricus' report as an impartial source, the text of 152b confirms that Plato introduces the true Protagoras; it is Cornford's self-made Plato that Sextus' report contradicts. The real difficulty lies in Cornford's inaccurate reading of 152b in the Theaetetus, a reading through which Plato is made to present Protagoras as a proponent of the denial of the unity of opposites, although this hasty and unreflective treatment is Cornford's own. Cornford thus constructs a Plato who distorts Protagoras' theory, and then turns on him and accuses him of giving a false interpretation of Protagoras. Qavam Safari replies to this criticism as follows: supposing these statements are what Protagoras meant, can he be counted a relativist merely on that account, or not? He then analyses the question. In his view, after quoting these words of Sextus, Cornford rightly adds that "If this is Protagoras's view, his doctrine would not be subjectivism; even the term 'relativism' is also dangerously misleading" (ibid., 35). On Sextus' interpretation, the objects, that is, the traits and characteristics, exist whether anyone perceives them or not. Cornford therefore correctly concluded that "the result is that the second view, that so long as no one perceives the wind it is neither cold nor hot, is the interpretation Plato arrives at from Protagoras' vague words" (ibid., 36). In other words, that view is an interpretation which Plato reaches by way of the flux "secret doctrine". Qavam Safari then notes Guthrie's opinion, which runs against this interpretation. Guthrie believes that Cornford has quoted Sextus' words incompletely, because in the sentence preceding the quoted passage Sextus ascribes to Protagoras the doctrine that "matter is in flux". Guthrie writes: "This definitely belongs to the 'secret doctrine', and when Sextus tries to go beyond the theory of 'humanism' and its explicit essentials, his testimony on Protagoras' own thought is discredited" (Guthrie, 1969, 185). From Qavam Safari's point of view, however, three points need to be noted about Guthrie's claim. First, Cornford is not unaware of the sentence Guthrie has in mind, in which Sextus ascribes flux to Protagoras; indeed, shortly after the passage under discussion he quotes and interprets it: Sextus in fact says that Protagoras held that "matter is in flux", that as it flows additions continually arise in place of the effluxes, and that our perceptions are modified according to the different ages of life and states of the body; and those words, Cornford remarks, may mean no more than that what the body loses is made good by nourishment (Cornford, 1960, 35).
It should also be noted that Cornford documents his remarks about nourishment making good the body's losses by reference to Symposium 207d and Phaedo 87d, and adds that Sextus' source for attributing the flux doctrine to Protagoras is unknown; probably Sextus was himself misled by Socrates' report, in which Protagoras is presented as a follower of the flux doctrine. Second, suppose Sextus did have a source independent of Plato's Theaetetus for attributing the flux doctrine to Protagoras. On that assumption, and contrary to Guthrie's conclusion, we reach the result that the flux doctrine by itself, contrary to Plato's interpretation, does not force us to read Protagoras' statement of humanism as epistemic relativism or, worse still, as "radical subjectivism": we could regard him as a believer in the flux doctrine while declining to attribute to him either epistemic relativism or radical subjectivism. Third, if presenting the secret doctrine of flux amounts, as Guthrie says, to going beyond humanism and its explicit essentials, and therefore discredits a person's testimony about Protagoras' main thought, it is astonishing that Guthrie does not apply the same point to Plato and does not say that Plato is the first to go beyond Protagoras' "humanism" and "its explicit essentials" by presenting the flux doctrine as a secret doctrine and attributing it to him, and that in doing so Plato "discredits his testimony about Protagoras' noble thoughts". Qavam Safari therefore sides with Cornford in rejecting this criticism and persists in his claim that Plato is to be charged with attributing epistemic relativism to Protagoras. He also offers reasons of his own, beyond Cornford's, in support of this charge; a few of them may be mentioned briefly here. In his opinion, even if we take the word phainetai to mean "appears to the senses", the time at which the rule was declared, considered in its historical context, does not make such a reading acceptable. The equation Plato sets up among the sensory data is acceptable only if, first, the distinction between a substance and its qualities is drawn and, second, the possibility of the co-presence of opposite qualities in the unity of one substance is rejected. These are the two conditions of "phainetai = appears to the senses", and interpreting Protagoras' rule as epistemic relativism requires that they be fulfilled. But, contrary to the actual course of the history of the development of concepts, attributing a clear grasp of those two conditions to Protagoras, and attributing to him the condition "phainetai = appears to the senses", are surely untrue interpretations in the absence of supporting evidence. No one is entitled to judge the beliefs of others in terms of certain concepts while having no clear idea of whether those concepts were even present or absent for them, let alone to attribute to them the logical implications of a restricted and specific meaning of such a concept. Plato, however, not only attributes a specific meaning of the word phainetai to Protagoras, but in criticizing his rule does not even remain faithful to the meaning he has himself attributed to him. The reason for this disloyalty is that, in criticizing the rule, Plato goes beyond the semantic field of "appearing to the senses": not merely in wording, but in meaning and conception.
Plato himself shows that he also has the doxastic sense of "seeming" (dokei) in mind: if Protagoras' "Truth" is really true, and each person is the sole criterion for judging his own opinions and beliefs, then Protagoras has no right to consider himself wiser than others and to teach them, while we, for our part, would have to regard ourselves as less wise than he is and attend his classes. If Protagoras' rule is not a joke, then "the whole business of dialectic" (161e6) would be "an amazing and tedious display of foolishness" (162a1). On the other hand, we should also remember that when Socrates wanted to make Protagoras' meaning clear, he used the example of the wind: the same wind, he said, may be felt as cold by one person and as warm by another. Passing from this example, which plainly suggests the sensory sense of phainetai, to "the whole business of dialectic", which just as plainly involves the doxastic sense of phainetai, in other words the sense of dokei, is an important and telling sign of Plato's incoherence in reporting and criticizing humanism. We may then agree with Jonathan Barnes that "Plato is not consistently faithful to the phenomenological interpretation offered at Theaetetus 152a" (Barnes, 1989, 543). Qavam Safari examines this disloyalty from another angle. In his view, given the example of the wind, man's being the measure is restricted to the predicate alone. The wind blows; one person feels it cold, another feels it not cold. The first says "the wind is cold", the second says "it is not cold". The wind here is a single thing, and what is at issue is how it is to be qualified. If we confine ourselves to what has been said so far, we must state Protagoras' rule, contrary to the view of some commentators, as: "Man is the measure of all things: of how the things that are, are; and of how the things that are not, are not." That is, if I feel the wind to be cold, I am the measure of the wind's being cold; and if I feel it to be not cold, I am the measure of its being not cold. On this presentation the rule concerns only the predicate, and nothing more should be expected of it. Aristotle too, as Barnes points out (ibid., 544), accepts the predicative interpretation that Plato draws from the example of the wind. In discussing the consequence of Protagoras' words, Aristotle also states that the simultaneous predication of opposites of one and the same subject is impossible; yet if Protagoras' thesis is accepted, the impossible result follows that the predication of two opposites becomes possible: "if anyone thinks that the man is not a trireme, then he is not a trireme, but..." (Aristotle, Metaphysics Gamma 4, 1007b23). Thus Plato, by the example of the wind, limits man's being the measure merely to the predicate, and this is not compatible with the phenomenological interpretation presented in the Theaetetus. So Qavam Safari, appealing to the reading "phainetai = appears to the senses" and to Plato's disloyalty to the doctrines stated in his own dialogue, has argued that Plato's interpretation of Protagoras' humanism cannot be a historical interpretation. Alongside Cornford, and on the strength of the reasons recalled above, he accuses Plato of forging an ahistorical Protagoras, holding that Plato has imposed certain ahistorical ideas on Protagoras' mind.
They also believe that Plato, by linking humanism to his own favoured doctrine, which he calls the secret doctrine of Protagoras, and by harmonizing that rule and that doctrine with Theaetetus' thesis that "knowledge = perception", succeeds in presenting that interpretation, gradually extends it from qualities to essence, and so establishes the most advanced form of epistemic relativism. As these two commentators see it, then, the historical Protagoras is different from the Protagoras whom Plato has introduced.

CONCLUSION As we have seen, a variety of critiques and interpretations of Plato's dialogue and of his criticism of Protagoras have been put forward by major thinkers in the world of philosophy. None of these critiques has conclusively established the charge that Plato attributed untrue ideas to Protagoras, and there is still need for debate and for responses to them. What is encouraging, however, is that these critiques and interpretations open new doors, prompting further and more exact readings of Plato's dialogues in order to reach new philosophical insights in them.
9,679.6
2014-03-17T00:00:00.000
[ "Philosophy" ]
A bound on the charm chromo-EDM and its implications

We derive bounds on the electric and chromo-electric dipole moments of the charm quark. The second turns out to be particularly strong, and we quantify its impact on models that allow for sizeable flavour violation in the up quark sector, like flavour alignment and Generic U(2)^3. In particular, we show how the bounds coming from the charm and up CEDMs constrain the size of new physics contributions to direct flavour violation in D decays. We also specialize our analysis to the cases of Supersymmetry with split families and composite Higgs models. The results presented in this paper motivate both an increase in experimental sensitivity to fundamental hadronic dipoles and a further exploration of the SM contribution to flavour-violating D decays.

Introduction

Electric dipole moments (EDMs) set stringent bounds on the CP structure of any new physics (NP) which becomes relevant at energies not far from the Fermi scale. An interesting question to ask is if and how one can exploit the current and foreseen experimental reach to constrain the flavour structure of such NP as well. This issue becomes particularly relevant when the NP energy scale associated with the third generation is much lower than the one associated with the first two. This situation is typical of models which aim at evading collider and precision bounds while keeping the Fermi scale as natural as possible. In this class of theories, the new degrees of freedom related to the third generation often mediate the dominant contributions to the dipole moments of the light quarks. For quarks of the first generation, this immediately translates into a contribution to the EDMs of nucleons and nuclei. In this case, the non-observation of those EDMs sets bounds on flavour-violating parameters relating the first and the third generation. If the second-generation quarks were also found to give relevant contributions to the EDMs of nucleons and/or nuclei, then one could also constrain flavour violation between the second and third generation. In this paper we show that this is actually possible, by computing the contribution of the charm chromo-electric dipole moment (CEDM) to the neutron EDM. We also show that the bound one derives in this way has interesting consequences for the flavour-violating phenomenology of some models. The current and foreseen experimental sensitivities to the electric dipole moments of the neutron, deuteron and mercury are summarized in Table 1, which collects the current bounds (90% C.L. for d_n, 95% C.L. for d_Hg) and the expected future sensitivities, of order 10^-28 [4-8], 10^-29 [9] and 10^-30 [3] e cm, for these systems. The quoted projection for d_n is expected to be reached within a few years by more than one experiment, the one for d_Hg by an upgrade of the same apparatus that sets the current bound. On the other hand, the experiment aiming at the measurement of d_D is still in the proposal stage [3]. In the SM all the EDMs and CEDMs vanish exactly at the two-loop level [10]; the three-loop contributions have been evaluated in [11,12] and, e.g. for the down quark, yield the estimate d_d ~ 10^-34 e cm. The neutron EDM is however dominated by long-distance effects, the most recent estimate of which [13] gives d_n ~ 10^-31 e cm. This number is well below current and foreseen experimental sensitivities. Therefore d_n remains a genuine probe of physics beyond the Standard Model. This paper is organized as follows.
In Section 2 we derive bounds on the electric and chromo-electric dipole moments of the charm quark. In Section 3 we discuss their implications for various NP models, both from an effective field theory (EFT) point of view (Sec. 3.2) and in the specific cases of Supersymmetry (Sec. 3.3) and composite Higgs models (Sec. 3.4). We summarize and conclude in Section 4.

Bounds on the charm quark dipole moments

In terms of fundamental dipoles, the electric dipole moments (EDMs) of the neutron [14], deuteron [15-17] and mercury [18] read as in Eqs. (2.1) and (2.2), where d_{u,d} and d̃_{u,d} are respectively the EDMs and CEDMs of the up and down quarks, d_e is the electron EDM, and w is the coefficient of the Weinberg operator. For q = u, d, s, c, b, t, these are defined via the phenomenological Lagrangian (2.3)-(2.4), with the convention ε_{0123} = 1. The expressions (2.1) and (2.2) assume a PQ symmetry to get rid of the θ term. Dropping this assumption would not only introduce a strong dependence on θ, but also modify the dependence on the CEDMs: the linear combination of CEDMs affecting the EDMs would change, but not the order of magnitude of their impact [14]. In studying the implications of the d_n bound, in the rest of this paper we will conservatively use the values 0.5 and 12 MeV, respectively, for the coefficients (1 ± 0.5) and (22 ± 10) MeV in Eq. (2.1). A recent reevaluation of the neutron EDM [19] obtains a smaller value than the one used here, namely d_n = 0.79 d_d − 0.20 d_u + e(0.59 d̃_d + 0.30 d̃_u) (PQ-symmetric case, w contribution ignored); the difference stems from evaluating one parameter on the lattice instead of with QCD sum rules. For the mercury EDM, see also the recent error estimate of [20], which makes the impact of the quark CEDMs compatible with zero.

The Weinberg operator in (2.4) mixes via renormalization group (RG) evolution into the quark EDMs and CEDMs, while the converse is not true. However, when in the running from high to low energies a quark q is integrated out, its CEDM gives the one-loop threshold correction (2.5) to the Weinberg operator [21-23], where all the parameters are evaluated at the mass of the quark. The uncertainty from going to higher loops in (2.5) can be estimated to be at the level of 8 α_s(m_q)/(4π), about 25% for q = c, where the 8 is a colour factor. The subsequent running then makes the dipole moments of the lighter quarks sensitive to d̃_q as well. In terms of the charm CEDM evaluated at the scale m_c, the coefficient w and the dipoles d_{u,d}, d̃_{u,d} at the hadronic scale of 1 GeV read as in Eq. (2.6). In deriving (2.6) we have used the one-loop running from [24,25]. The relevant two-loop running of the Weinberg operator is, to our knowledge, unknown. Moreover, in the extraction of a bound on d̃_c, the impact of the up and down EDMs is subleading with respect to that of w. This is evident upon inserting (2.6) into (2.1) and (2.2), and makes the known two-loop running unnecessary. The experimental bound on d_n then implies (2.7) or, equivalently, m_c |d̃_c| ≲ 6.7 × 10^-9. This is to be compared with the previous and only bound existing in the literature, |d̃_c| ≲ 3 × 10^-14 cm, obtained from ψ′ → ψ π+π− at the Beijing spectrometer [26]. As already said, the bound (2.7) comes mainly from the direct contribution of w to d_n. The mercury EDM bound thus yields a much weaker constraint on d̃_c than the one set by d_n. An analysis analogous to the one performed here can also be carried out for the bottom and top CEDMs, as was done in [21] and [27].
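As a quick numerical cross-check of the two ways this CEDM bound is quoted (the dimensionless combination m_c |d̃_c| ≲ 6.7 × 10^-9 above, and the numerical value 1.0 × 10^-22 quoted in the Summary in the units used there), one only needs ħc to trade an inverse mass for a length. The short sketch below is ours, not the authors' code, and the charm mass value m_c(m_c) ≈ 1.27 GeV it assumes is not stated in the text.

```python
# Minimal consistency check (not from the paper's code): convert the dimensionless
# bound m_c * |dtilde_c| <~ 6.7e-9 into a length, trading GeV^-1 for cm via hbar*c.
# Assumption: m_c(m_c) ~ 1.27 GeV, which is our input and not quoted in the text.

HBARC_GEV_CM = 1.973e-14   # hbar * c in GeV * cm
M_C_GEV = 1.27             # assumed charm mass at the scale m_c

dimensionless_bound = 6.7e-9                     # m_c * |dtilde_c|, Eq. (2.7)
dtilde_c_cm = dimensionless_bound / M_C_GEV * HBARC_GEV_CM

print(f"|dtilde_c| <~ {dtilde_c_cm:.2e} cm")     # ~1.0e-22, matching the Summary value
```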
As a cross-check of our derivation, we verified that our procedure reproduces their results. The indirect constraints on the charm EDM are weaker. They can be derived both from the mixing of d_c into d_d via electroweak running, and from the d_c contribution to B → X_s γ. In the first case, using [28] for the running and the bound on d_n, one obtains the bound (2.8), where again d_c is evaluated at the charm mass scale. In the case of B → X_s γ, the contribution of d_c is relevant since it has the same loop and CKM suppressions as the Standard Model one (|V_cb| ≃ |V_ts|). To derive it, we use [29] for the charm dipole contribution to the Wilson coefficient C_7γ, and [30] for the dependence of BR(B → X_s γ) on C_7γ. In explicit models one generically expects a charm magnetic dipole moment of size similar to d_c to be generated. However, the sensitivity of C_7 to it is more than one order of magnitude smaller than that to d_c, and we ignore it here for simplicity, as we do with other possible NP contributions.

Implications for New Physics

It can be convenient to express NP contributions to the EDM and CEDM of a given quark q in terms of the high-scale effective Lagrangian (3.1), where c_q and c̃_q are coefficients of order one, and ξ_q are suppression factors, all depending on the specific model and in principle complex. With these definitions, the quark EDMs and CEDMs read as in Eq. (3.2).

Size of the bounds in EFT

Imposing d_n < 2.9 × 10^-26 e cm and considering one operator at a time in (3.1), for Λ = 1 TeV we find the bounds reported in Eqs. (3.3)-(3.7), where all the coefficients are evaluated at the scale Λ = 1 TeV. Notice that the four-fermion operator contributions to the EDMs [33] have been ignored. Given the uncertainties present in casting the bounds, this approximation is justified in those models where such operators are not enhanced with respect to the dipole ones. This happens for example in Supersymmetry, where they arise at loop level, or in composite Higgs models with partial compositeness, where they appear at tree level but their coefficients are further suppressed, with respect to those of the dipole operators, by an extra light-quark Yukawa coupling.

Interplay with bounds from flavour-violating processes

The new bound we derived can be relevant for models allowing for sizeable flavour violation in the right-handed up-quark sector, while at the same time providing a large splitting between the energy scales associated with the third and the first two generations of quarks. Such a scenario is favoured by naturalness arguments when combined with current direct NP searches, and is consistent with data because the stronger constraints on flavour violation come from processes involving down quarks. Explicit realizations are models of flavour alignment (see e.g. [34]), composite Higgs models (CHM) with an anarchic flavour structure, or Generic U(2)^3 [35]. In such models, measurements of CP asymmetries in processes like D → ππ and D → KK are among the most stringent probes of flavour violation in the up-quark sector. This is true in particular for chromo-magnetic dipole operators of both chiralities, which are instead less efficiently constrained by D-D̄ mixing or ε_K [36]. We write the high-scale effective Lagrangian contributing to such processes as in Eq. (3.9), where c_D and c_D' are coefficients of order one, and ξ_8 and ξ_8' are suppression factors, all depending on the model and in principle complex. The most recent measurement of the CP asymmetry in D decays is that of [37], which yields the world average [31] ∆A_CP = (−3.29 ± 1.21) × 10^-3.
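The world average just quoted combines the individual ∆A_CP measurements. As a generic illustration of how such averages are typically formed (inverse-variance weighting; the inputs below are made-up placeholders, not the measurements actually entering the average of [31]), one can write:

```python
import math

# Inverse-variance weighted average (illustrative only; the inputs are placeholders,
# not the actual Delta A_CP measurements entering the world average of ref. [31]).
def weighted_average(measurements):
    """measurements: list of (central_value, uncertainty) tuples."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

example = [(-4.5e-3, 1.5e-3), (-1.5e-3, 2.0e-3)]   # hypothetical inputs
print(weighted_average(example))
```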
The Standard Model contribution could possibly account for such a value; however, its determination is still the object of intensive discussion, see e.g. [38-41]. Our approach is therefore to require the NP contribution to be smaller than the average central value: following the analysis of [36] and considering one operator at a time, we find that this implies, for Λ = 1 TeV, the bound (3.10). It is important to keep in mind that the above bound is plagued by O(1) uncertainties due to the poor knowledge of the matrix elements of O_8 and O_8'. As stated in the introduction, we are interested in models where the degrees of freedom associated with the third generation are those giving the dominant contribution to the operators in (3.1) and (3.8). This translates into the assumptions (3.11) and, for the dipole moments, (3.12), where W^L_i3 and W^R_3i are flavour-violating parameters that quantify the communication between the i-th generation of quarks and the new degrees of freedom associated with the third generation. For instance, in Supersymmetry with dominant gluino contributions, they are the matrices in flavour space entering the gluino-quark-squark vertices. Notice that everywhere Λ is the energy scale associated with the third-generation quarks, and that the phases of the parameters in (3.11) and (3.12) are flavour-violating ones.

• The first important observation is that ξ_u ξ_c = ξ_8 ξ_8'. In the absence of a direct constraint on ξ_c, it was the bound from ∆A_CP that set the stronger constraint on that combination of parameters. Now, as one can see by taking the product of (3.3) and (3.5), and of (3.10) with itself, the EDM of the neutron already sets the stronger bound, by a factor of ∼60. This conclusion will be strengthened by the foreseen experimental sensitivities, in the absence of improvements in the understanding of the SM contribution to ∆A_CP.

• The above generic situation can be specialized to the case W^L_q3 ∼ V_qb, with V the CKM matrix, as typical of models of alignment. We now assume, for simplicity, maximal phases and all the O(1) coefficients equal to one. In this case, the bounds from ∆A_CP and those from the charm and up CEDMs lead, respectively, to the requirements collected in (3.14), where again we have chosen a NP scale Λ = 1 TeV and considered one operator at a time. Without the contributions from the charm CEDM computed in this paper, one could have saturated the measured value of ∆A_CP without conflicting with the EDM constraints, by requiring a very small W^R_3u, see e.g. [42]. Now this possibility is challenged and, with the foreseen experimental sensitivities, in these models the neutron EDM will become by far the most powerful observable to probe the flavour-violating parameters in (3.11) and (3.12). This conclusion would be strengthened by more than an order of magnitude (amounting to a ∼10^3 better sensitivity to |W^R_3c| with respect to ∆A_CP) if the deuteron EDM is measured with a precision of ∼10^-29 e cm. We stress that all these bounds should be considered as O(1) limits, barring fine-tunings of the unknown coefficients and overall phases in front of the operators considered here. This implies, for example, that one could formally make the phases entering the CEDMs small, so as to satisfy the bounds (3.3) and (3.5), while keeping larger the phases relevant to ∆A_CP, thus evading the above conclusion.

• In Generic U(2)^3 models, the suppression factors are expressed in terms of parameters ε_u < ε_c < 1 related to the breaking of the U(2)^3 symmetry in the right-handed quark sector.
In the case of maximal phases and O(1) coefficients equal to one, again considering one operator at a time, the bounds from ∆A_CP and those from the charm and up CEDMs translate into corresponding limits on ε_c and ε_u. As before, the EDMs are starting to become more sensitive to the parameter ε_c than direct CP violation in charm decays, and will become the best observable to probe the amount of U(2)^3 breaking in the right-handed up-quark sector. In this scenario, the flavour symmetry imposes the following relations among the O(1) complex coefficients: c_D = c̃_c and c_D' = c̃_u (see Appendix A.2 of [35]). Thus, contrary to the previous case, in Generic U(2)^3 it is not possible to play with the order-one parameters and phases to avoid the above conclusions.

A remark is in order to avoid possible confusion. In [43], direct CP violation in D meson decays was related to the neutron EDM. The result was that the same ∆C = 1 operators that induce ∆A_CP at a level compatible with the measured value also induce a contribution to d_n. That contribution is obtained from long-distance effects at tree level, in analogy with the same authors' estimate of the dominant SM contribution [13], and its size is at most one order of magnitude below the current experimental sensitivity (and now even smaller, in light of the new ∆A_CP measurement). Here we pursue a different analysis: we identify a class of models where a sizeable contribution to ∆A_CP is accompanied by flavour-conserving CP-violating operators, and study the impact of the latter on d_n, which was not considered in [43]. The contribution to d_n that we find in these explicit models is more than an order of magnitude larger than the model-independent one obtained in [43].

Supersymmetry with split families

Split-families SUSY (often referred to as "Natural SUSY") [44-47] is an explicit realization of the situation described in the previous section. The dominant contributions to the Wilson coefficients of the relevant dipole operators come from gluino-squark loops, cf. Eq. (3.18) below. In the suppression factors ξ_8, ξ_8' and ξ_q, the elements W are those of the mixing matrices entering the gluino-quark-squark vertices of the respective chirality, which are responsible for the flavour violation. Fixing for illustrative purposes m_g̃ = 2m_t̃ and assuming maximal phases, the bounds from the CEDMs of the up and charm quarks read as in Eq. (3.20), to be compared with the ones coming from ∆A_CP, Eq. (3.21). Choosing instead m_t̃ = m_g̃, one would obtain bounds weaker by a factor of ∼1.3. In split-families SUSY one can improve the robustness of the previous bounds by taking into account all the dominant contributions to d_n. Under some assumptions that will be discussed, it is in fact sufficient to add the up electric dipole moment d_u to the previous picture. To see this, let us first consider the bounds on the top and bottom CEDMs, (3.7) and (3.6). Again the supersymmetric contribution to them is dominated by gluino-squark loops, and it reads as the one in Eq. (3.18), with the appropriate squark mass and mixing substitution in the bottom case. In addition, the suppression factor for the top case reads W^R_tt W^L_tt, the one for the bottom (y_b/y_t) W^R_bb W^L_bb. The bounds (3.6) and (3.7) then translate into mild requirements on these combinations, so that it is safe to neglect the top and bottom CEDM contributions to d_n for values of the matrix elements of order one. Let us now come to the contributions from the down-quark EDM and CEDM. First notice that, with respect to the up-quark (C)EDMs, they are suppressed by a bottom Yukawa coupling, being proportional to (y_b/y_t) W^L_bd W^R_bd.
Also, the ε_K parameter constrains the size of the combination W^L_bs W^R_bs W^L_bd W^R_bd to be much smaller than the corresponding one in the up sector, if sbottoms and stops have similar masses. In light of these observations, we assume a negligible down-quark contribution to the neutron EDM. One is then left with d_u, d̃_u and d̃_c as the dominant contributions to d_n. The coefficient of the up-quark electric dipole moment takes an analogous form in the notation of (3.1). The neutron EDM can then be written in the compact form (3.25), where s_u and s_c are the sines of the phases of W^R_tu W^L_tu / V_ub and W^R_tc W^L_tc / V_cb respectively. The sign ambiguity in the contribution of the Weinberg operator to d_n can be reabsorbed in the sign of s_c. Assuming for simplicity the same matrix elements for the operators O_8 and O_8', one can cast ∆A_CP in the analogous form (3.26), where s_8 and s_8' are the sines of the phases of W^R_tc W^L_tu / V_ub and W^R_tu W^L_tc / V_cb respectively. Also, the presence of a flavour-blind phase in the mixing can easily be reabsorbed in the definitions of s_u,c, s_8 and s_8'. In Figure 1 we show the bounds on d_n and ∆A_CP in the |W^R_tc|-|W^R_tu| plane, for m_g̃ = 2m_t̃ = 1.5 TeV and (A_t − µ/tan β)/m_t̃ = 1. For illustrative purposes we assume all the phases to be maximal, and the left-rotation W^L elements to be equal in magnitude to the respective CKM ones. The generalization to the case of deviations from these reference values can easily be read off Eqs. (3.25) and (3.26). At present, for the reference values of the parameters in Eqs. (3.20) and (3.21), the right charm-stop mixing angle θ^R_ct (with W^R_tc ≃ cos θ^R_ct sin θ^R_ct) is not strongly constrained. In particular, values of |W^R_tc| ∼ 0.3 would both weaken the experimental lower bounds on the stop mass and mildly reduce fine-tuning [48]. The projected sensitivities to EDMs shown in Table 1 allow one to infer the impact of near-future experimental searches: if flavour-violating phases are not significantly suppressed, a negative result at those experiments would reduce the allowed range for the charm-stop mixing by roughly two orders of magnitude. One could wonder whether contributions from the exchange of squarks of the first two generations could interfere with the above ones and thereby affect the bounds we derived. Those contributions are suppressed by a factor y_u,c/y_t evaluated at the high scale, but are at the same time CKM-enhanced if one normalizes the left mixing matrices W^L consistently. Because of this, and of the bound in (3.20), the contribution to d̃_u from a scharm circulating in the loop is potentially the largest one. We checked that, for the reference values of m_g̃ and m_t̃ that we chose, a scharm mass m_c̃ ≳ 5 TeV is compatible with the bound on d̃_u for Im(W^R_cu) as large as 1. A smaller value of m_c̃ would imply a stronger bound on Im(W^R_cu), and would in general affect the bound on |W^R_tu| by O(1). Thus it would affect the vertical axis of Fig. 1, but it would not change the impact, on this picture, of the newly derived bound on d̃_c. Its impact would of course be changed by a modification of the bound on |W^R_tc|. We checked that the same lower bound m_c̃ ≳ 5 TeV implies that the contribution to d̃_c is dominated by the third-generation diagram as long as Im(W^R_tc) ≳ 10^-2 Im(W^R_cc). Thus, with these values of the masses, effects of the first-two-generation squarks would start to become relevant for the future reach of EDM experiments, if EDMs are still measured to be consistent with zero and if Im(W^R_cc) ∼ 1.
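The compact forms (3.25) and (3.26) depend on the flavour-violating mixings only through their moduli and the sines s_u, s_c, s_8, s_8' defined above. A minimal sketch of evaluating those sines from the stated definitions is given below; the loop functions and overall prefactors of (3.25)-(3.26) are not reproduced here, and all numerical inputs are placeholders rather than values used in the paper.

```python
import cmath
import math

# Sines of the CP-violating phases entering Eqs. (3.25)-(3.26), as defined in the text:
#   s_u  = sin(arg(W^R_tu W^L_tu / V_ub)),   s_c  = sin(arg(W^R_tc W^L_tc / V_cb)),
#   s_8  = sin(arg(W^R_tc W^L_tu / V_ub)),   s_8' = sin(arg(W^R_tu W^L_tc / V_cb)).
# All numbers below are placeholders, not inputs used in the paper.

def phase_sine(numerator: complex, ckm: complex) -> float:
    return math.sin(cmath.phase(numerator / ckm))

W_R_tu, W_R_tc = 1e-3 * cmath.exp(1j * 1.2), 0.1 * cmath.exp(1j * 0.8)   # placeholder right mixings
W_L_tu, W_L_tc = 3.5e-3 + 0j, 4.1e-2 + 0j                                # placeholder left mixings
V_ub, V_cb = 3.5e-3 * cmath.exp(-1j * 1.2), 4.1e-2 + 0j                  # placeholder CKM entries

s_u = phase_sine(W_R_tu * W_L_tu, V_ub)
s_c = phase_sine(W_R_tc * W_L_tc, V_cb)
s_8 = phase_sine(W_R_tc * W_L_tu, V_ub)
s_8p = phase_sine(W_R_tu * W_L_tc, V_cb)
print(s_u, s_c, s_8, s_8p)
```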
Finally, notice that we have neglected the contribution that would come from CP violation in the Higgs and gaugino sectors, which in any case would also be constrained by the bound [49] on the electron EDM (see e.g. [50]).

Composite Higgs models

It is interesting to see how the new bound on the charm CEDM impacts composite Higgs models [51-54], as a concrete realization of a dynamical suppression of flavour-violating processes. We will in fact stick to partial compositeness [55] as a way to give masses to the SM quarks and, at the same time, to suppress flavour-changing neutral currents. We consider a simplified two-site picture, in the spirit of [56], including one composite resonance for each SM boson and fermion field. For the purpose of understanding the rest of this section, it is sufficient to define the phenomenological Lagrangian (3.27) for the strong sector, and a corresponding one for the mixing of the composite fermions F with the elementary ones f. Here H denotes the Higgs doublet and ρ_µ the composite vectors. Indices in flavour space are understood for the mixings λ_f, as well as for the composite Yukawas and fermion masses Y_F and m_F. A sum over all species of fermionic and gauge fields is also understood. The mixings can always be brought to diagonal form and rotated away in order to obtain the SM fields f_SM = cos θ_f f + sin θ_f F. The dominant contributions to chirality-breaking operators come from one-loop diagrams involving a fermion resonance and either the Higgs boson or the longitudinal components of the W and Z. In fact, diagrams with a vector and a fermion resonance running in the loop have the same flavour and CP structure as the SM Yukawa terms; they will thus be diagonal in flavour space, as well as real, in the mass basis for the SM fields. On the contrary, the presence of two additional vertices with the composite Higgs introduces two extra composite Yukawas, which are anarchic in flavour space, giving rise to operators that are generically not aligned with the mass basis. Notice that a semiperturbative composite Yukawa coupling is preferred both by the Higgs mass value and naturalness arguments [58-62], as well as by precision constraints [57], so that a loop expansion in this coupling is not inconsistent. The contributions to the Wilson coefficients of the up and charm CEDMs (3.1) and of ∆A_CP (3.8) are suppressed by factors involving the Cabibbo angle λ_C and the combinations Y*_u,c and Y*_8,8', linear combinations of elements of the anarchic composite Yukawa matrices of (3.27), which are in general complex. Notice that those linear combinations also depend on which generations of composite resonances run in the loop. To simplify the discussion, it is convenient to decouple the first two generations of composite fermions. Naturalness considerations and the measured value of the Higgs mass require only the third-generation resonances to lie close to the Fermi scale, while the others could well be heavier [58]. This assumption implies the relation Y*_8 Y*_8' = Y*_c Y*_u, and also that the order-one coefficients of (3.1) and (3.8) are all equal. They can be obtained from Refs. [64,65], where the one-loop contribution from a fermion resonance running in the loop together with the Higgs and the Goldstone bosons is computed. Neglecting terms further suppressed by O(m_W^2/m_T^2), we find the expressions used below, where we have assumed the partners of the left- and right-handed top and bottom quarks to have the same mass m_T.
The bounds on the up and charm CEDMs then imply corresponding limits on the composite Yukawa combinations, to be compared with the ones coming from ∆A_CP. Currently one could still saturate the experimental upper limit on ∆A_CP in such scenarios [66,67] without running into any conflict with the bounds from the neutron EDM (for example by taking Y*_8 sufficiently larger than Y*_u). With the foreseen improvement in experimental sensitivity this possibility will be strongly challenged, for semiperturbative values of the composite Yukawas. Notice also that the combination Y*_8 Y*_8' = Y*_c Y*_u is more constrained by the bounds from the CEDMs than by those coming from ∆A_CP. An analysis of the total contribution to d_n, similar to the one performed in Section 3.3 for Supersymmetry, cannot be carried out in an analogously simple way for CHMs. This is due mainly to the presence of potentially unsuppressed contributions to d_n from d_d and d̃_d.

Summary and conclusions

Measurements of CP-violating observables are among the strongest indirect probes of high energy scales. It is therefore important to study their implications for our knowledge of physics beyond the SM. In this paper we pursued a step in this direction. We derived bounds on the charm electric and chromo-electric dipole moments, d_c and d̃_c. For d_c, we considered its potentially dangerous contributions to the neutron EDM, d_n, and to the branching ratio BR(B → X_s γ). In the first case we made use of the contribution of d_c to d_d from electroweak running, and derived the bound in Eq. (2.8). In the second case we considered the relevant loop process proportional to d_c, yielding the bound in Eq. (2.9). However, the stronger bound was by far the one on the charm CEDM d̃_c. We obtained it via its threshold effect in the three-gluon Weinberg operator, which in turn contributes to hadronic dipole moments, like those of the neutron and the deuteron, yielding d̃_c < 1.0 × 10^-22 e cm at 90% C.L. at the charm mass scale. This is one of the two main results of this paper. We also pointed out the relevance of this bound for models allowing for non-negligible flavour violation in the right-handed up-quark sector. These models are still largely unconstrained, because the flavour and CP-violating bounds are weak compared with those for the down-quark sector. Explicit examples are models of flavour alignment and Generic U(2)^3. Before this work, the CP asymmetry in flavour-violating D decays, ∆A_CP, set the strongest constraints on the relevant flavour-violating parameters in these models. We found that the current bound on d_n is already slightly more constraining than ∆A_CP. More importantly, the lack of a theoretical understanding of the SM contribution to ∆A_CP, combined with the expected improvement in experimental sensitivity to d_n, will make the neutron EDM the most sensitive probe of these flavour-violating parameters, strengthening the current bounds by more than two orders of magnitude. We also specialized our analysis to various new physics models, such as split-families Supersymmetry and composite Higgs models with partial compositeness. In particular, in the first case, under some motivated assumptions, it was possible to find concise expressions for the total supersymmetric contribution to both d_n and ∆A_CP. We think that these results constitute a further motivation to increase the experimental sensitivities to d_n and ∆A_CP, and to continue the effort to achieve better theoretical control of the latter.
7,194.6
2013-12-09T00:00:00.000
[ "Physics" ]
12/15-Lipoxygenase Is Required for the Early Onset of High Fat Diet-Induced Adipose Tissue Inflammation and Insulin Resistance in Mice Background Recent understanding that insulin resistance is an inflammatory condition necessitates searching for genes that regulate inflammation in insulin sensitive tissues. 12/15-lipoxygenase (12/15LO) regulates the expression of proinflammatory cytokines and chemokines and is implicated in the early development of diet-induced atherosclerosis. Thus, we tested the hypothesis that 12/15LO is involved in the onset of high fat diet (HFD)-induced insulin resistance. Methodology/Principal Findings Cells over-expressing 12/15LO secreted two potent chemokines, MCP-1 and osteopontin, implicated in the development of insulin resistance. We assessed adipose tissue inflammation and whole body insulin resistance in wild type (WT) and 12/15LO knockout (KO) mice after 2–4 weeks on HFD. In adipose tissue from WT mice, HFD resulted in recruitment of CD11b+, F4/80+ macrophages and elevated protein levels of the inflammatory markers IL-1β, IL-6, IL-10, IL-12, IFNγ, Cxcl1 and TNFα. Remarkably, adipose tissue from HFD-fed 12/15LO KO mice was not infiltrated by macrophages and did not display any increase in the inflammatory markers compared to adipose tissue from normal chow-fed mice. WT mice developed severe whole body (hepatic and skeletal muscle) insulin resistance after HFD, as measured by hyperinsulinemic euglycemic clamp. In contrast, 12/15LO KO mice exhibited no HFD-induced change in insulin-stimulated glucose disposal rate or hepatic glucose output during clamp studies. Insulin-stimulated Akt phosphorylation in muscle tissue from HFD-fed mice was significantly greater in 12/15LO KO mice than in WT mice. Conclusions These results demonstrate that 12/15LO mediates early stages of adipose tissue inflammation and whole body insulin resistance induced by high fat feeding. Introduction Insulin resistance is a pathophysiological condition associated with obesity, aging, and type 2 diabetes that affects skeletal muscle, liver, adipose tissue, and immune cells. Obesity and insulin resistance are associated with macrophage infiltration and inflammation in the adipose tissue of humans and rodent models where a feed-forward cycle of reciprocal adipocyte and macrophage activation results in the secretion of inflammatory proteins and further macrophage recruitment [1,2]. Pro-inflammatory factors secreted by macrophages and adipocytes are elevated in adipose tissue from obese and type 2 diabetic patients [3]. Adipose tissue inflammation induces insulin resistance through inactivation of insulin receptor substrates (IRS) by cytokine-activated JNK, IKKb and SOCS [1]. High fat diet (HFD) feeding, a commonly studied model of insulin resistance in rodents, rapidly causes progressive metabolic maladies [4,5]. Insulin resistance in heart, adipose tissue, liver, and muscle, adipose tissue hypertrophy and inflammatory cell infiltration, and hyperinsulinemia are significantly robust phenotypes observed as early as 1-3 weeks of HFD, with minimal to no total body weight gain [4,[6][7][8]. After 16-20 weeks of HFD, these phenotypes are much more pronounced and additional severe metabolic dysregulations are present including dyslipidemia & ectopic triglyceride storage, hypo-adiponectinemia, adipose tissue hypoxia, cell death and remodeling, beta-cell decompensation, mild hyperglycemia, and deterioration of cardiac function [4][5][6]. 
The key molecules involved in initiating HFD-induced adipose tissue inflammation and macrophage infiltration are not well characterized. Recent studies suggest an important role for 12/15-lipoxygenase (12/15LO) in monocyte recruitment to, and regulation of inflammation in, vascular and adipose tissue. 12/15LO has been implicated in the development of autoimmune diabetes and in vascular complications of diabetes. Arachidonic acid stimulates insulin secretion by β-cells, and this process is inhibited by 12/15LO activity [27]. Moreover, 12/15LO mediates cytokine-induced β-cell damage [28]. Other data suggest that non-obese diabetic (NOD) mice congenic for a targeted deletion of 12/15LO are in fact protected from autoimmune diabetes [29]. A recent study has shown that long-term (8-24 weeks) high fat feeding induces 12/15LO activation and β-cell damage in pancreatic islets, both of which were prevented in 12/15LO KO mice [30]. 12/15LO is also involved in vascular complications of advanced diabetes, such as atherosclerosis and nephropathy, which are life-threatening conditions. Specifically, under diabetic conditions vascular smooth muscle cells (VSMC) express 12/15LO, which in turn mediates a VSMC switch from a contractile phenotype to a migratory and inflammatory phenotype [9,22,31,32]. This change, together with 12/15LO-mediated lipoprotein oxidation and monocyte adhesion to endothelial cells, explains the involvement of 12/15LO in the pathogenesis of high fat diet (HFD)-induced atherosclerosis [33-36]. Although these previous studies show that 12/15LO plays a role in regulating β-cell survival, advanced diabetic complications and, in a very recent publication, chronic (8-24 weeks) HFD-induced insulin resistance and inflammation [30], the possible involvement of 12/15LO in the early development of insulin resistance has not been studied. Given that 12/15LO is an inflammatory modulator, we asked whether it is required for the onset of HFD-induced insulin resistance. We found that 12/15LO deficiency protected mice from exhibiting elevated inflammatory markers in adipose tissue and, remarkably, prevented whole-body insulin resistance induced by 2-4 weeks of high fat feeding.

Animals

Male C57BL6 wild type mice were from Jackson Laboratories. Male 12/15LO knockout mice, backcrossed onto a C57BL6 background for 10 generations, were a generous gift from Dr. Colin Funk (Queen's University). Starting at 16 weeks of age, mice were fed either normal chow (12% kcal from fat; Purina 5001 Lab Diet) or high fat diet (41% kcal from fat; TD96132, Harlan Teklad) for 2 or 4 weeks. All mice involved in clamp studies (wild type and 12/15LO KO, n = 20 each) were singly housed during the two weeks of dietary intervention. This was done in order to ensure equal food access, protect the implanted catheters, and prevent fighting between mice. Mice used for other experiments (acute insulin and adipose tissue FACS studies, n = 20 per strain) were housed 1-3 per cage. There were infrequent circumstances (<5 mice total in these studies) in which a non-clamp study mouse was singly housed (for example, an aggressor had to be separated from cage mates to prevent stress and injury to the other mice and to ensure access to food). Mice were housed under controlled light (12:12 light:dark) and climate conditions with unlimited access to food and water.
All procedures were performed in accordance with the Guide for Care and Use of Laboratory Animals of the National Institutes of Health and were approved by the University of California, San Diego, Animal Subjects Committee.

Analytical methods

Total RNA was isolated from cell lysates using an RNeasy kit from Qiagen (Valencia, CA). Quantitative RT-PCR was performed to measure MCP-1, OPN and GAPDH mRNA levels using a Rotor-Gene RG3000 qPCR machine (Corbett Research, Brisbane, Australia). Primers and probes were from Applied Biosystems (Foster City, CA). The protein levels of MCP-1 in conditioned media were measured by ELISA using a kit from R&D Systems (Minneapolis, MN). The protein levels of OPN in conditioned media were assayed by immunoblot using a primary antibody from Santa Cruz Biotechnology (Santa Cruz, CA).

In vivo metabolic studies

Insulin sensitivity was assessed in mice fed HFD for 2 weeks using a sub-maximal hyperinsulinemic euglycemic glucose clamp technique as previously described [34], with the following modifications: 1) isoflurane was used for anesthesia during the catheter insertion surgery three days prior to the clamp, 2) glucose tracer was infused at 2 µCi/hr during the clamp, and 3) insulin was infused at 3 mU/kg/min during the clamp. The mice were conscious during the clamp and fully recovered after the procedure. Four days later, the mice were fasted for 5 hr, anesthetized (isoflurane) to collect blood by cardiac puncture, and then euthanized (pentobarbital) to collect gastrocnemius muscle, liver, and epididymal adipose tissues. Excised tissues were flash-frozen in liquid nitrogen. Plasma glucose specific activity, glucose disposal rate (GDR), and hepatic glucose output (HGO) were calculated as previously described [39]. In a separate group of mice, acute insulin stimulation was achieved by intraperitoneal injection of 6 hr-fasted mice with 0.85 U/kg insulin. After 15 min, the mice were sacrificed and muscle was harvested as described above.

Fluorescence-activated cell sorting (FACS) of adipose tissue SVCs

Adipose tissue stromal vascular cells (SVCs) were isolated and analyzed by FACS as previously reported [6] with minor modifications. Briefly, freshly harvested epididymal fat pads were separately rinsed and minced in DPBS + 1% BSA and then treated with 1 mg/mL type II collagenase (Sigma, St. Louis, MO) for 25 min in a 37°C shaking water bath. Adipose tissue cell suspensions were filtered through a 100 µm mesh. SVCs were separated from floating adipocytes by centrifugation, incubated in RBC lysis buffer (eBioscience, San Diego, CA) for 5 min, and resuspended in fresh DPBS + 1% BSA. SVCs were incubated with Fc Block (BD Biosciences, San Jose, CA) for 15 min and then stained for 30 min with fluorophore-conjugated antibodies against F4/80 (Ab Serotec, Raleigh, NC) and CD11b (BD Biosciences). Cells were washed twice and re-suspended in DPBS + 1% BSA with propidium iodide (Sigma). The presence of the fluorescent stains in the SVCs was analyzed using a FACSCalibur flow cytometer (BD Biosciences). Control SVC preparations included unstained cells, PI-only stained cells, and fluorescence-minus-one (FMO) stained cells, and were used to set gating and compensation.

Plasma and tissue analyses

Plasma insulin levels were measured using the Insulin Ultrasensitive (Mouse) EIA method (Alpco, Salem, NH). Muscle lysates were analyzed by western blotting with antibodies against total Akt and phospho-serine Akt (Cell Signaling, Danvers, MA).
Signal intensities on chemiluminescence-exposed autoradiographs were densitometrically quantified using a digital Kodak 3D Image Station and associated digital image analysis software (Kodak, New Haven, CT). The protein levels of IL-1β, IL-12p70, IFNγ, IL-6, IL-10, Cxcl1 and TNFα in adipose tissue lysates were measured using a multiplex (7-plex) ELISA (Meso Scale Discovery, Gaithersburg, MD).

Statistical analyses

Student's t-test and ANOVA (and Tukey's post hoc test) were applied for statistical analyses. A p-value cutoff of 0.05 was used to determine statistical significance.

12/15LO expression increases chemokine production

Expression of 12/15LO in various smooth muscle cells and macrophages induces the expression of proinflammatory genes [21,22,24,26]. We used fibroblast cell lines that stably express human 15LO or LacZ (as a negative control) [37] to test whether our 15LO-expressing cells (15LO cells) produce excess proinflammatory proteins compared to LacZ-expressing control cells (LacZ cells). We focused on chemokines that could attract monocytes to adipose tissue and have been shown to be involved in the pathogenesis of insulin resistance. Monocyte chemoattractant protein-1 (MCP-1) contributes to macrophage infiltration in adipose tissue and insulin resistance [40,41]. Osteopontin (OPN) is a proinflammatory cytokine and monocyte chemotactic factor that also mediates obesity-induced insulin resistance [42]. We found that the 15LO cells expressed significantly higher levels of MCP-1 and OPN (mRNA and protein) than the LacZ cells (Figure 1). Expression of other inflammatory mediators (TNFα, MIP-2, MIP-1α and IκBζ) was not different between the 15LO and LacZ cells (not shown).

HFD-induced inflammation in adipose tissue is absent in 12/15LO KO mice

Because 12/15LO regulates the expression of monocytic chemokines (Figure 1 and refs. [21,22,24]), we compared the effects of short-term high fat feeding on adipose tissue inflammation in C57BL6 wild type (WT) and strain-, gender-, and age-matched 12/15LO knockout (KO) mice. We assessed adipose tissue macrophage infiltration in isolated epididymal white adipose tissue-derived stromal vascular cells (SVCs) using fluorescence-activated cell sorting (FACS), which is a more sensitive and comprehensive method compared to immunohistochemistry. Because macrophages expressing F4/80 and/or CD11b are increased in adipose tissue after HFD [5,40,43], we measured the percentage of live SVCs that express F4/80 or CD11b and the percentage of live SVCs that express both F4/80 and CD11b. The percentage of adipose tissue-derived SVCs expressing F4/80 and/or CD11b was significantly increased in WT mice fed HFD for two weeks (Figure 2) compared to WT mice fed normal chow (NC). This trend of increased macrophage presence also existed after four weeks of HFD. In contrast, adipose tissue-derived SVCs isolated from 12/15LO KO mice fed HFD for four weeks exhibited no change in the percentage of cells expressing F4/80 and CD11b compared with NC-fed 12/15LO KO mice. We next measured cytokine protein levels in epididymal white adipose tissue (eWAT) lysates from WT and 12/15LO KO mice fed NC or HFD for two weeks. We observed significantly elevated levels of IL-1β, IL-12p70, IFNγ, IL-6, and IL-10 in eWAT lysates from HFD-fed WT mice compared to NC-fed WT mice (Figure 3). Cxcl1 (KC) and TNFα levels also tended to increase after HFD in WT mice, but this increase did not reach statistical significance.
None of these cytokines were elevated in the plasma of WT mice fed HFD for 2 weeks compared to NC controls. In contrast to WT mice, 12/15LO KO mice were completely protected from HFD-induced increases in IL-1β, IL-12p70, IFNγ, IL-6, IL-10, Cxcl1 and TNFα levels in eWAT. The absence of HFD-induced cytokine elevation in the adipose tissue of 12/15LO KO mice (Figure 3) corresponds with the absence of HFD-induced macrophage infiltration in the adipose tissue of these mice (Figure 2) and supports the notion that, in contrast to WT mice, 12/15LO KO mice fed HFD for two to four weeks do not exhibit macrophage-mediated adipose tissue inflammation.

12/15LO KO mice are protected from HFD-induced insulin resistance

We investigated whether the absence of HFD-induced adipose tissue inflammation in 12/15LO KO mice correlated with protection from whole body insulin resistance. We conducted euglycemic hyperinsulinemic clamp studies on WT and 12/15LO KO mice fed NC or HFD for two weeks. High fat feeding induced significant differences in clamp data from HFD-fed compared to NC-fed WT mice, specifically a 76% lower glucose infusion rate (Ginf), 60% lower glucose disposal rate (GDR), and 76% higher hepatic glucose output rate (HGO) (Figure 4A-C). 12/15LO KO mice were completely protected from the severe HFD-induced changes in Ginf, GDR, and HGO that we observed in WT mice. There was no significant difference in basal glucose turnover rate (basal HGO = basal GDR) between the NC-fed WT (15.9 ± 2.2 mg/kg/min) and 12/15LO KO mice (16.0 ± 1.9 mg/kg/min) or between the WT NC-fed and WT HFD-fed mice (15.7 ± 1.9 mg/kg/min). The clamp study data indicate that 12/15LO deficiency provides protection from HFD-induced hepatic and skeletal muscle insulin resistance. HFD-fed 12/15LO KO mice did exhibit a similar fold increase in eWAT mass, but not hyperinsulinemia, compared to HFD-fed WT mice (Table 1). To demonstrate that there was no direct dependency between adiposity and insulin resistance in WT and 12/15LO KO mice, we normalized the GDR values to the fat pad mass as a % of body weight. Figure 4D shows that, even after this normalization, there remains a dramatically different insulin sensitivity between WT and 12/15LO KO mice fed chow or high-fat diet.

Diminished insulin-activated Akt phosphorylation in muscle from HFD-fed WT compared to 12/15LO KO mice

In order to assess the effects of 12/15LO KO on insulin signal transduction in skeletal muscle from mice fed HFD, we examined acute insulin-activated Akt phosphorylation in a separate group of mice. Serine phosphorylation of Akt in muscle lysates (phospho-Ser473 Akt normalized to total Akt protein) was determined by western blot using primary antibodies for total Akt protein and phospho-Ser473 Akt. As expected, acute insulin treatment induced phosphorylation of Akt in both groups (Figure 5). However, the absolute level of insulin-stimulated Akt phosphorylation and the insulin-stimulated fold change in Akt phosphorylation were both significantly greater, two- and three-fold respectively, in 12/15LO KO mice compared to WT mice. Because activation (phosphorylation) of Akt is a critical early event in insulin receptor signaling, these data corroborate our in vivo results (Figure 4B) demonstrating greater insulin sensitivity in the muscle of HFD-fed 12/15LO KO mice compared to WT mice.
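Two pieces of quantitative machinery used in this section are only referenced, not spelled out: the clamp-derived rates (deferred to [39] in the Methods) and the group statistics. As a rough illustration of the steady-state isotope-dilution bookkeeping that such clamp calculations commonly follow (our sketch, not the authors' code; all numbers are placeholders):

```python
# Sketch of steady-state tracer-dilution clamp arithmetic (illustrative only; the paper
# defers the actual calculation to ref. [39], and every number below is a placeholder).

def glucose_disposal_rate(tracer_infusion_dpm_min, specific_activity_dpm_mg, body_weight_kg):
    """GDR (mg/kg/min): tracer infusion rate divided by steady-state plasma glucose
    specific activity, normalized to body weight."""
    return tracer_infusion_dpm_min / specific_activity_dpm_mg / body_weight_kg

def hepatic_glucose_output(gdr, glucose_infusion_rate):
    """HGO (mg/kg/min): endogenous production = total disposal minus exogenous infusion."""
    return gdr - glucose_infusion_rate

gdr = glucose_disposal_rate(tracer_infusion_dpm_min=1.0e5,
                            specific_activity_dpm_mg=1.4e5,
                            body_weight_kg=0.028)
hgo = hepatic_glucose_output(gdr, glucose_infusion_rate=10.0)
print(f"GDR = {gdr:.1f} mg/kg/min, HGO = {hgo:.1f} mg/kg/min")
```

And a minimal sketch of the comparisons described under Statistical analyses (Student's t-test, ANOVA with Tukey's post hoc test, p < 0.05), using SciPy and statsmodels with made-up data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up example data standing in for a cytokine measurement in four groups.
rng = np.random.default_rng(0)
groups = {
    "WT_NC":  rng.normal(1.0, 0.2, 8),
    "WT_HFD": rng.normal(1.8, 0.3, 8),
    "KO_NC":  rng.normal(1.0, 0.2, 8),
    "KO_HFD": rng.normal(1.1, 0.2, 8),
}

# Two-group comparison: Student's t-test (p < 0.05 taken as significant).
t_stat, p_val = stats.ttest_ind(groups["WT_NC"], groups["WT_HFD"])
print(f"WT NC vs WT HFD: t = {t_stat:.2f}, p = {p_val:.3g}")

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```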
Discussion HFD-induced and obesity-related insulin resistance is associated with chronic low-grade inflammation in adipose tissue characterized by macrophage infiltration and elevation of inflammatory cytokine expression. HFD-induced effects on adipose tissue inflammation are initiated early in the course of high fat feeding and are coincident with insulin resistance [5,6]. Although the known biological roles of 12/15LO include regulating inflammation, a role for 12/15LO in the early development of HFD-induced adipose tissue inflammation and/or whole body insulin resistance has not been described. 12/15LO is expressed in all vascular cell types, including endothelial cells, macrophages and VSMCs, and its expression is elevated under inflammatory conditions. Therefore, we hypothesized that 12/15LO might regulate HFD-induced adipose tissue inflammation. Indeed, we found that high fat feeding induced macrophage infiltration into adipose tissue of WT but not 12/15LO KO mice, presumably via 12/15LO-regulated chemokine secretion. The difference in adipose tissue macrophage infiltration between HFD-fed WT and 12/15LO KO mice corresponded with their difference in adipose tissue inflammatory cytokine elevation. In addition to being protected from adipose tissue inflammation, HFD-fed 12/15LO KO mice were also dramatically protected from hepatic and skeletal muscle insulin resistance compared to HFD-fed WT mice. Adipose tissue macrophages are derived from circulating monocytes that attach to and migrate through endothelial cells (ECs) in the tissue microvasculature. 12/15LO expression and activity are increased in atherogenic and hyperglycemic diabetes models [12,36,44,45], conditions similar to the postprandial state of non-diabetic, diet-induced insulin resistance, in which hyperlipidemia and hyperglycemia are more pronounced and sustained. In these model conditions, activated 12/15LO regulates monocyte attachment to ECs in part by increasing expression of the adhesion molecule ICAM-1 on the surface of ECs via activation of RhoA and NF-kB [12,36]. 12/15LO increases the expression of the chemokines MCP-1 and OPN (our data presented here and [21,24,26]). Thus, 12/15LO-mediated regulation of chemoattractants and an adhesion receptor may account for the increased monocyte infiltration into adipose tissue that we observed in HFD-fed WT but not 12/15LO KO mice. Both WT and 12/15LO KO mice exhibited an approximate two-fold increase in eWAT mass during the two-week high fat feeding period. Although HFD-induced eWAT expansion in 12/15LO KO mice was somewhat blunted compared to WT mice, the fold change in mass compared to NC-fed controls was similar in the two genotypes. It has recently been demonstrated that 12/15LO is up-regulated in adipose tissue from high fat fed mice and that 12/15LO products induce inflammation and insulin resistance in 3T3-L1 adipocytes [46]. HFD-induced elevation of eWAT cytokine levels could originate from macrophages, adipocytes, endothelial cells and/or preadipocytes within adipose tissue. We speculate that proinflammatory macrophages and adipocytes are the most likely sources of elevated eWAT cytokines in high fat diet-fed WT mice. We used a two- and four-week high fat feeding protocol because it significantly induces the phenotypes of insulin resistance and adipose tissue hypertrophy and inflammation in WT mice without causing the severe metabolic dysfunctions that manifest after longer high fat feeding [4][5][6][7][8].
Mouse phenotypes induced by our high fat feeding protocol were similar to those observed in other time course studies of HFD-induced adipose tissue inflammation and insulin resistance [4][5][6]. Two-week high fat feeding induced hyperinsulinemia and severe hepatic and skeletal muscle insulin resistance in WT but not 12/15LO KO mice. 12/15LO KO mice exhibited significantly greater insulin-stimulated skeletal muscle Akt phosphorylation after HFD, compared to WT mice, corresponding to their greater insulin sensitivity. Given the cross-talk between adipose tissue, liver, and skeletal muscle that affects insulin sensitivity [1,47], adipose tissue may be the primary site of action, with protection from adipose tissue inflammation in the 12/15LO KO mice leading to protection from whole body insulin resistance. In summary, we find that 12/15LO is a key modulator of the onset of high fat diet-induced insulin resistance in liver, muscle and adipose tissue. We provide evidence that the mechanism by which the 12/15LO KO mice are protected from the initial stages of HFD-induced insulin resistance involves suppression of adipose tissue pro-inflammatory macrophage infiltration and inflammatory cytokine elevation. 12/15LO is a key participant in the development of diet-induced insulin resistance and, thus, a viable therapeutic target for the treatment of human insulin resistance and type 2 diabetes.
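The statistical treatment described above (Student's t-test for two-group comparisons, ANOVA with Tukey's post hoc test, significance at p < 0.05) can be reproduced with standard scientific Python tools. The sketch below is illustrative only: the group labels and simulated values are not the study's data, and the software actually used by the authors is not stated.

```python
# Hedged sketch: two-group t-test and one-way ANOVA with Tukey's post hoc test,
# mirroring the analysis strategy described above. All numbers are illustrative.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical eWAT cytokine measurements (arbitrary units) for four groups.
groups = {
    "WT_NC":  rng.normal(1.0, 0.2, 8),
    "WT_HFD": rng.normal(2.1, 0.4, 8),
    "KO_NC":  rng.normal(1.0, 0.2, 8),
    "KO_HFD": rng.normal(1.1, 0.3, 8),
}

# Student's t-test for a single two-group comparison (WT NC vs WT HFD).
t, p = stats.ttest_ind(groups["WT_NC"], groups["WT_HFD"])
print(f"t = {t:.2f}, p = {p:.4f}, significant = {p < 0.05}")

# One-way ANOVA across all four groups, then Tukey's HSD for pairwise contrasts.
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```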
4,513.6
2009-09-29T00:00:00.000
[ "Biology", "Medicine" ]
Screening of liquid media and fermentation of an endophytic Beauveria bassiana strain in a bioreactor A novel approach for biological control of insect pests could be the use of the endophytic entomopathogenic Beauveria bassiana isolate ATP-02. For the utilization of the endophyte as a commercial biocontrol agent, the fungus has to be mass-produced. B. bassiana was raised in shake flask cultures to produce high concentrations of total spores (TS), which include blastospores (BS) and submerged conidiospores (SCS). The highest concentration of 1.33×109 TS/mL and the highest yield of 5.32×1010 TS/g sucrose was obtained in the TKI broth with 5% sugar beet molasses which consists of 50% sucrose as a carbon source. In spite of the lower sugar concentration (2.5%) the amount of TS could be increased up to 11-times in contrast to the cultivation with 5% sucrose. The scale-up to a 2 L stirred tank reactor was carried out at 25°C, 200–600 rpm and 1 vvm at pH 5.5. A TS yield of 5.2×1010 TS/g sucrose corresponding to a SCS yield of 0.2×1010 SCS/g sucrose was obtained after 216 h. With regards to the culture medium the cost of 1012 TS amounts to 0.24 €. Plutella xylostella larvae, which were fed with oilseed rape leaves treated with spores from fermentation resulted in 77 ± 5% mortality. Moreover, spores from submerged cultivation were able to colonize oilseed rape leaves via leaf application. This is the first report of fermentation of an endophytic B. bassiana strain in a low-cost culture medium to very high yields of TS. Introduction In the past decades, many microorganisms have been isolated and investigated for use as a biocontrol agent. Now, many promising strains are available for release into the environment and especially with the renewed interest in biocontrol await further exploitation for large-scale application in agriculture (Glare et al. 2012). The first step for commercialization of a biocontrol agent like Beauveria bassiana is the mass-production by fermentation (Burges 1998;Ravensberg 2011). B. bassiana strains that were applied to the insect and act on the outer surface of the plant show efficacy against a wide range of insect pests and have the potential of becoming a costeffective biocontrol agent (Khachatourians 1986). However, the approved products of B. bassiana contain aerial conidia (AC), which are produced by either a solid-state or a diphasic fermentation. These processes are in classical biotechnology considered to be labour-intensive and unsuitable for conventional production of fungal biomass (Feng et al. 1994;Patel et al. 2011;Ravensberg 2011;Rombach et al. 1988). In contrast to these propagules, blastospores (BS) and submerged conidiospores (SCS) would be produced in submerged cultivations in a shorter time with higher yields and state-of-the-art process control. Furthermore, it was shown that BS and SCS of a B. bassiana isolate are as virulent to grasshoppers as the AC (Hegedus et al. 1992). Until today, no products with BS or SCS of B. bassiana are available. A few reports on growth requirements and shake flask culture of B. bassiana strains show the best growth and germination in complex media (Bidochka et al. 1987;Chong-Rodríguez et al. 2011;Hegedus et al. 1990;Humphreys et al. 1990;Pham et al. 2009;Rombach 1989;Safavi et al. 2007;Samsináková 1966;Thomas et al. 1987;Vega et al. 2003). However, only a few publications deal with the production of mycelium (Núñez-Ramírez et al. 2012) and the production of BS in complex (Humphreys et al. 1989(Humphreys et al. 
, 1990 and mineral media (Lane et al. 1991) by submerged fermentation, respectively. Endophytic B. bassiana strains can exist asymptomatically in a variety of plants like banana (Akello et al. 2008), opium poppies (Quesada-Moraga et al. 2006), maize (Bing and Lewis 1992) and sorghum (Tefera and Vidal 2009). The recently isolated endophytic B. bassiana strain ATP-02 showed great potential for a novel plant control measure in a variety of crops (Tefera and Vidal 2009). However, it remained unknown if this strain can be mass-produced to high yields and if the spores from a submerged fermentation are able to colonize plants. That is why the objective of the present work was to produce spores of endophytic B. bassiana ATP-02 in a costeffective culture medium on lab-scale and to scale-up the process to a 2 L stirred-tank reactor. Finally the virulence of the produced spores was checked in a bioassay with Plutella xylostella and their potential to colonize oilseed rape leaves via a leaf application was investigated. Materials and methods All materials used were purchased from Merck KGaA (Darmstadt, Germany), Carl Roth GmbH (Karlsruhe, Germany) or AppliChem GmbH (Darmstadt, Germany), if not mentioned otherwise. Sugar beet molasses with a dry matter content of 80% consisting of 50% sucrose was purchased from Suedzucker AG (Mannheim, Germany). All concentrations are given as (w/w). Strain B. bassiana isolate ATP-02, DSM 24665, was provided by Prof. Stefan Vidal, Georg-August-University, Department of Crop Sciences/Agricultural Entomology, Goettingen, Germany. The strain was raised at 25°C on SDA agar containing 1% casein peptone, 2% glucose and 1.5% agar-agar at pH 5.5. Temperature optimum was found at 25°C and pH optimum at 5.5 (data not shown). Cultivation in shake flask culture Different liquid media were used to cultivate B. bassiana: TKI medium with 5% carbon source (Thomas et al. 1987), Czapek-Dox medium (Kučera 1971), YPG medium and PG medium (Bidochka et al. 1987), Vogel's medium (Vogel 1956), SD medium (Odds 1991), CGM medium containing 1% glucose, 1% corn steep liquor, 0.5% NaCl, 0.1% NaNO 3 , 1% CaCO 3 (Samsináková 1966), PWG medium containing 1% glucose, 8.75% whey powder, 0.25% peptone (Kassa et al. 2008), YG medium containing 1% glucose, 1% yeast extract (Leckie et al. 2008 (Rombach 1988(Rombach , 1989 and YS medium containing 2.5% sucrose, 2.5% yeast extract (Rombach 1989). In each case 50 mL medium was placed in 250 mL DURAN® baffled flasks. The pH values of the media were adjusted to 5.5 with 0.5 M NaOH. As a starter inoculum AC from SDA agar (see above) were used. The AC were isolated by flooding the plates with 2 × 5 mL of sterile 0.1% Tween 80 and gently raking the plates with a sterile bristle brush. The shake flask cultures were inoculated with the spore suspension to give an initial spore density of 5.0×10 4 AC/mL. The flasks were incubated at 25°C on a rotary shaker at a speed of 150 rpm for 8-10 days. Every day, 1 mL samples were taken to check developmental stage and the concentration of the spores with a Thoma counting cell chamber under 400 × magnification (photomicroscope, Carl Zeiss AG, Oberkochen, Germany). Fermentation Batch fermentation was carried out in a 2 L BIOSTAT® Bplus stirred tank reactor (Sartorius Stedim System GmbH, Guxhagen, Germany) with a working volume of 1.5 L. The basal salts were dissolved in 1200 mL ddH 2 O and were autoclaved in the bioreactor. 
Also, a few drops of the anti-foam agent Pluronic® PE 8100 (BASF SE, Ludwigshafen, Germany) were added before fermentation started. Likewise, 300 mL of a carbon source stock solution (75 g carbon source) were autoclaved separately and were inoculated with 7.5×10 8 aerial conidia (5.0×10 4 spores/mL). To start the fermentation, the inoculum suspension was added to the bioreactor. Temperature was maintained at 25°C and fermentation time was between 8-10 days. Analysis The metabolic respiratory quotient (RQ) is an on-line parameter for the formation of biomass was calculated from the ratio of the generated carbon dioxide and the consumed oxygen, which were measured with an O 2 and CO 2 sensor (BlueSens GmbH, Herten, Germany) in the exhaust air. For the determination of fungal dry biomass 15 mL samples were centrifuged for 10 min at 20,000 g, washed two times with ddH 2 O and centrifuged again. The pellets were suspended in 5-7 mL of ddH 2 O. The cell suspensions were dried at 115°C to constant weight using a moisture analyzer (Sartorius AG, Goettingen, Germany). Each time, determination of fungal dry biomass was carried out in two replicates. The colony forming units (CFU) of BS and SCS were determined by spreading 100 μL of diluted samples on SDA plates (Odds 1991) and incubating at 25°C for 4-6 days. To ensure that a sample will yield CFU in a range between 50 and 150 colonies requires several 10-fold dilutions of the sample with 0.9% NaCl. The CFU were determined on duplicate samples. The CFU/ml was calculated as follows: Insect virulence assay Bioassays were conducted with 30 second instar larvae of Plutella xylostella L. (Yponomeutidae: Lepidoptera), which were provided by Prof. Stefan Vidal, Georg-August-University, Department of Crop Sciences/ Agricultural Entomology, Goettingen, Germany. The culture broth of the fermentation was centrifuged for 5 min at 20,000 g, washed two times with ddH 2 O and centrifuged again. The washed spore mix, consisting of 95% BS and 5% SCS, as well as pure AC from a two-weeks-old SDA culture were suspended in 0.1% Triton-X114 to obtain a final concentration of 10 6 viable spores/mL. Aliquots of 1 mL of the suspensions were brushed on the adaxial side of secondary oilseed rape leaves with an area of 80 ± 10 cm 2 . The control leaves were treated with 0.1% Triton-X114 only. High 500 mL beakers were filled with 100 mL water agar (1.0% agar-agar). In each case three stalks of the treated leaves were drilled into the solid water agar. The upper surface of the water agar was covered with sterile filter paper to prevent the larvae from getting stuck. Afterwards, ten larvae were transferred into each of the beakers with and without spores, respectively. The beakers were closed with silk gauze and incubated at room temperature. After 14 days, the dead larvae were surface sterilized with 70% ethanol for 2 min, 5% sodium hypochlorite for 3 min and 70% ethanol for 2 min, rinsed twice in sterile distilled water, and then placed on sterile tissue paper in a laminar airflow cabinet. The larvae were placed on a modified B. bassiana selective medium, consisting of 1% casein peptone, 4% glucose, 0.1375% Syllit® (Spiess-Urania Chemicals GmbH, Hamburg, Germany), 0.0005% chlortetracycline, 0.0005% crystal violet and 1.5% agar-agar (Chase et al. 1986;Rangel et al. 2010), and were incubated at 25°C for 1 week. 
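The CFU calculation referenced above is not reproduced in the text, but for dilution plating of the kind described (100 μL spread per plate, serial 10-fold dilutions, duplicate plates, counts kept between 50 and 150 colonies) the standard relationship is CFU/mL = mean colony count × dilution factor / volume plated. A minimal sketch under that assumption:

```python
# Hedged sketch of a standard dilution-plating CFU calculation; the exact
# formula used by the authors is not reproduced in the text above.

def cfu_per_ml(colony_counts, dilution_factor, plated_volume_ml=0.1):
    """CFU/mL from replicate plate counts of one dilution step.

    colony_counts    -- colony counts from replicate plates (ideally 50-150 each)
    dilution_factor  -- total dilution of the plated sample, e.g. 1e6 for 10^-6
    plated_volume_ml -- volume spread per plate (0.1 mL = 100 uL, as above)
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / plated_volume_ml

# Example: duplicate plates with 82 and 96 colonies at a 10^-6 dilution.
print(f"{cfu_per_ml([82, 96], dilution_factor=1e6):.2e} CFU/mL")
```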
To evaluate the efficacy of the surface sterilization method the water used to rinse the tissues after surface sterilization was plated on selective medium and was incubated, too. All tests were run for 14 days and each test was repeated thrice. The mortality data were analyzed statistically using one-way ANOVA test. Formulation components consisting of 0.1% Triton X-114 as a wetter, 0.1% gelatine 280 Bloom (Gelita AG, Goeppingen, Germany) as a humectant, 1% sugar beet molasses as nutrient and 1% titanium dioxide as a UV protection agent were autoclaved for 20 min at 121°C. The spore suspension from a submerged fermentation was centrifuged for 5 min at 20,000 g, washed twice with ddH 2 O and centrifuged again. Afterwards the spores were suspended in 0.9% NaCl and added to the formulation components up to a final concentration of 10 6 spores/mL. The control formulation was free of fungal biomass. Then, the formulations were brushed onto an area of approximately 3 cm of the tips of 9 th secondary leaves oilseed rape plants. To increase the relative humidity up to 95%, the treated leaves were wrapped with plastic bags for the first 48 h. After 7 days the leaf tips were cut off and the untreated base of the leaves were harvested for the detection of endophytic colonization with B. bassiana by microscopy and PCR. For microscopy, cross-sections of the leaf mid rip were stained with 0.5% rose bengal dissolved in 5% aqueous ethanol for 15 sec and were washed with ddH 2 O (Saha et al. 1988). Growth of B. bassiana in the plant tissue was detected at 200-fold magnification with a light microscope. Afterwards, the leaves were surface sterilized as mentioned above and then placed on sterile tissue paper in a laminar airflow cabinet. In a preliminary test it was shown that this surface sterilization method kills all spores which were applied onto oilseed rape leaves. For DNA extraction which was described above, approximately 400 mg plant tissue from the surface sterilized leaves and stems was taken. The plant tissue was crushed in a MM400 ball mill using a sterile 5 mm steel ball (Retsch GmbH, Haan, Germany) for 5 min at 30 Hz. To isolate B. bassiana DNA from a SDA culture, 50 mg fungal biomass were directly used for DNA extraction. All tissue samples of both treated and untreated oilseed rape plants were assessed by PCR. Screening of media in shake flask culture The entomopathogenic and endophytic fungus B. bassiana isolate ATP-02 was cultivated in shake flasks. The different liquid media described above were used to study the effect of various nutrients, basal salts and other complex components on submerged spore formation. In Figure 1 the concentrations and yields of total spores (TS) with regard to the different culture media are illustrated. The most promising culture medium with regard to a maximum growth and optimum sporulation was the TKI medium with 5% sucrose as a carbon source: After a cultivation time of 168 h B. bassiana produced TS in a concentration of 1.10 ± 0.01×10 8 TS/mL consisting of 100% BS. In this TKI medium a yield of 2.20 ± 0.02×10 9 TS/g sucrose was obtained. Due to the lower substrate concentration of 2% sucrose and the also high concentration of 1.09 ± 0.08×10 8 TS/mL in the YSM medium a yield of 5.43 ± 0.38×10 9 TS/g sucrose was obtained. However, in contrast to the TKI medium, the YSM medium contains 0.5% yeast extract as a complex component. In all cases, the achieved concentration of SCS was lower than 11% of the TS. 
The highest, but still low, SCS concentration of 0.29 ± 0.04×10 6 SCS/mL was obtained in the YG medium. Influence of different carbon sources on spore formation of B. bassiana B. bassiana ATP-02 was cultured in TKI medium which was supplemented with 5% of different pure carbon sources according to Thomas et al. (1987). Furthermore, TKI medium was supplemented with 5% sugar beet molasses as a complex carbon source, which consisted of 50% sucrose according to manufacturer specification (Suedzucker AG, Mannheim, Germany). In Figure 2 the different carbon sources are illustrated. The highest concentration of 1.17 ± 0.05×10 9 TS/mL corresponding to the highest yield of 4.68 ± 0.20×10 10 TS/g sucrose was obtained in the TKI medium with 5% sugar beet molasses. Furthermore, in this medium B. bassiana also produced the highest concentration of 2.00 ± 0.50×10 7 SCS/mL corresponding to a SCS yield of 8.00 ± 2.00×10 8 SCS/g sucrose 168 h after inoculation. However, the biomass consisted of more than 98% BS. In spite of the lower sugar concentration of the molasses (2.5%) the concentration of TS could be increased up to 11-times in contrast to the cultivation with 5% sucrose. Due to the lower sugar concentration the yield of TS could even be increased up to 21-times. Interestingly, when TKI basal salts were omitted from the 5% molasses medium, TS concentration decreased by 96% compared to cultivation in the original TKI medium (data not shown). Fermentation of B. bassiana ATP-02 B. bassiana ATP-02 was raised in a 2 L stirred tank reactor to produce high concentrations of TS. Based on the cultivations in shake flasks, B. bassiana ATP-02 was cultivated in the mineral TKI medium with 5% sugar beet molasses and was inoculated with 5×10 4 AC/mL. The fermentation conditions and growth parameters are given in Table 1. During 48 h fermentation time, 12.6 g/L dry biomass was produced. Maximal specific growth rate μ max was 0.14 h −1 . Average doubling time was 14.7 h. The minimal doubling time of 4.9 h was reached between 48 and 72 h after inoculation. The results of the fermentation could be verified in 7 sequentially performed runs, which are shown in Table 2. Figure 3a and b show the details of fermentation no. 1. The fermentation process can be subdivided into two phases: a phase of mycelium formation and a following phase of spore formation. At the beginning of the fermentation the amount of dry biomass increased because the fungus produced mycelium. After 62 h a sudden decrease of the RQ to 0.6 followed by a sharp decrease of pH to 4.7, a short recuperation of RQ and a decrease of pO 2 to 4% was observed. A sample taken shortly thereafter at 72 h still yielded 21 g biomass/L and the broth was still visibly viscous. Then, biomass dry weight decreased, which was accompanied by a visible reduction of mycelium. Preliminary HPLC data indicated a formation of oxalate (data not shown). At this time the concentration of TS started to increase, up to 1.29 ± 0.04×10 9 TS/mL corresponding to a yield of 5.16 ± 0.16×10 10 TS/g sucrose at the end of the fermentation. However, the biomass consisted of more than 95% BS. 96 h after inoculation the viability of TS started to decrease. The maximum concentration of 0.84×10 9 viable spores/mL corresponding to a yield of 3.36×10 10 viable spores/g sucrose was obtained 168 h after inoculation. Figure 2 Influence of different carbon sources of the TKI medium on spore formation. B. bassiana was cultivated in 250 mL shake flasks (n = 2).
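The yields and growth parameters quoted above follow from two simple relationships: the spore yield per gram of sugar is the titre scaled by the sugar supplied per litre of medium, and the doubling time in exponential growth is t_d = ln 2 / μ. A minimal sketch using figures from the text (25 g sucrose/L in the 5% molasses medium, μmax = 0.14 h−1); this is an illustrative check, not the authors' calculation script.

```python
# Hedged sketch: spore yield per gram of sugar and exponential-growth doubling
# time, using values quoted in the text above.
import math

def yield_per_g_sugar(titre_ts_per_ml, sugar_g_per_l):
    """Total-spore yield per gram of sugar supplied with the medium."""
    return titre_ts_per_ml * 1000.0 / sugar_g_per_l

def doubling_time(mu_per_h):
    """Doubling time (h) for exponential growth at specific growth rate mu (1/h)."""
    return math.log(2) / mu_per_h

# TKI + 5% sugar beet molasses: 50% sucrose content -> 25 g sucrose/L.
print(f"{yield_per_g_sugar(1.17e9, 25):.2e} TS/g sucrose")        # ~4.68e10, as reported
print(f"t_d at mu_max = 0.14 1/h: {doubling_time(0.14):.1f} h")   # ~5 h, cf. minimal 4.9 h
```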
During the further fermentation process the concentration of viable spores decreased to 0.78×10 9 TS/mL corresponding to a yield of 3.12×10 10 TS/g sucrose at the end of the fermentation. Besides, the biomass dry weight decreased during the spore formation phase from 21 g/L to 12 g/L. Furthermore, 80 h after inoculation the RQ decreased continuously and the pO 2 reached a noncritical value of 20% and a clogging of the pO 2 electrode was noted. Insect virulence assay The spore mixture from a submerged fermentation, consisting of 95% BS and 5% SCS as well as pure AC harvested from a petri dish were applied in a virulence test against the diamondback moth, P. xylostella. After 14 days, in the control without fungal spores 93 ± 5% of the larvae developed into viable adult insects. However, 77 ± 5% of larvae fed with spore mix-treated leaves as well as 90 ± 8% of larvae fed with AC-treated leaves died within a week (Figure 4). The dead larvae were surface sterilized, placed on a B. bassiana selective medium and mycelium grew out of all larvae treated with fungal spores and the mycelium was identified as B. bassiana by PCR. The mycelium which grew out of dead larvae not treated with B. bassiana was clearly not B. bassiana, so that these larvae did not die by B. bassiana induced mycosis. It was observed that the spores from submerged fermentation (P < 0.01; F 1,4 = 220.5) as well as the pure AC (P < 0.01; F 1,4 = 156.3) significantly affected the mortality of larvae. Besides, the number of dead larvae was not significantly affected by the type of spores. Penetration assay The influence of a formulation with B. bassiana spores (10 6 TS/mL) on the penetration of oilseed rape leaves was investigated. The formulation was brushed onto the 9 th secondary leaf tips of seven oilseed rape plants and afterwards, endophytic B. bassiana was detected in the tissue of the untreated leaf base by PCR and microscopy. After 7 days, no hyphae growth was observed microscopically in control leaves treated without B. bassiana (n = 2). However, hyphae growth was observed in the mid rip cross-sections of 100% of leaves treated with the formulation. A randomly selected cross-section of these leaf mid rips is illustrated in Figure 5a. To verify that the microscopically detected mycelium was B. bassiana, a PCR was performed. B. bassiana was detected in all untreated areas of the leaves treated with the formulation by PCR and subsequent gel electrophoresis. The positive PCR signals of five randomly selected leaves are shown in Figure 5b and no PCR amplification was observed in control plants. Discussion This study deals with the fermentation aspects of the endophytic B. bassiana ATP-02 which might prepare the way to exploit endophytes as commercial biocontrol agents. Screening of media in shake flask culture In the past, mass-production of B. bassiana has focused on AC, but the production through surface cultivation Figure 4 Virulence test with P. xylostella larvae. The larvae were fed with AC-treated as well as BS/SCS-treated (95% BS and 5% SCS) and non B. bassiana (Bb)-treated oilseed rape leaves. Means (±SD) followed by different letters are significantly different at P < 0.01 using one-way ANOVA test. In each case, standard deviations resulted from three replicates with 10 larvae. 
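The mortality statistics above (means ± SD from three replicates of 10 larvae, one-way ANOVA with F reported on 1 and 4 degrees of freedom) can be sketched as follows; the per-replicate counts are invented for illustration and are not the recorded data.

```python
# Hedged sketch: per-replicate mortality and the two-group one-way ANOVA
# (df = 1, 4 with three replicates of 10 larvae per group, as above).
# The replicate counts are invented for illustration.
import numpy as np
from scipy import stats

def pct_mortality(dead_per_replicate, larvae_per_replicate=10):
    return 100.0 * np.asarray(dead_per_replicate, dtype=float) / larvae_per_replicate

spore_mix = pct_mortality([8, 7, 8])    # e.g. BS/SCS-treated leaves
control   = pct_mortality([1, 0, 1])    # non-treated control

print(f"spore mix: {spore_mix.mean():.0f} +/- {spore_mix.std(ddof=1):.0f} %")
print(f"control:   {control.mean():.0f} +/- {control.std(ddof=1):.0f} %")

f, p = stats.f_oneway(spore_mix, control)   # two groups, three replicates -> df (1, 4)
print(f"F(1,4) = {f:.1f}, p = {p:.4f}")
```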
or a two-stage process in which the fungus is allowed to develop under submerged conditions and subsequently transferred to a solid media to sporulate requires long cultivation times, large amounts of space and can be labour-intensive (Hall and Papierok 1982). The obvious advantages of a submerged cultivation are that the fungus produces spores in a relatively short time with high yields under controlled sterile conditions as well as a simpler scale-up in contrast to solid-state fermentation (Feng et al. 1994;Hegedus et al. 1992;Patel et al. 2011). It was previously known that B. bassiana strains grow in a variety of liquid mineral and complex media, but that the conidiation of the fungi under submerged conditions may be strain-specific (Kassa et al. 2008) and needs to be investigated for each strain in detail. Furthermore, it was hypothesized that the endophytic B. bassiana strain ATP-02 does not grow in the same way as the established B. bassiana strains that are applied in classic biocontrol on the surface of plants or in the soil. However, results on cultivation in shake flasks showed that B. bassiana ATP-02 was able to produce BS in the described culture media. Generally, in a submerged cultivation B. bassiana can produce two types of spores, namely BS and SCS. BS are relatively large, thin-walled and single-celled hyphal bodies (Bidochka et al. 1987). SCS, on the other hand, are small, spherical, more uniform in size and show a higher shelf life than BS (Thomas et al. 1987;Hegedus et al. 1992;Holder et al. 2007). They arise from the fungal mycelia or directly from BS in a process known as microcycle conidiation (Smith et al. 1981). Thomas et al. (1987) describe the direct formation of SCS from BS in the mineral TKI medium with 5% glucose after a cultivation time of 96 h. This phenomenon was not observed in the present work. From the biotechnological point of view the most important properties of the culture medium are high yields of TS and SCS, respectively. Although a higher TS yield was obtained in the YSM medium consisting of basal salts, 2% sucrose and 0.5% yeast extract a 2.5-fold, the mineral TKI medium without expensive complex components was further optimized because of cost-effectiveness. Influence of different carbon sources on spore formation of B. bassiana The highest concentrations and yields of TS and SCS were obtained in the TKI medium with 5% sugar beet molasses as a carbon source. It should be pointed out that the concentration of TS could be increased up to 11-times in contrast to the cultivation with 5% sucrose in spite of the lower sucrose concentration of 2.5% in the molasses. The utilized sugar beet molasses consisted of 50% sucrose and only traces of other sugars like glucose, fructose and raffinose as well as different proteins and basal salts according to manufacturer specification (Suedzucker AG, Mannheim, Germany). Furthermore, it could be shown that in addition to the present basal salts of the sugar beet molasses the TKI medium is necessary for optimal growth of B. bassiana. Sugar beet molasses is a residue of the agricultural industry and consequently, it is a low-cost source, which is a big advantage compared to other carbon sources. Therefore, the cost of 1 L TKI basal medium amended with 5% sugar beet molasses amounts to only 0.31 €. Fermentation of B. bassiana ATP-02 After the optimized cultivation of B. bassiana ATP-02 in shake flasks the process was scaled-up to a 2 L stirred tank reactor. Based on the cultivations in shake flasks B. 
bassiana ATP-02 was cultivated in the TKI mineral medium with 5% sugar beet molasses. In a preliminary test the fermentation was inoculated with a 5-days old shake flask culture of B. bassiana, which contained only BS. During the fermentation the fungus produced only mycelium (data not shown). Since the objective of cultivation was a high concentration of TS, further fermentations were inoculated with 5.0×10 4 AC/mL. The achieved concentrations and yields of TS and SCS were comparable with the cultivation of B. bassiana ATP-02 in shake flasks and could be verified in 7 sequentially performed fermentation runs. An unusual point during all fermentations is the sudden decrease of biomass dry weight in correlation with the low concentration of oxygen in the culture broth 72 h after inoculation. At this point it was observed that the finely dispersed mycelium lysed and the pH and the viscosity of the culture broth decreased. HPLC analysis indicated presence of oxalate. The reason for the rapid pH decrease in our fermentation is not clear, but it can be presumed that intracellular oxalate was suddenly released due to the lysis of mycelium. The following increase of pH suggests that oxalate was then metabolized by the growing spores. These presumptions are supported by other studies which also indicate that B. bassiana strains are able to produce, secrete and metabolize oxalate in vitro (Bidochka and Khachatourians 1993;Kirkland et al. 2005). During the fermentation process the pH value was not regulated due to the fact that fluctuating pH values between 4.0 and 6.5 have no considerable impact on the growth of B. bassiana (Padmavathi et al. 2003;Thomas et al. 1987). The typical limitation of oxygen can be prevented by increase of the stirrer speed or agitation rate (Patel et al. 2011). But the primary objective of this fermentation process was not the production of finely dispersed mycelium but rather the mass-production of sprayable TS without any mycelium. The recurring decrease of biomass dry weight in the spore formation phase cannot be explained in detail. It may be hypothesized that the mycelium is decreasing but the spore formation does not compensate the weight loss. It can be ruled out that spore biomass was lost during sample preparation as biomass was not filtered but centrifuged. Furthermore, the achieved TS concentration and yield of the described fermentation process was higher than those obtained by other investigators in studies of liquid shake flask cultivations of epiphytic B. bassiana strains. For example, Thomas et al. (1987) reported a maximum concentration of 5.0×10 8 TS/mL corresponding to a yield of 1.00×10 10 TS/g glucose, Rombach (1989) described that B. bassiana produced TS in a maximum concentration of 0.17×10 9 TS/mL corresponding to a yield of 0.85×10 10 TS/g sucrose, Vega et al. (2003) obtained a maximum concentration of 1.24×10 9 BS/mL corresponding to a yield of 1.65×10 10 BS/g glucose and Pham et al. (2009) reported a maximum concentration of 0.85×10 9 BS/mL. The highest described concentration of BS was reported by Chong-Rodríguez et al. (2011), who obtained an inconsistent concentration of 6.38×10 9 ± 3.63×10 9 BS/ml in a somewhat costly complex medium, which consisted of 5% sucrose, 2% corn steep liquor and basal salts. In comparison to other published data on cultivation of B. 
bassiana isolates in a solid-state or submerged cultivation it can be shown that the described fermentation process is very economical with regard to the achieved concentration and yield of TS. Furthermore, an obvious advantage of the fermentation process is that the cost of 10 12 TS amounts to only 0.24 € with regard to the utilized culture medium. A further increase of the TS concentration can likely be realized by fed-batch fermentation. Insect virulence assay The mortality was 77 ± 5% for P. xylostella larvae, which were fed with oilseed rape leaves treated with BS and SCS, and 90 ± 5% for larvae fed with AC-treated leaves. These larvae mortalities are in accordance with those obtained by other investigators. Godonou et al. (2009) reported that AC of B. bassiana caused P. xylostella larvae mortality ranging from 20 to 94%. BS which were sprayed on P. xylostella larvae showed a mortality ranging from 95 to 100% (Fargues et al. 1983). In addition, Chong-Rodríguez et al. (2011) described that BS of B. bassiana maintained for six months at 4°C showed a mortality of more than 80% against third-instar P. xylostella larvae 8 days after application. Furthermore, Ortiz-Urquiza et al. 2010 observed that the composition of the culture medium affected the virulence of AC from B. bassiana because of an increased or decreased secretion of virulent proteins. But the influence of the culture media on the virulence of B. bassiana was not investigated in this work. Since no further mortality tests were conducted with pure BS and pure SCS, it can only be hypothesized that BS must have killed the larvae, because the spore mix consisted of 95% BS which are the preferred propagule of B. bassiana in the haemocoel of infected insects (Jackson et al. 2010;Shimizu et al. 1993;Sieglaff et al. 1997). Furthermore, BS are highly infective against a number of insect pests and have a lower LD 50 when compared to AC or SCS (Hegedus et al. 1992). Finally, it was shown here that spores from a submerged cultivation are as virulent to P. xylostella larvae as AC. Penetration assay The simple penetration assay indicated that the spores from submerged fermentation show endophytic properties. This is in line with studies on AC that were applied to leaves and could to some extent colonize plants Lewis 1991, 1992;Gurulingappa et al. 2010;Landa et al. 2013;Posada et al. 2007;Quesada-Moraga et al. 2006;Quesada-Moraga et al. 2009;Tefera and Vidal 2009;Wagner and Lewis 2000). Many questions about endophytism remain that are not within the scope of this fermentation study. To the best of our knowledge, this is the first report of fermentation of an endophytic B. bassiana strain in a low-cost culture medium to very high yields of TS, which are able to penetrate oilseed rape leaves via a leaf application. This should further encourage the recent activities to exploit the biocontrol potential of endophytic entomopathogenic fungi. Besides, the evidence that the endophytic strain grows in simple cultivation conditions much like the classic biocontrol strains is further proof that in nature some microorganisms are facultative endophytes, which can optionally live inside plants and other habitats (Hardoim et al. 2008). The results also clearly suggest to further explore submerged cultivation for entomopathogenic fungi in general. Further studies are required to produce higher amounts of SCS of B. bassiana, which show a higher shelf life than thin-walled BS and may also persist longer when sprayed onto plant leaves.
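The medium cost figure quoted above (0.24 € per 10^12 TS) follows directly from the cost of the TKI/5% molasses medium (0.31 € per litre), the 1.5 L working volume, and the final titre of roughly 1.29×10 9 TS/mL. A minimal sketch of that calculation, considering medium cost only, as in the text:

```python
# Hedged sketch: medium cost per 1e12 total spores, using figures from the text
# (0.31 EUR per litre of TKI medium with 5% molasses, 1.5 L working volume,
# final titre ~1.29e9 TS/mL). Only the medium cost is considered.

def cost_per_1e12_ts(medium_cost_eur_per_l, working_volume_l, titre_ts_per_ml):
    total_spores = titre_ts_per_ml * working_volume_l * 1000.0   # 1000 mL per L
    total_cost = medium_cost_eur_per_l * working_volume_l
    return total_cost / (total_spores / 1e12)

print(f"{cost_per_1e12_ts(0.31, 1.5, 1.29e9):.2f} EUR per 1e12 TS")   # ~0.24 EUR
```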
7,182
2014-05-29T00:00:00.000
[ "Biology", "Environmental Science" ]
Role of Protein Targeting to Glycogen (PTG) in the Regulation of Protein Phosphatase-1 Activity* We have recently cloned from 3T3-L1 adipocytes a novel glycogen-targeting subunit of protein phosphatase-1, termed PTG (Printen, J. A., Brady, M. J., and Saltiel, A. R. (1997)Science 275, 1475–1478). Differentiation of 3T3-L1 fibroblasts into highly insulin-responsive adipocytes resulted in a marked increase in PTG expression. Immobilized glutathioneS-transferase (GST)-PTG fusion protein specifically bound either PP1 or phosphorylase a. Addition of soluble GST-PTG to 3T3-L1 lysates increased PP1 activity against32P-labeled phosphorylase a by decreasing theK m of PP1 for phosphorylase 5-fold, while having no effect on the V max of the dephosphorylation reaction. Alternatively, PTG did not affect PP1 activity against hormone-sensitive lipase. PTG was not a direct target of intracellular signaling, as insulin or forskolin treatment of cells did not activate a kinase capable of phosphorylating PTG in vivo or in vitro. Finally, PTG decreased the ability of DARPP-32 to inhibit PP1 activity from 3T3-L1 adipocyte lysates. These data cumulatively suggest that PTG increases PP1 activity against specific proteins by several distinct mechanisms. We have recently cloned from 3T3-L1 adipocytes a novel glycogen-targeting subunit of protein phosphatase-1, termed PTG (Printen, J. A., Brady, M. J., and Saltiel, A. R. (1997) Science 275, 1475-1478). Differentiation of 3T3-L1 fibroblasts into highly insulin-responsive adipocytes resulted in a marked increase in PTG expression. Immobilized glutathione S-transferase (GST)-PTG fusion protein specifically bound either PP1 or phosphorylase a. Addition of soluble GST-PTG to 3T3-L1 lysates increased PP1 activity against 32 P-labeled phosphorylase a by decreasing the K m of PP1 for phosphorylase 5-fold, while having no effect on the V max of the dephosphorylation reaction. Alternatively, PTG did not affect PP1 activity against hormone-sensitive lipase. PTG was not a direct target of intracellular signaling, as insulin or forskolin treatment of cells did not activate a kinase capable of phosphorylating PTG in vivo or in vitro. Finally, PTG decreased the ability of DARPP-32 to inhibit PP1 activity from 3T3-L1 adipocyte lysates. These data cumulatively suggest that PTG increases PP1 activity against specific proteins by several distinct mechanisms. While much attention has been focused on the activation of protein kinase signaling cascades, many enzymes involved in glucose and lipid metabolism are regulated by dephosphorylation (2). As the main physiological hormone controlling glucose utilization, insulin exerts many of its effects through the activation of type 1 protein phosphatase (PP1). 1 However, insulin treatment of cells results in the dephosphorylation of only a limited number of proteins, while simultaneously other proteins are phosphorylated (3). This paradox suggests that mechanisms must exist whereby insulin activates discrete pools of PP1, leading to the dephosphorylation of specific target proteins. PP1 is found in many cellular compartments, including the nucleus, plasma membrane, and glycogen particle. It is thought that the cellular localization of this enzyme is mediated by its association with targeting proteins (4,5). We have recently identified a novel PP1 glycogen-targeting subunit from 3T3-L1 adipocytes, termed PTG for protein targeting to glycogen (1). 
PTG is the third member of a family of PP1 glycogen-targeting subunits, which also includes R GL , isolated from muscle (6,7), and the hepatic G L protein (8,9). In contrast to the restricted localization of R GL and G L , PTG is highly expressed in all insulin-sensitive tissues. 2 In addition to targeting PP1 to the glycogen particle, PTG can also form complexes with PP1 substrate enzymes that regulate glycogen metabolism, namely glycogen synthase, glycogen phosphorylase, and phosphorylase kinase. Overexpression of PTG in the metabolically inactive CHO-IR cell line dramatically increased the levels of basal and insulin-stimulated glycogen synthesis (1). PTG may therefore serve as a scaffolding protein, assembling the proteins involved in glycogen metabolism and priming them for the reception of intracellular signals. The mechanism by which insulin specifically activates glycogen-targeted PP1 activity remains poorly understood. The proposed phosphorylation and activation of the PP1-R GL complex by pp90 RSK (10) has subsequently been challenged (11)(12)(13)(14)(15). Further, the two putative PP1 regulatory phosphorylation sites of R GL are not conserved in PTG (1). Therefore, other mechanisms must exist for the regulation and activation of PP1 activity targeted to glycogen by PTG. The results presented here demonstrate that PTG is not a direct target for insulinactivated protein kinases. However, PTG does increase PP1specific activity against phosphorylase a by three separate mechanisms: by targeting the phosphatase to glycogen, by directly binding and co-localizing PP1 substrates, and by reducing the affinity of PP1 for inhibitor peptides such as DARPP-32. Cell Culture-3T3-L1 fibroblasts and CHO-IR cells were maintained as described previously (14,16). 3T3-L1 fibroblasts were differentiated * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 2 J. A. Printen and A. R. Saltiel, manuscript in preparation. Fusion Protein Expression and Purification-PTG was subcloned into the pGEX-5X-3 expression vector (Pharmacia), and fusion protein was expressed in Escherichia coli DH5␣. One liter of 2X-YT media plus 100 g/ml ampicillin was seeded with 10 ml of a saturated overnight culture and allowed to grow at 37°C for 3.5 h (A 600 ϭ 0.5-0.7). Protein expression was induced with 1 mM IPTG for 3 h at 37°C. Bacteria pellets were resuspended in 20 ml of PBS containing 1 mM benzamidine, 0.1 mM phenylmethylsulfonyl fluoride, and 10 g/ml aprotinin added just before use and were lysed by two passes through a French press (1,000 p.s.i.). 2 ml of PBS plus 10% Triton X-100 were added to the lysate which was then gently mixed at 4°C for 15 min. The supernatant from a 27,000 ϫ g, 15-min centrifugation spin was applied to 1 ml of glutathione-Sepharose beads equilibrated in PBS. After 30 min of mixing at 4°C, the beads were washed five times with PBS and analyzed by SDS-PAGE. This protocol was designed to maximize GST-PTG yield and solubility. Typically, 0.5-1 mg of GST-PTG was isolated per liter of culture. GST-PTG was batch-eluted three times with 5 ml of PBS plus 20 mM glutathione (pH 7.8), followed by concentration with a Centriprep-30 (Amicon). A GST fusion protein comprising the 40-kDa N terminus of R GL (GST-G40) was expressed, purified, and eluted in an identical fashion. E. 
coli DH5␣ containing a full-length hormone-sensitive lipase GST fusion protein construct (GST-HSL; provided by Dr. C. Baumann, Parke-Davis) were grown to an A 600 of 1, and then protein expression was induced with 0.5 mM IPTG for 3 h at 30°C. GST-HSL was purified as above, and immobilized GST-HSL was labeled using purified protein kinase A catalytic subunit and 50 M [␥-32 P]ATP (2500 cpm/pmol) at room temperature for 45 min. The beads were washed extensively with PP1 homogenization buffer, and 10 l of a 50% slurry were used as substrate in the PP1 assay. A saturated culture of bacteria containing a polyhistidine-tagged DARPP-32 construct was diluted and grown in LB media at 37°C for 3-4 h until the A 600 ϭ 0.4. Protein expression was induced by an overnight incubation with 1 mM IPTG at 30°C. Bacterial pellets were lysed as above, and the clarified lysate was applied to Ni-NTA agarose affinity resin. After batch elution in 50 mM Tris (pH 6.8), 150 mM NaCl, and 0.5 M imidazole, DARPP-32 was concentrated using a Centriprep-10. In Vivo and in Vitro Phosphorylation Assays-CHO-IR cells were transiently transfected with a FLAG epitope-tagged PTG construct as described previously (1). 48 h post-transfection, cells were serum-deprived for 3 h in phosphate-free Dulbecco's modified Eagle's medium plus 0.5% calf serum. Cells were then incubated for 1 h in the same media containing 1 mCi/1.5 ml of [ 32 P]orthophosphate. After stimulation, cells were washed three times with ice-cold PBS and lysed in HNTG buffer (14,18) plus protease inhibitors, and anti-FLAG immunoprecipitations were performed as described (1). Replicate culture plates were treated in parallel without [ 32 P]orthophosphate, and in vitro kinase assays were performed on the cell extracts as below. 3T3-L1 adipocytes were serum-starved for 3 h in Krebs-Ringer buffer with 30 mM Hepes (pH 7.4) plus 0.5% BSA and 2.5 mM glucose. After treatment, cells were washed three times with ice-cold PBS and were then collected in kinase lysis buffer (10 mM Hepes (pH 8.0), 50 mM ␤-glycerophosphate, 70 mM NaCl, 1% Triton) plus 1 mM sodium vanadate, and protease inhibitors were added before use. After centrifugation at 15,000 ϫ g for 10 min, 15-35 g of the lysates were assayed in duplicate for kinase activities. GST-PTG and MAP kinase assays were performed at 37°C for 10 min in the presence of 50 mM Hepes (pH 7.4), 10 mM MgCl 2 , and 40 M [␥-32 P]ATP (3000 cpm/pmol), using as substrate 10 g of GST-PTG or 2.5 g of MAP2 protein, respectively. Protein kinase A assays were performed at 37°C for 2 min as above, using 5 g of GST-G40 as substrate, in the absence and presence of 1 mM dibutyryl cAMP. Reactions were terminated by the addition of SDS-sample buffer. Samples were analyzed by SDS-PAGE and autoradiography, substrate proteins were excised from the dried gels, and 32 P incorporation was measured by liquid scintillation counting. Assay of PP1 Activity-3T3-L1 adipocytes were washed three times with ice-cold PBS and were then harvested in PP1 homogenization buffer (50 mM Hepes (pH 7.2), 2 mM EDTA, 0.2% ␤-mercaptoethanol, and 2 mg/ml glycogen) plus protease inhibitors. Cells were lysed by sonication, and nuclei and cell debris were pelleted by centrifugation at 2,500 ϫ g for 5 min. Where indicated, the resulting post-nuclear supernatant (PNS) was subjected to sequential centrifugation to prepare plasma membranes (10,000 ϫ g, 15 min) and to separate glycogenenriched pellets from the cytosol (100,000 ϫ g, 1 h). 
Particulate fractions were resuspended in homogenization buffer using a 23-gauge needle. For PP1 assays, 1-3 g of cellular fraction was preincubated in PP1 homogenization buffer (20-l final volume) containing 4.5 nM okadaic acid for 2 min at 37°C. Reactions were initiated by the addition of 10 l (15 g, 5 M final) of 32 P-labeled phosphorylase a (3 nM okadaic acid, 5 mM caffeine final). After 5-10 min, the reactions were terminated by the addition of 90 l of ice-cold 20% trichloroacetic acid and 5 l of 2% BSA (modified from Ref. 19). Samples were incubated for 10 min on ice, and precipitated protein was pelleted by a 2-min 15,000 ϫ g centrifugation spin. Phosphate released into the supernatant was measured by liquid scintillation counting. For Lineweaver-Burk analysis, PP1 activity was measured using 0.5 g of PNS fraction as above and 1, 2, 3, 4, and 6 M final concentrations of phosphorylase substrate. In the absence of GST-PTG, samples were incubated at 37°C for 2.5, 5, 7.5, and 10 min; in the presence of GST-PTG, samples were incubated for 1, 2, 4, and 5 min to ensure that maximal dephosphorylation did not exceed 25% of the substrate added. Reactions were terminated and quantitated as above, and results were analyzed using Cricket Graph (Cricket software). [ 32 P]Phosphorylase a (ϳ2000 cpm/pmol) was prepared as described previously (14). Determination of PTG Binding Affinities-For phosphorylase a binding measurements, 5 g of immobilized GST-PTG was resuspended in PP1 homogenization buffer plus 200 mM NaCl and 0.2% BSA. 32 P-Labeled phosphorylase a was added to final concentrations of 1, 5, 10, and 15 M (final volume 30 l). Samples were incubated at 4°C with gentle mixing for 30 min and were then washed three times with buffer. Phosphorylase a binding was determined by liquid scintillation counting. Parallel incubations using immobilized GST-PTP1B protein were subtracted as blanks for each phosphorylase concentration. For PP1 binding measurements, immobilized GST-PTG was incubated with increasing amounts of bacterial lysate in PP1 homogenization buffer containing recombinant PP1␣. Samples were treated as above, except that binding was analyzed by anti-PP1 immunoblotting and ECL detection. Standard curves of PP1 protein were included on the immunoblots. Autoradiograms were analyzed by computer-assisted video densitometry using a Bio Image system (Millipore). PTG Expression Is Increased upon 3T3-L1 Differentiation-We have recently identified by two-hybrid screening a novel glycogen-targeting subunit of PP1 from 3T3-L1 adipocytes, termed PTG (1). This protein appears to act as a molecular scaffold for glycogen metabolism and can dramatically increase glycogen synthesis upon overexpression in tissue culture cells. To determine whether PTG expression is correlated with the increase in metabolic activity and insulin responsiveness observed following 3T3-L1 differentiation, PTG expression was examined in fibroblasts and fully differentiated adipocytes by Northern analysis. A single hybridizing mRNA species of 3 kilobases was identified, which was dramatically up-regulated following adipogenesis (Fig. 1). These results correlate PTG expression with 3T3-L1 adipocyte differentiation, suggesting a critical role for PTG in the regulation of glycogen synthesis in 3T3-L1 adipocytes. 
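The phosphatase assay quantitation and Lineweaver-Burk analysis described under "Experimental Procedures" above can be sketched as follows: released 32P counts are converted to pmol of phosphate with the ~2000 cpm/pmol specific radioactivity of the phosphorylase a substrate, and Km and Vmax are then estimated from a double-reciprocal fit. The rate values below are invented for illustration; only the substrate concentrations (1-6 μM) follow the text.

```python
# Hedged sketch: converting released 32P counts to dephosphorylation rates and
# estimating Km and Vmax by a Lineweaver-Burk (double-reciprocal) fit.
# Specific radioactivity (~2000 cpm/pmol) is taken from the text; the rates
# below are invented.
import numpy as np

def rate_pmol_per_min(cpm_released, specific_activity_cpm_per_pmol, minutes):
    """Dephosphorylation rate from released counts."""
    return cpm_released / specific_activity_cpm_per_pmol / minutes

def lineweaver_burk_fit(s_uM, v):
    """Fit 1/v = (Km/Vmax)(1/[S]) + 1/Vmax and return (Km, Vmax)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(s_uM), 1.0 / np.asarray(v), 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# Example: illustrative rates (pmol PO4/min) at the substrate concentrations used.
s = [1, 2, 3, 4, 6]                    # uM phosphorylase a
v = [0.5, 0.85, 1.1, 1.3, 1.55]        # invented rates
km, vmax = lineweaver_burk_fit(s, v)
print(f"Km ~ {km:.1f} uM, Vmax ~ {vmax:.1f} pmol/min")
print(rate_pmol_per_min(cpm_released=10000, specific_activity_cpm_per_pmol=2000, minutes=5))
```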
PP1 and Phosphorylase a Bind GST-PTG in Vitro-We have previously demonstrated (1) that PTG can form specific complexes with PP1, phosphorylase a, phosphorylase kinase, and glycogen synthase, the key enzymatic regulators of glycogen synthesis. To better characterize the protein-protein interactions of these enzymes with PTG, the binding affinities of GST-PTG fusion protein for PP1 and phosphorylase a were determined. Immobilized GST-PTG was incubated with varying amounts of recombinant PP1␣, the beads were washed and subjected to SDS-PAGE, and binding was quantitated by densitometry scanning of anti-PP1␣ immunoblots. Binding was saturated at 850 nM PP1, with an EC 50 of approximately 335 Ϯ 18 nM (Fig. 2A). The affinity of PTG for glycogen phosphorylase was measured next. Varying amounts of 32 P-labeled phosphorylase a were incubated with immobilized GST-PTG, the beads were washed extensively, and binding was determined by liquid scintillation counting. As seen in Fig. 2B, phosphorylase a bound to PTG with an EC 50 of 5.45 Ϯ 0.14 M. In contrast to the results of Doherty et al. (21), these data confirm earlier results demonstrating that PTG can directly associate with phosphorylase a (1). PTG Increases the Affinity of PP1 for Phosphorylase a-We examined the role of PTG in both the subcellular targeting and the regulation of PP1 specific activity. GST-PTG addition to a 3T3-L1 adipocyte PNS fraction, followed by differential centrifugation, resulted in a dose-dependent, 4 -6-fold increase in PP1 activity in the glycogen-enriched fraction (Fig. 3A, GP). PTG addition to 3T3-L1 lysates also reduced the amount of PP1 activity targeted to the plasma membrane fraction (Fig. 3A, PM), indicating that changes in the level of PTG expression can impact on the cellular distribution of PP1 activity (1). Since PTG forms stable complexes with PP1 substrate proteins, it is possible that PTG can also modulate PP1 specific activity independently of glycogen localization. To test this possibility, varying amounts of soluble GST-PTG were added to 3T3-L1 adipocyte PNS fractions, and then PP1 activity was measured. PTG caused a dose-dependent 2-fold increase in PP1 specific activity against phosphorylase a (Fig. 3B), whereas addition of 500 nM GST protein had no effect on PP1 activity (data not shown). Lineweaver-Burk analysis revealed that PTG decreased the K m of PP1 for phosphorylase a 5-fold, while having no effect on the V max (Fig. 3B). Similar results were obtained in a glycogen-free, cytosolic fraction (data not shown), indicating that the stimulation of PP1 activity was independent of glycogen targeting. GST-PTG also caused a dose-dependent 3-fold increase in bacterially expressed PP1␣ activity against phosphorylase a (data not shown). GST-PTG was completely soluble and did not pellet upon ultracentrifugation in the absence of glycogen, possibly explaining differences with the results of Doherty et al. (21). The effects of PTG on PP1 activity were dependent on the substrate used in the phosphatase assay. HSL is a lipid-me-tabolizing enzyme, which is dephosphorylated by PP1 in response to insulin stimulation of adipocytes. Although GST-PTG addition to a 3T3-L1 adipocyte PNS fraction increased PP1 specific activity 2-fold against phosphorylase a (Fig. 3D, Phos a), in parallel assays, there was no change in PP1 activity when 32 P-labeled hormone-sensitive lipase was used as substrate (Fig. 3D, HSL). 
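The binding affinities reported above (EC50 of roughly 335 nM for PP1 and 5.45 μM for phosphorylase a) come from saturation binding curves. A minimal sketch of such a fit, assuming a simple one-site hyperbolic binding model; the data points are simulated rather than the published measurements.

```python
# Hedged sketch: one-site saturation binding fit, B = Bmax*[L]/(EC50 + [L]),
# of the kind used to estimate the GST-PTG binding affinities above.
# The data points are simulated, not the published measurements.
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(ligand, bmax, ec50):
    return bmax * ligand / (ec50 + ligand)

# Simulated phosphorylase a binding data (uM ligand, arbitrary bound units).
ligand = np.array([1.0, 5.0, 10.0, 15.0])
bound = np.array([0.16, 0.51, 0.66, 0.74])   # invented, roughly hyperbolic

(bmax, ec50), _ = curve_fit(one_site_binding, ligand, bound, p0=[1.0, 5.0])
print(f"Bmax ~ {bmax:.2f}, EC50 ~ {ec50:.2f} uM")
```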
Thus PTG regulates PP1 specific activity by both targeting the phosphatase to glycogen and also by selectively binding and co-localizing certain substrates with PP1. PTG Is Not a Target of Intracellular Signaling-Because PTG is likely to play a critical role in the regulation of glycogen synthesis by insulin, we examined whether PTG might be phosphorylated in response to hormone treatment. Although the putative phosphorylation sites previously suggested for R GL (10,22) are not conserved in PTG (1), the possible phosphorylation of other residues on PTG was examined. CHO-IR cells transiently transfected with FLAG-PTG were labeled with [ 32 P]orthophosphate and exposed to either 100 nM insulin or 10 g/ml forskolin for 5 min. PTG was immunoprecipitated with anti-FLAG antibodies and subjected to SDS-PAGE followed by autoradiography. PTG exhibited a low basal state of phosphorylation, which was not changed by either treatment (Fig. 4A). In replicate cultures, insulin or forskolin exposure increased MAP kinase or protein kinase A activity 3-and 5-fold, respec- To determine whether 3T3-L1 adipocytes contained a protein kinase capable of phosphorylating PTG, in vitro phosphorylation assays were performed using PTG as substrate. Cells were exposed to a variety of agents, and cellular lysates were prepared and incubated with GST-PTG and [␥-32 P]ATP. Samples were then analyzed by SDS-PAGE and autoradiography. As seen in Fig. 4C, GST-PTG was not phosphorylated in vitro by basal extracts (lane 1 versus 2). Further stimulation of 3T3-L1 adipocytes with either insulin or forskolin did not lead to the activation of a PTG kinase (Fig. 4C, lanes 3 and 4). EGF or TPA treatment of cells also did not result in any measurable phosphorylation of GST-PTG in vitro (Fig. 4C, lanes 5 and 6). Finally, PTG was also not phosphorylated in vitro by purified MAP kinase or protein kinase A catalytic subunit (data not shown), further indicating that PTG is not a physiological substrate for insulin-or cAMP-activated kinases. Insulin Does Not Change the Subcellular Distribution of PP1-Translocation of PP1 to and from the glycogen particle in response to the phosphorylation of R GL has been suggested to underlie hormonal regulation of PP1 activity (10,22). Although PTG is not phosphorylated in response to external stimuli, possible changes in the cellular distribution of PP1 following insulin treatment were examined. Primary rat adipocytes were isolated and exposed to 10 nM insulin for 30 min, and cellular fractions were prepared by differential centrifugation. Insulin induced a translocation of GluT-4 protein from the low density microsomal fraction to the plasma membrane fraction (Fig. 5A). However, insulin stimulation did not modulate PP1␣ protein levels in any fraction, including the low density microsomal fraction, which contains the glycogen pellet (Fig. 5B); identical results were obtained using an anti-PP1␥ antibody (data not shown). Further insulin or forskolin treatment of 3T3-L1 adipocytes also did not cause a detectable translocation of PP1 between cellular fractions (data not shown). Taken together, these data indicate that the regulation of PP1 activity by insulin, or agents that elevate intracellular cAMP levels, occurs independently of PP1 translocation. PTG Decreases the Inhibition of PP1 by DARPP-32-PP1 is maintained in a low activity state in 3T3-L1 adipocytes by the binding of phosphorylated DARPP-32. 
3 Previous studies (23) suggested that insulin may activate PP1 in primary rat adipocytes by inducing the dephosphorylation and disassociation of FIG. 3. PTG increase PP1 specific activity against phosphorylase a. A, IPTG targets PP1 to glycogen in 3T3-L1 lysates. The indicated amounts of soluble GST-PTG was added to 700 l (ϳ500 g) of a PNS fraction from 3T3-L1 adipocytes, and samples were gently mixed for 1 h at 4°C. Plasma membrane and glycogen-enriched fractions were then prepared, and PP1 activity against 32 P-labeled phosphorylase a was measured in both fractions as described under "Experimental Procedures." Results are representative of four experiments, each performed in triplicate. B, PTG increases PP1 specific activity in 3T3-L1 PNS fraction. The indicated concentrations of soluble GST-PTG were added to a 3T3-L1 PNS fraction. Samples were mixed for 30 -60 min at 4°C, and PP1 assays were then performed. Results are representative of five experiments. C, Lineweaver-Burk analysis of PP1 activity. GST-PTG (100 nM) was added to 3T3-L1 adipocyte PNS fractions as in B, and Lineweaver-Burk enzymatic analysis was performed as described under "Experimental Procedures." Results are representative of four experiments, each performed in duplicate. D, specificity of PP1 activation by PTG. 100 nM soluble GST-PTG was added to a 3T3-L1 PNS fraction as in B. PP1 assays were performed in triplicate using either phosphorylase a (Phos a) or HSL as substrate. Control specific activities were approximately 1 and 0.5 pmol of PO 4 released/min for phosphorylase a and HSL, respectively. this peptide. The role of PTG in modulating the regulation of PP1 activity by DARPP-32 was investigated. Thiophosphorylated DARPP-32 specifically inhibited PP1 activity in 3T3-L1 lysates (Fig. 6), with a K i of 3 nM, consistent with previous results (20). Addition of purified GST-PTG to the lysates caused a rightward shift in the inhibition curve of DARPP-32 (K i 30 nM, Fig. 6), indicating that PTG lowers the binding affinity of DARPP-32 for PP1. These data suggest that a decrease in the cellular phospho-DARPP-32 concentration in response to insulin might result in the preferential activation of PP1 bound to PTG. DISCUSSION The regulation by insulin of enzymes involved in glycogen synthesis is primarily mediated by the activation of PP1 (2). The activities of glycogen synthase, phosphorylase a, and phosphorylase kinase are modulated by insulin via a mechanism involving their net dephosphorylation, resulting in an increase in glucose storage as glycogen. The paradoxical dephosphorylation of only a limited number of proteins by insulin, despite the ubiquitous presence of PP1 in nearly all cellular compartments, has yet to be explained. Mechanisms must exist for the establishment of discrete pools of PP1 which are preferentially activated by insulin. PP1 is maintained in discrete subcellular compartments by association with specific targeting subunits. Three proteins have been identified which bind both glycogen and PP1, thus localizing PP1 at the glycogen particle. R GL was first purified from skeletal muscle (6,7), and G L was subsequently purified from liver (8,9). We have recently identified a third PP1 targeting subunit from 3T3-L1 adipocytes, termed PTG, for protein targeting to glycogen (1). PTG binds not only to PP1 and glycogen, but also to the primary enzymatic regulators of glycogen synthesis, namely glycogen synthase, phosphorylase kinase, and phosphorylase a (1). 
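The rightward shift of the DARPP-32 inhibition curve described above (apparent Ki moving from about 3 nM to about 30 nM in the presence of GST-PTG) can be visualized with a simple one-site inhibition model. The functional form below is an assumption made for illustration, not the analysis used in the study.

```python
# Hedged sketch: fraction of PP1 activity remaining as a function of
# thiophosphorylated DARPP-32, modeled as simple one-site inhibition
# activity = 1 / (1 + [I]/Ki). The Ki values are those quoted in the text.
import numpy as np

def remaining_activity(inhibitor_nM, ki_nM):
    return 1.0 / (1.0 + inhibitor_nM / ki_nM)

darpp32 = np.array([1, 3, 10, 30, 100], dtype=float)   # nM
for ki, label in [(3.0, "PP1 alone"), (30.0, "PP1 + GST-PTG")]:
    profile = ", ".join(f"{remaining_activity(c, ki):.2f}" for c in darpp32)
    print(f"{label:>14} (Ki = {ki:g} nM): {profile}")
# The Ki = 30 nM curve sits to the right of the Ki = 3 nM curve, i.e.
# PTG-bound PP1 requires ~10-fold more DARPP-32 for the same degree of inhibition.
```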
By co-localizing PP1 with its substrates at the glycogen particle, PTG acts as a scaffolding protein, assembling metabolic enzymes for the localized reception of intracellular signals. Studies in different cell lines indicate that the level of PTG expression correlates with cellular metabolic activity. CHO-IR cells contain no endogenous PTG protein and exhibit a low basal rate of glycogen synthesis. Overexpression of PTG in these cells resulted in a 7-10-fold increase in glycogen synthesis (1). Differentiation of 3T3-L1 fibroblasts into lipid-containing adipocytes caused a dramatic increase in insulin-stimulated glycogen synthesis.3 PTG mRNA levels were strongly up-regulated during this differentiation protocol (Fig. 1), further indicating an important role for this protein in the regulation of glycogen synthesis by insulin.
FIG. 4. PTG is not phosphorylated in vivo or in vitro. A, PTG phosphorylation in CHO-IR cells. CHO-IR cells, transiently transfected with a FLAG-PTG construct, were labeled with [32P]orthophosphate. Following a 5-min treatment with either 100 nM insulin or 10 µg/ml forskolin, cells were lysed, and anti-FLAG immunoprecipitations were performed and analyzed by SDS-PAGE and autoradiography at −80°C for 2 days. FLAG-PTG is indicated by the arrow. B, MAP kinase activation in CHO-IR cells. Replicate plates of cells from A were stimulated and lysed in kinase buffer. MAP kinase activity was measured in vitro using purified MAP2 protein and [γ-32P]ATP. Samples were separated by SDS-PAGE and were analyzed by autoradiography and scintillation counting. C, PTG is not phosphorylated by kinases in 3T3-L1 adipocyte lysates. Fully differentiated 3T3-L1 adipocytes were stimulated for 5 min with either 100 nM insulin, 10 µg/ml forskolin, 100 ng/ml EGF, or 100 nM TPA. Cell lysates were prepared, and in vitro PTG kinase assays were performed using [γ-32P]ATP and GST-PTG as described under "Experimental Procedures." Samples were analyzed by SDS-PAGE and autoradiography. Coomassie-stained GST-PTG is indicated by the arrow. GST-PTG protein was omitted from the assay in lane 1. Treatment lanes: 1 and 2, basal; 3, insulin; 4, forskolin; 5, EGF; 6, TPA. In parallel assays, MAP kinase activity was stimulated 3-, 2-, and 1.2-fold by insulin, EGF, or TPA, respectively, whereas forskolin increased protein kinase A activity 4-fold (data not shown). Results are representative of two experiments, each performed in duplicate.
PTG appeared to be capable of regulating PP1 specific activity in vitro by several mechanisms. Firstly, a GST-PTG fusion protein bound to PP1 with high affinity. Moreover, addition of GST-PTG to 3T3-L1 lysates resulted in a concentration-dependent translocation of PP1 to the glycogen-enriched pellet (Fig. 3A). The level of cellular expression of PTG would therefore presumably dictate the localization of PP1 at the glycogen particle (1). Secondly, GST-PTG also bound directly to phosphorylase a (Fig. 2B). Addition of GST-PTG to 3T3-L1 lysates, with no subsequent fractionation, caused a 2-fold increase in PP1 specific activity against phosphorylase a in vitro. Since PTG addition decreased the Km of PP1 for phosphorylase a 5-fold, without affecting the Vmax of the reaction, this increase in phosphatase activity resulted from the formation of a trimeric complex between PTG, PP1, and its substrate phosphorylase a. This effect of PTG on PP1 activity was restricted to specific proteins.
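To see why a lower Km with an unchanged Vmax is read as evidence for substrate co-localization rather than simple activation, the Michaelis-Menten rate law can be evaluated at a few substrate concentrations. The sketch below uses arbitrary, assumed values (only the 5-fold Km decrease is taken from the text); it is an illustration of the kinetics, not the paper's data:

```python
# Illustrative Michaelis-Menten sketch: lowering Km ~5-fold at constant Vmax
# raises activity most at sub-saturating substrate concentrations.
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate v = Vmax*[S]/(Km + [S])."""
    return vmax * s / (km + s)

vmax = 1.0                 # arbitrary units (assumed)
km_basal = 5.0             # hypothetical Km without PTG, in units of [S]
km_ptg = km_basal / 5.0    # 5-fold lower Km, Vmax unchanged

for s in (0.5, 2.0, 10.0, 50.0):
    v0 = mm_rate(s, vmax, km_basal)
    v1 = mm_rate(s, vmax, km_ptg)
    print(f"[S]={s:5.1f}  v(no PTG)={v0:.3f}  v(+PTG)={v1:.3f}  fold={v1/v0:.2f}")
# At low [S] the fold-stimulation approaches 5; at saturating [S] it approaches 1,
# consistent with an unchanged Vmax.
```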
Although HSL is a physiological substrate for PP1 in adipocytes and in vitro, PTG did not affect PP1 activity against this enzyme (Fig. 4D). Thus, PTG regulates PP1 activity against glycogen metabolic enzymes, both by targeting PP1 to glycogen and also by directly binding to and co-localizing specific PP1 substrate proteins. The precise mechanism by which insulin activates glycogen-targeted PP1 activity remains unclear. Dent et al. (10) reported that the phosphorylation state of two protein kinase A consensus sites on the RGL glycogen targeting subunit regulated PP1 activity in vitro. However, this model has subsequently been disputed (11-15). PTG does not share the putative regulatory phosphorylation sites of RGL (1), and PTG was not phosphorylated in response to either insulin or forskolin treatment of CHO-IR cells (Fig. 4A). Additionally, neither agent could activate a kinase from 3T3-L1 adipocytes capable of phosphorylating exogenous PTG in vitro (Fig. 4C). Finally, insulin treatment had no effect on PP1 binding to PTG in CHO-IR cells (1), and insulin did not increase PP1 localization at the glycogen particle in either primary rat adipocytes (Fig. 5B) or 3T3-L1 adipocytes.3 Taken together, these results indicate that phosphorylation of PTG and/or changes in the affinity of PTG for PP1 do not mediate the hormonal regulation of PP1 activity targeted to glycogen. In 3T3-L1 adipocytes, PP1 is maintained in a low basal activity state by DARPP-32 binding. DARPP-32 expression is dramatically induced upon differentiation of 3T3-L1 fibroblasts into adipocytes and correlates with a decrease in PP1 basal activity and an increase in stimulation by insulin.3 Furthermore, DARPP-32 is expressed in pig brown fat (24), bovine adipose tissue (25), and in primary rat adipocytes, where it has been reported to be dephosphorylated in response to insulin (23). PP1 bound to PTG was resistant to inhibition by DARPP-32 (Fig. 6), in agreement with results with RGL (26). Since PTG reduces the affinity of DARPP-32 for PP1, glycogen-targeted PP1 activity may be more sensitive to possible insulin-induced dephosphorylation of inhibitor peptides in vivo. PTG may therefore not only increase PP1 specific activity against glycogen-targeted enzymes, but also may partially underlie the specific activation of glycogen-targeted PP1 by insulin. Additional work is needed to fully test this proposed model.
FIG. 6. PTG decreases DARPP-32 inhibition of PP1 activity. PNS fractions from fully differentiated 3T3-L1 adipocytes were prepared. Following addition of GST-PTG as in the legend to Fig. 3, PP1 activity was measured in the presence of the indicated concentrations of thiophosphorylated DARPP-32. Assays were incubated at 37°C for 8 min, and results are averaged from two experiments, each performed in triplicate.
FIG. 5. Effects of insulin on the subcellular distribution of GluT-4 and PP1. Primary rat adipocytes were incubated for 30 min in the absence and presence of 10 nM insulin. Cells were homogenized, and subcellular fractions were prepared by differential centrifugation as described under "Experimental Procedures." Proteins were separated by SDS-PAGE, transferred to nitrocellulose, and probed with an anti-GluT-4 (A) or an anti-PP1α (B) antibody. Cyto, cytosol; LDM, low density microsomes; PM, plasma membrane; HDM, high density microsomes; M/N, mitochondria and nuclei. Odd lanes are from basal adipocytes, and even lanes are from insulin-stimulated cells.
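The rightward shift of the DARPP-32 inhibition curve (Ki of roughly 3 nM without PTG versus roughly 30 nM with PTG, as quoted above) can be pictured with a simple one-site inhibition model. The hyperbolic form used below is an assumption made for illustration; the Ki values come from the text, and everything else is hypothetical:

```python
import numpy as np

def fractional_activity(inhibitor_nM, ki_nM):
    """Simple one-site model: remaining activity = 1 / (1 + [I]/Ki)."""
    return 1.0 / (1.0 + inhibitor_nM / ki_nM)

darpp32 = np.logspace(-1, 3, 9)          # 0.1 nM to 1000 nM thio-DARPP-32
for ki, label in [(3.0, "PP1 alone"), (30.0, "PP1 + PTG")]:
    act = fractional_activity(darpp32, ki)
    print(label, np.round(act, 2))
# The +PTG curve is shifted ~10-fold to the right, i.e. ~10-fold more
# DARPP-32 is needed to reach the same degree of inhibition.
```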
The glycogen-enriched pellet is present in the low density microsomes under these fractionation conditions.
6,668.2
1997-08-08T00:00:00.000
[ "Biology", "Chemistry" ]
Combining Spectral and Texture Features of UAV Images for the Remote Estimation of Rice LAI throughout the Entire Growing Season : Leaf area index (LAI) estimation is very important, and not only for canopy structure analysis and yield prediction. The unmanned aerial vehicle (UAV) serves as a promising solution for LAI estimation due to its great applicability and flexibility. At present, vegetation index (VI) is still the most widely used method in LAI estimation because of its fast speed and simple calculation. However, VI only reflects the spectral information and ignores the texture information of images, so it is difficult to adapt to the unique and complex morphological changes of rice in different growth stages. In this study we put forward a novel method by combining the texture information derived from the local binary pattern and variance features (LBP and VAR) with the spectral information based on VI to improve the estimation accuracy of rice LAI throughout the entire growing season. The multitemporal images of two study areas located in Hainan and Hubei were acquired by a 12-band camera, and the main typical bands for constituting VIs such as green, red, red edge, and near-infrared were selected to analyze their changes in spectrum and texture during the entire growing season. After the mathematical combination of plot-level spectrum and texture values, new indices were constructed to estimate rice LAI. Comparing the corresponding VI, the new indices were all less sensitive to the appearance of panicles and slightly weakened the saturation issue. The coefficient of determination (R 2 ) can be improved for all tested VIs throughout the entire growing season. The results showed that the combination of spectral and texture features exhibited a better predictive ability than VI for estimating rice LAI. This method only utilized the texture and spectral information of the UAV image itself, which is fast, easy to operate, does not need manual intervention, and can be a low-cost method for monitoring crop growth. Introduction As one of the world's three major food crops, rice is the staple food for half of the global population [1][2][3]. Its leaves, as the main organ of photosynthesis, have a significant effect on the overall growth status of the rice. Leaf area index (LAI), a concept first put forward by Watson [4], is half of the amount of leaf area per unit horizontal ground surface area [4][5][6]. It is regarded as a critical vegetation structural variable that characterizes the geometry of the crop canopy [7,8] and can be used as a predictor in crop biomass and yield estimation [9][10][11]. Accurate estimation of LAI is of great significance to paddy field management and precision agriculture. It is also a key indicator of photosynthesis [12,13] and evapotranspiration [14], and therefore plays an essential role in biogeochemical cycles in ecosystems [15]. With the development of remote sensing technology, the use of remote sensing images to estimate LAI has become a hot topic [5,6]. Past research has involved a variety of remote sensing techniques to estimate LAI based on empirical models [8,10,[16][17][18][19] and physical models [20][21][22][23]. The physical model simulates the radiative transfer of the signal within a canopy, but it requires many input parameters [20,21,24] and a few studies have shown that the derived products are less accurate than those of the empirical models [6,25,26]. 
The other common strategy used to estimate LAI is to establish an empirical relationship between ground measured LAI and vegetation index (VI) [19,27]. Since Rouse used ratio vegetation index (RVI) and normalized difference vegetation index (NDVI) to estimate crop characteristics in the 1970s [28], many VIs based on the combination of reflectance between different wavelengths have been established to estimate LAI in many vegetation types. For example, Viña et al. [29] evaluated several VIs calculated by green, red, red edge, and near-infrared (NIR) bands recorded by radiometers mounted on a ground system for estimating green LAI of maize and soybean in Lincoln, NE, U.S.A.. Yao et al. [17] developed a winter wheat LAI model with modified triangular vegetation index (MTVI2) based on a six-channel narrowband multispectral camera (Mini-MCA6) mounted on unmanned aerial vehicles (UAV) to increase the sensitivity of the model under various LAI values. Qiao et al. [8] chose NDVI to obtain the piecewise LAI-VI relationships based on phenophases of forests, crops, and grasslands using MODIS data. Dong et al. [19] compared the potential of red edge-based and visible-based reflectance VIs, which were calculated from multitemporal RapidEye images, for spring wheat and canola LAI estimation. VI combines the different responses of vegetation under different waveband reflectivity to distinguish the vegetation foreground from the soil background. The method is simple and effectively implemented to estimate phenotypic traits. However, there remain problems to be solved in rice LAI estimation. The life cycle of rice plants ranges from three to six months from germination to maturity, depending on the variety and environment [30]. There are three main stages of the growth period: vegetative (germination to panicle initiation), reproductive (panicle initiation to heading), and ripening (heading to maturity) [31]. The morphological changes of rice and other vegetation types in different stages are very different. Rice seeds germinate and are then transplanted in soil/water. During vegetative and reproductive stages, with the tillering and jointing of the rice plant, the canopy closes gradually, the leaf area increases sharply, and the background is increasingly occluded. When 50% of the panicles have partially exserted from the leaf sheath, the plant enters heading stage [30,31]. The shape of the rice panicle is thin and long and the surface is rough. From this point on, the grain increases in size and weight and the plant enters ripening stage. With maturity, the panicle changes color from green to gold and turns heavy and droopy, which increases the complexity of the canopy structure. As such, the morphological changes of rice plants at different growth stages are more complex than those of other types of vegetation. It has been proven that the canopy light distribution will change dramatically with the emergence of panicles [32]. The complex changes of rice phenology affect the accuracy of LAI estimation by remote sensing methods during the rice growth season. Sakamoto et al. [33] investigated the relationship between visible atmospherically resistant index (VARI) and rice LAI in the entire growing season. They found that the appearance of yellow panicles in the camera's field of view would affect the LAI estimation with VARI [33]. Wang et al. 
[34] used ten VIs to estimate rice LAI before and after heading stages, which also proved that the relationship between LAI and post-heading VIs was weaker than pre-heading VIs. Some rice LAI estimation experiments used partial samples for prediction before heading, while some made separate predictions of pre-and post-heading stages. Li et al. [35] applied a normalized texture index based on gray level co-occurrence matrix (GLCM) to improve the method of using color indices to estimate rice LAI during tillering to booting stages. Sakamoto et al. [33] established a linear regression model of the relationship be-tween VARI and LAI when LAI was greater than 0.4 before heading. Casanova et al. [16] fitted the rice LAI of the entire growing season with three exponential piecewise functions. The coefficients of determination (R 2 ) could reach 0.7 during vegetation and reproductive stages but only 0.25 during the ripening stage [16]. It must be noted that it is not easy to ascertain the heading date, especially for the experimental fields for breeding. For example, in the study of Ma et al. [36], six rice fields with multiple cultivars distributed in China were studied, and more than 1000 rice cultivars were planted in one single field among them. It is a time-consuming and labor-consuming task for professionals to observe whether these fields are heading every day. Therefore, even if there are many challenges, it is necessary to estimate rice LAI in the entire growing season. In addition to spectral features, remote sensing images also provide more abundant texture information related to vegetation growth [37]. Zhang et al. [38] combined object-based texture features with a neural network to improve the accuracy of vegetation mapping. It has also been proven that GLCM features of high-resolution images could improve LAI and biomass estimation of forests [39,40]. With the improvement of spectral and spatial resolution of remote sensing images, the potential of texture application is also increasing. Since local binary pattern (LBP) was proposed by Ojala in 1996 [41], it has been considered one of the most effective, prominent, and widely studied local descriptors [42,43]. Due to its computational simplicity and tolerance against illumination changes [44], LBP and its extension have been widely applied in image processing, including texture classification [45][46][47], face recognition [48,49], medical image analysis [50][51][52], and archaeological surveying [53]. Compared with the GLCM method, the LBP operator is better at describing micro texture [54], and its ability to analyze multiple pixels at one time rather than a single pixel pair may provide some performance headroom on the image with the noise signal [55]. Considering that variance (VAR) reflects the image contrast, which is ignored by LBP, local binary pattern combined with variance features (LBP and VAR) proposed by Ojala in 2010 is not only robust to rotation but also retains the image contrast information [56,57]. It is easy to calculate and is abundant in information, meaning it has the potential for application in LAI estimation. Considering the advantages of high resolution, flexibility, and low cost of UAV [35,37], it is useful for observing the rice fields and extracting texture information. Based on UAV images, the main purpose of this study is to explore the effect of introducing LBP and VAR features in rice LAI estimation throughout the entire growing season. 
Study Area Two field experiments with various hybrid rice cultivars were conducted at two experiment sites: Lingshui, Hainan, and Ezhou, Hubei (site details are given in Table 1). In order to distinguish the different plots in the image, some whiteboards were erected on the edges of the plots. The field management for these plots was the same, including fertilizer supply (12 kg/ha) and planting density (22.5 bundles/m 2 ). The field was managed by professionals with agronomic knowledge who controlled plant diseases and insect pests immediately. Experiment 1 was carried out during a single season from February 2018 to April 2018 in Lingshui. Lingshui County is more suitable for planting crops and conducting breeding experiments from November to May than the mainland of China because of its tropical monsoon climate and high temperature throughout the year. On 10 December 2017, 42 rice cultivars were sown, and on 8 January 2018 they were transplanted to 42 plots according to different cultivars (Figure 1a). The plot areas were about 63 m 2 . Each plot was divided into a subplot of 7 m × 7 m for non-destructive spectral information extraction and a subplot of 2 m × 7 m for LAI destructive sampling (around 310 bundles). For each plot, 12 rows were planted and each row had double lines. The distance between rows was 33 cm and the distance within rows was 20 cm. Experiment 2 was conducted for a single season from June 2019 to September 2019 in Ezhou. Here, the flat terrain, sufficient sunlight and rain, and subtropical monsoon climate make it suitable for rice growth. On 11 May 2019, 48 rice cultivars were sown, and on 9 June 2019 they were transplanted to 48 plots according to different cultivars (Figure 1b). The plot areas were about 36 m 2 . Each plot was divided into a subplot of 8 m × 3 m for non-destructive spectral information extraction and a subplot of 4 m × 3 m for LAI destructive sampling (around 270 bundles). For each plot, 6 rows were planted and the distance between and within the rows was the same as in Experiment 1. LAI Sampling and Determination of Heading Date Ground destructive sampling was conducted at key growth stages from tillering to ripening. To collect LAI, three bundles of rice plants were randomly dug out from the sampling region of each plot of each campaign. The plants were placed in a bucket full of water and transported to the laboratory. After removing the roots and silt of the plants, the leaves and stems were split and the leaves were measured one by one with an LI-3100C leaf area meter (LI-COR, Lincoln, NE, United States). The relationship between LAI and the leaf areas of all three bundles of rice was calculated as LAI = (LA/3) × ρ, where ρ was the planting density (here, 22.5 bundles/m 2 ) and LA represented the total leaf area of the three bundles of rice. There were 252 and 624 samples collected in Experiment 1 and Experiment 2, respectively. The heading date was the day on which 50% of the panicles had exserted in a plot, which was manually recorded by observers. In this study, the growing season of each rice cultivar can be roughly divided by heading date into pre-heading stages (tillering, jointing and booting stages) and post-heading stages (heading and ripening stages).
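A minimal sketch of the plot-level LAI computation described above, LAI = (LA/3) × ρ. The conversion from cm2 (a common leaf-area-meter output unit) to m2 is an assumption, and the example leaf area is hypothetical:

```python
def plot_lai(leaf_area_cm2_three_bundles, bundles_per_m2=22.5):
    """LAI = (LA / 3) * rho: one-sided leaf area per bundle (m^2)
    times planting density (bundles per m^2 of ground)."""
    la_m2_per_bundle = (leaf_area_cm2_three_bundles / 3.0) * 1e-4  # cm^2 -> m^2
    return la_m2_per_bundle * bundles_per_m2

# e.g., three bundles with a combined leaf area of 4500 cm^2 (hypothetical):
print(plot_lai(4500.0))   # -> 3.375
```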
Reflectance and Vegetation Indices from UAV Image The multispectral images were acquired using an M8 UAV (Beijing TT Aviation Technology Co., Ltd., Beijing, China), equipped with a customized 12-lens Mini-MCA camera (Tetracam Inc., Chatsworth, CA, United States) with customer-specified, centered band pass filters shown in Table 2, covering the main wavelengths to which plants are sensitive [10,37,[58][59][60][61][62]. The UAV flight was performed in clear and cloudless weather between 10 a.m. and 2 p.m. local time at a height of 120 m in Experiment 1 and 100 m in Experiment 2, with a spatial resolution of 6.5 cm/pixel and 5.5 cm/pixel, respectively. The morphology and characteristics of the rice in the images were similar at these resolutions, so the difference in resolution can be ignored. This instrument and similar flight heights are widely used in other forms of rice research, including rice nitrogen concentration estimation [63], rice biomass estimation [59], rice LAI estimation [37], and rice yield estimation [62]. The 12 lenses were co-registered in the laboratory before the flight, and band-to-band registration for images was performed after the flight so that the corresponding pixels overlapped spatially on the same focal plane. Next, the radiation correction using the empirical linear correction method [64][65][66] was applied to each band through eight blankets with standard reflectance of 0.03, 0.06, 0.12, 0.24, 0.36, 0.48, 0.56, and 0.80, laid on the edge of the field in advance. These blankets were measured by an ASD Field Spec 4 spectrometer (Analytical Spectral Devices Inc., Boulder, CO, USA) in different experimental areas to verify constant reflectance. The reflectance spectra of typical ground objects in the field, that is, soil background and green leaves before heading and panicles and yellowing leaves after heading, are shown in Figure 2. The reflectance of each wavelength was locally extracted from the corresponding object in the image. A rectangular region of interest (ROI) with the same size in each experiment that maximally fits the plot was determined to extract spectral information for each plot. The average reflectance of all pixels in the ROI was regarded as the plot-level canopy reflectance. Since the multispectral image taken on 26 June showed irreparable specular reflection due to its obvious water background, 24 affected samples were deleted in subsequent processing on this date. The plot-level vegetation indices were calculated from plot-level canopy reflectance. Ten VIs were tested in this study (Table 3) and were divided into three categories according to different calculation methods: ratio indices, normalized indices, and modified indices. Table 3. The vegetation indices tested in the study, grouped into ratio indices (including the green chlorophyll index, CI green ), normalized indices, and modified indices. Texture Measurements In this study, the rice canopy texture model was described by LBP and VAR. LBP and VAR are very effective features for describing local texture, and they were calculated from the reflectance images of 550 nm, 670 nm, 720 nm, and 800 nm that constitute VI. LBP is a powerful local lighting invariant operator that describes the relationship between the gray values of the surrounding neighborhood (g i , i = 1, 2, . . . , 8) and the center pixel (g 0 ) [56]. LBP can be calculated as LBP = ∑(i=1 to 8) s(g i − g 0 ) 2^(i−1), where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise. It has been proven that uniform LBP is the basic structure of LBP [56].
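The paper's own LBP equation was not preserved in this extraction, so the sketch below implements the standard 8-neighbour LBP that matches the definitions given above (each neighbour g i is compared with the centre g 0 and the resulting bits are weighted by powers of two). The neighbour enumeration order and the toy patch are assumptions for illustration:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and combine the resulting bits with binary weights. Borders stay 0."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    # neighbour offsets, enumerated counter-clockwise starting at the right
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            g0 = img[r, c]
            code = 0
            for i, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] >= g0:   # s(g_i - g_0) = 1 when >= 0
                    code |= 1 << i
            out[r, c] = code
    return out

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 3, 8]])
print(lbp_8neighbour(patch)[1, 1])   # LBP code of the centre pixel
```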
The U(LBP) value of an LBP is defined as the number of spatial transitions (bitwise 0/1 changes) within that pattern. Moreover, the uniform LBP pattern refers to the uniform appearance pattern which has limited transitions and discontinuities (U(LBP) ≤ 2) in the circular binary presentation [57]. Considering the various values obtained from different starting neighborhoods, the uniform LBP value depends heavily on the direction of the image. Therefore, the uniform LBP is further simplified to a local rotation-invariant pattern, described as LBP riu2 = ∑(i=1 to 8) s(g i − g 0 ) if U(LBP) ≤ 2, and LBP riu2 = 9 otherwise. Figure 3 shows nine patterns of LBP riu2 representing different local features. For example, a value of 0 is a spot feature, while that of 3-5 refers to line or edge and 8 to spot or flat feature. LBP riu2 describes the spatial structure but discards the contrast of local texture. Thus, it complements VAR [56,57,75]. VAR is a rotation invariance measure of local variance which characterizes the contrast of local image texture. It has been proven that the combination of LBP riu2 with VAR will enhance the performance of texture [56,57,76]. VAR is calculated as VAR = (1/8) ∑(i=1 to 8) (g i − µ)^2, where µ = (1/8) ∑(i=1 to 8) g i is the mean gray value of the eight neighbors. LBP riu2 and VAR were extracted from the reflectance images. To acquire a more comprehensive texture feature, the two texture images were multiplied pixel by pixel to obtain an LBP and VAR texture image, which was recorded as LBP riu2 × VAR. The plot-level LBP riu2 × VAR value was calculated by the average gray value of the pixels in the ROI of LBP riu2 × VAR. To limit the influence of the panicles, the negative natural logarithm of the LBP riu2 × VAR value was multiplied by the plot-level reflectance, and the product was recorded as LV-R. Then the reflectance values in each VI formula were replaced with LV-Rs to obtain a new index, denoted as LV-VI (Figure 4). The calculated LV-VIs were evaluated for rice LAI estimation and were further compared with traditional VIs. Algorithm Development for LAI Estimation The final estimation model of LAI was developed by a k-fold cross-validation procedure. K-fold cross-validation is a statistical method widely applied in model establishment [77][78][79]. In this study, k = 10, which is a common value used in many studies [77,79,80]. The samples were divided into k parts; k − 1 parts were applied to establish the model to obtain coefficients (Coef i ) and coefficients of determination R 2 i , and the remaining part was regarded as the test set. The above steps were repeated k times, and the root mean square error (RMSE) and coefficient of variation (CV) of each test set, RMSE = sqrt((1/n) ∑(i=1 to n) (LAI est,i − LAI meas,i )^2) and CV = RMSE / mean(LAI meas ) × 100%, were averaged over the k folds [81]. Relationships of VI vs. Rice LAI throughout the Entire Growing Season Some studies have proved that the exponential regression model is more suitable for fitting LAI and VI [34,37,82]. The R 2 of the exponential regression model of ten VIs and the measured LAI throughout the entire growing season are shown in Table 4, of which MTCI was the lowest and EVI2 was the highest, though their scatter fitting results before heading were similar with R 2 over 0.75 (Figure 5a,h). In addition, the R 2 of ratio indices (CI red edge , CI green , and RVI) with similar scatter fitting results before heading were also generally low (R 2 < 0.5), while the normalized indices (NDRE, GNDVI, and NDVI) were slightly higher (R 2 > 0.5).
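Putting the pieces above together, the sketch below shows one way LBP riu2, VAR, and the LV-R combination could be computed from a reflectance patch, ending with an LV-VI built from the standard NDVI form (the paper's Table 3 formulas are not reproduced here). All array values, the ROI handling, and the band choices are hypothetical; the riu2 and VAR definitions follow the standard formulations rather than the paper's exact equations:

```python
import numpy as np

def neighbours_3x3(img, r, c):
    """Return the 8 neighbours of pixel (r, c) in circular order."""
    offs = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    return np.array([img[r + dr, c + dc] for dr, dc in offs], dtype=float)

def lbp_riu2_and_var(img):
    """Rotation-invariant uniform LBP (0..8, non-uniform -> 9) and local
    variance VAR for every interior pixel of the image."""
    img = np.asarray(img, dtype=float)
    riu2 = np.zeros(img.shape)
    var = np.zeros(img.shape)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            g = neighbours_3x3(img, r, c)
            bits = (g >= img[r, c]).astype(int)
            u = np.sum(bits != np.roll(bits, 1))   # circular 0/1 transitions
            riu2[r, c] = bits.sum() if u <= 2 else 9
            var[r, c] = np.mean((g - g.mean()) ** 2)
    return riu2, var

def lv_reflectance(refl_image):
    """Plot-level LV-R: -ln(mean of LBP_riu2 x VAR over the ROI interior)
    multiplied by the plot-level mean reflectance."""
    riu2, var = lbp_riu2_and_var(refl_image)
    texture = float(np.mean((riu2 * var)[1:-1, 1:-1]))
    return -np.log(texture) * float(np.mean(refl_image))

# hypothetical reflectance patches for a red and an NIR band (values in 0..1)
rng = np.random.default_rng(0)
red = 0.05 + 0.01 * rng.random((20, 20))
nir = 0.45 + 0.05 * rng.random((20, 20))
lv_red, lv_nir = lv_reflectance(red), lv_reflectance(nir)
lv_ndvi = (lv_nir - lv_red) / (lv_nir + lv_red)   # standard NDVI form on LV-R inputs
print(lv_red, lv_nir, lv_ndvi)
```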
The scattered points before heading and after heading of ratio indices and MTCI were more severely separated (Figure 5a-d), while those of normalized indices and EVI2 were slightly weaker (Figure 5e-h). The hysteresis effect could significantly decrease the R 2 of LAI vs. VI fitting throughout the entire growing season. As shown in Figure 5, all indices tended to reach saturation rapidly in moderate-tohigh LAI variation before heading. The phenomenon of ratio indices was not as obvious as normalized indices. For normalized indices, the saturation of NDRE was not as obvious as GNDVI and NDVI. Comparing the R 2 before heading, it is obvious that the saturation effect decreased the fitting accuracy of the pre-heading stage. Rice LAI Estimation Combined with Texture Features The changes of rice in different growth stages could be reflected by texture. At the vegetative stage, rice was planted in a stripe shape and the width of the stripe becomes wider as the rice grows [37]. When the leaves occluded most of the background, the wide stripes combined with each other to form a flat area covering the entire field. As the rice continued to grow, panicles began to emerge in the canopy, appearing as little dots in the image and breaking the flat texture. Therefore, the difference and complementarity of spectrum and texture in different growth stages could be utilized to assist VI in LAI estimation. Variation of LBP and VAR in Pre-Heading Stages and Post-Heading Stages The relationships between LAI vs. plot-level reflectance (Figure 6a-d) and LAI vs. plot-level LBP riu2 × VAR (Figure 6e-h) are shown in Figure 6. Due to the difference in reflectivity and distribution of various objects shown in Figure 2, the results of texture varied in different wavebands (Figure 6e-h). The trends of time series changes of scatter distribution were opposite. In the scatter plot of LAI vs. reflectance, the time series bands of 550 nm, 670 nm, and 720 nm showed a counterclockwise distribution, while the same bands presented a clockwise distribution in LAI vs. LBP riu2 × VAR scatter plots. This made it possible to improve LAI estimation by combining the two indices. At the same time, with LAI increasing to a medium-high level before heading, the value of LBP riu2 × VAR was less saturated in the 550 nm and 670 nm bands, which made it beneficial for saturation reduction to combine the two indices. The relationships between the combined index LV-R and LAI are shown in Figure 6i-l. The hysteresis effects of LV-R at 550 nm, 670 nm, and 720 nm were obviously weakened, and the saturation effects at 550 nm and 670 nm were also slightly reduced. Figure 6. Rice LAI plotted against the reflectance of (a) 550 nm, (b) 670 nm, (c) 720 nm, (d) 800 nm, the LBP riu2 × VAR calculated based on the bands of (e) 550 nm, (f) 670 nm, (g) 720 nm, (h) 800 nm, and the LV-R of (i) 550 nm, (j) 670 nm, (k) 720 nm, (l) 800 nm during the entire growth season. When the horizontal axis of LBP riu2 × VAR was expressed in exponential form, it was the opposite of the trend of LAI vs. reflectance. The LV-R index combining R and LBP riu2 × VAR had a smaller separation than R between pre-heading stages (Pre-HD) and post-heading stages (Post-HD). Figure 7 shows the comparable results of using textures to improve VIs. For all VIs, the scattered points after heading in LV-VIs were closer to the fitting curve of the whole growth stage than those in initial VIs, and the saturation effects before heading of all indices were also weakened. 
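Before turning to the cross-validated statistics reported next, the following sketch shows one way the exponential LAI-VI regression and the 10-fold cross-validation with RMSE and CV described in the methods could be implemented. The fit is done by linear least squares on ln(LAI), the data are synthetic, and the CV definition (RMSE divided by the mean measured LAI) is the standard one assumed here:

```python
import numpy as np

def fit_exponential(vi, lai):
    """Fit LAI = a * exp(b * VI) by linear least squares on ln(LAI)."""
    b, ln_a = np.polyfit(vi, np.log(lai), 1)
    return np.exp(ln_a), b

def cv_scores(vi, lai, k=10, seed=0):
    """k-fold cross-validation returning mean RMSE and CV = RMSE / mean(LAI) * 100%."""
    idx = np.random.default_rng(seed).permutation(len(vi))
    folds = np.array_split(idx, k)
    rmses = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        a, b = fit_exponential(vi[train], lai[train])
        pred = a * np.exp(b * vi[f])
        rmses.append(np.sqrt(np.mean((pred - lai[f]) ** 2)))
    rmse = float(np.mean(rmses))
    return rmse, rmse / float(np.mean(lai)) * 100.0

# synthetic demonstration data (not the field measurements)
rng = np.random.default_rng(1)
vi = rng.uniform(0.2, 0.9, 200)
lai = 0.5 * np.exp(2.5 * vi) * rng.lognormal(0.0, 0.1, 200)
print(cv_scores(vi, lai))
```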
The saturation of the ratio indices was less obviously decreased than that of the normalized indices. To verify the reliability of the model, R 2 and RMSE statistics of VI and LV-VI after 10-fold cross-validation are shown in Table 5. All LV-VIs that combined with textures have improved the effect of using solely VIs to estimate LAI. Moreover, the improvement showed low dependence on the type of VIs used, including ratio, normalized, and modified indices. R 2 increased by more than 0.13 (CI green , CI red edge , NDRE, and MTCI) and RMSE decreased by more than 0.1 (CI green , GNDVI, NDRE, EVI2). The inclusion of LBP and VAR features significantly improved the accuracy of the model for those containing the index of the 550 nm band (CI green and GNDVI), with CV decreased by more than 2.5%. There was also a notable improvement in the model containing the index of the 720 nm band (CI red edge , NDRE, MTCI, and OSAVI), with CV decreased more than 1.3% (Figure 8). However, for the model containing the index of the 670 nm band, the effect of accuracy improvement was uncertain. CV was decreased more than 3.2% for EVI2, while the improvements of others (RVI, WDRVI, and NDVI) were restricted. The relationships between estimated LAI and measured LAI of some LV-VIs are shown in Figure 9. Considering the distribution and saturation of scattered points and values of R 2 and RMSE, LV-CI green , LV-EVI2, and LV-NDRE were the best estimation models of rice LAI throughout the entire growing season in green, red, and red edge bands, respectively. Discussion In this study, we tested 10 commonly used VIs (CI green , RVI, CI red edge , GNDVI, NDVI, NDRE, MTCI, OSAVI, and EVI2) that had been widely applied in the estimation of LAI [19,78], biomass [61,83], and yield of crops [60,81]. When LAI reached a moderateto-high variation, VIs were rapidly saturated, especially for the normalized VIs such as GNDVI, NDVI, and NDRE (Figure 5e-g). Considering only the period before heading, although these VIs were saturated with limited transitional zones, R 2 could still reach above 0.65 ( Figure 5). However, when the LAI of rice was estimated for the entire growing season, R 2 was significantly reduced, and the decrease for MTCI was even more than 0.4 ( Table 4). The hysteresis effect after heading is a core issue that affects the accuracy of LAI estimation during the whole growing season of rice. Substantial hysteresis of the reflectance and canopy chlorophyll content had been discovered between the vegetative and reproductive stages in maize and soybean [84]. However, unlike the crops above, the hysteresis of rice was more exaggerated due to the complexity of the canopy structure. Rice leaves are more bent and crisscross with each other. The panicle is small and slender and varies with the growing period, so the complexity of the canopy increases [85,86]. The scattered points after heading in VI vs. LAI were clearly far away from those of the LAI vs. VI relationship before heading ( Figure 5). It is unrealistic, however, to establish a model before and after heading. The heading time of rice depends greatly on cultivars and environments, and recording the heading dates requires significant time and manpower, especially for breeding studies. Therefore, it is necessary to set an LAI estimation model for the entire growing season. This study introduced the texture feature LBP and VAR to improve the VI model and to minimize the effect of saturation and hysteresis over the entire rice growing season. 
The spectral curves of typical features are very different before and after heading (Figure 2). Before heading, the field is mainly composed of leaves and soil background. Taking the tillering stage as an example, leaves are more reflective than soil in green bands due to the weak absorption of chlorophyll, while the reflectance of soil is slightly higher in the red and blue main absorption bands of chlorophyll [24]. As the rice grows, the soil background is gradually occluded by leaves and the appearance of panicles. At this time, only leaves and panicles are visible in the image. In the visible light band, panicles absorb less, mainly because they have a lower chlorophyll content than leaves [32]. In the NIR band, which is mainly affected by the structure of canopy, the area with the most panicles reflects more because of its higher canopy thickness [60]. Thus, in the pre-heading reflectance images in Figure 10 (DAT 23 and 47), the bright lines in the 550 nm, 720 nm, and 800 nm images are the leaves, while in 670 nm, the slightly bright lines are the soil background. In the post-heading reflectance images in Figure 10 (DAT 63 and 85), for the visible light bands such as 550 nm and 670 nm, the panicles are embedded in the leaves like stars. In the 720 nm and 800 nm reflectance images after heading, the bright lines represent the dense, thick canopies of panicles, and the dark lines are the thin canopies with leaves. The LBP riu2 and VAR of each individual pixel in the reflectance images were calculated separately, depicting local texture information ( Figure 10). LBP riu2 only gave integer values between 0 and 9, as shown in Figure 3, which could distinguish bright spot features (with a value of 0), line or edge features (with values of 3-5), dark spot or flat features (with a value of 8), and so on. These features were related to the distribution of different objects in a rice field. As shown in Figure 10, in the first few days after transplanting (DAT 23), lines could be recognized in LBP riu2 images of any band due to the ridges in the field. As the rice grew, leaves gradually occluded soil background and continuous edges broke so that bright spots and flat areas increased simultaneously, and the entire brightness changed slightly and inconsistently. After rice heading, with panicles exerting, bright spots could be distinguished and the LBP riu2 image darkened ( Figure 10). VAR calculated the variance of surrounding pixels and provided contrast information. Homogeneous areas have smaller values. As leaves gradually covered the background before heading, VAR values decreased and the image turned slightly darker. When panicles exserted after heading, these high reflected objects decreased the homogeneity and thus the brightness of the VAR images increased drastically ( Figure 10). The edges of objects were usually fuzzy in VAR images, but these were much clearer under the specific values of LBP riu2. The combination of LBP riu2 × VAR characteristics continued to follow the trend of VAR but showed more obvious differences between objects. In particular, the edges between leaves and soil background were thinned before heading, and the spots of panicles shrank after heading, both of which better represented the truth on the ground ( Figure 10). In terms of the image, the texture features of −ln(LBP riu2 × VAR) enhanced the differences in detail between the background and leaves before heading (Figure 10, DAT 23 and 47). 
At the same time, the gray values of the panicles in these images after heading were very low. In the LV-R image obtained by multiplying the texture image and the reflectance image, the influence of panicles was obviously suppressed (Figure 10, DAT 63 and 85) and thus LV-R indices were more suitable for constructing VIs throughout the entire growing season. VIs presented the difference in reflectance between visible light and NIR [87]. In the tillering stage, this difference suddenly widened to a high level (Figure 11a), which led to saturation in VI vs. LAI ( Figure 5). In contrast, the difference narrowed after heading (Figure 11a) which caused hysteresis ( Figure 5). As a result of the time variation of LBP riu2 and VAR, the combination indices decreased slightly before heading and, remarkably, increased after heading in visible bands ( Figure 10). Being unabsorbed by plants, VAR at 800 nm was not as sensitive to the variation of growth stages as those in visible bands ( Figure 10), and so LBP riu2 × VAR with its negative logarithm changed less at 800 nm, meaning the difference between the visible and NIR bands narrowed after heading. As shown in Figure 11b, −ln(LBP riu2 × VAR) displayed the opposite trend to R. The relatively high value of logarithm index could narrow the difference in reflectance in the tillering stage, while the low value after heading could widen it and thus reduce the saturation and hysteresis effects. At the 670 nm band, however, −ln(LBP riu2 × VAR) had a relatively high value after heading and a weaker counterclockwise trend on the scatter plot with LAI ( Figure 6). LV-VIs with dependence on indices at this band had less of an advantage than the others. Comparing the LV-VI model with VI, as displayed in the scatter plot of Figure 7, reveals that the yellow part (after heading) data after processing are clearly similar to the green part (before heading) data. The slope of the turning point of the green part data also slowed down. By introducing texture features, the R 2 of all tested VIs increased significantly, while RMSE and CV decreased significantly ( Figure 8, Table 5). The method developed in our study is very simple for estimating rice LAI. It combines texture information with spectral information to effectively reduce saturation and hysteresis which emerges at certain stages. Compared with the physical model, the proposed model requires fewer parameters, thus avoiding the uncertainty of multiple rice cultivars and LAI estimation based on assumptions. The algorithms used in this study are exponential regressions that work rapidly and efficiently, meaning there was no need for sophisticated algorithms (e.g., machine learning method) requiring large-scale computation. This method only requires the use of reflectance images to estimate rice LAI during the entire growing season, and it is not necessary to establish a model before and after heading. Since VI, LBP, and VAR can be obtained by one UAV platform at a low cost, and can be simply and rapidly calculated from reflectance images, our method can be widely used in paddy field management and precision agriculture, especially in fields containing many cultivars for breeding. Figure 11. Temporal behaviors of (a) the reflectance ratio of visible and NIR and (b) the −ln(LBP riu2 × VAR) ratio of visible and NIR during the entire rice growing season. −ln(LBP riu2 × VAR) showed the opposite trend to R. 
The relatively high value of the logarithm index could narrow the difference of reflectance in the tillering stage, while the low value after heading could widen it instead, thus reducing the saturation and hysteresis effects. Conclusions In our study, we combined LBP and VAR with reflectance to construct new LV-VI parameters and applied them to LAI estimation. Compared with the corresponding VIs tested in this study, the fitting accuracy of all LV-VIs has been improved, R 2 has been improved by up to 0.166 (MTCI), and RMSE and CV were 0.147 and 3.5% lower (GNDVI), respectively. The RMSE and CV of exponential fitting of LV-EVI2 were down to 1.367 and 32.7%, respectively. Considering the revealing effect of texture on different growth stages, LAI can be estimated more accurately by combining texture with spectrum. The combination of textural features enhances the contrast between crops and soil background in the image and weakens it between pixels with and without panicles. Moreover, it further adjusts the influence of reflectance given by the change of ground feature types in different growth stages. Therefore, in the LAI vs. VI scatters, the saturation before heading and the hysteresis effect after heading are obviously weakened. Since the LV-VI can be calculated simply and rapidly from a UAV reflectance image, our method provides a simple, low-cost option for crop growth monitoring and the progress of paddy field management, especially for rice breeding studies in fields containing a variety of cultivars. Future investigations are necessary to provide comparisons with different methods for estimating rice LAI, and to consider the underlying mechanisms of different textures throughout the entire growing season. Author Contributions: All authors have made significant contributions to this manuscript. Y.G. and
8,283.8
2021-07-30T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Computer Science" ]
Polarization-sensitive plug-in optical module for a Fourier-domain optical coherence tomography system In this manuscript we communicate a theoretical study on a plug-in optical module to be used within a Fourier-domain optical coherence tomography system (FD-OCT). The module can be inserted between the object under investigation and any single-mode fiber based FD-OCT imaging instrument, enabling the latter to carry out polarization measurements on the former. Similarly to our previous communication,1 this is an active module which requires two sequential steps to perform a polarization measurement. Alternating between the two steps is achieved by changing the value of the retardance produced by two electro-optic polarization modulators, which together behave as a polarization state rotator. By combining the rotation of the polarization state with a projection against a linear polarizer it is possible to ensure that the polarization measurements are free from any undesirable polarization effects caused by the birefringence in the collecting fiber and diattenuation in the fiber-based couplers employed in the system. Unlike our previous work, though, this module adopts an in-line configuration, employing a Faraday rotator to ensure a non-reciprocal behavior between the forward and backward propagation paths. The module design also allows higher imaging rates due to the use of fast electro-optic modulators. Simulations have been carried out accounting for the chromatic effects of the polarization components, in order to evaluate the theoretical performance of the module. INTRODUCTION Spectrometer-based OCT methods have been extensively used over the past decade to image translucent structures. 2 Polarization-sensitive optical coherence tomography (PS-OCT) methods emerged as early as 1992, 3 evolving from bulk-based to more compact fiber-based designs. These systems are useful in medical OCT applications due to the link between polarization properties and the health state of biological tissues. In non-destructive testing, PS-OCT also provides birefringence information, which is useful in assessing the mechanical properties of the structures evaluated. Due to their versatility, easy alignment, compact size and the need for single spatial mode selection, fiber-based systems are used in OCT practice. 4 However, external factors (such as temperature and mechanical stress) affect the birefringence of single-mode fibers (SMFs) used in OCT systems, inducing disturbances 5 to the measured polarization. One way to avoid the influence of these external factors and their corresponding disturbances is to perform the polarization selection before the collecting fiber, which was first demonstrated by Roth et al. in 2001. 7 Our recently reported approach, 1 summarized in Figure 1(a), also performs the polarization selection before the collecting fiber, while ensuring a circular polarization state in the imaging beam, which minimizes the number of measurements required for a full characterization of polarization. This approach has, however, a few drawbacks, most notably the split-path design due to the need to employ a bulk beam-splitter to physically separate the two paths (input and output), thus enabling a non-reciprocal behavior on the sample arm. Having such a design meant that the system is not a plug-in PS-OCT module in the true sense of the word, i.e., the supporting OCT system needs to be specifically modified in order to accommodate the PS-OCT system.
Additionally, the system uses a liquid crystal rotator as the polarization rotator element, which reduces the speed of the system considerably, rendering it unusable for in-vivo imaging. DESCRIPTION OF THE OPTICAL MODULE To address these issues, a new design was proposed in a follow-up publication, 6 which is summarized in Figure 1(b) and whose operation principles are detailed in Figures 2 and 3. Similarly to our previous work, the system employs a two-step sequential procedure in order to carry out the polarization measurements, using active polarization elements. However, there are two major differences from our previously reported work: (1) the polarization state rotation is carried out by two electro-optic polarization modulators (represented as EO1 and EO2 in Figures 1(b), 2 and 3), enabling higher acquisition rates, and (2) the design is completely in-line (input and output share the same fiber), which allows this design to be a plug-in PS-OCT module (i.e., without having to modify the supporting FD-OCT system). This is achieved by employing a Faraday rotator (represented as FR in Figures 1(b), 2 and 3) to break the reciprocal behavior of the remaining polarization components of the module, allowing it to behave as a polarization state encoder when illuminating the sample and as a polarization state decoder when retrieving the back-reflected signal. In Figure 2 the operation of the OM is detailed for the forward propagation of the beam (the illuminating beam). In order to perform the polarization characterization with the smallest number of measurements, the polarization state of the probing beam incident on the sample should be circular; this means that the Jones matrix corresponding to the forward propagation inside the optical core (grey box in Figure 2) must be equal to unity irrespective of the values of φ EO introduced by EO1 and EO2, with the quarter-wave plate QWP turning the polarization state from linear to circular before the beam reaches the sample. On the return path, the OM carries out the polarization measurement before the light is re-injected into the fiber leading back to the interferometer; as before, this is done sequentially. Figures 3(a) and 3(b) depict the two states of the EO modulators within the OM (φ EO = 0 and φ EO = π/2); each of them ensures that only one of the orthogonal polarization components from the light returning from the sample (represented as red and blue lines) reaches the fiber at any given time. By taking the ratio of the intensities measured for the two states of the EO modulators it is possible to measure the retardance of the sample; moreover, since the actual measurement takes place before the light is re-injected into the fiber leading back to the interferometer, this renders the measurement immune to any fiber and coupler-based disturbances. However, due to the sequential operation involved, the full polarization characterization of the sample cannot be directly achieved, since the orientation of the optical axis relies on phase information that may be affected by interferometer noise altering the phase during the time required to shift from one state of the EO modulator to the other. Having higher switching rates in the EO modulators allows the phase measurement to be more tolerant to fiber fluctuations (employing the shallowest surface of the sample as a phase reference), therefore overcoming this issue. A comprehensive analysis has been presented in our recent publication. 6
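The forward-path requirement (a unit Jones matrix up to the QWP, circular light on the sample) and the non-reciprocal role of the Faraday rotator can be illustrated with elementary Jones calculus. The sketch below is not a model of the authors' optical module; it only verifies two generic building blocks, under a fixed lab-frame convention that ignores the coordinate flip on reflection:

```python
import numpy as np

def rot(theta):
    """Rotation of the field components by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(delta, theta=0.0):
    """Linear retarder with retardance delta and fast axis at angle theta."""
    return rot(theta) @ np.diag([1.0, np.exp(1j * delta)]) @ rot(-theta)

def stokes(E):
    Ex, Ey = E
    return np.real([abs(Ex)**2 + abs(Ey)**2,
                    abs(Ex)**2 - abs(Ey)**2,
                    2 * np.real(Ex * np.conj(Ey)),
                    2 * np.imag(np.conj(Ex) * Ey)])

H = np.array([1.0 + 0j, 0.0 + 0j])           # horizontal linear input

# 1) A quarter-wave plate at 45 deg turns linear into circular light:
qwp45 = retarder(np.pi / 2, np.pi / 4)
print(np.round(stokes(qwp45 @ H), 3))        # S1 = S2 = 0, |S3| = 1 -> circular

# 2) Non-reciprocity of a 45 deg Faraday rotator (lab-frame convention):
# forward and backward passes rotate in the SAME sense, so a double pass gives
# 90 deg, whereas a reciprocal rotator undoes itself on the way back.
FR = rot(np.pi / 4)
double_pass_faraday = FR @ FR                        # net 90 deg rotation
double_pass_reciprocal = rot(-np.pi / 4) @ rot(np.pi / 4)   # identity
print(np.round(np.abs(double_pass_faraday @ H), 3))      # -> [0, 1]: now vertical
print(np.round(np.abs(double_pass_reciprocal @ H), 3))   # -> [1, 0]: unchanged
```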
Fig. 4. Thick blue curves: no change in the fiber/coupler properties (fiber birefringence φ(ν 0 ) fiber , fiber polarization mode dispersion ∆T fiber , and coupler diattenuation D coup ) between the two measurements necessary for the full polarization characterization; thin red curves: a change of 1% is applied to each of the parameters between the two measurements necessary for the full polarization characterization. VALIDATION OF THE CONCEPT AND ITS LIMITATIONS In order to validate the performance of the device and evaluate the intrinsic errors of the method (by estimating its working bias), simulations have been carried out. These have been thoroughly described in 6 and the main results have been reproduced in Figure 4. The collecting fiber and coupler (partially represented in Figure 1(b) as sample arm input/output) have been modeled as a product of different Jones matrices which account for the chromatic birefringence φ(ν 0 ) fiber and polarization mode dispersion ∆T fiber of the fiber, as well as the diattenuation D coup in the coupler. For a broad range of values for these three parameters, two situations were considered: (1) each of the three parameters remained constant during the two states of the EO modulators, necessary to perform the polarization measurement (represented in Figure 4 as thick blue lines); and (2) each of the three parameters changed by 1% during the change of state of the EO modulators (represented in Figure 4 as thin red lines). While the former case depicts a completely flat response throughout the whole range of values, a slight bias is observed in the latter case. The simulations carried out to generate the results depicted in Figure 4 do not take into account any chromatic behavior from the elements used in the OM, namely the EO modulators and the Faraday rotator, which are known to exhibit a considerable chromatic response. In order to study the impact of the elements present in the OM on the accuracy of the polarization measurements carried out by the OM, we have modeled the chromatic response of the EO polarization modulators and the Faraday rotator as linear curves, with the expected performance for each element centered at the central wavelength of the optical source considered (850 nm). These curves are represented in Figures 5(a) and 5(b). Taking these responses into consideration, the measurement bias on both the retardance and axis orientation has been numerically simulated for different sample retardances ϕ sample and axis orientations θ sample . These biases are respectively shown in Figures 6(a) and 6(b) for two source spectra with different full-widths at half maximum (FWHM): thick (blue) lines represent a FWHM of 50 nm, whereas thin (red) lines represent a FWHM of 100 nm. It is now quite evident that the biases caused by the chromatic response of the components used far outweigh the biases shown in Figure 4, where the collecting fiber has been disturbed during the two-step measurement procedure. It is also clear that larger optical bandwidths yield larger measurement biases on both retardance and axis orientation. However, large optical bandwidths are essential to ensure good axial resolution in OCT systems. Therefore, a compromise always has to be made between PS-OCT performance (impacting the accuracy of the retardance and axis orientation measurements) and optical bandwidth (impacting the axial resolution of the OCT system).
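A rough feeling for why larger source bandwidths degrade the polarization measurement can be obtained by spectrally averaging the output of a single chromatic element. The sketch below assumes a zero-order quarter-wave plate whose retardance scales as λ0/λ (birefringence dispersion neglected) and a Gaussian spectrum; it is not the authors' chromatic model of the EO modulators and Faraday rotator, only an illustration of the FWHM trend:

```python
import numpy as np

def retarder(delta, theta=0.0):
    R = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R(theta) @ np.diag([1.0, np.exp(1j * delta)]) @ R(-theta)

def s3(E):
    """Normalized circular Stokes component of a Jones vector."""
    Ex, Ey = E
    return 2 * np.imag(np.conj(Ex) * Ey) / (abs(Ex)**2 + abs(Ey)**2)

lam0 = 850e-9
H = np.array([1.0, 0.0])
for fwhm in (50e-9, 100e-9):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    lam = np.linspace(lam0 - 2 * fwhm, lam0 + 2 * fwhm, 401)
    weight = np.exp(-(lam - lam0)**2 / (2 * sigma**2))
    weight /= weight.sum()
    # zero-order quarter-wave plate at 45 deg: retardance ~ (pi/2) * lam0 / lam
    s3_avg = sum(w * s3(retarder((np.pi / 2) * (lam0 / l), np.pi / 4) @ H)
                 for w, l in zip(weight, lam))
    print(f"FWHM {fwhm*1e9:.0f} nm: spectrally averaged |S3| = {abs(s3_avg):.4f}")
# |S3| falls further below 1 for the wider spectrum, i.e. the probing state is
# less perfectly circular, which is one generic source of bandwidth-dependent bias.
```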
CONCLUSIONS The module presented can then be used to add the polarization-sensitive capability to any existing fiber-based OCT system without requiring extensive modifications or calibration procedures. Since it is based on the same principle as the previously reported set-up, 1 it is insensitive to disturbances on the polarization properties of the collecting fiber and coupler which would otherwise impact the polarization measurements. Unlike the previously reported set-up, though, this module allows for higher imaging rates, while allowing it to be a true plug-in module, not requiring any modifications to be made to the supporting OCT system.
2,382.8
2017-02-17T00:00:00.000
[ "Physics" ]
Deterministic Signal Associated with a Random Field Stochastic fields do not generally possess a Fourier transform. This makes the second-order statistics calculation very difficult, as it requires solving a fourth-order stochastic wave equation. This problem was alleviated by Wolf, who introduced the coherent mode decomposition and, as a result, space-frequency statistics propagation of wide-sense stationary fields. In this paper we show that if, in addition to wide-sense stationarity, the fields are also wide-sense statistically homogeneous, then monochromatic plane waves can be used as an eigenfunction basis for the cross-spectral density. Furthermore, the eigenvalue associated with a plane wave, exp[i(k·r − ωt)], is given by the spatiotemporal power spectrum evaluated at the frequency (k, ω). We show that the second-order statistics of these fields are fully described by the spatiotemporal power spectrum, a real, positive function. Thus, the second-order statistics can be efficiently propagated in the wavevector-frequency representation using a new framework of deterministic signals associated with random fields. Analogous to the complex analytic signal representation of a field, the deterministic signal is a mathematical construct meant to simplify calculations. Specifically, the deterministic signal associated with a random field is defined such that it has the identical autocorrelation as the actual random field. Calculations for propagating spatial and temporal correlations are simplified greatly because one only needs to solve a deterministic wave equation of second order. We illustrate the power of the wavevector-frequency representation with calculations of spatial coherence in the far zone of an incoherent source, as well as coherence effects induced by biological tissues. Introduction Random field fluctuations in both space and time are due to the respective fluctuations of both primary and secondary sources. The discipline that studies these fluctuations is known as coherence theory or statistical optics [1,2]. Besides its importance to basic science, coherence theory is crucial in predicting outcomes of many light experiments. For example, in quantitative phase imaging (QPI), we often employ spatially and temporally broadband light to image phase shifts associated with the imaging field [3]. Such phase shifts are physically meaningful only when they are defined via averages, through field cross-correlations (see, e.g., [4]). Whenever we measure a superposition of fields (e.g., in interferometry) the result of the statistical average performed by the detection process is strongly dependent on the coherence properties of the light. Importantly, half of the 2005 Nobel Prize in Physics was awarded to Roy Glauber "for his contribution to the quantum theory of optical coherence." For a selection of Glauber's seminal papers, see Ref. [5].
A thermal source, such as an incandescent filament or the surface of the Sun, emits light in a manner that cannot be predicted with certainty. In other words, unlike in the case of a monochromatic plane wave, we cannot find a function f(r, t) that prescribes the field at each point in space and at each moment in time. Instead, we describe the source as emitting a random signal, s(r, t), and describe its behavior via probability distributions. We can gain knowledge about the random process only by repetitive measurements and averaging the results. This type of averaging over many realizations of a certain random variable is called ensemble averaging. The importance of ensemble averaging has been emphasized many times by both Wolf and Glauber [1,5-7]. For example, on page 29 of Ref. [5], Glauber mentions: "It is important to remember that this average is an ensemble average. To measure it, we must in principle repeat the experiment many times by using the same procedure for preparing the field over and over again. That may not be a very convenient procedure to carry out experimentally but it is the only one which represents the precise meaning of our calculation."

Calculating how field correlations behave upon propagation requires solving a stochastic wave equation (see Appendix), which is tedious as it involves a fourth-order differential equation. In order to alleviate this problem, Wolf introduced the coherent mode decomposition (CMD) theory [8], which establishes that, for wide-sense stationary fields, a square-integrable cross-spectral density, W, can be constructed from contributions of completely spatially coherent sources,

W(r₁, r₂, ω) = Σₙ λₙ²(ω) ψₙ*(r₁, ω) ψₙ(r₂, ω),  (1)

where the convergence is uniform in the mean-square sense. In Eq. (1), the functions ψₙ and scalars λₙ² are mutually orthogonal eigenfunctions and eigenvalues, respectively, of the Fredholm integral equation of W,

∫_D W(r₁, r₂, ω) ψₙ(r₁, ω) d³r₁ = λₙ²(ω) ψₙ(r₂, ω),  (2)

where D is the spatial domain of interest. Since W is Hermitian and also a non-negative definite Hilbert-Schmidt kernel, the eigenvalues λₙ² are real and positive. It is important to emphasize that the eigenfunctions and their associated eigenvalues in Eq. (1) are unique since the Fredholm integral equation is evaluated within a confined region D of the source. The main benefit of this expansion is that each eigenfunction (mode) satisfies the deterministic Helmholtz equation and, thus, propagating W reduces to propagating each mode and adding up the results. Using CMD, Wolf developed a framework for studying partial coherence in the space-frequency domain [9,10].

Here, we show that if, in addition to being wide-sense stationary, the fields are also wide-sense statistically homogeneous, then W admits plane waves as eigenfunctions, i.e., we can write ψₙ(r, ω) ∝ exp(i kₙ·r).
Furthermore, W can be expressed as the Fourier transform of a real, positive function, which we refer to as the spatiotemporal power spectrum. The eigenvalue associated with each plane wave is the spatiotemporal power spectrum evaluated at the spatiotemporal frequency (k, ω). The second-order statistics is recovered in full by replacing the stochastic field with a deterministic field of the same power spectrum. This deterministic field can therefore be propagated via a second-order (deterministic) wave equation, significantly simplifying the calculations (Section 3). This calculation gives the correct result when explaining coherence effects (see, e.g., optical coherence tomography [11,12]). We establish the framework for studying coherence problems in the wavevector-frequency representation. Propagation in the wavevector-frequency space allows us to easily compute second-order moments of the transverse wavevector and, thus, correlation areas. We illustrate the power of this formalism by re-deriving the classic result of the van Cittert-Zernike theorem and correlation-induced spectral changes in biological tissues.

Statistically homogeneous fields

Statistical homogeneity, like stationarity, is a strong assumption since all the measurements, either in space or time, are finite and, technically, we never encounter such fields in practice. However, if the observation interval (spatial or temporal) is much larger than the characteristic scale of the field fluctuations (spatial and temporal correlation lengths), these assumptions are reasonable. For example, if we deal with spatially finite fields, such as beams, the assumption of homogeneity can be used provided that the spatial domain of interest, e.g., the size of the camera used as detector, is much larger than the coherence area at that plane.

Statistical homogeneity, at least in the wide sense, requires that W depends on r₁ and r₂ only through the difference r₂ − r₁. Thus, we can define a function W′ for which W(r₁, r₂, ω) = W′(r₂ − r₁, ω). Next, we show that if W is both wide-sense stationary and statistically homogeneous then we can choose plane waves as an orthonormal basis for W. The proof is as follows. We expand W′ as a Fourier series over the domain D,

W′(ρ, ω) = Σₙ λₙ²(ω) exp(i kₙ·ρ),  (4)

so that W(r₁, r₂, ω) = Σₙ λₙ²(ω) exp(−i kₙ·r₁) exp(i kₙ·r₂). Thus, we have a representation of W in the form of Eq. (1) where the ψₙ were chosen as plane waves. Since we have recovered Eq. (1), we easily obtain Eq. (2) by multiplying Eq. (4) by ψₘ(r₁, ω), integrating r₁ over D, and using the orthonormality of the basis functions,

∫_D W(r₁, r₂, ω) ψₙ(r₁, ω) d³r₁ = λₙ²(ω) ψₙ(r₂, ω).

Thus, plane waves are eigenfunctions of the Fredholm integral equation of W with associated eigenvalues λₙ²(ω), and we obtain the following key result:

λₙ²(ω) = S(kₙ, ω).

Hence, the eigenvalues associated with the plane waves, λₙ², are given by the spatiotemporal power spectrum, S, evaluated at the temporal frequency ω and spatial frequency kₙ. Note that all the information about W is contained in S. In summary, these results show: 1) that plane waves form a coherent mode decomposition of W, 2) that plane waves are eigenfunctions of the Fredholm integral equation of W, and 3) that the eigenvalues of W can be computed with the spatiotemporal power spectrum S. Note that there can be many other decompositions of W besides plane waves.
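The claim that plane waves diagonalize the cross-spectral density of a statistically homogeneous field can be checked numerically in one dimension. This is our own illustration, not material from the paper: under periodic boundary conditions, a covariance that depends only on the coordinate difference is a circulant matrix, whose eigenvectors are the discrete plane waves and whose eigenvalues are the DFT of the correlation function, i.e., the power spectrum. The Gaussian correlation function and all numbers below are assumptions made for the demonstration.

```python
import numpy as np

# Sketch (not from the paper): a statistically homogeneous covariance,
# W(x1, x2) = w(x2 - x1), sampled on a periodic 1D grid is circulant, so the
# DFT plane waves are its eigenvectors and the power spectrum its eigenvalues.
N, L = 256, 10.0
x = np.arange(N) * (L / N)
sigma = 0.5                                   # assumed correlation length
d = np.minimum(x, L - x)                      # periodic distance to the origin
w = np.exp(-d**2 / (2 * sigma**2))            # assumed correlation function w(rho)

# Full covariance matrix W[i, j] = w(x_j - x_i): circulant and symmetric.
W = np.array([np.roll(w, i) for i in range(N)])

# Eigenvalues from direct diagonalization vs. the power spectrum S(k) = DFT{w}.
eig = np.sort(np.linalg.eigvalsh(W))[::-1]
S = np.sort(np.real(np.fft.fft(w)))[::-1]
print(np.allclose(eig, S, atol=1e-8))         # True: plane waves diagonalize W
```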
Deterministic signal associated with a random field

As discussed in the previous section, the second-order statistics of a fluctuating field is fully described by its spatiotemporal power spectrum, S(k, ω), a real, positive function. In this section, we introduce a new concept, the deterministic signal associated with the random field. This deterministic construct carries the same second-order statistics as the original stochastic field.

The assumed wide-sense stationarity and statistical homogeneity ensure that the spectrum does not change in time or space; it is a deterministic function of k and ω. Therefore, we can mathematically introduce a spectral amplitude, Ṽ(k, ω) = √S(k, ω), via a simple square root operation, which contains the full second-order statistical information about the random field fluctuations. Of course, Ṽ has a Fourier transform, provided that it is modulus integrable. However, the fact that Ṽ is modulus-squared integrable in the k-domain (the spatial power spectrum contains finite energy) does not necessarily ensure that it is modulus integrable. Here we will assume that Ṽ is integrable as well and has an inverse Fourier transform. Therefore, a deterministic signal associated with the random field can be defined as the inverse Fourier transform of Ṽ, namely

V(r, t) = ∫_{V_k} ∫ Ṽ(k, ω) exp[i(k·r − ωt)] dω d³k,  (8a)
Ṽ(k, ω) = [1/(2π)⁴] ∫_{V_r} ∫ V(r, t) exp[−i(k·r − ωt)] dt d³r.  (8b)

In Eqs. (8a) and (8b), V_r is the spatial volume of interest and V_k is the 3D domain of the wavevector. With this definition of the deterministic signal, the fourth-order stochastic wave equation (Eq. (36b)) can be reduced to the second-order deterministic wave equation

(ω²/c² − k²) Ṽ_U(k, ω) = Ṽ_s(k, ω),  (9)

where Ṽ_U and Ṽ_s are the spectral amplitudes associated with the propagating field and the (random) source, respectively. Back in the space-time domain, Eq. (9) indicates that V_U satisfies the deterministic wave equation, i.e.,

∇²V_U(r, t) − (1/c²) ∂²V_U(r, t)/∂t² = V_s(r, t).  (10)

Comparing our original stochastic wave equation (see Appendix) with Eq. (10), it is clear that the only difference is replacing the source field with its deterministic signal, which in turn requires that we replace the stochastic propagating field with its deterministic counterpart. In essence, by introducing the deterministic signal, we reduced the problem of solving a fourth-order differential equation to an ordinary (second-order) wave equation. Importantly, the solution of the problem must be presented in terms of the autocorrelation Γ_U of V_U, or its spectrum |Ṽ_U|², and not by V_U itself. By the method of constructing the deterministic signal V_U associated with the random field U, we ensure that their respective autocorrelation functions are equal, Γ_{V_U} = Γ_U.
In other words, the fictitious deterministic signal has identical second-order statistics with the original field. What information about the field is missing in going from the actual random field to its deterministic signal representation? The answer is that the second-order statistics and, thus, the deterministic signal associated with a random field, do not contain any information about the field's spectral phase. Any arbitrary phase (random or deterministic), φ, used to construct a complex signal, Ṽ(k, ω) exp[iφ(k, ω)], has no impact whatsoever on the autocorrelation function of the signal. For example, a continuous-wave (CW) field of a particular spectrum, S(ω), and a light pulse of the same spectrum have identical deterministic signals, because their temporal correlations are the same. Not surprisingly, both short pulses and broadband CW light have been successfully used for low-coherence interferometry and coherence gating [13]. Spatially, a focused beam and a random field distribution of the same spatial power spectrum, S(k), have identical spatial correlations. For this reason, a speckle (random) field and a focused (deterministic) field of the same spatial spectrum have the same sectioning capabilities [14]. Another illustration of this equivalence is encountered in microscopy. It is known that the size of the condenser aperture, typically filled with diffuse light, controls the coherence area at the sample plane. Using the deterministic signal associated with the incoherent field, it turns out that if the condenser aperture is filled with a plane wave, we obtain the same spatial correlation at the sample plane. Specifically, the coherence area at the sample plane is of the order of the spot that a plane wave would be focused to by the condenser.

The concept of the deterministic signal associated with a random field is useful in simplifying the calculations for propagating second-order field correlations. Furthermore, propagation in the wavevector space allows us to easily compute second-order moments of the transverse wavevector and, thus, correlation areas. Note that this simplification is not possible under the coherent mode decomposition [8] without assuming statistical homogeneity. Below we illustrate this approach by calculating the coherence area of a stochastic field after propagating an arbitrary distance from an extended, completely spatially incoherent source. Then, we derive the changes in coherence due to scattering by tissues, a particular type of secondary source.
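The phase insensitivity discussed above can be illustrated numerically. The sketch below is ours, not part of the paper: we build a spectral amplitude from the square root of an assumed Gaussian power spectrum, attach an arbitrary random spectral phase, and verify via the Wiener-Khintchine theorem that the autocorrelation is unchanged, so a transform-limited "pulse" and a randomly phased, CW-like field of the same spectrum are indistinguishable at the level of second-order statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
omega = np.fft.fftfreq(N, d=0.01) * 2 * np.pi
S = np.exp(-(omega - 5.0)**2 / 2.0)               # assumed Gaussian power spectrum S(omega)

V_tilde = np.sqrt(S)                              # spectral amplitude of the deterministic signal
phase = rng.uniform(0, 2 * np.pi, N)              # arbitrary spectral phase (random here)

def autocorrelation(spectral_amplitude):
    # Wiener-Khintchine: autocorrelation = inverse FT of |spectral amplitude|^2
    return np.fft.ifft(np.abs(spectral_amplitude)**2)

gamma_pulse = autocorrelation(V_tilde)                      # flat phase: transform-limited "pulse"
gamma_cw = autocorrelation(V_tilde * np.exp(1j * phase))    # random phase: "CW-like" field
print(np.allclose(gamma_pulse, gamma_cw))                   # True: same second-order statistics
```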
Propagation of coherence from primary sources

An important result in coherence theory is attributed to van Cittert and Zernike (see Section 4.4.4 in Mandel and Wolf [1]). This result is known as the van Cittert-Zernike theorem, which establishes the spatial autocorrelation of the field radiated in the far zone by a completely incoherent source (Fig. 1). This result was originally formulated in terms of the mutual intensity, defined as

J(r₁, r₂) = ⟨U*(r₁, t) U(r₂, t)⟩.  (12)

In Eq. (12), the angular brackets indicate ensemble averaging over a certain area of interest (we are interested in the field distribution in a plane, r₁, r₂ ∈ ℝ²). This function J describes the spatial similarity (autocorrelation) of the field at a given instant, t, and it has been used commonly in statistical optics (see, e.g., [2]). The theorem establishes a relationship between J at the source plane and that of the field in the far zone. Such propagation of correlations has been described in detail by Mandel and Wolf [1]. Here, we derive the coherence area in the far zone using the concept of the deterministic signal associated with a random field, as follows.

Again, we assume statistically homogeneous and stationary fields (at least in the wide sense), such that Eq. (12) simplifies to J(ρ) = ⟨U*(r, t) U(r + ρ, t)⟩. This mutual intensity, J, is the spatiotemporal correlation function introduced in the Appendix, evaluated at time delay τ = 0, J(ρ) = Γ(ρ, 0). Using the central ordinate theorem, the cross-correlation function evaluated at τ = 0 is equivalent to the cross-spectral density integrated over all frequencies,

J(ρ) = ∫ W(ρ, ω) dω.

Therefore, we can obtain J(ρ) via the spatiotemporal power spectrum, S(k, ω), followed by the Fourier transform with respect to k and an integration over ω.

An important problem when employing incoherent sources for QPI is to find the coherence area of the field at a certain distance from the source. To this end, according to the definition introduced in the Appendix, we must calculate the variance of the transverse wavevector. Here we provide a calculation of this variance directly from the wave equation. Specifically, we start with the deterministic wave equation satisfied by the deterministic signal, V_U, associated with the random field, U,

∇²V_U(r, t) − (1/c²) ∂²V_U(r, t)/∂t² = V_s(x, y, t) δ(z),  (16)

where V_U and V_s are the deterministic signals associated with the propagating field, U, and a planar source field, s, respectively; the planar source enters Eq. (16) multiplied by δ(z). By Fourier transforming Eq. (16), we readily obtain the solution in the ω − k domain [recall Eq. (9)],

Ṽ_U(k, ω) = Ṽ_s(k⊥, ω) / (β₀² − k²),

where β₀ = ω/c. Eliminating the negative spatial frequency (inward-propagating) term and Fourier transforming with respect to k_z, we obtain the field Ṽ_U as a function of k⊥ and z, which is known as the plane wave decomposition or Weyl's formula (see, e.g., Section 3.2.4 in [1]),

Ṽ_U(k⊥, z; ω) = (i/2q) Ṽ_s(k⊥, ω) exp(iqz),  q = √(β₀² − k⊥²).  (20)

Taking the modulus squared on both sides of Eq. (20) yields a z-independent relation in terms of the respective power spectra,

S_U(k⊥; ω) = S_s(k⊥, ω) / (4q²).

If we assume that the spectrum of the observed field is centered at the origin, i.e., k⊥ = 0, and is isotropic, i.e., depends only on the magnitude of k⊥, the variance can be simply calculated as the second moment of S_U over the k⊥ domain of integration, A_{k⊥}. Thus, integrating Eq. (22) with respect to k⊥ gives the variance of the transverse wavevector. Further, if we assume that the field of interest is in the far zone of the source, which implies that k⊥ ≪ β₀, then we can use a first-order Taylor expansion in k⊥/β₀. Finally, the finite size of the source limits the spatial frequency in the far field by introducing a maximum value of k⊥, say k_M (see Fig. 1). Under these circumstances, Eq.
(23) simplifies to a compact expression for the transverse wavevector variance, where we employed a Taylor expansion a second time. The maximum transverse wavevector can be expressed in terms of the half-angle subtended by the source, θ, because k_M = β₀ sin θ. Thus, the coherence area of the observed field is of the order of λ²/Ω, where Ω is the solid angle subtended by the source from the plane of observation, Ω = 4π sin²θ. This simple calculation captures the power of using deterministic signals associated with random fields as a means to reduce the coherence propagation equation from fourth order in correlations to second order in fields. Specifically, by taking the power spectrum of the solution, we were able to directly calculate the second moment of the transverse wavevector and implicitly obtain an expression for the spatial coherence of the propagating field. Equation (25) illustrates the remarkable result that, upon propagation, the field gains spatial coherence. In other words, free-space propagation acts as a spatial low-pass filter. The farther the distance from the source, the smaller the solid angle Ω and, thus, the larger the coherence area.

Propagation of coherence from secondary sources

The concept of the deterministic signal associated with a random field can also be used to describe light propagation in inhomogeneous media, i.e., light propagation from a secondary source. Starting with the deterministic Helmholtz equation, the secondary source term can be separated to the right-hand side,

∇²V_U(r, ω) + β₀² V_U(r, ω) = −β₀² χ(r) V_U(r, ω),

where χ is the scattering potential. Taking the spatial Fourier transform and separating the variables, the deterministic field propagation can be expressed in terms of the convolution of the secondary source and the initial incident deterministic field,

(β₀² − k²) Ṽ_U(k, ω) = −β₀² [χ̃(k) ⓥ_k Ṽ_i(k, ω)],

where ⓥ_k indicates a 3D convolution in the k-domain and Ṽ_i is the incident deterministic field. Considering only the positive spatial frequency component, associated with q − k_z, and taking the inverse Fourier transform in k_z, the scattered deterministic field is obtained as a convolution along z and along the transverse k-vector, where ⓥ_z indicates convolution along z and ⓥ⊥ the 2D convolution along the transverse k-vector, (k_x, k_y). The result can be simplified by noting that the convolution of a function, f(z), with a complex exponential yields a simple result, namely f(z) ⓥ_z exp(iqz) = exp(iqz) f̃(q), where f̃ is the Fourier transform of f(z). Thus, Eq. (28) becomes Eq. (29). This is a remarkably simple result, which can be regarded as the first-order Born scattering solution for an arbitrary illumination field. Equation (29) allows us to calculate, at any plane z, the power spectrum of the scattered field, which equals that of the respective deterministic signal, |Ṽ_U(k⊥, z; ω)|², as a function of the illumination spectrum and the scattering potential, χ. Once we know the power spectrum, the frequency variances can be evaluated directly as well. We illustrate these calculations by studying spatial and temporal coherence changes induced by scattering by biological tissues. In order to demonstrate the change in field correlations due to tissue, we used spatial light interference microscopy (SLIM) to obtain quantitative phase images of a tissue slice. SLIM provides quantitative information about the optical path-length induced by the sample with 0.3 nm spatial sensitivity [15]. By measuring the scattered light (Fig. 2(b)), the phase shift at each point is mapped to an image, as shown in Fig. 2(a). Figure 2(c) shows the spatial correlation function with respect to k⊥, which is obtained directly by taking the radial profile of the 2D Fourier transform of the image in Fig. 2(a).
The coherence properties of the light scattered from the same tissue are calculated using Eq. (29). In order to express the result in terms of the scattering angle, θ, we simply map the transverse wavevector onto the scattering angle. We study the spectral width of the scattered field with respect to the scattering angle. Based on the reciprocal relationship between the spectral linewidth and the coherence time, Δω Δτ ≈ 1, the coherence properties can be investigated. We define the width of the spectrum by its variance. Assuming a Gaussian incident field spectrum centered at a central frequency ω₀, Eqs. (30) and (31) yield the spectrum at each scattering angle and also the spectral bandwidth at each scattering angle. In Fig. 2(d), the resulting normalized spectrum calculated from Eq. (30) is shown with respect to the scattering angle. We can see a redshift at higher scattering angles due to the spatial correlation of the source, as discussed in our previous paper [16]. Further, Fig. 2(e) shows the effective spectral width of the angular spectrum, which indicates the change of coherence time through a biological tissue with respect to the scattering angle.

Summary and discussion

We presented a new formalism for calculating the propagation of field correlations using the wavevector-frequency representation of optical fields. The main points of this paper are as follows. 1) We first represented the coherent mode decomposition (CMD) in the wavevector domain to prove that plane waves can be used as an eigenfunction basis of the cross-spectral density associated with statistically homogeneous fields. 2) We introduced the concept of a deterministic signal associated with a random field and showed that it significantly simplifies calculations of second-order correlations. 3) We described spatial and temporal coherence in terms of the second-order statistics (variance) of the spatial and temporal power spectra. Thus, for an arbitrary stochastic field, we can define a temporal bandwidth and coherence time for each spatial frequency (wavevector k) component and, vice versa, a spatial correlation for each temporal frequency ω.
4) We reviewed the stochastic wave equation in the Appendix and, for wide-sense stationary and statistically homogeneous fields, we solved this equation in the (k, ω) domain. Essentially, fourth-order differential equations in field correlations can be replaced by second-order differential equations for deterministic signals, which are defined via a Fourier transform of the spectral amplitude. These signals do not contain information about the spectral phase associated with the field. For example, the deterministic signal representation cannot make the distinction between a focused beam and a speckle field distribution with the same spatial bandwidth, or a light pulse versus a continuous-wave field of the same temporal bandwidth. Therefore, it is important to note that the deterministic signal solution should only be used to generate the power spectrum (or autocorrelation) of the propagating field. From this power spectrum, first-order (mean frequency) and second-order (variance) statistics can be calculated both spatially and temporally, i.e., one can study how coherence changes upon propagation. 5) In Section 4, we applied the deterministic signal associated with a random field to derive the well-known result of the van Cittert-Zernike theorem, i.e., that the field emitted by a spatially incoherent source gains coherence upon propagation. First we established that the mutual intensity, a quantity that is traditionally used for describing spatial coherence in a plane, is merely the frequency-averaged cross-spectral density. This result allows us to easily calculate the propagation of field correlations directly in the frequency (k, ω) domain. 6) If one is only interested in the spatial and temporal variances, as measures of spatial and temporal coherence, we show that these second-order statistics can be calculated straight from the wave equation in the frequency domain [e.g., Eq. (29)]. We illustrated this approach with correlations of fields propagating from primary and secondary sources.

Experimentally, we only have access to field correlations and not the fields themselves. Our results indicate that, for statistically translation-invariant fields, the deterministic signals give the same results as the actual (stochastic) fields. This explains why theoretical descriptions of interferometric experiments can yield the correct results even when randomness is ignored.

A1. The stochastic wave equation

Applying the wave operator at each of the two space-time points of the field correlation yields

[∇₁² − (1/c²) ∂²/∂t₁²][∇₂² − (1/c²) ∂²/∂t₂²] ⟨U*(r, t₁) U(r + ρ, t₂)⟩ = Γ_s(ρ, τ),  (33)

where the angular brackets indicate ensemble averaging, ∇₁² is the Laplacian with respect to coordinate r, ∇₂² with respect to coordinate r + ρ, and Γ_s is the spatiotemporal autocorrelation function of s. Since we assumed wide-sense stationarity and statistical homogeneity, which gives Γ_s dependence only on the differences ρ and τ, all the derivatives in Eq. (33) can be taken with respect to the shifts (see p. 194 of Ref. [1]). After these simplifications, Eq. (33) can be re-written as

[∇_ρ² − (1/c²) ∂²/∂τ²]² Γ_U(ρ, τ) = Γ_s(ρ, τ),  (35)

where Γ_U is the spatiotemporal autocorrelation of U. Eq. (35) is a fourth-order differential equation that relates the autocorrelation of the propagating field, U, with that of the source, s. From the Wiener-Khintchine theorem, we know that both Γ_U and Γ_s have Fourier transforms, which are their respective power spectra, S_U and S_s. Therefore, we can solve this differential equation by Fourier transforming it with respect to both ρ and τ,

(k² − ω²/c²)² S_U(k, ω) = S_s(k, ω).  (36b)

In Eq. (36a), we used the differentiation property of the Fourier transform, ∂/∂ρ → ik and ∂/∂τ → −iω.
Equation (36b) gives an expression, in the ω − k representation, for the spectrum of the propagating field, S_U, with respect to the spectrum of the source, S_s. Note that here the function 1/(k² − ω²/c²)² is a filter function (transfer function), which incorporates all the effects of free-space propagation. Because free space is isotropic, the transfer function is also isotropic, i.e., it depends only on the magnitude of the wavevector, k = |k|, and not its direction.

A2. Coherence time and area

Let us consider the fluctuations of a field observed at a given plane. The coherence time, τ_c, and coherence area, A_c, describe the spread (standard deviation) of the autocorrelation function, Λ(ρ, τ), in τ and ρ, respectively. Due to the uncertainty relation, τ_c and A_c are inversely proportional to the bandwidths of their respective power spectra. Here we assume that the statistical properties of the field are isotropic, meaning that the spatial coherence at a plane is characterized by a scalar function, A_c. If this is not the case, i.e., when the field statistics depend on direction, the coherence area is no longer sufficient and the concept must be generalized to a tensor quantity. The two variances can be further averaged with respect to these variables, such that they become constant. In practice, we always deal with fields that fluctuate in both time and space, but rarely do we specify τ_c as a function of k or vice versa; we implicitly assume averaging of the form in Eqs. (39a) and (39b).

Fig. 1. Field propagation from an extended source. At the observation plane, the field U contains the maximum spatial frequency, k_M, which is set by the angle θ subtended by the source.

Fig. 2. (a) The optical path-length map of a tissue biopsy sample, imaged with SLIM using a 40X objective with 0.75 NA. A close-up of the sample is shown on the right as a demonstration of the imaging ability (quantitative and high-resolution). (b) The scattering geometry for this experiment. (c) The spectrum measured from the sample shown in (a) through a 2D Fourier transform and radial averaging. (d) Normalized optical spectrum for light propagated after the tissue. (e) The variation of the effective spectral width of the angular spectrum.

Note that, in this definition, the physical meaning of a k⊥-dependent coherence time is that each plane-wave component of the field can have a specific temporal correlation and, thus, coherence time; similarly, each monochromatic component can have a particular spatial correlation and, thus, coherence area. Eq. (39a) provides a coherence time averaged over all spatial frequencies, while Eq. (39b) provides a coherence area averaged over all temporal frequencies.
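As a rough numerical companion to the uncertainty relations in this appendix and to the van Cittert-Zernike discussion of Section 4 (a sketch under assumed numbers, not results from the paper), one can estimate a coherence time from a spectral bandwidth, τ_c ≈ 1/Δω, and an order-of-magnitude coherence area from the solid angle subtended by a source, A_c ~ λ²/Ω. The bandwidth, wavelength, and source half-angle below are illustrative assumptions.

```python
import numpy as np

c = 3.0e8                                  # speed of light, m/s

# Temporal coherence: assumed 100 nm bandwidth centered at 800 nm.
lam0, dlam = 800e-9, 100e-9
domega = 2 * np.pi * c * dlam / lam0**2    # spectral bandwidth in rad/s
tau_c = 1.0 / domega                       # coherence time ~ 1/bandwidth (a few fs)
print(f"coherence time ~ {tau_c * 1e15:.1f} fs")

# Spatial coherence (van Cittert-Zernike estimate): source of half-angle theta.
lam, theta = 550e-9, 4.6e-3                # roughly the Sun's half-angle, in rad
Omega = 2 * np.pi * (1 - np.cos(theta))    # solid angle of the cone, ~ pi*theta^2
A_c = lam**2 / Omega                       # order-of-magnitude coherence area
print(f"coherence area ~ {A_c * 1e12:.0f} um^2, linear size ~ {np.sqrt(A_c) * 1e6:.0f} um")
```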
6,582.8
2013-09-09T00:00:00.000
[ "Physics" ]
IQL(2): A Model with Ubiquitous Objects

Object-oriented databases have brought major improvements in data modeling by introducing notions such as inheritance or methods. Extensions in many directions are now considered, with the introduction of many concepts such as versions, views or roles. These features bring the risk of creating monster data models with a number of incompatible appendixes. We do not propose here any new extension or any novel concept. We show more modestly that many of these features can be formally and (we believe) cleanly combined in a coherent manner.

Introduction

We propose an extension of IQL [AK89], therefore the name1 IQL(2), to encompass many new extensions to the core OODB models that have been considered separately in the past. The model is based on two concepts, neither of which is novel: (i) contexts, which are used to parameterize class and relation names; and (ii) views, which define intensional data. This brings two kinds of ubiquity to objects, i.e., the same object may belong really or virtually to several classes at the same time. We propose a first-order language with static type-checking, under certain restrictions on the schemas. Most of the examples are given using a more convenient OQL-like syntax.

We briefly consider two technical issues: (i) quantification over contexts, and (ii) method resolution for ubiquitous objects. Quantification over contexts can be handled under some reasonable restrictions that we present. Uncontrolled ubiquity, together with inheritance, leads to severe problems with respect to type checking and conflict resolution. We advocate here the use of strong restrictions so that standard resolution techniques can be used.

As illustrated by examples, the model captures in a coherent framework many features that have been considered separately in the past: (i) a model with objects, classes, inheritance, methods a la IQL or O2 [BDK92]; (ii) a view mechanism a la O2 Views [SAD94]; (iii) a versioning mechanism with linear versions and also alternatives (see, e.g., [KC88]); (iv) a mechanism for objects with several roles [BD77, RS91] a la Fibonacci [ABGO93]; (v) the means of specifying the distribution of data over several sites; (vi) a mechanism for data and schema updates (see, e.g., [Zic92]); (vii) specification of access rights (see, e.g., [RBKW91]).

Partially supported by Esprit Project GoodStep. On leave from Departamento de Informatica, Universidade Federal de Pernambuco, Brazil. Partially supported by CNPq grant number 200.803-92.1. 1 No, Guido, this does not imply that there will be an IQL(3).

The paper is organized as follows. In Section 2, we introduce some notation and auxiliary concepts. A restricted form of the model (without views and inheritance) is presented in Section 3. The language is presented in Section 4. Section 5 deals with inheritance and Section 6 with views. The last section is a conclusion. Additional examples are given in Appendix A.
To conclude this section, we present in an example some of the features of the model. At one extreme, we may decide that one context is completely virtual and that no data is stored there. At another extreme, we can view the database as duplicated in contexts Paris and LA. Each object has a store in Paris and one in Los Angeles. An update method on an object o in the Paris context would modify the store in Paris. It may immediately call a method on object o in LA to propagate the change, or one may prefer to propagate updates in batches using a program that is called regularly.

2 Preliminaries

In this section, we introduce some notation and some auxiliary concepts. We consider the existence of the following pairwise disjoint and infinite countable sets: 1. rel: relation names R1, R2, ...; 2. class: class names C1, C2, ...; 3. obj: object identifiers (oid's) o1, o2, ...; 4. dom: data values d1, d2, .... The set dom is typically many-sorted. It contains the sorts int, real, bool, string and a particular sort for context identifiers (cid's) that will be application dependent. The data sorts will be denoted d1, d2, .... The values of sort di are dom(di). The set of cid's will be denoted cid. Given a set O of oid's, the set of values that one can construct is denoted val(O): 1. val(O) contains O and dom; 2. val(O) is closed under tupling and finite setting. (Other constructors such as sequencing or multi-setting can be added in a straightforward manner and will not be considered here.) The cid's will serve many purposes. If we take cid's in [1..n], we model time versions. By organizing the cid's in a dag, we also model alternative versions. By taking cid's for instance in {London, Paris, LA, etc.}, we model distributed databases with the same object (with distinct repositories) possibly in many sites. By choosing cid's in {John, Peter, Max, etc.}, we model access rights for various users.

In practice, one may want to use cid's with a richer structure, i.e., use complex values or objects to denote contexts. For instance, in a versioned and distributed database, one would like the domain of cid's to be the set of pairs (timestamp, location). We ignore this aspect here since this would unnecessarily complicate the model, and view the cid's as atomic elements. Indeed, in most of the discussion, we assume that the domain cid of the cid's is an initial fragment of the integers. However, in examples, we sometimes use a richer structure for cid's.

We consider that the "names" of both the schema and the instance are indexed by the cid's. A class in our context is now C(n) for some cid n, and a relation becomes R(n). On the other hand, objects are not indexed by cid's. However, their values and behaviors depend on the roles that they are taking. For instance, a versioned object is the same object in all its different versions. Its value and behavior depend on the particular version that is considered.

Given a set C of classes and the set cid of cid's, C(cid) denotes C × cid. Starting from the sets C and cid, the types types(C(cid)) are defined by the following abstract syntax: τ ::= di | C(cid) | [A1: τ1, ..., An: τn] | {τ} | τ + τ | ⊥, where n ≥ 0, the Ai's are distinct and "+" is the union of types.

An oid assignment is a mapping from C(cid) to the finite powerset of obj. It gives the population of each class in each context. (Note that class populations are not required to be disjoint and objects may be explicitly in many different classes.) The set of oid's occurring in the oid assignment is denoted O.
The semantics of types is given with respect to an oid assignment: data sorts and class names denote their domains and populations, respectively; finite setting and tupling are standard; the semantics of τ1 + τ2 is the union of the semantics of τ1 and τ2; and the semantics of ⊥ is the empty set. Given an oid assignment and the corresponding finite set O of objects, a value assignment is a mapping from O × C(cid) to val(O); i.e., it associates a value to a triple (object, class, cid). Remark 2.1 Observe that the value of an object depends on two parameters: the context and the class. Suppose that we have two contexts, business and personal, modeling respectively my business phone-book and my private one. Suppose that we have two classes, Friend and Researcher. Suppose that Jones is a friend and a researcher. Then, I may have phone information for Jones in both contexts and in both classes. The fact that some data is stored and some may be derived is irrelevant (so far).

Database Schema and Instance

We define the schemas and the instances. We ignore first an important aspect, namely, the specification of the "virtual database" (the view program below), which is the topic of Sections 5 (inheritance) and 6 (views).

Definition 3.1 A database schema S is a tuple consisting of R, C, cid, T and a view program, where: (i) R, C are finite sets of relation and class names; (ii) cid is the finite set of contexts; (iii) T : R(cid) ∪ C(cid) → types(C(cid)); (iv) the view program is to be defined later. This is a conservative extension of IQL. First, R is the set of names of roots of persistence, C the set of class names, cid (which is new) is the set of contexts, and T is the typing constraint. In IQL, the view program is simply the inheritance hierarchy since there is no other mechanism for virtual data there.

It is important to observe that we associate types to pairs involving a name (relation or class) and a cid. This captures the fact that the same name may have different types in different contexts. For instance, if the contexts are versions, the type of a class is allowed to evolve in time. Observe also that the type of a class or a relation in some context may refer to a class in another context.

Example 3.2 We consider a database context Global that is the integration of the two local database contexts, LA and Paris. The schema is as follows: let R = {Rp, Rla, Rg}, C = {Employee}, cid = {Paris, LA, Global}, and let T be defined accordingly for the class Employee and the three relations. An instance over the schema consists of an oid assignment, a value assignment and an assignment of values to the relation names, where O is the set of oid's occurring in the oid assignment.

Ignoring the view mapping, we now specify the notion of well-formed instance. Definition 3.4 Let an instance over a schema S be given. The instance is well-formed if the typing constraints given by T are satisfied. Two well-formed instances are given in Figure 1. Intuitively, instance I2 is obtained from instance I1 by deriving some new data.

Instance I1: Employee(Paris) = {o1, o2}; Employee(LA) = {o1}; Employee(Global) = {}; Rp(Paris) = {o1, o2}; Rl(LA) = {o1}; Rg(Global) = {}.
Instance I2: Employee(Paris) = {o1, o2}; Employee(LA) = {o1}; Employee(Global) = {o1, o2}; Rp(Paris) = {o1, o2}; Rl(LA) = {o1}; Rg(Global) = {o1, o2}.

We now define a many-sorted first-order calculus, then give examples of queries in an OQL-like syntax. (As in IQL, we could have used here a rule-based language but since recursion is not important here, we prefer to focus on a simpler language so as not to obscure the issue.) We first consider "fixed contexts" in the sense that we disallow quantifications over cid's.

A Fixed Context Calculus

The calculus is defined as follows. Terms: the terms of the calculus are: 1. d for each d in dom; 2. R(n) for R in R and n in cid (R(n) denotes the value of relation R in context n); 3.
variables x whose type does not refer to the sort cid (the type is omitted when clear from the context); 4. constructed terms with tupling ([A1: t1, ..., An: tn]), setting ({t1, ..., tn}), projection (t.A for A an attribute), and dereferencing (*t for t denoting an object). The sorts of terms are defined in the straightforward manner.

Formulas, queries: Atoms are t = t', t in t' for t, t' terms with compatible types, or x ≈ x' where x, x' are of resp. sorts C(n), C'(m). (This is interpreted as x and x' being the same object in different contexts.) Formulas are atoms, or L or L', L and L', L implies L', not L, exists x (L) or forall x (L), where L, L' are formulas. A query is an expression of the form {x | φ} where φ is a formula with only free variable x.

Range-restriction: As standard, we restrict our attention to range-restricted formulas and queries. The range-restriction we adopt here is standard. From this point of view, the only novelty is the use of ≈, which behaves exactly like equality for range-restriction. Contexts play no role for range-restriction since we assumed they are constant. From a language viewpoint, the only (relative) novelty is the use of ≈. We illustrate it with an example. Suppose that the cid's are timestamps and that the last two versions are denoted by the constants previous and now. Let Persons be a set of objects of class Person. We can obtain the phone number of persons that have not changed phone number since the last version:

{P.phone | exists P' in Persons(previous) (P in Persons(now) and P ≈ P' and P.phone = P'.phone)},

or using an OQL-like syntax:

select P.phone
from P in Persons(now)
where P.phone in (select P'.phone from P' in Persons(previous) where P' is P)

We could express the same query in a simpler manner if either (a) a field previous (possibly virtual; see below) contains the previous state of each object, or (b) using casting:

(a) select P.phone from P in Persons(now) where P.previous.phone = P.phone
(b) select P.phone from P in Persons(now) where P.phone = P@Person(previous).phone

where P@Person(previous) denotes the casting of P to the same object in class Person(previous). Such casting can be viewed as syntactic sugaring. Another form of syntactic sugaring would be to permit testing whether an object is also in some different context. This allows us to rephrase (more carefully) the above query:

select P.phone
from P in Persons(now)
where P is also Person(previous) and P.phone = P@Person(previous).phone

Remark 4.1 To see a more complicated example with "structured" contexts, suppose that we are in a versioned database with one context for private data and one for professional data. To obtain the actual home phone numbers of friends who worked on OQL in 1990, we use:

select P.phone
from P in Persons(private, now), P' in Persons(prof, 1990)
where "OQL" in P'.works on and P is P'

where the domain of cid's is a set of pairs (context, timestamp).
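To make the context-indexed model concrete, here is a small sketch of our own (in Python, not the paper's OQL-like syntax): class populations are keyed by (class, cid) pairs and object values by (object, class, cid) triples, along the lines of Example 3.2. All names and values below are illustrative assumptions, not data from the paper.

```python
# Toy rendering of the model: classes are parameterized by context identifiers
# (cid's), objects keep a single identity, and their values depend on the pair
# (class, context). Names (Paris, LA, Global, o1, o2) follow Example 3.2.

# oid assignment: population of each class in each context
population = {
    ("Employee", "Paris"):  {"o1", "o2"},
    ("Employee", "LA"):     {"o1"},
    ("Employee", "Global"): set(),          # instance I1: nothing derived yet
}

# value assignment: value of each (object, class, context) triple
value = {
    ("o1", "Employee", "Paris"): {"Name": "Dupond", "Salary": 40},
    ("o1", "Employee", "LA"):    {"Name": "Dupond", "Salary": 55},
    ("o2", "Employee", "Paris"): {"Name": "Martin", "Salary": 35},
}

# A query in the spirit of the salary example: for objects present in both
# sites (same oid, hence "same object in different contexts"), take the
# maximum of the two salaries.
for o in population[("Employee", "Paris")] & population[("Employee", "LA")]:
    best = max(value[(o, "Employee", c)]["Salary"] for c in ("Paris", "LA"))
    print(o, "max salary across sites:", best)
```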
Quantifying over Contexts

We start with two examples and then consider some difficulties that are raised. First, suppose that cid consists of two contexts, namely LA and Paris, and that we want to modify the salaries of employees by taking the maximum of the salaries in the two contexts. We may use one of the following programs. Observe that the second one, although clearly more desirable (imagine 20 sites!), uses cid variables, i.e., Site1, Site2, for specifying the context (whereas LA, for instance, is a constant). This is a quantification over some contexts.

From the example, it is clearly convenient to be able to quantify over contexts. However, this complicates the type checking of programs, as illustrated by the following example. Suppose that the context is [1..now] and that in Version 15, we added an attribute to class Person, e.g., an email address. Consider the following queries asking for the name of persons whose stored value has been modified at least once (since Version 17):

Query 1:
select P.Name
from N in Contexts, P in Persons(N), P' in Persons(now)
where P is P' and not (*P = *P')

Query 2:
select P.Name
from N in Contexts, P in Persons(N), P' in Persons(now)
where P is P' and not (*P = *P') and N > 17

where Contexts is a relation containing the set of valid contexts.

Recall that "*" denotes dereferencing. Observe that Query 1 should raise an error since the types of a person now and, say, in Version 14 are different. The sorts of the values for a person now and at time 14 are not compatible and *P = *P' is incorrect. On the other hand, Query 2 should be acceptable as long as we test for N > 17 before testing the other conditions. However, type checking is also an issue for Query 2 since, because of the schema update, we cannot assign a type to *P. A first solution is to use dynamic type checking. Another one is to require that the quantification over N be outermost and apply the restrictions on context variables during type checking (i.e., at compile time).

More formally, we require the formula to be of the form Q1 ... Qm (φ ⋄ ψ), where Q1, ..., Qm are quantifications over contexts, φ is a (range-restricted) formula that has no quantification over contexts and whose only free variables are contexts (φ restricts the range of the contexts), ⋄ is a conjunction or an implication, and ψ contains no quantification over contexts.

Query 2 can be expressed in this form:

{P.Name | exists N ((Context(N) and N > 17) and exists P, P' (Persons(N)(P) and Persons(now)(P') and P ≈ P'))}

Intuitively, this suggests the following evaluation. First φ is evaluated. Since it has no quantification over contexts, its evaluation raises no issue. Then, based on the results of φ, the global query is transformed into a boolean combination of queries with no quantification over contexts. Each of these queries can be type checked and executed separately.

Observe that this form is restrictive since it does not allow expressing queries of the form {... | forall x exists n ...} where the value of context n depends on x. It is possible (although rather intricate) to find natural examples of such queries (for instance, see the example above where the field previous contains the previous state of each object).

Inheritance

In this section, we consider the addition of an inheritance relationship to the schema. Since classes in contexts play the role of standard classes, we need to consider statements such as C(n) isa C'(m) that possibly relate two distinct contexts. We assume that the inheritance hierarchy is a dag.
Suppose that an object o belongs explicitly to classes C1(n1), ..., Ci(ni), and that we access some method m. This is legal if for some Cj(nj) (j: 1..i), the resolution of m in Cj(nj) is defined and is some class C'; and for each Ck(nk) (k: 1..i), the resolution of m in Ck(nk) is also C' or is not defined. Multiple roles complicate the issue considerably. Consider a class C(n) with m subclasses. Then a variable of class C(n) may denote an object o such that the set of subclasses of C(n) where o is explicitly may be any of the 2^m subsets of subclasses of C(n). This leads to two important issues. Problem (1): at run time, given an object o and a role C(n) for this object, find quickly the store for some attribute A and the code for a method m. Problem (2): at compile time, statically type check a program. Both will be time consuming. Both can be simplified if we specify a compatibility relation ~ that specifies where objects can be concurrently explicitly. More precisely, ~ is an equivalence relation over C(cid), and C(n) ~ C'(m) indicates that an object may belong explicitly to both classes concurrently, so that multiple instantiation is constrained to classes in the same partition w.r.t. ~. Type checking can be eased if, in addition, we make ~ antisymmetric by constraining the types of classes related by ~ to be comparable w.r.t. standard subtyping. This would define a role hierarchy, but we adopt a more general approach where role hierarchies can be defined, if necessary, through a view.

To see an example, consider a database of boats and airplanes with three classes, Boat, AirPlane, Vehicle, and the schema:

class Boat: [Name: string, Price: integer, Propeller: string] isa Vehicle
      AirPlane: [Name: string, Price: integer, Speed: integer] isa Vehicle
      Vehicle: [Name: string, Price: integer]

If we know that the compatibility relation is empty, an access to the price of a vehicle is legal. Otherwise, there is a potential conflict since the same object may be in classes AirPlane and Boat explicitly. The use of ~ is investigated next.

A Trade-off

It is standard to prohibit (or at least control) multiple inheritance in the context of single roles. We now add a condition to handle multiple roles. A schema is strict if for each C(n), C'(m) such that C(n) ~ C'(m) and C(n), C'(m) are not comparable in the isa hierarchy, there is no C''(p) such that C(n) and C'(m) are both subclasses of C''(p) (i.e., C(n) and C'(m) have no common ancestor).

For strict schemas, the resolution issues above disappear, i.e., it is easy to see that for each object o and role C(n), this leads to standard resolution for o in the unique class below C(n) where it belongs explicitly. This leads to resolution with a parameter, the class C(n) (i.e., Problem (1) disappears). For non-strict schemas, we can adopt multi-attribute resolution (to solve Problem (2)) and techniques such as multi-attribute dispatch tables can be used [AGS94] (to solve Problem (1)).

Views

In the previous section, we already considered the specification of view mappings, but we restricted our attention to a special class of view mappings related to inheritance only. In this section, we use the entire power of the first-order language of the previous section to define view mappings. A view program allows one to specify, from the value of the database composed of explicit information (the stored instance), a well-formed virtual database (the virtual instance below).
Queries are first used to populate classes and relations, as in:

Employee(Global) ⊒ {x | x in Employee(Paris)}
Employee(Global) ⊒ {x | x in Employee(LA)}
Rg(Global) ⊒ {x@Employee(Global) | x in Rp(Paris)}
Rg(Global) ⊒ {x@Employee(Global) | x in Rla(LA)}

We use two queries to define Employee(Global) since a single one would be incorrectly typed. Note also that the above definition does not prevent the class Employee(Global) from having objects explicitly in it. Remark 6.1 In the presentation so far, we have implicitly assumed that the extensions of base classes are given and used to compute the extensions of derived classes. It is argued in [SAD94] that in many applications, it is not desirable to maintain the extensions of classes. Furthermore, some systems (such as O2) do not provide extensions for base classes, and it would be unnatural to maintain those of derived classes in such a context. If class extensions are not maintained, the definition of Employee(Global) is not necessary and can be viewed as "derived".

Using such rules, it is easy to specify the populations and the values of the virtual database. For the specification of the values, we can use two approaches. In an explicit manner, we can specify or enrich the value of each object in its new class with rules of the form:

var x: Employee(Global); x': Employee(LA)
define x.phone = unique{x'.phone | x' ≈ x}

This can also be achieved implicitly. We assume that, by default, the values of objects are transmitted via derivations. For instance, if an object is in Employee(Global) because of its presence in Employee(LA), then it "inherits" its structure from that of the employee in LA. This implies some constraints on the types that are similar to constraints on types in the presence of inheritance. (Recall that inheritance is just a special case of view.) A problem is that the presence of an object in some class C(n) may have its origin in the presence of the object in more than one other class. For instance, an object may be in Employee(Global) because it belongs to Employee(Paris) and also because it belongs to Employee(LA).

In such cases, the new value is obtained (a) by merging the values associated with the originating object/context pairs, and (b) by projecting (casting) to the type that is expected. More precisely, suppose that we define the population of class C in context n as the union of the φi, where for each i, φi returns a set of objects of type Ci(ni). Then the value of an object o for C(n) is defined by merging the values of o in the originating classes Ci(ni) with o in φi, and projecting the result onto the type T(C(n)), where merge (written ⋈) and projection are defined next (an illustrative sketch of the merge appears after the appendix example below).

Definition 6.2 The merge of two data values is defined by: 1. v ⋈ v = v for each v; 2. if t1, t2 are tuples, t1 ⋈ t2 is the tuple t (if it exists) such that for each attribute A of both t1 and t2, t(A) = t1(A) ⋈ t2(A); and for each i, j, j ≠ i, if ti has attribute A and tj does not, t(A) = ti(A); t has no other attribute; 3. otherwise v ⋈ v' is undefined.

Observe that two tuples with two non-merge-able values (e.g., integers 4 and 5) for the same attribute are not merge-able. This does not prevent, for instance, an object o from having two distinct values, say 4 and 5, in two distinct classes. On the other hand, this cannot happen (in a correct instance) if these two versions of the same object are merged in a unique class.

The projection of a value v on a type τ (given an oid assignment) is defined recursively as follows: 1. if τ is C(n) and v = o is in the population of C(n), then the projection of v on τ is o; 2.
if τ = [A1: τ1, ..., Am: τm] and v = [A1: v1, ..., An: vn] for m ≤ n and, for each i ≤ m, the projection of vi on τi is defined, then the projection of v on τ is [A1: proj(v1, τ1), ..., Am: proj(vm, τm)]; 3. if τ = τ1 + τ2 and either (i) the projection of v on τ1 or on τ2 is defined and equal to v' but not both, or (ii) they are both defined and equal to v', then the projection of v on τ is v'; 4. otherwise, the projection of v on τ is undefined.

To conclude this section on views, observe that we have two ways for an object to be virtually in a class. One is by inheritance and the other one is by the view mechanism. We advocated a strict policy for handling inheritance to simplify the treatment of inheritance conflicts. The view mechanism is handled differently. It may be more liberal at the price of being more costly.

Conclusion

In this paper, we have presented a model with many features that are usually considered separately. Our discussion of methods has been quite brief but we believe we covered the main issue, method resolution. Our treatment of views has also been rather short and many features of [SAD94] such as imaginary objects were not considered here. However, they would only have made the model more complicated at the cost of clarity and do not present any new difficulties.

Figure 2: Inheritance and Conflicts

Example 1.1 Consider a distributed database with two sites: Paris and Los Angeles. Paris and Los Angeles are two contexts of a unique database. Suppose that the database deals with persons, friends and researchers, i.e., we have classes Person, Friend, Researcher. Classes Friend and Researcher are subclasses of Person in both contexts. Let Dupond be an object. First, suppose that in Paris, Dupond is considered a friend, and in LA both a friend and a researcher, i.e., Dupond belongs to classes Friend(Paris), Friend(LA) and Researcher(LA). By inheritance, Dupond is also in classes Person(Paris) and Person(LA) (with possibly different behaviors in each). Now, we may decide that the data on friends is recorded in LA. We therefore have a relation Friends(LA), and see relation Friends(Paris) as a view of Friends(LA). This would mean that the store for Dupond is in LA and that Dupond is only virtually in class Friends(Paris). This does not prevent Dupond from being really in Researcher(LA) with a specific store there.
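The merge operation of Definition 6.2 can be sketched as follows. This is our own illustration in Python, not code from the paper, and the employee attribute values are hypothetical: equal atomic values merge to themselves, tuples merge attribute-wise, and any conflict leaves the merge undefined (returned as None here).

```python
def merge(v1, v2):
    """Merge in the spirit of Definition 6.2: equal atomic values merge to
    themselves, tuples (dicts) merge attribute-wise, anything else is
    undefined (None)."""
    if v1 == v2:
        return v1
    if isinstance(v1, dict) and isinstance(v2, dict):
        out = {}
        for a in set(v1) | set(v2):
            if a in v1 and a in v2:
                m = merge(v1[a], v2[a])
                if m is None:          # conflicting values: merge undefined
                    return None
                out[a] = m
            else:                      # attribute present on one side only
                out[a] = v1.get(a, v2.get(a))
        return out
    return None                        # distinct non-tuple values do not merge

# Hypothetical values of the same object in Employee(Paris) and Employee(LA).
paris = {"Name": "Dupond", "Phone": "0145", "Salary": 40}
la = {"Name": "Dupond", "Office": "B12"}
print(merge(paris, la))   # merged tuple combining attributes from both contexts
print(merge(40, 55))      # None: two distinct atomic values are not merge-able
```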
6,420.4
1995-09-06T00:00:00.000
[ "Computer Science" ]
Mind the gap: What explains the poor-non-poor inequalities in severe wasting among under-five children in low- and middle-income countries? Compositional and structural characteristics

A good understanding of the poor-non-poor gap in the development of severe wasting (SW) in childhood is essential for tackling the age-long challenge to the health outcomes of vulnerable children in low- and middle-income countries (LMICs). There is a dearth of information about the factors explaining differentials in wealth inequalities in the distribution of SW in LMICs. This study is aimed at quantifying the contributions of demographic, contextual and proximate factors in explaining the poor-non-poor gap in SW in LMICs. We pooled successive secondary data from the Demographic and Health Surveys conducted between 2010 and 2018 in LMICs. The final data consist of 532,680 under-five children nested within 55,823 neighbourhoods from 51 LMICs. Our outcome variable is having SW or not among under-five children. Oaxaca-Blinder decomposition was used to decipher the poor-non-poor gap in the determinants of SW. The proportion of children from poor households ranged from 37.5% in Egypt to 52.1% in Myanmar. The overall prevalence of SW among children from poor households was 5.3%, compared with 4.2% among those from non-poor households. Twenty-one countries had statistically significant pro-poor inequality (i.e., SW concentrated among children from poor households) while only three countries showed statistically significant pro-non-poor inequality. There were variations in the important factors responsible for the wealth inequalities across the countries. The major contributors to wealth inequalities in SW include neighbourhood socioeconomic status, media access, and maternal age and education. Socio-economic factors created the widest gaps in the inequalities in developing SW between children from poor and non-poor households. A potential strategy to alleviate the burden of SW is to reduce wealth inequalities among mothers in low- and middle-income countries through multi-sectoral and country-specific interventions, with consideration of the factors identified in this study.

Introduction

A key target of the United Nations' Sustainable Development Goal (SDG) 3, to "ensure healthy lives and promote well-being for all at all ages", is the reduction of childhood deaths [1]. Malnutrition among under-five children is a major impediment towards the attainment of SDG 3 in Low- and Middle-Income Countries (LMICs). Combating malnutrition has remained one of the greatest global health and social challenges. Malnutrition is a prominent part of a vicious cycle that consists of both poverty and disease [2]. The trio of malnutrition, poverty and disease are interlinked: the presence or absence of one directly affects the presence or absence of the others [3]. The marginalised and vulnerable population sub-groups are the most affected. They are impoverished and also lack access to education, information, financial resources and quality healthcare. The relationship between wealth and health services uptake and health outcomes in developing countries has been established in the literature [4-9]. Fagbamigbe et al. found that persons from wealthier households in Nigeria had a higher propensity to utilize health services [6].
However, there could be other factors associated with health outcomes and health care utilization, as documented in a Ghanaian study wherein the authors ascertained that, despite free antenatal care services in Ghana, their utilization remained poor [7]. The UNICEF framework for understanding the factors associated with malnutrition showed that economic, social, and political factors are interlinked [10]. Besides, poverty is double-edged with respect to malnutrition: it is a cause of malnutrition, on the one hand, and a consequence of malnutrition on the other [2]. Poor earnings, as a result of lack of education, joblessness or low wages, can lead to food shortages, poor sanitation and lack of health services and thereby cause malnutrition. Further, malnutrition, especially at an early age, can result in ill health and low education. Thus, malnutrition is a consequence of factors that are closely related to one or more combinations of poor food quality, insufficient food intake, and severe and repeated infectious diseases. These conditions arise from the individual and societal standard of living, and the ability to meet the necessities of life [3]. The literature is replete with evidence that malnutrition affects school absenteeism rates, cognitive development and the intellectual capacity of children and thereby contributes to poor educational performance [24-26]. These outcomes can entrap individuals and societies in the cycle of poverty for a long time. An EU-WHO-TRD report on diseases of poverty, otherwise referred to as poverty-related diseases, stated that ". . . poverty creates conditions that favour the spread of infectious diseases and prevents affected populations from obtaining adequate access to prevention and care. . ." [27]. It has been reported that poor living conditions, limited access to adequate hygienic food and potable drinking water, no medical care and lack of education promote the spread of infections. While there are a few reports on country-level decomposition of socioeconomic inequalities in child nutrition [8,9], with documented evidence that poverty is associated with malnutrition [28-30], we are not aware of any research that has disentangled the factors associated with wealth-related inequalities in the prevalence of severe wasting (SW) among under-five children in LMICs. Yet disentangling the compositional and structural risk factors of SW by wealth inequalities would enhance the understanding of the depth and contributions of the factors associated with SW and consequently prompt evidence-based interventions. There is a need to understand how the social determinants of health can be mixed to stop, or at least reduce, socioeconomic inequalities in the distribution of childhood malnutrition. It is therefore pertinent to decompose the wealth-related inequalities across the risk factors associated with SW and recommend potential strategies to overcome the challenges posed by this silent child-killer. This study aims to quantify the contributions of demographic, socioeconomic and proximate factors in explaining the wealth inequality in the distribution of SW in LMICs. We hypothesised that severe wasting would be lower among children from poor households than among those from non-poor households in all countries. Our study will help widen the discussion on childhood nutrition and enhance knowledge and understanding of how the social, biological and political determinants of health can be exploited to reduce socioeconomic inequalities in malnutrition.
Findings from our study are potential ingredients for global and national policy and interventions in child nutrition. Study design and data The Demographic and Health Surveys (DHS) data collected periodically across LMICs were used for this study. The DHS are cross-sectional, nationally representative household surveys. We pooled data from the most recent successive DHS conducted between 2010 and 2018, available as of March 2019, and containing anthropometry data for under-five children. We included only the 51 countries that met these inclusion criteria. The final data consist of 532,680 under-five children living within 55,823 neighbourhoods in 51 LMICs. In all the countries, the DHS used a multi-stage, stratified sampling design with households as the sampling unit [31,32]. The DHS computes sampling weights to account for unequal selection probabilities, including non-response, and applying these weights makes the survey findings representative of the target populations. The DHS used similar protocols, standardized questionnaires, similar interviewer training, supervision, and implementation across all countries where the survey was held. The DHS releases different categories of data focusing on different members of households, of which we used the children recode data for the current study. The data covered the birth history and health experiences of under-five children born to sampled women within the five years preceding the survey date. The anthropometry measurements were taken using standard procedures [33,34]. The full details of the sampling methodologies are available at dhsprogram.com. Dependent variable. The outcome variable in this study is severe wasting. It is defined as "the presence of muscle wasting in the gluteal region, loss of subcutaneous fat, or prominence of bony structures, particularly over the thorax" [35] and approximated by "a very low weight for height score (WHZ) below -3 z-scores of the median WHO growth standards, by visible severe wasting, or by the presence of nutritional oedema" [12]. More broadly, malnutrition has recently been described as "related to both deficiencies and excesses in nutrition, and then, therefore, it includes wasting, stunting, underweight, micronutrient deficiencies or excesses, overweight, and obesity" [36]. SW was derived as a composite score of children's weight and height. We generated z-scores using WHO-approved methodologies [37] and categorized children with z-scores below -3 standard deviations as having SW (Yes = 1), and otherwise as No = 0. Main determinant variable. In this decomposition study, household wealth status, computed as a composite score of assets owned by households, was used as a proxy for family income because the DHS does not collect data on family earnings or expenditures. The methods used in computing the DHS wealth index have been described previously [38]. Additional details of the methods and assets used for the computation of the wealth quintiles are available at dhsprogram.com. The DHS data had already generated and categorized the household wealth quintile as a variable with 5 categories of 20% each: poorest, poorer, middle, richer and richest. For the decomposition analysis, we re-categorized the household wealth quintile into two categories: poor (poorest, poorer) and non-poor (middle, richer and richest). A similar categorization has been used elsewhere [8,9,39,40]. Hence, we define "wealth inequality" as "the unequal distribution of assets". Independent variables. 
Keywords including low and middle-income countries, childhood morbidity, undernutrition, malnutrition, severe acute malnutrition, severe wasting, were used to search for factors associated with wealth-based inequality in SW across literature database such as PubMed, Medline, Hinari. The individual-and neighbourhood level factors were identified empirically from the literature [11][12][13][14][15][16][17][18][19][20][21][22][23]41] are: Individual-level factors. The individual-level factors are the sex of the children (male versus female): to determine if the biological differences could explain susceptibility to SW; children age in years (under 1 year and 12-59 months): SW has been reported to differ by children ages; maternal education (none, primary or secondary plus): better education could lead to better access to information and enhance earnings, and reduced risk of SW; maternal age (15 to 24, 25 to 34, 35 to 49): younger mothers may have limited education and earnings and thereby increase risk of SW among their children. Others are marital status (never, currently and formerly married): currently married may have spousal support that may reduce the risk of SW; occupation (currently employed or not): capability of providing necessary nutritional intakes; access to media (at least one of radio, television or newspaper): access to information could enhance prevention of SW; sources of drinking water (improved or unimproved), toilet type (improved or unimproved), weight at birth (average+, small and very small), birth interval (firstborn, <36 months and >36 months): children with short birth interval are at higher risk of SW and may have higher experience of wealth-related inequality in SW; and birth order (1, 2, 3 and 4+), children with high birth order are at higher risk of SW and experience higher wealth-related inequality in SW [11][12][13][14][15][16][17][18][19][20][21][22][23]41]. Neighbourhood-level factors. We used the word "neighbourhood" to describe the clustering of the children within the same geographical environment. Neighbourhoods were based on sharing a common primary sample unit (PSU) within the DHS data [31,32]. Operationally, we defined "neighbourhood" as clusters and "neighbours" as members of the same cluster. The PSUs were identified using the most recent census in each country where DHS was conducted. We considered neighbourhood socioeconomic disadvantage as a neighbourhood-level variable in this study. Neighbourhood socioeconomic disadvantage was operationalized with a principal component comprised of the proportion of respondents without education (poor), unemployed, living in rural areas, and living below the poverty level [11][12][13][14][15][16][17][18][19][20][21][22][23]41]. Statistical analyses In this study, we carried out descriptive statistics and analytical analyses comprising of bivariable analysis and Blinder-Oaxaca decomposition techniques using binary logistic regressions. Descriptive statistics was used to show the distribution of respondents by country and key variables. Estimates were expressed as percentages alongside 95% confidence intervals. We computed the risk difference in the development of SW between under-five children from poor and non-poor households. A risk difference (RD) greater than 0 suggests that SW are prevalent among children from poor households (pro-poor inequality). A negative RD indicates that SW is prevalent among children from non-poor households (pro-non-poor inequality). 
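To make the inequality measure concrete, the sketch below illustrates how the weighted prevalence of SW among children from poor and non-poor households, and the corresponding risk difference per 1000 children, could be computed from child-level data. This is an illustrative outline only, not the analysis code used for this study; the column names ('country', 'sw', 'poor', 'weight') are hypothetical, and the full analysis additionally accounted for the DHS survey design.

```python
# Illustrative sketch (not the study's analysis code): weighted prevalence of
# severe wasting (SW) and the poor vs non-poor risk difference per country.
# Assumed child-level columns: 'country', 'sw' (1 = SW), 'poor' (1 = poor
# household), 'weight' (DHS sampling weight).
import pandas as pd

def weighted_prevalence(g: pd.DataFrame) -> float:
    """Sampling-weight-adjusted prevalence of SW in a subgroup."""
    return (g["sw"] * g["weight"]).sum() / g["weight"].sum()

def risk_difference_by_country(df: pd.DataFrame) -> pd.DataFrame:
    """Weighted SW prevalence among poor and non-poor children per country and
    the risk difference per 1000 children; RD > 0 indicates pro-poor inequality
    (SW concentrated among children from poor households)."""
    rows = []
    for country, g in df.groupby("country"):
        p_poor = weighted_prevalence(g[g["poor"] == 1])
        p_nonpoor = weighted_prevalence(g[g["poor"] == 0])
        rows.append({"country": country,
                     "prev_poor": p_poor,
                     "prev_nonpoor": p_nonpoor,
                     "rd_per_1000": 1000 * (p_poor - p_nonpoor)})
    return pd.DataFrame(rows)

# Example: rd = risk_difference_by_country(children); rd.sort_values("rd_per_1000")
```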
We estimated the fixed effects as the weighted risk differences for each country and the random effect as the overall risk difference irrespective of a child's country of residence. Lastly, the logistic regression method was applied to the pooled cross-sectional data from the 51 LMICs to carry out a Blinder-Oaxaca decomposition analysis (BODA). The BODA is the decomposition technique developed by Oaxaca and Blinder for examining differences in outcomes between groups [42,43]. This method aims to explain how much of the difference in mean outcomes across two groups is due to group differences in the levels of the independent variables, and how much of the difference can be attributed to differences in the magnitude of the regression coefficients [42,43]. The method decomposes the difference in an outcome variable between two groups into two components so that the gaps between the two groups become more visible. The first component of the decomposition is the "explained" portion of the gap, which captures differences in the distributions of the measurable characteristics (also known as the "compositional" or "endowment" effects) of these groups. The endowment effect captures differences in the outcome of interest that arise from observed differentials in the characteristics between the groups. The second component of the analysis, called the structural, coefficient or return effect, is unexplained and is attributed to differences in the returns to endowments between groups; that is, each group receives different returns for the same level of endowments. In the analysis of health outcomes, the return effect may reflect the indirect effects of structural differences in health systems that affect healthcare utilization between different groups. In recent times, the classical BODA has been extended from continuous outcomes to binary and other non-linear outcomes [40][41][42][43]. We therefore adopted this technique to quantify how much of the gap between the "advantaged" (non-poor) and the "disadvantaged" (poor) groups is attributable to differences in specific measurable characteristics. The non-linear decomposition model assumes that the conditional expectation of the probability of a child having SW is a non-linear function of a vector of characteristics. Using the generalized structure of the model, we fitted a model each for children born to poor and non-poor mothers. The methodologies of the Blinder-Oaxaca Decomposition Analysis (BODA). The BODA is a statistical method that decomposes the gap in the mean outcomes across two groups into a portion that is due to differences in group characteristics and a portion that cannot be explained by such differences. Let A and B denote the two groups of children, from households in the poor and non-poor wealth quintiles, respectively. Also, let Ȳ_A and Ȳ_B be the mean outcomes for the observations Y in the two groups, so that the mean outcome difference to be explained, ΔȲ, is the difference between Ȳ_A and Ȳ_B. The mean outcome for group G can then be written as:

Y_G = X_G′ β_G + ε_G,  (1)

where X_G is a vector containing the predictors and a constant, β_G contains the slope parameters and the intercept, and ε_G is the error term. The mean outcome difference can be expressed as the difference in the linear prediction at the group-specific means of the regressors, that is:

ΔȲ = Ȳ_A − Ȳ_B = E(X_A)′ β_A − E(X_B)′ β_B,  (2)

since E(Y_G) = E(X_G)′ β_G, assuming that E(β̂_G) = β_G and E(ε_G) = 0. 
Then the contribution of group differences in the predictors to the overall outcome difference can be identified by rearranging Eq (2) to give:

ΔȲ = [E(X_A) − E(X_B)]′ β_B + E(X_B)′ (β_A − β_B) + [E(X_A) − E(X_B)]′ (β_A − β_B).  (3)

In Eq (3), we have divided the outcome difference into three parts, ΔȲ = E + C + I, from the viewpoint of group B, so that the group differences in the predictors are weighted by the coefficients of group B to determine the endowment effects. The first term, E = [E(X_A) − E(X_B)]′ β_B, is the part of the differential due to group differences in the predictors, that is, the "endowment effect"; C = E(X_B)′ (β_A − β_B) measures the contribution of differences in the coefficients, including differences in the intercept; and lastly, I = [E(X_A) − E(X_B)]′ (β_A − β_B) is the interaction term, accounting for the fact that differences in endowments and coefficients exist simultaneously between the two groups. The E component measures the expected change in group B's mean outcome if group B had group A's predictor levels. Similarly, for the C component (the "coefficients effect"), the differences in coefficients are weighted by group B's predictor levels; that is, the C component measures the expected change in group B's mean outcome if group B had group A's coefficients [42,44,45]. In this study, we adopted an alternative (twofold) decomposition based on the concept that there is a nondiscriminatory coefficient vector, β*, that should be used to determine the contribution of the differences in the predictors. The outcome difference can then be written as:

ΔȲ = [E(X_A) − E(X_B)]′ β* + [E(X_A)′ (β_A − β*) + E(X_B)′ (β* − β_B)],  (4)

where the first component, Q = [E(X_A) − E(X_B)]′ β*, is the part of the outcome differential that is explained by group differences in the predictors (the "quantity effect"), and the second component, U = E(X_A)′ (β_A − β*) + E(X_B)′ (β* − β_B), is the unexplained part. This part is attributed to discrimination and also captures all the potential effects of differences in unobserved variables. The unknown nondiscriminatory coefficient vector β* can be estimated by assuming that β* = β_A or β* = β_B [42]. If, for example, discrimination is assumed to operate against group B and not against group A, then β̂_A can be used as an estimate for β*, giving:

ΔȲ = [E(X_A) − E(X_B)]′ β_A + E(X_B)′ (β_A − β_B),  (5)

and vice versa. The numerical details have been reported elsewhere [44,45]. The DHS stratification and the unequal sampling weights of clusters, as well as household clustering effects, were taken into account. Hence, we weighted the data and set the significance level at 5%. Data were analysed using R statistical software and STATA 16 (StataCorp, College Station, Texas, United States of America). The results of this study are presented in tables and figures, and all our estimates were weighted. In Table 1, we present the proportion of children from households in the poor wealth quintiles and the prevalence of SW by country, as well as the prevalence of SW among children from households in the poor and non-poor wealth quintiles within each country. We also present the distribution of the children by the characteristics studied, the prevalence of SW by the levels of these characteristics, and the results of the tests of association between the characteristics and the development of SW. Ethics approval and consent to participate. This study was based on an analysis of existing survey data with all identifier information removed. The survey was approved by the Ethics Committee of ICF Macro at Fairfax, Virginia in the USA and by the National Ethics Committees in the respective countries. All study participants gave informed consent before participation and all information was collected confidentially. The full details can be found at http://dhsprogram.com. 
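To make the non-linear decomposition concrete, the sketch below outlines a minimal twofold Blinder-Oaxaca decomposition for a binary outcome, fitting a logit model in each wealth group and using counterfactual predicted probabilities, with group A's coefficients as the reference vector. This is an illustrative outline under simplifying assumptions, not the authors' R/Stata implementation; the variable names are hypothetical, and a full analysis would additionally incorporate the DHS survey weights and clustering.

```python
# Illustrative sketch (an assumption-laden outline, not the study's code): a
# twofold non-linear Blinder-Oaxaca decomposition of the poor vs non-poor gap
# in the probability of severe wasting, using group-specific logit fits and
# counterfactual predicted probabilities.
import pandas as pd
import statsmodels.api as sm

def oaxaca_twofold(df: pd.DataFrame, outcome: str, group: str, predictors: list):
    """Decompose mean(outcome | group A) - mean(outcome | group B) into an
    'explained' (endowment) part and an 'unexplained' (coefficient) part,
    using group A's coefficients as the non-discriminatory reference."""
    a = df[df[group] == 1]   # e.g. children from poor households (group A)
    b = df[df[group] == 0]   # e.g. children from non-poor households (group B)
    Xa = sm.add_constant(a[predictors])
    Xb = sm.add_constant(b[predictors])
    fit_a = sm.Logit(a[outcome], Xa).fit(disp=0)
    fit_b = sm.Logit(b[outcome], Xb).fit(disp=0)
    gap = a[outcome].mean() - b[outcome].mean()
    # Counterfactual: group B endowments evaluated at group A coefficients.
    p_b_at_a = fit_a.predict(Xb).mean()
    explained = fit_a.predict(Xa).mean() - p_b_at_a        # endowment part
    unexplained = p_b_at_a - fit_b.predict(Xb).mean()      # coefficient part
    return {"gap": gap, "explained": explained, "unexplained": unexplained}

# Example call (hypothetical column names):
# oaxaca_twofold(children, "sw", "poor", ["maternal_edu", "media_access", "nbh_ses"])
```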
Sample characteristics. In Table 1, we list the year of the survey, the number of neighbourhoods where data were collected, the population of under-five children surveyed, the weighted prevalence of SW, the percentage of children from poor households, and the prevalence of SW among children from poor and non-poor households by country and region of the world. The proportion of children from poor households ranged from 37.5% in Egypt to 52.1% in Myanmar. The overall SW prevalence was 4.7%, while the overall poor versus non-poor dichotomy in SW prevalence was 5.3% versus 4.2%, with statistically significant differences as shown in Table 1 and Fig 1. The prevalence of SW among children from poor households ranged from 0.1% in Guatemala to 12.3% in Timor-Leste, while it ranged from 0.1% in Guatemala to 8.4% in Timor-Leste among children from non-poor households. Table 2 presents the descriptive statistics for the pooled sample of children across the 51 LMICs by their sociodemographic and reproductive characteristics. About 51% of the children were male while only 20% were infants. About 53% were from mothers aged 25 to 34 years old and about 41% of the mothers had no formal education. Nearly one-third of the mothers were not working at the time of the survey. The overall prevalence of SW among children from poor households was 5.3% compared with 4.2% among those from non-poor households. The prevalence of SW was consistently higher among children from poor households compared with those from non-poor households across all the background characteristics considered in this study. Magnitude and variations in poverty inequality in severe wasting. In Figs 1 and 2, we show the risk differences quantifying the level of inequality between children from poor and non-poor households across the 51 LMICs included in this study. Of the 51 countries, 21 showed statistically significant pro-poor inequality (i.e. SW was more prevalent among children from poor households). Only three countries showed statistically significant pro-non-poor inequality (i.e. SW was more prevalent among children from non-poor households), while 27 countries showed no statistically significant inequality. As illustrated in Fig 1, in Eastern Africa the risk difference was largest for Mozambique (15.03 per 1000 children) and lowest for Malawi (-2.51). In Middle Africa, the largest risk difference was found in Cameroun (22.77) and the smallest in Chad (-8.69). In Western Africa, the largest pro-poor difference was in Nigeria (30.71) and the lowest in Gambia (-9.51). In South-Eastern Asia, the difference was pro-poor in both countries studied. Statistically significant pro-poor inequality was found in five of the nine countries in Eastern Africa, three of the six countries in Middle Africa, and two countries in Southern Africa. In Western Africa, three of the 13 countries showed statistically significant pro-poor inequality, as did three countries in Southern Asia and the two countries studied in South-Eastern Asia. Also, statistically significant pro-non-poor inequality was found in Chad in Middle Africa, Egypt in the Northern African region, and Tajikistan in Central Asia. [Table 2. Summary of pooled sample characteristics of the studied children in 51 LMICs; columns: characteristics, weighted n, weighted %, poor (%), weighted SW (%) among poor, weighted SW (%) among non-poor.] Relationship between prevalence of severe wasting and magnitude of poverty inequality. 
Fig 3 shows the relationship between the prevalence of SW and the magnitude of inequality for each of the 51 countries in this study. We categorized the 51 countries into four distinct categories based on the level of SW (low/high) and the direction of inequality: 1. High severe wasting and high pro-poor inequality, such as Nigeria and Timor-Leste. 2. High severe wasting and high pro-non-poor inequality, such as Chad and Egypt. 3. Low severe wasting and high pro-poor inequality, such as Uganda and Namibia. 4. Low severe wasting and high pro-non-poor inequality, such as Tajikistan. Decomposition of socioeconomic inequality in the prevalence of severe wasting. In Fig 4, we show the detailed decomposition of the part of the inequality that was caused by compositional effects of the determinants of SW among under-five children. Only 20 countries were identified as having statistically significant differences vis-à-vis the distribution of SW by poor-non-poor inequality. Across the countries, there were variations in the effects of the factors associated with wealth inequalities. For the full details of the decomposition analysis, see S1 Table. In Fig 4, the values in the boxes represent the percentage gap (the difference between the compositional 'explained' components and the structural 'unexplained' components) in the influence of the variables on the poor-non-poor gap in each country. Positive values in the boxes signify that the compositional 'explained' components exceeded the structural 'unexplained' components, while negative values show the reverse. For instance, the -871% for neighbourhood socioeconomic disadvantage in Lesotho showed that there was wide variation in the contribution of neighbourhood socioeconomic indicators to the distribution of SW in Lesotho vis-à-vis the unexplained components of the poor-non-poor inequalities in SW. On average, neighbourhood socioeconomic disadvantage and location of residence were the most important factors in most countries. In Senegal, the largest contributor to the socioeconomic inequality in the prevalence of SW was neighbourhood socioeconomic disadvantage, followed by location of residence, maternal age and access to media. Maternal age and media access narrowed the inequality in the development of SW between children from non-poor and poor mothers in most countries. In India, birth interval and birth order contributed most to SW. In Namibia, maternal age, birth weight and access to media contributed most. The sex and age of the child, marital status and source of drinking water did not show any significant contribution to socioeconomic inequality in the development of SW in any of the 20 countries identified as having significant compositional differences. The highest contributors to the inequality in Timor-Leste were toilet type, neighbourhood socioeconomic status, media access, maternal education and place of residence. Discussion Severe wasting currently affects millions of children across most LMICs, and the burden has persisted despite the attention it has attracted over the years. This protracted and precarious nutritional outcome among under-five children motivated this study. Using pooled DHS data from 51 LMICs, we identified the pattern of SW among under-five children and the contextual and compositional factors associated with its socioeconomic inequality. In all, our findings showed that children from non-poor households had a lower likelihood of SW. This is consistent with previous reports [8,9,46]. 
We found wide variations in the prevalence of SW among children from poor and non-poor households across the studied countries. The prevalence of SW among children from poor and non-poor households ranged from 0.1% in Guatemala to 12.3% in Timor-Leste and from 0.1% in Guatemala to 8.4% in Timor-Leste, respectively. It is worth noting that about 53% of their mothers were of active childbearing age (25-34 years), nearly a third had no formal education, about 30% were employed at the time of the survey, and two-thirds resided in rural areas. Each of these factors reinforces poor economic capability. Besides, we found a higher prevalence of SW among children from neighbourhoods with the highest socioeconomic disadvantage, irrespective of whether the children were from poor households or not. Our analysis revealed significant and wide differentials in the poor versus non-poor gap across various determinants of SW. Our finding is corroborated by earlier studies that reported education, age, media access, birth weight, child sex and place of residence, among others, as factors associated with SW [15,16,25,29,30,[47][48][49][50]. These factors provide a plausible explanation for the variations in the prevalence of SW among children from poor and non-poor households. The prevalence of SW was consistently higher among children from poor households compared with those from non-poor households across all the background characteristics considered in this study. We also found disparities in the prevalence of SW by sex and age, with infants and male children at higher risk of SW. We found good evidence of inter-country differences in the risk difference in the distribution of SW between children from poor and non-poor households. The analysis of the risk difference in SW between children from poor and non-poor households in each country revealed otherwise obscured variations in these differences. The largest disparity was in Nigeria, where about 30 more children per 1000 from poor households had SW compared with children from non-poor households. Overall, we found a risk difference of about 6 children per 1000 in SW between children from poor and non-poor households. This finding suggests a relationship between poverty and SW: children from poor households have a higher likelihood of developing SW than children from non-poor households. In general, older mothers, higher maternal education, access to media, and improved sources of drinking water and toilet types were associated with a lower risk of SW. Also, children with at least an "average" birth weight, with preceding birth intervals of over 3 years, and of higher birth order had a lower risk of SW. In the majority of the countries, the prevalence of SW was higher among children from poor households than among those from non-poor households, with the exception of the pro-non-poor countries (Egypt, Chad and Tajikistan). We had hypothesised that nutritional outcomes would be worse among children from poor households than among those from non-poor households. However, our findings proved otherwise in three of the countries. This finding is of important concern. A literature check showed that Chad failed in its drive to achieve the millennium development goals on malnutrition [51]. This was partly attributable to barriers to optimal feeding practices [52]. Also, Chad ranked among the lowest on the Global Hunger Index (a combination of wasting, stunting, undernourishment, and under-five mortality) [52,53]. Besides, Mcnamara et al. 
had noted that "interactions between food security and local knowledge negotiated along multiple axes of power" including political and economic systems, health beliefs and food taboos which influence household nutrition in Chad [54]. For Tajikistan, a country with the largest share of remittances to GDP in the world has very slow progress in halting its high levels of child malnutrition [55]. Coupled with migration [55], the country has been unable to match her vast poverty reduction from 83% in 2000 to 30% in 2016 [56] and with a projected fall to 26% by 2019 [57] to a significant reduction in child malnutrition. In Egypt, inadequate dietary intake as a result of poor infant and young child feeding practices birthed the reported consistent decline in exclusive breastfeeding from 34% in 2005 to 13% in 2014, food insecurity, unbalanced diet, and "poor dietary habits, lifestyle and lack of nutritional awareness across the population, as opposed to issues of food availability" [58] as well as poor environmental conditions with only a third having improved toilet types [58]. These factors might have put the children from non-poor households at higher risk of severe wasting in the 3 countries. Pro-poor inequality was more prominent in Eastern, Middle, Southern, Western Africa, Southern Asia and in the Caribbean than in other regions. The overall pro-poor inequalities across the studied children is a pointer that due attention has not been paid to wealth inequalities in child nutrition across the world. Therefore, there is a need to design malnutrition intervention(s) programmes with a focus on wealth-related inequalities if the problem of SW worldwide is to be tackled successfully. The countries that showed low yet significant pro-poor inequality were Cameroun, Lesotho, Ghana, Burundi, Haiti, Kenya, Zimbabwe, Uganda, Senegal, DRC and Mozambique while countries such as Pakistan, Ethiopia, Bangladesh, Mali, Niger, India, Nigeria, and Timor-Leste had high SW and high pro-poor inequality. Also, Tajikistan had low but pro-non-poor inequality whereas Chad and Egypt had high SW prevalence and high pro-non-poor inequality. It may be necessary for these countries to learn what works and what does not work in other countries that do not have high wealth inequalities to attain the SDG on health for all. It is striking that SW is more likely among children born to currently married and employed women as of survey time. The decomposition analysis to understand the factors that contribute to poverty inequality in the prevalence of SW by countries and to identify the relative gap between poor and nonpoor households showed that the contributions of the compositional 'explained' and structural 'unexplained' components varied across countries. Previous studies reported that malnutrition does not necessarily affect growth inequality in under-five children in some countries [46]. This is a pointer that other compositional effects contribute to SW inequalities. Compositional effects, majorly from neighbourhood socioeconomic status (SES) disadvantage, birth interval, birth order, Media access, maternal education, birth weight and maternal age were responsible for most of the inequality in SW between the children from poor and non-poor households. These compositional factors were most noticeable in Lesotho, Namibia, Kenya, Zimbabwe, Cameroun, Niger, Nigeria and India. 
However, in Lesotho and India, the structural effects were attributable to most of the socioeconomic inequality in SW between the children of poor and non-poor households. In India, birth interval and birth order were the major effects and they contributed to the compositional and structural components respectively in the country. In our analysis, Timor-Leste is an outlier at both the prevalence of severe wasting and in the decomposition analysis. Our finding is in tandem with earlier reports that Timor-Leste's under-five wasting prevalence was 11%, higher than 9% average in the developing countries [59]. This could be ascribed to the country's poor nutritional intakes as only 50% of infants had exclusive breastfeeding and a high burden of malnutrition among its adult population [59]. The decomposition analysis showed that the greatest contributors to pro-poor inequalities in severe wasting in Timor-Leste were poor media access, low birth weight, low maternal education, unimproved toilet type, residing in rural areas and neighbourhood socioeconomic disadvantage. Implementing necessary interventions with focus on the highlighted factors will help bridge the socioeconomic inequality gap and also reduce the prevalence of severe wasting in Timor-Leste. Neighbourhood SES disadvantage was associated with a high prevalence of SW in all the countries. Other major contributors to the inequality effects are media access, maternal age and parental education. This is consistent with reports from local, national and international studies on the effect of socio-economic status on nutritional outcomes among under-five children [46,[60][61][62][63]. The role of the media in nutrition cannot be over-emphasized. Access to media through television, radio or newspaper is very vital to avail the mothers the up-to-date information that can be useful in enhancing child nutrition. Access to media reflects the increasing recognition that there is a web of factors that influence health interventions including child nutrition. A child whose mother has better education, exposure, finance, and access to media has a lower likelihood of having SW. To reduce the disparity among poor and nonpoor households in access to quality information and health education, it may be necessary to widen child nutrition programme, by engaging healthcare workers to facilitate education on the importance of good nutrition as well as consequences of poor nutrition. Such education intervention might be in the form of door to door activities and peer and social network mobilization. The importance of maternal education in reducing the inequalities in SW should also be given prominence. Improving women education has been advocated both locally and globally as a channel of enhancing child health outcomes, especially in LMICs [8,16,18]. We found maternal age as an important contributor to poverty inequality in SW distribution with higher risk among children with poor mothers than those of non-poor mothers. This might have been affected by the societal values and disapprovals associated with childbearing outside marriage [40]. Such may negatively affect the type of support and help offered to mothers and their children. A special intervention focussing on mothers with no education should be put in place so that the poverty-related inequalities in the distribution of SW can be eliminated. 
Study limitations and strengths The variations in the compositional and structural effects of the factors associated with poverty inequality in SW across the countries showed that different factors are specific to each country. Some of these factors, such as economic and political instability, war, famine, conflict and climate change, are outside the scope of the current study. This is one of our study limitations. Also, Blinder-Oaxaca decomposition does not address causality but rather quantifies contributions of associated factors to inequalities. Nonetheless, our study has strengths. We have used nationally representative data involving over half of a million in 51 countries. Our findings are generalizable in all the countries involved in this study. LMICs should put in place multi-sectoral country-specific intervention to ease the burden of SW. This intervention is very important as the cultural and social barriers faced by different population sub-groups can adversely affect health outcomes with dire consequences for their health, which may further perpetuate their disproportionate levels of poverty and lead to cycles of poverty [2]. Conclusion This study identified a wide gap between the propensity of children from poor and non-poor households to develop severe wasting. We decomposed the determinants of this crucial health outcome into two groups based on the wealth quintiles of the households from which the children come from. While different determinants are specific to different countries both in the compositional and structural components, some determinants are specific to certain neighbourhoods. Neighbourhood socioeconomic disadvantage, media access, as well as maternal age and maternal educational attainment created widest gaps in the inequalities between the children from poor and non-poor households in developing SW. Policy and program implications Poverty, the principal cause of malnutrition must be tackled headlong, especially in the propoor countries. There is a need for a policy on education for the populace, especially for the women, as well as on the reduction of unemployment and enhancement of means of livelihoods. Combating poverty inequality in the development of severe wasting is a war that could only be won if confronted with multi-sectoral and country-specific interventions in low-and middle-income countries with considerations for the factors identified in this study. An efficient and effective severe wasting prevention strategies will aid healthy living, lower opportunity infections and reduce childhood mortalities and thereby contribute to the attainment of the SDG 3. There are needs for the stakeholders and government of the countries with high pro-poor inequality and high prevalence of severe wasting to design policies and programs aimed at simultaneously lowering the occurrence of severe wasting and reducing socioeconomic inequalities among children from poor and non-poor households. These countries may need to understudy what has been done in countries with lower prevalence and low inequalities. Whereas the countries with high rates of severe wasting and high pro-non-poor inequalities should formulate and implement policies aimed at lowering the prevalence while necessary education on children diets should be in place. Also, there are needs for policies and programs targeted at reducing pro-poor inequalities in the countries with high pro-poor inequality but low prevalence of severe wasting. 
There is also a need for countries with low severe wasting and high pro-non-poor inequality to develop policies that encourage households in the richer wealth quintiles to adopt better feeding habits for under-five children. Implications for future research. While this study is a good start in identifying factors that contribute to socioeconomic inequalities in severe wasting, there is a need for further dialogue and research on the social and cultural issues that may be associated with severe wasting. A qualitative study may help elucidate these. Besides, it may be necessary to study what is being done right in countries with a low prevalence of severe wasting and low risk differences, so that the lessons learnt can be adopted in countries with a high prevalence of severe wasting and high risk differences. Also, there is a need to research the factors that contributed to pro-non-poor inequalities in severe wasting in Chad, Egypt and Tajikistan. Supporting information S1
8,946
2020-11-03T00:00:00.000
[ "Economics", "Medicine" ]
Multifocal stimulation of the cerebro-cerebellar loop during the acquisition of a novel motor skill Transcranial direct current stimulation (tDCS)-based interventions for augmenting motor learning are gaining interest in systems neuroscience and clinical research. Current approaches focus largely on monofocal motorcortical stimulation. Innovative stimulation protocols, accounting for motor learning related brain network interactions also, may further enhance effect sizes. Here, we tested different stimulation approaches targeting the cerebro-cerebellar loop. Forty young, healthy participants trained a fine motor skill with concurrent tDCS in four sessions over two days, testing the following conditions: (1) monofocal motorcortical, (2) sham, (3) monofocal cerebellar, or (4) sequential multifocal motorcortico-cerebellar stimulation in a double-blind, parallel design. Skill retention was assessed after circa 10 and 20 days. Furthermore, potential underlying mechanisms were studied, applying paired-pulse transcranial magnetic stimulation and multimodal magnetic resonance imaging-based techniques. Multisession motorcortical stimulation facilitated skill acquisition, when compared with sham. The data failed to reveal beneficial effects of monofocal cerebellar or additive effects of sequential multifocal motorcortico-cerebellar stimulation. Multimodal multiple linear regression modelling identified baseline task performance and structural integrity of the bilateral superior cerebellar peduncle as the most influential predictors for training success. Multisession application of motorcortical tDCS in several daily sessions may further boost motor training efficiency. This has potential implications for future rehabilitation trials. To evaluate if successive sessions of monofocal M1 stimulation led to additive effects, we contrasted (subtraction) the individual learning trajectories of the monofocal M1 group with the mean trajectory of the sham group. The analysis indicated a significant SESSION effect (χ 2 (3) = 9.67, p = 0.022), suggesting additive effects of successive monofocal M1 stimulation sessions. This finding was further strengthened by a significant (p = 0.024) post hoc contrast comparing the last session on day 2 (D2S2) to the first session on day 1 (D1S1). Please see also Fig. 1b. Emerging features in the analyses of temporal components were that online learning in D1S1 was significantly larger than in the other training sessions-contrasts to: D1S2 p = 0.041, D2S1 p = 0.0032, D2S2 p = 0.013. Furthermore, offline learning overnight was significantly larger than within day 1 (p = 0.036). Please see also Table 1 for full analysis of temporal components. RQ2: Multisession cerebellar stimulation. We chose the left CB as target of interest with the aim to modulate cerebellar representations of the ipsilateral (left) training hand. The learning trajectory of the monofocal CB group was compared with the sham group. Statistical analysis revealed a significant effect of BLOCK (χ 2 (23) = 374.52, p < 0.001), but not of CONDITION (χ 2 (1) = 0.03, p = 0.86) or CONDITION × BLOCK interaction (χ 2 (23) = 18.19, p = 0.75), demonstrating manifest skill learning, but no stimulation-associated effects on the training phase. Furthermore, motor training (SESSION: χ 2 (4) = 19.58, p < 0.001), but not stimulation (CONDI-TION: χ 2 (1) = 0.01, p = 0.92) had a positive effect on simple task performance, see Supplementary Table S1. 
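The chi-square statistics reported for BLOCK, CONDITION and their interaction are consistent with likelihood-ratio tests between nested mixed-effects models; the sketch below illustrates one plausible way such tests could be set up, with a random intercept per participant. This is an assumption-laden illustration, not the authors' analysis code, and the column names ('score', 'condition', 'block', 'participant') are hypothetical.

```python
# Illustrative sketch (an assumption, not the authors' analysis code):
# likelihood-ratio tests for BLOCK, CONDITION and CONDITION x BLOCK in a
# linear mixed model with a random intercept per participant.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: one row per participant x block.
data = pd.read_csv("training_blocks.csv")

def fit_ml(formula: str):
    """Random-intercept mixed model fitted by maximum likelihood (needed for LRTs)."""
    return smf.mixedlm(formula, data, groups=data["participant"]).fit(reml=False)

def lr_test(full, reduced):
    """Chi-square likelihood-ratio test between two nested mixed models."""
    lr = 2 * (full.llf - reduced.llf)
    df = len(full.fe_params) - len(reduced.fe_params)
    return lr, df, stats.chi2.sf(lr, df)

full       = fit_ml("score ~ C(condition) * C(block)")
additive   = fit_ml("score ~ C(condition) + C(block)")
block_only = fit_ml("score ~ C(block)")
cond_only  = fit_ml("score ~ C(condition)")

print("CONDITION x BLOCK:", lr_test(full, additive))
print("CONDITION:", lr_test(additive, block_only))
print("BLOCK:", lr_test(additive, cond_only))
```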
The Analysis of temporal components of learning revealed a significantly larger online learning in D1S1 than in the first session on day 2(D2S1) (p = 0.016). Moreover, overnight offline learning was significantly larger than offline learning between the sessions on day 2 (p = 0.0098), see also Table 1. RQ3: Sequential multifocal motorcortico-cerebellar stimulation. To assess for potential beneficial additive effects of sequential multifocal stimulation, we compared the multifocal M1-CB group with the monofocal M1 group. The analysis demonstrated a significant effect of BLOCK (χ 2 (23) = 439.89, p < 0.001), but not of CONDITION (χ 2 (1) = 0.65, p = 0.42) or a CONDITION × BLOCK interaction (χ 2 (23) = 26.27, p = 0.29), and hereby was not supportive of the hypothesis of additive effects of multifocal stimulation. Please see also Fig. 3a. Subsequently, we further investigated, if participants in the multifocal M1-CB group presented reduced learning in sessions in which they received cerebellar stimulation (D1S2 and D2S2) by comparing the slope of the learning curves in the delimited training sessions with the monofocal M1 group, please see Fig. 3b. Statistical analysis revealed no significant CONDITION × SESSION interaction (χ 2 (3) = 1.60, p = 0.66), thus rejecting this hypothesis. Additionally, motor training (SESSION: χ 2 (4) = 46.37, p < 0.001), but not stimulation (CONDITION: χ 2 (1) = 1.84, p = 0.18) enhanced simple task performance, for post hoc testing see Supplementary Table S1. Analysis of temporal components identified greater online learning in D1S1 in comparison with the other sessions as significant apparent feature-contrast to: D1S2 p = 0.024, D2S1 p = 0.012, D2S2 p = 0.0023. Please see also Table 1. Post hoc testing indicated a significant group difference for the multifocal M1-CB to sham (p = 0.0078) and the multifocal M1-CB to monofocal CB (p = 0.013) contrasts, suggesting pronounced inhibition throughout the course of learning in the multifocal stimulation group. Auxiliary analysis did not reveal a significant modulation The analysis of the modulation of intracortical facilitation at rest (ICF rest ) compared to baseline indicated a strong trend for SESSION (χ 2 (1) = 3.81, p = 0.051), pointing towards a reduction of facilitation after the follow-up session. There was no significant effect of CONDITION (χ 2 (3) = 2.02, p = 0.57) or of a CONDITION × SESSION interaction (χ 2 (3) = 5.47, p = 0.14), please see Fig. 4b. Auxiliary analysis did not reveal a significant modulation from baseline for all assessed time points (one sample t-test, Bonferroni-corrected: for all comparisons p > 0.05). Spearman's rank correlations revealed no significant associations between training gain (r s = 0.19 p = 0.26) or retention at FU20 (r s = 0.18, p = 0.28) with the modulation of ICF rest . Table 1. Temporal components of learning. (a) Analysis of online learning, operationalised as difference between the last and the first block of a given session. (b) Analysis of offline learning defined as difference between the first block of the subsequent session and the last block of the preceding session. M1, monofocal M1 stimulation group; Sham, sham group; CB, monofocal cerebellar stimulation group; M1-CB, multifocal motorcortical-cerebellar stimulation group; M, mean; SEM, standard error of the mean. * depicts p < 0.05. RQ5: Prediction of training gain. 
In addition to behavioural performance parameters and ppTMS surrogates of GABAergic and glutamatergic neurotransmission 18 , we assessed the potential of MRI-derived parameters to predict training gain. Specifically, (1) we computed the mean fractional anisotropy (FA) of the bilateral superior cerebellar peduncle (SCP) with diffusion-weighted MRI to characterize the microstructural integrity of the cerebellothalamic fibres 27 . Furthermore, based on prior research linking resting-state functional connectivity (FC) in the cerebello-cortical loop with the gain in motor sequence learning 28 , we computed (2) region of interest (ROI) to ROI based FC between the left cerebellum and the right M1 employing the resting-state fMRI (rs-fMRI) data; for an overview of the selected ROIs please see Fig. 5a. The results are depicted in Fig. 5b,c. Subsequently, we applied a stepwise multiple linear regression analysis to find the best fitting model to predict training gain, applying a backward selection procedure (for further details please see the methods section below). The full model included the following predictors: (1) BASELINE, the performance in the sequential finger tapping task (SFTT) at baseline, (2) SICI rest at preD1, (3) ICF rest at preD1, (4) SICI move modulation at preD1, (5) mean FA within the SCP mask, and (6) FC between the left cerebellum and right M1. The final model was significant (F(2,37) = 4.08, p = 0.025), but explained only a limited proportion of variance (adjusted R 2 = 0.14). The remaining, significant predictors were BASELINE (p = 0.035) and FA (p = 0.039). For the effect plot see Fig. 6. Subsequently, using the same statistical approach, we assessed whether it was also possible to predict responsiveness towards monofocal M1 tDCS by contrasting the individual training gain of the M1 group with the mean training gain of the sham group. The approach failed to predict this surrogate of responsiveness towards M1 tDCS (the stepAIC approach identified the intercept-only model as the winning model) and thus did not enable us to further differentiate responders from non-responders to M1 stimulation. [Figure caption: Values are related to baseline by computing the ratio between postD2 and preD1, and between postFU20 and preD1, respectively. (a) SICI rest related to baseline (preD1); * depicts p < 0.05 for post hoc mean-separation testing across the significant main effect CONDITION, applying a Tukey adjustment. The SICI rest data were log-transformed for statistical analysis to meet the normality-of-residuals assumption. (b) ICF rest related to baseline (preD1). (c) SICI move modulation, expressed as the absolute value of the delta between the 90% and 20% of reaction time (RT) data points, related to baseline (preD1).] Discussion The main findings of the current study were: (1) anodal tDCS applied to M1 enhances the acquisition of a novel motor skill, and its repeated application results in additive effects; (2) the study failed to reveal any stimulation-associated effects of monofocal cerebellar stimulation on motor skill learning; (3) multifocal stimulation of the cerebro-cerebellar loop did not lead to additional additive effects, when studying a cohort of young healthy participants. Our finding of beneficial effects of monofocal M1 stimulation (RQ1) extends the available literature investigating the potential of tDCS-based interventions to augment motor learning; for a review please see Buch and colleagues 3 . 
However, the robustness of the approach has been recently called into question 3,9 . One matter of debate are the potential underlying mechanisms of action, such as the susceptible temporal components of learning. In this regard, Reis and colleagues indicated in their seminal work that the beneficial effect of anodal M1 tDCS was mainly mediated by an enhancement of offline effects 1 , when studying a sequential visual isometric pinch task. As a potential underlying mechanism the authors propose a delayed enhancement of learning-related protein synthesis rather than immediate LTP-like effects 1 . Our data failed to reveal stimulation-associated effects on decomposable temporal components of learning, when studying a different learning task (SFTT). This discrepancy might be explained by task-specific effects 17 . We speculate that probably both discussed mechanisms have been engaged in mediating the tDCS-related effects in the present experiment. A further challenge of the field is to develop novel strategies, which might improve protocol robustness. In this regard, the present study design extends prior work by testing a twice daily application architecture. The results indicate a steady increase in group difference up to the last training session pointing towards potential additive effects of the repeated tDCS applications. We speculate that these additive effects were potentially mediated by conjointly engaging fast and slow learning processes and the respective underlying brain plasticity 29 . It is of note that the optimal timing of protocol application might be of crucial importance. Our rational to choose a 90 min inter-session-interval for the within-day sessions was to stimulate in a phase of still anticipated enduring effects 30 , but avoiding too short inter-session-intervals, which have been linked to unfavourable homeostatic interactions 31 . Further systematic investigations on optimal session architecture of spaced application protocols 32 in combination with motor learning constitute a promising direction for future research. In a second optimization approach (RQ2), we evaluated potential effects of modulating a different key area of the motor learning network by means of monofocal cerebellar stimulation. Our data failed to reveal stimulationassociated effects for anodal cerebellar tDCS studying the SFTT. This is in contrast to prior proof-of-principle work indicating positive effects on motor learning, when studying visuomotor adaptation 33 , implicit motor learning 34 , or motor skill learning 12,13 . Several reasons may explain the null results. At first, the SFTT seems to be less dependent on cerebellar resources as classical cerebellum-dependent learning tasks, as for instances motor adaptation paradigms. The learning and organization of novel sequential finger movements relies also on other brain structures, such as M1 7 , the striatum 35 , or the supplementary motor area 36 . In addition to cerebellumdependent sensory prediction error-based learning, other learning strategies are likely of importance for successful skill acquisition in the SFTT. Moreover, recent follow-up studies have raised questions on the reliability of cerebellar tDCS protocols to enhance motor learning 37,38 . 
Reasons discussed for the limited robustness of monofocal cerebellar tDCS protocols are the high susceptibility towards variations in task parameters 37 , the large inter-individual differences in the lobule-specific distribution of the applied electric field 39 , and the individual brain-derived neurotrophic factor genotype 40 . One future approach to potentially overcome this current limitation of monofocal cerebellar tDCS might be protocol personalization based on computational modelling approaches, such as electric field dosimetry 41 . Importantly, alternative non-invasive brain stimulation techniques, such as cerebellar Theta Burst Stimulation, have been shown to enhance visuo-motor adaptation in healthy subjects 42 and to benefit gait and balance functions in chronic stroke survivors 43 , and should be considered as alternative cerebellar neuromodulation strategies. In a third optimization approach (RQ3), we tested a multifocal motorcortico-cerebellar stimulation protocol applied concurrently with the motor training. Our rationale for testing the daily application sequence of M1 stimulation followed by cerebellar stimulation, spaced by a 90 min inter-session interval, was based on our recent data indicating enhancement of mainly online learning components by M1 tDCS 2,24 and of offline components by cerebellar tDCS 13 . The 90 min inter-session interval was chosen to apply the cerebellar stimulation in a time window of anticipated enduring M1-stimulation-induced effects and to avoid potentially interfering homeostatic interactions (as discussed above). We did not choose a simultaneous dual-site stimulation approach because, at the time of study implementation, the effects of electric field interference between concurrent conventional M1 and cerebellar tDCS were unpredictable. The present data failed to reveal an additional benefit of multifocal stimulation of the cerebro-cerebellar loop, when compared with monofocal M1 stimulation, on learning a novel motor skill based on the sequential execution of fine finger movements in a cohort of young healthy participants. Visual inspection indicated that, in fact, the largest difference in variability between groups occurred in the first training session on day one, in which both interventional groups received comparable anodal M1 tDCS. However, the response to multifocal M1-CB stimulation protocols might be different in conditions with pathologically imbalanced cerebro-cerebellar interactions, such as after a stroke 44 . A further approach could be to target the cerebro-cerebellar loop in an inverted order of stimulation targets: CB first and the contralateral M1 second. This approach could be particularly promising when studying adaptation to a novel visuomotor transformation, for which CB stimulation has been shown to enhance movement error reduction during the adaptation phase and M1 stimulation has been shown to increase retention of the newly learnt visuomotor transformation 33 . This opposite functional dissociation in comparison to our prior work studying the acquisition of novel sequential finger movements 2,13 points towards the importance of considering task-specific effects. The discussed alternative cerebello-cerebral tDCS protocol has been tested in first proof-of-principle work and has shown beneficial effects on upper limb tremor, hypermetria, and long-latency stretch reflexes in patients with cerebellar ataxia 45,46 . 
Another possibility would be to further implement simultaneous multifocal tDCS stimulation approaches, as tested in first studies recruiting patients with psychiatric disorders 47,48 . Both alternative stimulation approaches were outside the scope of our current research work, however should be further addressed in future. The analyses of non-stimulation-associated features of learning revealed that motor sequence learning transferred to improved motor performance, when tested via an untrained motor sequence. This points towards a partial generalization of the acquired motor memory trace. Secondly, largest online learning occurred in the first session on day one, which may be explained by a partial saturation of LTP-like and ceiling effects. Lastly, overnight offline learning, when compared with within day offline learning tended to be of larger magnitude. However, the present study design does not allow to disentangle, if this was due to sleep-dependent consolidation effects 49 or to simple passage of longer time (circa 90 min versus 24 h). Regarding the ppTMS assessments, the data suggested a pronounced SICI rest (GABA A -ergic) after task performance in the training and follow-up sessions in the multifocal stimulation group. At first sight, this seems to be unexpected, when considering previous findings. Based on the seminal work from Galea and colleagues 50 , the conventional view is that anodal cerebellar tDCS increases cerebello-brain inhibition (CBI) and stronger CBI has been linked to reduced SICI potentially via a reduced thalamocortical facilitation of inhibitory interneurons in M1 51 . We speculate that the preceding anodal M1 tDCS session, based on prior work 52 , might have reduced SICI earlier in the daily course of the experiment and hereby may have primed a greater susceptibility for inducing inhibitory net effects via homeostatic-like interactions 53 in the successive training session. This pattern of inhibitory balance might have been re-established by subsequent task performance at the follow-up. It is of note that in well functioning young participants this slight change in inhibitory net balance was not associated with measurable behavioural consequences. For ICF rest , the data pointed towards a reduced facilitation after task performance at the follow-up sessions. This finding might be explained by a saturation of glutamatergic plasticity mechanisms in an advanced learning stage 54,55 . The analysis of SICI move modulation suggested a trend for more modulation of GABA A -ergic circuits during movement preparation in the immediate post training phase, when compared with the evaluation after the follow-up sessions. This might be interpreted as a state of increased plasticity in GABA A -ergic circuits shortly after the motor training 56,57 . Visual inspection suggests that this tendency seemed pronounced in the M1 tDCS group showing a noteworthy dispersion of data at postD2. However, no clear association with behaviour (training gain) was present. It is important to note, that when applying other tasks and stimulation paradigms no effect of training phase 58 or opposite tendencies 59 on SICI move modulation have been reported. In future work, it would be interesting to also study additional aspects of neurotransmission with TMS-based techniques for instance assessing long-interval intercortical inhibition (LICI-GABA B -ergic) or short-latency afferent inhibition (SAI-acetylcholinergic) to disentangle potential other underlying mechanisms of tDCS and motor learning 18 . 
Lastly, multiple linear regression modelling identified baseline task performance and mean FA in the bilateral SCP as the most influential predictors of the dependent variable, training gain, with the final model explaining a limited proportion of variance (circa 14%). Baseline performance was negatively associated with the training gain, indicating that lower baseline performance was related to larger improvements in skill during the training phase. This could be explained by increased exploratory behaviour in participants with lower baseline performance. Indeed, previous research has linked the amount of motor variability with training success 23 . To further substantiate this argument, we performed an exploratory analysis comparing the evolution of the coefficient of variation (CV) across the four training sessions for the primary outcome, the number of correctly performed sequences, splitting the participants into low and high baseline performer groups via a median split. The analysis revealed a group difference (GROUP: χ 2 (1) = 17.10, p < 0.001), with a higher CV in the low performer group, supporting the argument discussed above. An alternative explanation of the negative association between baseline and training gain is potentially emerging ceiling effects in the good baseline performer group. However, the continued tendency to improve at the follow-up sessions argues against this explanation. Secondly, mean FA in the bilateral SCP was negatively associated with training gain. One possible explanation for this finding is that a pronounced cerebello-cortical output tract constitutes a structural correlate of relatively exaggerated error processing within the natural variability of a cohort of well-functioning healthy individuals. Yet, this interpretation remains highly speculative. It is of note that, contrary to the argument above, higher FA values in the white matter adjacent to the dentate nuclei have been positively associated with the magnitude of motor skill learning in earlier work 60 . Overall, the assessed multimodal regression modelling approach indicated some potential for predicting motor training success and may complement available unimodal approaches 61,62 . There are some limitations of the current work worth discussing. Conventional tDCS protocols lack spatial focality, which may have led to stimulation of adjacent, non-target brain areas. However, the functional consequences might be mitigated by concurrent task application, as task performance is assumed to partially channel the activation towards functionally relevant brain circuits. Secondly, our study failed to achieve the desired level of blinding. At the whole-group level, the participants guessed the nature of the applied tDCS (active versus sham) better than random chance (exact binomial test, p < 0.05). However, the sham group identified the correct stimulation type at chance level (exact binomial test, p = 1.00). Thirdly, the sample size of the current study is rather small; however, obtaining a significant result in a small sample suggests that the reported intervention effect is of a greater magnitude than an equivalent result obtained from studying a larger sample 63 . Conversely, the resulting low power may have hindered the detection of small differences between groups. 
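As a concrete illustration of the backward-selection regression modelling referred to above (RQ5), the sketch below shows a simple AIC-based backward elimination over the six candidate predictors, fitted with ordinary least squares. This is an illustrative outline only, not the original R stepAIC workflow, and the column names are hypothetical stand-ins for the predictors named in the results.

```python
# Illustrative sketch (not the original R stepAIC workflow): backward AIC-based
# predictor selection for the dependent variable 'training_gain'.
import pandas as pd
import statsmodels.formula.api as smf

CANDIDATES = ["baseline_sftt", "sici_rest", "icf_rest",
              "sici_move_mod", "fa_scp", "fc_cb_m1"]

def backward_aic(data: pd.DataFrame, outcome: str, predictors: list):
    """Repeatedly drop the single predictor whose removal lowers AIC the most;
    stop when no removal improves AIC (the intercept-only model is allowed)."""
    current = list(predictors)
    best = smf.ols(f"{outcome} ~ " + " + ".join(current), data).fit()
    while current:
        trials = []
        for p in current:
            rest = [q for q in current if q != p] or ["1"]  # '1' = intercept only
            fit = smf.ols(f"{outcome} ~ " + " + ".join(rest), data).fit()
            trials.append((fit, p))
        candidate, dropped = min(trials, key=lambda t: t[0].aic)
        if candidate.aic >= best.aic:
            break  # no single removal improves AIC; keep the current model
        best = candidate
        current = [q for q in current if q != dropped]
    return best

# Example (hypothetical data frame 'subjects', one row per participant):
# model = backward_aic(subjects, "training_gain", CANDIDATES)
# print(model.summary())
```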
To inform future replication studies testing the effect of the tDCS protocols studied here on motor training, we simulated power curves for increasing sample sizes, see Supplementary Information and Supplementary Fig. S1. For RQ2 and RQ3, for which the applied tDCS protocols did not indicate a significant stimulation (CONDITION) effect in the current study, the simulation indicated that a sample size increase to N = 100 would not critically increase the level of power. Fourthly, as discussed above, alternative multifocal stimulation strategies for the cerebro-cerebellar loop, either sequentially stimulating first CB and then the contralateral M1 or applying both simultaneously, are promising, but they were outside the scope of the current research work and should be addressed in future studies. To conclude, the present study contributes to the available literature indicating the potential of anodal M1 tDCS to enhance motor skill learning and further suggests a benefit of a twice-daily application protocol. The data failed to reveal stimulation-associated effects of monofocal cerebellar tDCS or an additive effect of multifocal cerebro-cerebellar tDCS application when studying a sample of well-functioning, young, healthy participants. However, both approaches should be tested and may have potential in conditions with imbalanced cerebro-cerebellar interactions, such as after a stroke 44 . Methods Participants. Forty young, healthy, right-handed participants were recruited for the study (age 25.93 ± 3.47 years, 23 female). All participants were screened for and did not have any contraindications for non-invasive brain stimulation. The study was conducted in accordance with the Declaration of Helsinki 64 . All participants gave their written informed consent. The study protocol was approved by the local ethics committee of the Medical Association of Hamburg (PV3777). In the present study, we studied electrophysiological mechanisms in a cohort of young, healthy study participants and not a healthcare-related intervention; for that reason, the study was not conducted in the format of, or registered as, a clinical trial. Experimental design. The participants carried out a motor training (for details on the specific task see below) in two daily 20 min sessions on two consecutive days. On each training day, the sessions were separated by a circa 90 min break. Baseline performance was assessed in a baseline block (Base) on day 1 prior to the start of the training phase. Task retention was assessed circa 10 and 20 days after the training phase (FU10 and FU20). TDCS was applied in a double-blind, sham-controlled, parallel design simultaneously with the training sessions. The motor learning sessions were embedded in ppTMS-based assessments before motor training on day 1 (preD1), after motor training on day 2 (postD2), and after the last follow-up session (postFU20). Furthermore, the participants were characterized with multimodal MRI-based neuroimaging before the start of the motor learning protocol or after its completion, in the second case respecting a wash-out period of at least 10 days after FU20 (median: 24 days, min: 10 days, max: 108 days). Please see also Fig. 7a for a depiction of the timeline. Motor learning task. The participants trained a modified version of the sequential finger tapping task 65,66 (SFTT) with their non-dominant, left hand (please see also Fig. 7c).
The training sessions lasted circa 20 min and consisted of seven 90 s blocks (including the intermingled performance probe block, see below) separated by breaks. The follow-up sessions consisted of three blocks of 90 s, also separated by breaks. The instruction was to repeatedly execute a nine-element motor sequence as rapidly and as accurately as possible on a four-button keyboard, with keys 2 to 5 assigned to one finger each, from the index (2) to the little finger (5). To reduce the working memory load, the target sequence was displayed on a screen placed in front of the participants. A dot displayed below the chain of numbers served as a bookmark of the current sequence position, but no feedback on task performance was provided. Different, but complexity-matched (Kolmogorov complexity 67 ), sequences were applied for the baseline and the intermingled blocks serving as performance probes; for further details on the applied sequences, please see Supplementary Table S2. The SFTT was implemented in Presentation software (Neurobehavioral Systems Inc., Berkeley, CA, United States). Transcranial direct current stimulation (tDCS). Anodal tDCS was applied via a DC-stimulator (neuroConn, Ilmenau, Germany) using 5 × 5 cm sponge-covered, conductive rubber, square electrodes soaked in saline solution. The stimulation was applied in a sham-controlled, double-blind, parallel design, administering one of four pseudorandomly assigned conditions (monofocal M1, monofocal cerebellar, multifocal M1-CB, or sham stimulation). The electric field distribution of the above-described cerebellar electrode montage has recently been re-evaluated applying finite element modelling analysis 39 . The analysis suggested that the applied electric field mainly affects lobules Crus I/II, VIIb, VIII, and IX 39 , and thereby reaches areas crucially involved in motor control located in the posterior cerebellar lobe (lobule VIII) 69 . The site-specific dose adjustment (1 mA for M1 and 2 mA for CB) was chosen based on our prior work documenting behavioural effects of both protocols. Furthermore, a higher stimulation intensity was chosen for CB to account for the larger scalp-to-cortex distance 70 and for modelling work suggesting a maximum electric field strength in CB of about half the magnitude of that in M1 when stimulated with the same current intensity 71 . The blinding procedures were carried out by a researcher (blinding assistant) not involved in other study-related assessments, data acquisition, or analysis. The randomization list was kept in a sealed envelope only accessible to study staff executing the stimulation protocols. Unblinding was done after all data were preprocessed and analyzed. Paired-pulse transcranial magnetic stimulation (ppTMS). TMS was used to study short-interval intracortical inhibition (SICI) and intracortical facilitation (ICF) 18,72 . The procedures are described in detail in our prior published work 24,73 . Monophasic pulses were delivered via two Magstim 200² stimulators connected via a BiStim² module and discharged through a figure-of-eight D70 alpha flat coil (Magstim Co Ltd, Whitland, United Kingdom). The coil was placed over the motor hot spot, the position consistently eliciting the largest muscle responses in the first dorsal interosseous (FDI) muscle of the non-dominant, left hand. The coil was oriented so that the handle pointed backwards at approximately a 45 degree angle to the midsagittal line. This resulted in posterior-to-anterior induced currents in the underlying brain tissue. The coil position was kept constant for the further assessments.
The conditioning pulse (CP) intensity was adjusted to 80% of the resting motor threshold (RMT) 74 and the test pulse (TP) intensity to a value that elicited motor evoked potentials (MEPs) of ~ 1 mV peak-to-peak amplitude. TP and CP intensity were readjusted before each session to assess SICI and ICF in the stable range of their respective recruitment curves 18 . SICI was studied at an inter-stimulus interval (ISI) of 3 ms at rest (SICI_rest) and in the premovement state (SICI_move), see below. ICF was assessed at an ISI of 10 ms at rest (ICF_rest). Eighteen trials were recorded per condition, in a random order with an inter-trial jitter of 6 to 8 s for the rest assessments and in a pseudorandom order with an inter-trial jitter of 6 to 10 s for SICI_move. Furthermore, SICI was tested in the premovement phase (SICI_move) of a simple reaction task in the time zones around 20% and 90% of the individual reaction time (RT) 73 . During the simple reaction time task, the participants were asked to perform left index finger abductions in response to a visual cue. The electromyography signal was sampled using disposable surface electrodes placed over the FDI in a belly-tendon montage via a 1902 amplifier (Cambridge Electronic Design Ltd, Milton, United Kingdom) at a sampling rate of 5 kHz and applying a 50 Hz to 1 kHz bandpass filter. (Fig. 7 caption excerpt: further assessments included ppTMS (SICI at rest and during movement preparation, ICF at rest) and multimodal MRI (T1-weighted anatomical, diffusion-weighted, and rs-fMRI gradient-echo EPI images); (c) as the motor learning task, participants executed a modified version of the sequential finger tapping task (SFTT) 65 .) Data processing. Behavioural data of all 40 participants from all time points were acquired and entered the final analysis. Behavioural data were analysed with an in-house script scoring the correctly performed motor sequences averaged per block. Our primary outcome was the number of correctly performed sequences normalized to baseline. Training gain was quantified as the ratio of the last block of D2S2 to the first block of D1S1. Skill retention was determined via a retention index, defined as the ratio of the average number of correctly performed sequences per block of the respective retention session to the last training block in D2S2 13 . Temporal components of learning were operationalised by computing the differences (1) between the last block and the first block of a given training session for online learning and (2) between the first block of the subsequent session and the last block of the preceding session for offline learning 1,75 . TMS data were acquired from 39 participants (in one participant, no stable data could be obtained due to high thresholds). TMS data were analysed with an automated script implemented in Signal software (Cambridge Electronic Design Ltd, Milton, United Kingdom) quantifying the peak-to-peak MEP amplitude in a response window of 20 ms to 50 ms after the TMS pulse. All trials were visually inspected. Trial rejection criteria were: documented failure of proper coil placement; muscle preactivation > 25 µV from baseline for rest trials and > 50 µV for event-related trials in the time window of 100 ms before the TMS pulse; clear preactivation outside the critical window of 100 ms before the TMS pulse; no MEP, defined as a peak-to-peak amplitude < 0.05 mV, for TP-only and ICF trials; and overlap of the MEP with voluntary muscle contraction for the event-related trials.
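The MEP quantification and rejection rules described above can be summarised in a short sketch; the following is a minimal illustration (the original analysis used an automated Signal script), with array layout, units and function names assumed.

```python
# Minimal sketch of the MEP quantification and trial rejection logic described above.
# emg: 1-D EMG trace in mV, fs: sampling rate in Hz (5000 here), t0: sample index of the TMS pulse.
import numpy as np

def mep_peak_to_peak(emg: np.ndarray, fs: float, t0: int) -> float:
    # Response window: 20-50 ms after the TMS pulse
    win = emg[t0 + int(0.020 * fs): t0 + int(0.050 * fs)]
    return float(win.max() - win.min())

def reject_for_preactivation(emg: np.ndarray, fs: float, t0: int, event_related: bool) -> bool:
    # Preactivation in the 100 ms before the pulse: > 25 uV (rest) or > 50 uV (event-related),
    # approximated here as the signal range in that window (an assumption).
    pre = emg[t0 - int(0.100 * fs): t0]
    limit_mv = 0.050 if event_related else 0.025
    return float(pre.max() - pre.min()) > limit_mv

# amplitude = mep_peak_to_peak(trial, fs=5000, t0=pulse_sample)
```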
Amplitudes were averaged per condition and assessment time point. The magnitude of SICI and ICF was expressed relative to the TP-only trials, i.e., as the ratio of the mean conditioned MEP amplitude to the mean TP-only MEP amplitude. SICI_move modulation was expressed relative to the baseline assessment (post: postD2 or postFU20, pre: preD1). Data points of a given subject with fewer than 8 valid trials were excluded from further analysis (this case did not emerge for the SICI_rest and ICF_rest data; 12 out of 120 data points of the SICI_move data were excluded, mainly because of overlap of the MEP with voluntary muscle activation). MRI data were sampled from 35 participants (reasons for not acquiring MRI data were scheduling difficulties for N = 3 participants and MRI contraindications for N = 2 participants). Diffusion-weighted MRI data were analysed by means of the MRtrix3 software (https://www.mrtrix.org/) 76 , the FSL software package 5.0 (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL) and the FreeSurfer software package 6.0 (https://surfer.nmr.mgh.harvard.edu/). Cleaning of the images included denoising, removal of Gibbs ringing artefacts, and correction for head motion and eddy currents. Brains were then skull-stripped, and fractional anisotropy (FA) maps were computed and registered to the Montreal Neurological Institute (MNI) standard space. Subsequently, the superior cerebellar peduncle (SCP) region was defined by using the Bayesian segmentation algorithm, based on a probabilistic atlas of the brainstem, available in FreeSurfer 77 and registered to the MNI standard space. FA values were extracted within this region and their average was finally computed. Rs-fMRI data were preprocessed using the tools in SPM12 (http://www.fil.ion.ucl.ac.uk/spm/) in the following order: spatial realignment for correcting head movement, normalization into the same coordinate frame as the template brain in the MNI standard space, spatial smoothing with a Gaussian kernel of 8 mm full width at half maximum, linear detrending for removing systematic signal drift, regressing out the effects of head movement and non-neuronal fluctuations, and band-pass filtering at 0.01-0.08 Hz for removing physiological noise. From the preprocessed data, signals were extracted as the singular value decomposition of voxel-wise signals for the left cerebellum and the right M1. Functional connectivity between the left cerebellum and the right M1 was estimated by computing the correlation of these signals and converting the correlation coefficient into a normally distributed value using the Fisher transformation. Statistical analysis. The statistical analysis was implemented in R (R Core Team, 2020) 78 . Linear mixed effects models were fitted using the lmer() function of the lme4 package 79 . As random effects, we added intercepts for participants. To address RQ1-2, the respective active stimulation group of interest was compared to the sham group in a pairwise approach. For RQ3, the active stimulation group of interest, multifocal M1-CB stimulation, was compared to the conventional monofocal M1 stimulation group. The models were built up hierarchically using a multilevel approach, starting from the null model (intercept-only model) and subsequently adding CONDITION at the first level, BLOCK (respectively SESSION) at the second level, and the CONDITION × BLOCK (respectively SESSION) interaction at the third level 80 . In the majority of cases, the residuals did not show obvious deviations from normality, defined as a skewness between -2 and 2 81 ; in the other cases (retention monofocal M1 vs. multifocal M1-CB, SICI_rest, training gain, CV of training data) we performed a log-transformation of the dependent variable to meet this assumption. Statistical significance testing was done by applying likelihood ratio tests comparing the full model including the effect in question with the reduced model without the effect in question 82 . The cut-off for statistical significance was set at p < 0.05. For specific post hoc comparisons, we conducted pairwise comparisons of least-squares means (lsmeans() function 83 ), applying a Tukey correction for multiple comparisons. For the ppTMS data, we performed an auxiliary analysis to assess potential modulation from baseline by calculating one-sample t-tests with Bonferroni correction. To assess specific associations between two variables of interest, we calculated Spearman's rank correlations. Multiple linear regression analysis (lm() function) was used to predict the outcome variable training gain. Missing values (6.25% of cases) were imputed with median imputation. To allow direct comparisons of beta coefficients, the predictor variables were converted to z-scores. The final predictive model was determined by stepwise backward selection based on the Akaike information criterion (AIC). The selection process was implemented via the stepAIC() function 84 . The same approach was applied in an attempt to predict responsiveness to anodal M1 tDCS.
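As a rough illustration of the hierarchical model comparison, the sketch below mimics the lme4/likelihood-ratio-test workflow in Python with statsmodels; the formulas, column names and random-intercept structure are assumptions, and the original analysis was carried out in R.

```python
# Sketch of a random-intercept mixed model comparison via a likelihood ratio test
# (rough Python analogue of the lmer()-based procedure described above).
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def lrt(df: pd.DataFrame, full: str, reduced: str, group: str = "subject"):
    # Fit both models with ML (reml=False) so that log-likelihoods are comparable
    m_full = smf.mixedlm(full, df, groups=df[group]).fit(reml=False)
    m_red = smf.mixedlm(reduced, df, groups=df[group]).fit(reml=False)
    chi2 = 2 * (m_full.llf - m_red.llf)
    ddf = len(m_full.fe_params) - len(m_red.fe_params)   # difference in fixed-effect terms
    return chi2, ddf, stats.chi2.sf(chi2, ddf)

# e.g. testing the CONDITION x BLOCK interaction on top of the main-effects model:
# chi2, ddf, p = lrt(data, "correct ~ CONDITION * BLOCK", "correct ~ CONDITION + BLOCK")
```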
8,254.4
2021-01-19T00:00:00.000
[ "Psychology", "Biology" ]
On the growth and zeros of polynomials attached to arithmetic functions In this paper we investigate growth properties and the zero distribution of polynomials attached to arithmetic functions $g$ and $h$, where $g$ is normalized, of moderate growth, and $0<h(n) \leq h(n+1)$. We put $P_0^{g,h}(x)=1$ and \begin{equation*} P_n^{g,h}(x) := \frac{x}{h(n)} \sum_{k=1}^{n} g(k) \, P_{n-k}^{g,h}(x). \end{equation*} As an application we obtain the best known result on the domain of the non-vanishing of the Fourier coefficients of powers of the Dedekind $\eta$-function. Here, $g$ is the sum of divisors and $h$ the identity function. Kostant's result on the representation of simple complex Lie algebras and Han's results on the Nekrasov--Okounkov hook length formula are extended. The polynomials are related to reciprocals of Eisenstein series, Klein's $j$-invariant, and Chebyshev polynomials of the second kind. Here, $q := e^{2\pi i \tau}$, $\operatorname{Im}(\tau) > 0$ and $r \in \mathbb{Z}$. The coefficients are special values of the D'Arcais polynomials $P_n(x)$ [DA13,Ne55,Co74,We06]. It has been recently noticed that the growth and vanishing properties of these polynomials have much in common with properties of other interesting polynomials [HLN19,HN20B]. These include special orthogonal polynomials such as associated Laguerre polynomials and Chebyshev polynomials of the second kind. Also included are polynomials attached to reciprocals of Klein's $j$-invariant and of Eisenstein series [HN20A,HN20C]. In this paper we investigate growth properties and the zero distribution of polynomials attached to arithmetic functions $g$ and $h$, inspired by Rota [KRY09]. This definition includes all the examples mentioned above. Before providing examples and explicit formulas for these polynomials, we give one application for the coefficients of the Dedekind $\eta$-function. Let $g(n) = \sigma(n) := \sum_{d \mid n} d$, $h(n) = \operatorname{id}(n) = n$, and let $a_n(r)$ be defined by (1), the $n$th coefficient of the $r$th power of the Dedekind $\eta$-function. Han [Ha10] observed that the Nekrasov--Okounkov hook length formula [NO06,We06] implies that $a_n(r) \neq 0$ if $r > n^2 - 1$. This improves previous results by Kostant [Ko04]. In [HN20B] we proved that (3) $a_n(r) \neq 0$ holds for $r > \kappa \cdot (n - 1)$, where $\kappa = 15$. Numerical investigations show that $\kappa$ has to be larger than 9.55 (see Table 5). In this paper we prove that (3) is already true for $\kappa = 10.82$. Since the definition of $P_n^{g,h}(x)$ is quite abstract, we provide two examples of families of polynomials, to familiarize the reader with the types of polynomials we are studying. At first, they appear to have nothing in common. Let us start with the Nekrasov--Okounkov hook length formula [NO06]. Let $\eta(\tau)$ be the Dedekind $\eta$-function. Let $\lambda$ be a partition of $n$, written $|\lambda| = n$. By $\mathcal{H}(\lambda)$ we denote the multiset of hook lengths associated with $\lambda$ and by $\mathcal{P}$ the set of all partitions. The Nekrasov--Okounkov hook length formula ([Ha10], Theorem 1.2) states that \begin{equation*} \sum_{\lambda \in \mathcal{P}} q^{|\lambda|} \prod_{h \in \mathcal{H}(\lambda)} \left(1 - \frac{z}{h^2}\right) = \prod_{n=1}^{\infty} (1 - q^n)^{z-1}. \end{equation*} The identity (4) is valid for all $z \in \mathbb{C}$. Note that the $P_n^{\sigma}(x)$ are integer-valued polynomials of degree $n$. From the formula it follows that $(-1)^n P_n^{\sigma}(x) > 0$ for all real $x < -(n^2 + 1)$. The second example is of a more artificial nature, discovered recently [HN20A] when studying the $q$-expansion of the reciprocals of Klein's $j$-invariant and the reciprocals of Eisenstein series [BB05,BK17,HN20C]. Let $j(\tau)$ denote Klein's $j$-invariant. Asai, Kaneko, and Ninomiya [AKN97] proved that the coefficients of the $q$-expansion of $1/j(\tau)$ are non-vanishing and have strictly alternating signs.
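As an illustration of the recursion above, the following minimal sketch (not from the paper) computes $P_n^{g,h}(x)$ symbolically for $g = \sigma$ and $h = \operatorname{id}$ and checks the well-known values of the Ramanujan $\tau$-function via $\tau(n) = P_{n-1}^{\sigma}(-24)$, a relation recalled later in the text.

```python
# Numerical sketch of the recursion P_0 = 1, P_n = (x / h(n)) * sum_{k=1}^{n} g(k) P_{n-k}.
# With g = sigma (sum of divisors) and h = id we obtain the D'Arcais polynomials;
# evaluating at x = -24 gives tau(2) = -24, tau(3) = 252, tau(4) = -1472.
from sympy import symbols, Integer, expand, divisor_sigma

x = symbols("x")

def polys(n_max, g, h):
    P = [Integer(1)]                        # P_0^{g,h}(x) = 1
    for n in range(1, n_max + 1):
        s = sum(g(k) * P[n - k] for k in range(1, n + 1))
        P.append(expand(x / h(n) * s))      # P_n = (x / h(n)) * sum_{k=1}^n g(k) P_{n-k}
    return P

P = polys(3, g=divisor_sigma, h=lambda n: n)
print([P[n].subs(x, -24) for n in (1, 2, 3)])   # -> [-24, 252, -1472]
```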
This follows from their result on the zero distribution of the nth Faber polynomials ϕ n (x) and the denominator formula for the monster Lie algebra. The zeros of the Faber polynomials are simple and lie in the interval (0, 1728). They obtained the remarkable identity: Let c * (n) := c(n)/744. Define the polynomials Q j,n (x) by We have proved in [HN20A] that Q j,n (x) = Q γ 2 ,n (x) + 2xQ ′ γ 2 ,n (x) + x 2 2 Q ′′ γ 2 ,n (x), where Q γ 2 ,n (x) are polynomials attached to Weber's cubic root function γ 2 of j in a similar way. We have also proved that Q γ 2 ,n (z) = 0 for all |z| > 82.5. Hence, the identity restates and extends the result of [AKN97]. Now, let g(n) be a normalized arithmetic function with moderate growth, such that ∞ n=1 |g(n)| T n is analytic at T = 0. Then the illustrated examples are special cases of polynomials P g n (x) and Q g n (x) defined by Note that P id n (x) = x L (1) n−1 (−x) are associated Laguerre polynomials (see [HLN19]). Letting g(n) = σ(n), then we recover the polynomials provided by the Nekrasov-Okounkov hook length formula. The polynomials Q id n (x) are related to the Chebyshev polynomials of the second kind [HNT20]. It is easy to see that P g n (z) and Q g n (z) are special cases of polynomials P g,h n (x) defined by the recursion formula (2). Here, P g n (x) = P g,id n (x) and Q g n (x) = P g,1 n (x). In the next section, we state the main results of this paper. Statement of main results Let g, h be arithmetic functions. Assume that g be normalized and 0 < h(n) ≤ h(n + 1). It is convenient to extend h by h(0) := 0. 2.1. Improvement A. The following result reproduces our previous result (9), if we choose ε = 1 2 . Theorem 1. Let 0 < ε < 1. Let R > 0 be the radius of convergence of Tε . Then This result can be reformulated in the following way, which is more suitable for applications to growth and non-vanishing properties. We note that the smallest possible κ is independent of the function h(n). It is also possible to provide a lower bound for the best possible κ. Proposition 1. The constant κ ε obtained in Theorem 1 has the following lower bound: As a lower bound independent of ε we have 4 |g (2)|. Proof. If we consider only the first order term of the power series (1−ε)ε in the proposition depending on ε. The minimal value of this lower bound is at ε = 1 2 because of the inequality of arithmetic and geometric means ( Theorem 3. Let 0 < ε < 1. Let R > 0 be the radius of convergence of Theorem 4. Let 0 < ε < 1. Let R > 0 be the radius of convergence of Let 0 < T ε < R be such that G 2 (T ε ) ≤ ε and if |x| > κ h(n − 1) for all n ≥ 1. Corollary 2. Let κ be chosen as in Theorem 3 or as in Theorem 4. Then . Proposition 2. The constant κ ε obtained in Theorem 3 has the following lower bound: As a lower bound independent of ε we have 3 Proof. If we consider only the second order term of the power series Applying the last inequality now to It is clear that To estimate κ ε independent of ε we consider the right hand side of the last inequality as a function in ε. Thus, we are interested in the minimal value of this function for 0 < ε < 1. The inequality of arithmetic and geometric means yields We obtain 3 2 3 (g (2)) 2 − g (3) + |g (2)|. 2.3. Comparing Improvement A and Improvement B. Let 0 < ε 1 < 1 and T ε 1 as in Theorem 1. For all T ≥ 0 we have that Let ε 2 be such that This shows that we can choose T ε 2 = T ε 1 . Let κ 1,ε and κ 2,ε be the respective constants from Theorems 1 and 3. 
Then This shows that the minimal value of the κ 2,ε is never larger than the minimal value of the κ 1,ε . In the previous proof we showed that G 2 (T ε ) < 1 982 < 1 250 for T ε = 87 20000 and κ 2 < 240. This leads to the following The lower bound is quite close to the optimal value e π √ 3 = 230.764588 . . .. Associated Laguerre polynomials and Chebyshev polynomials of the second kind. We briefly recall the definition of associated Laguerre polynomials L (α) n (x) and Chebyshev polynomials U n (x) of the second kind [RS02,Do16]. Both are orthogonal polynomials. We have The Chebyshev polynomials are uniquely characterized by (25) U n (cos(t)) = sin((n + 1)t) sin(t) (0 < t < π). The Chebyshev polynomials are of special interest in the context of applications, since they are the only classical orthogonal polynomials whose zeros can be determined in explicit form (see Rahman and Schmeisser [RS02], Introduction). Let g(n) = id(n) = n. Then The generating series of the Chebyshev polynomial of the second kind is given by |x|, |q| < 1. With this we can prove equation (27). We have From this we obtain the following values: Table 3. Case g(n) = n If we consider the special case ε 1 = 1/2 in Improvement A, we can chose T ε 1 = 2/11 and finally get κ 1 = 11. This leads to several applications. For example, let |x| > (20/3) n then L (1) n (x) = 0 and the estimates hold 3.4. Powers of the Dedekind η-function. Let us recall the well-known identity: The q-expansion of the −zth power of the Euler product defines the D'Arcais polynomials where P σ 0 (x) = 1 and P σ n (x) = x n n k=1 σ(k)P σ n−k (x), as polynomials. Note that these polynomials evaluated at −24 are directly related to the Ramanujan τfunction: τ (n) = P σ n−1 (−24), which gives also a link to the Lehmer conjecture [Le47]. Note only minor further improvements can be achieved. c) Corollary 3 improves our previous result [HN20B], where κ = 15. Proof of Theorem 1 and Theorem 2 Proof of Theorem 1. The proof will be by induction on n. The case n = 1 is obvious: P g,h 1 (x) − x h(1) P g,h 0 (x) = 0 < ε |x| h(1) P g,h 0 (x) for |x| > κ h(0). Let now n ≥ 2. Then The basic idea for the induction step is to use the inequality We estimate the sum by the following property for 1 ≤ j ≤ n − 1: for |x| > κ h(n − 1). Thus, Further, we have for |x| > κ h(n − 1) ≥ κ h(n − k) for all 2 ≤ k ≤ n by assumption. Using this, we can now estimate the sum by Estimating the sum using the assumption from the theorem we obtain Tε which is equivalent to (1−ε)|x| h(n−1) > 1 Tε and G 1 increases on [0, R) as |g (k + 1)| ≥ 0 for all k ∈ N. Proof of Theorem 2. Consider the following upper and lower bounds: Applying (10) leads to the desired result. Proof of Theorem 4. This basically follows from Theorem 3 (see also the proof of Theorem 2).
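To make the Laguerre connection concrete, the small sketch below (not from the paper) compares the recursion for $g = h = \operatorname{id}$ with associated Laguerre polynomials; the normalization $P_n^{\operatorname{id}}(x) = \frac{x}{n} L_{n-1}^{(1)}(-x)$ used here is inferred from computing small cases and should be read as an assumption rather than a quotation of the paper's formula.

```python
# Check, for small n, that the recursion with g = h = id produces (x/n) * L_{n-1}^{(1)}(-x).
from sympy import symbols, Integer, expand, simplify, assoc_laguerre

x = symbols("x")

P = [Integer(1)]
for n in range(1, 6):
    P.append(expand(x / n * sum(k * P[n - k] for k in range(1, n + 1))))

for n in range(1, 6):
    assert simplify(P[n] - x / n * assoc_laguerre(n - 1, 1, -x)) == 0
print("P_n^{id}(x) = (x/n) L_{n-1}^{(1)}(-x) holds for n = 1..5")
```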
2,650.2
2021-01-12T00:00:00.000
[ "Mathematics" ]
Light-Weight Integration and Interoperation of Localization Systems in IoT As the ideas and technologies behind the Internet of Things (IoT) take root, a vast array of new possibilities and applications is emerging with the significantly increased number of devices connected to the Internet. Moreover, we are also witnessing the fast emergence of location-based services with an abundant number of localization technologies and solutions with varying capabilities and limitations. We believe that, at this moment in time, the successful integration of these two diverse technologies is mutually beneficial and even essential for both fields. IoT is one of the major fields that can benefit from localization services, and so, the integration of localization systems in the IoT ecosystem would enable numerous new IoT applications. Further, the use of standardized IoT architectures, interaction and information models will permit multiple localization systems to communicate and interoperate with each other in order to obtain better context information and resolve positioning errors or conflicts. Therefore, in this work, we investigate the semantic interoperation and integration of positioning systems in order to obtain the full potential of the localization ecosystem in the context of IoT. Additionally, we also validate the proposed design by means of an industrial case study, which targets fully-automated warehouses utilizing location-aware and interconnected IoT products and systems. Introduction Over the past few years, we have been experiencing a fast emergence of the Internet of Things (IoT), where things and objects become smart and connected [1]. By providing access to and interaction with a broad range of devices and systems, the IoT is fostering the development of numerous applications and services in many different domains, such as industry, building automation, smart grids, smart cities, healthcare, wearables and many others [2]. In many cases, these applications can greatly benefit from or cannot properly function without location awareness and the ability to identify the location of objects (sensors, machines, wearable devices). Besides IoT, indoor localization is another growing trend in our hyper-connected society with a large amount of research and industrial focus [3]. Moreover, the demand for Location-Based Services (LBSs) has been rapidly expanding in many fields, such as goods/robot tracking in industry or indoor navigation for people with visual impairments, and is supported by the emergence of several powerful commercially-available localization solutions [4]. Due to their diverse positioning techniques and technologies, each localization solution has unique capabilities, limitations and also varying costs. For instance, although GPS is the de-facto positioning solution for outdoor environments, Ultra-Wideband (UWB) or LoRa-based technologies can be the first choice for applications that only have high positioning accuracy or low accuracy, but long-range and low-energy requirements, respectively [4][5][6]. Despite this plethora of technologies and the remarkable interest in the localization domain, the interoperation and orchestration of these technologies and their integration into other ecosystems is still an open issue that requires innovation. In this sense, we see several standardization initiatives and research efforts targeting interoperability and Machine-to-Machine (M2M) understandability in the IoT domain. 
These efforts aim to create more loosely-coupled systems, better interoperable and connective devices via common interfaces and data models. For instance, oneM2M [7], the Open Mobile Alliance (OMA) [8] and the Internet Protocol for Smart Objects (IPSO) Alliance [9] are some of the leading global organizations that deliver specifications and architectures for creating resource-efficient M2M communication and global interoperability for the IoT. However, there is no effort available that extensively targets the interoperability of localization technologies by considering their specific characteristics or constraints. In this work, we study the first design for the interoperation and integration of positioning systems by means of IoT application protocols and data models, more specifically, the ones defined by OMA Lightweight Machine-to-Machine (LwM2M) [10] and IPSO. For this purpose, we create a uniform and well-defined representation for the location semantics combined with interaction models that can be used for dealing with various localization technologies and system interactions. We show that these IoT protocols and specifications, by design, offer the necessary mechanisms and features in order to obtain syntactic and semantic interoperability between positioning systems and also IoT applications. In addition, we also validate our approach by means of an industrial use case and system implementation, which targets fully-automated warehouses by means of location-aware and interconnected IoT products and systems. We believe that such an integration will enable a seamless and spontaneous interoperation of localization technologies independent of their specific characteristics, hardware and software components. This will also enable IoT applications to simultaneously use a multitude of diverse positioning solutions in order to face various application needs or improve overall localization performance. The remainder of this paper is organized as follows. Section 2 provides detailed background on the localization technologies and the LwM2M protocol in order to understand the current state of the art. The vision and the necessary enablers for the integration of IoT and localization technologies are presented in Section 3. The mechanisms that constitute the fundamentals of the light-weight integration and interoperation of the localization technologies are described in Section 4. Section 5 describes how the proposed solution can be exploited in order to achieve interoperability for different interaction models in the localization ecosystem, which is followed by a case study, in Section 6, that validates the proposed solution and illustrates the main principles and application potential. Finally, Section 7 concludes the paper. Localization Systems: The World of Diversity Localization is the process of determining the position of equipment, people or any other object (called the tag in this paper). In the past decade, it has become an active research area. Up to now, a large variety of localization technologies have been proposed, but no one-size-fits-all solution has emerged [3]. Therefore, today, we face an enormous and heterogeneous ecosystem with varying localization technologies, techniques, architectures and designs. Although there are various ways to classify these technologies, this section will only provide their key aspects that are relevant to the focus of this paper. 
First of all, there is a wide variety of technologies on which localization systems can be based, such as WiFi, BLE, RFID, UWB, LoRa, GPS, ultrasound and vision [3]. Some of these technologies are already available on several commercial devices, whereas some of them are composed of extremely specialized and expensive components. Secondly, the environment in which the localization is conducted is one of the most important aspects of these systems. Although there are two types of solutions (indoor vs. outdoor), even the characteristics of the indoor environment play a key role in the performance of the localization technologies [4]. Thirdly, localization systems use various measurement types and positioning techniques in order to calculate the position of the target object. Independent of their technology, Time of Arrival (ToA), Angle of Arrival (AoA), Time Difference of Arrival (TDoA) and Received Signal Strength (RSS) are the most commonly-used techniques applied by localization technologies. Finally, we also see variation in the type of system architecture. Some of the technologies use a self-positioning architecture, in which objects calculate their position themselves. There are also systems that include an infrastructure or backend server that calculates the location of the targets. Finally, we see self-oriented infrastructure-assisted architectures where a backend system determines the position and then informs the tracked object in response to its request [4]. On the other side, different applications may also ask for different localization requirements in terms of various quality metrics, such as accuracy, precision, scalability, update frequency, etc. [5]. This diversity of technologies and the variation in the application requirements prevents the emergence of a single solution that is a silver bullet for creating a localization solution that meets all possible needs. Therefore, for each application, a suitable localization technology should be selected wisely in order to meet application needs while maintaining the right balance between the system cost, complexity and performance. Often, the most efficient solution is a combination of technologies and techniques especially for complex applications with various needs. Regarding the interoperation of localization systems, we recently have started to see a number of research efforts that are investigating semantic location models, mathematical methods, ontologies and structures for location-based applications with no resource constraints, especially related to location navigation and browsing [11][12][13][14]. However, in pursuit of the interoperation of this large variety of localization systems, one needs to consider flexible and powerful mechanisms that consider all aspects and variations of the localization technologies. Lightweight Machine-to-Machine Protocol LwM2M, specified by the OMA Alliance, is a secure, efficient and deployable client-server protocol with several functionalities for managing resource-constrained devices on a variety of networks. Besides fundamental management functionalities such as bootstrapping, client registration and firmware updates, LwM2M also defines efficient interactions for remote application management and the transfer of service and application data [10]. For this purpose, LwM2M provides several interfaces built on top of the Constrained Application Protocol (CoAP) [15], which is a REST-based application protocol for constrained Internet devices. 
Figure 1 presents the LwM2M interaction model related to device management and information reporting. As it makes use of light and compact application protocols, management mechanisms and an efficient resource data model, the LwM2M protocol has already attracted much attention from the research community. According to LwM2M, a client consists of one or more instances of objects, which are typed containers that define the semantic type of instances. Each object is a collection of mandatory and optional resources, which are atomic pieces of information that can be read, written or executed. These objects, instances and resources are mapped into the URI path hierarchy with integer identifiers and can be accessed via simple URIs in the form of /ObjectID/InstanceID/ResourceID [10]. For instance, a device model number can be read via a GET request to the URI "/3/0/1". At the time of writing, there are more than 100 objects registered by OMA Working Groups, third-party organizations, vendors and individuals via a registration process with the OMA Naming Authority (OMNA) [16]. Among these object types, there are two location-related objects: the location object from OMA and the GPS location object from IPSO. However, since both of these objects can only be used to represent GPS location data, these models are not sufficient for use by all localization systems, especially those producing non-spatial, coordinate-based data (X, Y, Z). Next to OMA, the IPSO Alliance performs a similar and complementary effort in order to provide a common design pattern and an object model based on the OMA LwM2M specification [9]. By using a reusable object and resource design, IPSO targets high-level interoperability between smart object devices and connected software applications on other devices and services [17]. Thanks to the use of open IoT standards, unified information and interaction models and powerful management functions, LwM2M is a very promising candidate to achieve global interoperability within the IoT ecosystem, especially when constrained devices are involved. Therefore, within the IoT ecosystem, there are ongoing efforts defining mechanisms and protocols in order to realize semantic and structural interoperability. However, there is no effort available that extensively targets the interoperability of localization technologies by considering their specific characteristics or constraints. Hence, in this work, we leverage the LwM2M protocol to target the interoperability and integration of localization systems in IoT. IoT Interoperability for Localization Systems The successful integration of IoT and localization technologies is mutually beneficial and even essential for both fields. Such an advancement would enable numerous new IoT applications and products with location awareness and location-based reasoning. On the other side, multiple localization systems would communicate and interoperate with each other in order to obtain better context information, improve localization accuracy, resolve positioning errors or conflicts and activate/deactivate each other in order to save resources in various conditions. Ultimately, all these new applications and features will result in a broader and smarter ecosystem, as illustrated in Figure 2, with significant potential: smarter "smart cities", "smart factories", "smart buildings", etc. However, despite the plethora of technologies and the remarkable progress made in these domains, their integration, interoperation and harmonization is still an open issue.
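As a minimal illustration of the addressing scheme described above, an LwM2M resource such as /3/0/1 can be read with any CoAP client; the sketch below uses the third-party aiocoap Python library, and the endpoint address is a placeholder, not part of the paper.

```python
# Illustrative CoAP GET of an LwM2M resource via the /Object/Instance/Resource URI scheme
# (e.g. /3/0/1, the device model number). Library choice and host name are assumptions.
import asyncio
from aiocoap import Context, Message, GET

async def read_resource(host: str, path: str) -> bytes:
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri=f"coap://{host}{path}")
    response = await ctx.request(request).response
    return response.payload

# model_number = asyncio.run(read_resource("lwm2m-client.example", "/3/0/1"))
```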
Considering the diversity of localization technologies, as described in Section 2.1, and the scalability and efficiency concerns in IoT, the success of such an integration becomes a very hard mission, and it can only become a reality with the design of flexible information and interaction models and powerful and efficient management functionalities. In this context, we defined the following functionalities as vital enablers for the realization of full structural, syntactical and semantic interoperability between localization systems and also IoT applications, as illustrated in Figure 3, with minimized integration cost. First of all, the concerns about the security and privacy of localization data must be sufficiently addressed, so only authenticated and authorized parties can reach location data and perform trustworthy operations in a security domain. Especially, considering large-scale IoT scenarios, a vast number of localization devices and systems needs to be authenticated and authorized along with a wide range of smart objects. Secondly, the vision of the seamless interconnection of localization technologies necessitates mechanisms for automatic discovery of devices, resources, their properties and capabilities, as well as the means to access them. Therefore, any application or device can discover these devices (localization tags, systems) with positioning capabilities, along with their types and settings. Furthermore, such discovery mechanisms also depend on other services like configuration management, registration and un-registration of self-descriptive devices, systems and resources. Moreover, considering that the majority of IoT devices are expected to be severely constrained in terms of memory, CPU and power capacities, the interoperability solution must embody a light, compact, efficient and scalable nature. Therefore, it can easily adapt to constrained environments and large-scale deployments (support for tracking of thousands of devices). Next, there has to be a common dictionary (uniform data models) describing formally-relevant concepts, resources, attributes and relations without ambiguity in order to achieve semantic interoperability and global understandability. This will enable systems to perform machine computable logic, knowledge discovery, data federation and semantic-based reasoning. In addition, along with essential location data, further additional information (reference point, orientation, etc.) has to be reachable for other applications and devices, so interested parties can correlate positions from different localization technologies and map them to other coordinate systems. Finally, in pursuit of the interoperation of the large variety of localization systems, the interoperability solution needs to consider all aspects and variations of the localization technologies (e.g., spatial vs. non-spatial data) and provide support for all localization architectures (self-positioning, infrastructure-based or infrastructure-assisted localization). For this purpose, we study a design for the interoperation and integration of positioning systems by means of IoT application protocols and data models defined by OMA LwM2M and IPSO. We show that, with extensions of uniform and well-defined data and interaction models targeting localization systems, these IoT protocols and specifications, by design, offer the necessary mechanisms and features in order to obtain syntactic and semantic interoperability between positioning systems and also IoT applications. 
Light-Weight Integration and Interoperation of Localization Systems in IoT In this section, we first present the mechanisms in the LwM2M protocol that constitute the fundamentals (defined in the previous section) of the full integration and interoperation of the localization technologies, including bootstrapping, resource registration, operations and common system architectures. Then, we describe the designed uniform object models, based on LwM2M/IPSO specifications, that can be used to inclusively represent the location-related data from various and diverse localization technologies. Finally, for each aspect of the architecture, we describe the overall flow of the interoperation process based on the proposed solution. Bootstrapping, Registration and Discovery Bootstrapping is an LwM2M functionality that is used for server configuration, security and credential management, as well as provisioning of access control lists [10]. For client-based bootstrapping, a dedicated LwM2M bootstrap server is used, which is a specific server that is contacted by the client during its boot-up and prepares the client for communication with regular LwM2M servers. Considering the privacy and security concerns regarding the localization technologies [18], the LwM2M bootstrap interface offers key functionalities, by means of credentials and access control management, in order to prevent unauthorized operations on positioning data. Device registration is another LwM2M feature that allows a LwM2M client device to inform an LwM2M server about its existence and register its capabilities and resources [10]. This way, the LwM2M server can act as a lookup server, enabling any application to perform queries and discover all devices with positioning capabilities. This not only allows device registration, but also enables a discovery of the application/device to understand what kind of location-related objects a device holds and which resources are exposed by the particular object. Further, by identifying object IDs and/or reading the descriptive resources (e.g., application type), additional and detailed information can be retrieved. Next, it can start reading the location data, retrieving the information it is interested in and also performing all the operations defined in the following section. In Figure 4, the LwM2M registration process is illustrated with a flow diagram. Besides the registration operation, an LwM2M client can also update its registration or perform a de-registration when shutting down or discontinuing use of an LwM2M server [10]. Light-Weight and Efficient Operations Since LwM2M relies on the CoAP application protocol, the CoAP methods constitute the fundamentals of the LwM2M interactions and operations. A minimal CoAP request consists of the method to be applied to the resource, the identifier of the resource, a payload and metadata about the request [15]. CoAP supports the basic methods of GET, POST, PUT and DELETE. The CoAP GET method is the fundamental information retrieving method, whereas the PUT method is used to update the resource identified by the requested URI, and the POST method usually results in a new resource being created or the target resource being updated [15]. Besides these basic CoAP methods, there have been recent efforts to define new CoAP methods in order to create CoAP applications with improved functionalities. The newly-specified FETCH, PATCH and iPATCHmethods allow accessing and updating parts of a resource [19]. 
In addition to the basic methods, these new methods can be very beneficial for dealing with location data. For instance, an application can read X, Y and Z coordinates of a localized object with a single FETCH request, upon which it will only receive the related data aggregated in a single packet. Although the current LwM2M specification does not support CoAP PATCH/FETCH functionality, the preview of the next version of the LwM2M specification declares that these methods will be included in the near future [20]. The LwM2M protocol also defines an information reporting interface (based on the CoAP observe mechanism), which can be used to achieve object tracking by means of the repeating retrieval of the location data [21]. Using this mechanism, a device, called the observer, can indicate that it is interested in observing a location object instance and to be notified about any state change of the relevant data. This way, the device will periodically notify the observer with a single message that contains the value of the observed resource or the set of resource values for the observed object instance aggregated in one packet. LwM2M Position Object Models As we mentioned in the previous sections, LwM2M relies on uniform object models, which are collections of mandatory and optional resources representing atomic pieces of information. Therefore, we create powerful location-related objects and resources that can be used to represent spatial and non-spatial location data in various technologies. In Table 1, the list of location-related object models is provided. The first object, GPS location (3336), is the location object defined by the IPSO alliance to represent GPS localization data, such as latitude, longitude, uncertainty and velocity. The details and defined resources for this object are listed in Table 2. This object provides the model for spatial location data, but it is too limited considering the indoor localization technologies because most of the indoor localization technologies provide a relative position (in coordinates) with respect to a reference point. Therefore, we define three new object models, namely position object, localization relay object and localization server object. The position object (3360) provides the necessary resources to represent coordinate-based localization data, whereas an instance of the localization relay object (3361) will expose all necessary resources to associate and link the tag with a localization server, which can provide its location data. Finally, the localization server object (3362) can be used to expose the localization server itself as an LwM2M device. The last two object models can be used for infrastructure-based localization systems. The details of these models are provided in Tables 3-5, respectively. In these models, the IDs for the proposed/created objects and resources (3360, 3361, 5552, 5553, etc.) are not standardized, but these IDs have been selected during the design as they were not assigned according to the LwM2M object and resource registry [16]. However, a final ID assignment should go via OMA registration procedure. In addition, as multiple instances can be created by client devices, these devices can expose multiple position objects through several instances. For each resource, the 'Operations' field indicates the supported operations by this specific LwM2M resource. As can be seen in Table 3, unlike the IPSO GPS location object, the resources in the position object support both read and write operations. 
This feature is essential in order to enable localization servers to write the position data to the client device when using infrastructure-assisted localization technologies. By means of the LwM2M bootstrap interface and the security features of LwM2M, only the authorized servers and devices that have the right device management credentials can perform such write operations on the position object resources. As is presented in Table 3, the position object is composed of several location-related resources, which are mostly defined in the IPSO specification. For this object, 'X value' and 'Y value' are mandatory resources and have to be defined in any position object. The 'Z value' is optional due to the existence of 2D localization systems. The optional minimum and maximum values for the X, Y and Z coordinates can be used to define the measurement area in which localization is performed. 'Sensor unit' defines the unit in which the coordinate measurements are expressed. The 'uncertainty' can be used to deliver the accuracy of the latest localization measurement, so third parties can evaluate the location data accordingly. 'Timestamp' exposes crucial information regarding the age of the location data. As the tracked objects are mobile, location data are valid for a certain amount of time. The timestamp is also useful when someone tries to combine or match location data from two or more different resources in order to improve location accuracy. The position object also offers the 'latitude', 'longitude', 'altitude', 'compass direction' and 'elevation direction' resources, which can be used in order to specify the actual position of the reference point and the relative orientation of the measurement area with respect to this reference point. By mapping reference points of the localization systems, any location data can be translated between two systems, which enables the interoperation of multiple localization systems. 'Server URI' can be used to link the position object with the corresponding localization server, if existing. Then, the client application can retrieve detailed information about the localization technology and system setup, such as the number of anchors, the number of tracked objects and any supported feature (maximum update rate, accuracy, etc.). The 'target ID' resource can be used to read the unique ID exposed by the tag or assigned by the localization server, while the 'update flag' is used to notify that the position data have been last updated by a localization server. Finally, 'application type' defines a string resource where system developers can embed any kind of information regarding the localization system. For instance, "cm-level UWB localization technology" would show that this application is exposing really accurate location data for tracked objects. The localization relay object (3361) is an object we created in order to expose the necessary resources to associate and link the target tag with a localization server. This object is necessary for the localization systems where the location is calculated by a backend server. This way, the tag can relay any request to the corresponding backend server by means of created 'server URI' and/or 'tag ID' resources. The 'server URI' resource can provide an URI (in string format) that identifies the backend server itself or any instance or resource available on the server. 
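To make the proposed data models more tangible, the sketch below shows how a tag might populate an instance of the position object (3360) and link it to a localization server; resource names follow the descriptions above, while the dictionary layout, default values and the use of names instead of numeric resource IDs are simplifications for illustration only.

```python
# Hypothetical sketch of a tag filling a position object (3360) instance.
import time

def make_position_instance(x: float, y: float, z: float | None = None,
                           unit: str = "m", uncertainty: float | None = None,
                           server_uri: str | None = None) -> dict:
    instance = {
        "X Value": x,                    # mandatory
        "Y Value": y,                    # mandatory
        "Sensor Units": unit,
        "Timestamp": int(time.time()),   # age of the position measurement
    }
    if z is not None:
        instance["Z Value"] = z          # optional: 2D systems omit it
    if uncertainty is not None:
        instance["Uncertainty"] = uncertainty
    if server_uri is not None:
        instance["Server URI"] = server_uri   # link to the corresponding localization server
    return instance

# e.g. a UWB tag at (12.4 m, 3.1 m) with 0.1 m uncertainty:
# pos = make_position_instance(12.4, 3.1, uncertainty=0.1, server_uri="coap://loc-server/3362/0")
```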
Any client first needs to retrieve the server URI and/or the tag ID, and then, it can start retrieving the location data directly from the localization server. The localization server object (3362) is another proposed object model used to represent a single localization server, which tracks several objects, but only exposes proprietary API. Apart from the 'application type', this object exposes only a 'location server' resource, which can be used to describe a proprietary API via a string written in JSON format. This server object could also encompass technology-related resources (5508 to 5515, 5552, 5701, etc.) in order to expose more information about the localization system. The Realization of Interaction Models for Localization Systems As is mentioned in the Background section, the architecture of localization systems is one of the features that the localization technologies divert. There are technologies that use a self-positioning architecture, in which objects calculate their positions by themselves. Besides GPS, which is the best-known self-positioning technology, there are also several indoor localization technologies where the tag positions itself by interacting with fixed anchor devices that have known positions, such as the Pozyx UWB Accurate Positioning System [22]. On the other hand, there are also systems that include an infrastructure or backend server that calculates the location of the targets. Most of the TDoA-and AoA-based localization technologies apply this architecture, such as Quuppa's BLE-based positioning system [23]. Finally, we also see self-oriented infrastructure-assisted architectures where a backend system determines the position and then informs the tracked object. In the following subsections, we describe how our LwM2M-based approach can incorporate any of these localization system architectures. Self-Positioning In this interaction model, the target device is calculating its own position, and it can directly expose any measured data via the proposed position object model (3360) in case the technology is not GPS. Any device interested in this information can directly interact with the tag device and read any relevant location information after the resource discovery. If the device exposes multiple location data with different technologies, then the application can reach the data via different position instances (3360/0, 3360/1, 3360/2, etc.), and it can also read the 'application type' or 'server URI' resources to find out the corresponding localization system. In order to track target objects, the application server can send an observe request to the target tag and receive notifications whenever the tag calculates a new position. Besides the data delivery and monitoring, the LwM2M device management interfaces can also be used to configure the tags with the necessary information (e.g., position of anchors) in order to calculate their own positions. Again, with uniform data models, such an interface can offer a generic solution for the tag configuration; however, this is not in the scope of this paper. Self-Oriented Infrastructure-Assisted There are also applications where the tag needs the location information, but the location data are measured at the localization system infrastructure. In this case, the authorized localization server needs to determine the position and then inform the tracked object. In such systems, the localization server first discovers the tracked object with the unique 'tag ID' and retrieves its IP address. 
Then, whenever a new location is detected, the localization server (with access rights) can write to the particular position object resources exposed by the tag and set the 'update flag'. Our position object model enables this operation by exposing all resources as both writable and readable. At the same time, a third party application can read or observe these resources on the tag whenever the server updates the location data. Infrastructure Backend System For infrastructure-backend localization systems, there can be four approaches for using our LwM2M object and resource model in order to expose location data. Using the first approach, the infrastructure server exposes position data for every tag that is being localized via a dedicated instance of a position object. The link between this object and the tag is created by including the 'target ID' resource in this position object instance, with the ID being unique for each tag. At the other side, each tag exposes a localization relay object that contains the 'server URI' resource, which constitutes a one-to-one link to the position object instance at the server side. During the bootstrapping process, the infrastructure server needs to create and assign position object instances for the new target tags and inform each tag about the address and URI of the specific position object instance. Whenever an application server would like to read the location of a certain tag, it first has to send a request to the tag in order to retrieve the 'server URI' in the localization relay object and learn the address of the relevant location resources at the backend server. After that, it can start retrieving or observing the position-related resources available at this server. The interaction model for this architecture is provided in Figure 5. The second approach is based on a localization server that is exposing a single position object instance, which will be used for all of the tags; whereas, on the tag side, the location relay object with both the 'server URI' and 'tag ID' resources is used. The server URI holds a link to the position object instance in the localization server. Therefore, this model creates a an N-to-one link between tags and the localization server. Any client or application that reads the server URI and tag ID can interact with the localization server and retrieves the position data of the tag. In the case of a request on the position instance, the localization server will return the latest position update of whichever the positioned tag is. This enables very efficient operations when an application would like to track all of the tags. In this case, the tag ID can be used in order to match position information with the corresponding tag. Using the third approach, the support for several localization servers is considered. In this case, the backend server exposes several (k) position object instances, while each target only needs to expose a location relay object containing the 'tag ID' resource. This approach creates a loosely-coupled architecture, where the 'tag ID' resource available at the tag is the only information available in order to match the tag and the position data obtained from different servers. With this approach, an application that is interested in tag locations can discover all tracked tags and find out their 'tag ID', as well as additional information exposed via other LwM2M resources. In a similar way, it can also discover all localization servers, as these behave as LwM2M devices, as well. 
Next, it can start retrieving notifications for a tag from several localization servers. Then, by combining the discovered tag information with the location update data, the application has all the information required for further processing and for taking action.

Lastly, if one would like to make a localization server or localized tags with a proprietary API discoverable for interested third parties, then the localization relay and localization server (3362) objects can be used. The localized tags will expose a localization relay object instance, which is linked to a localization server object instance at the backend server. Within this localization server object instance, the exposed 'location server' resource will be used to describe the proprietary server API in JSON format. Any client application can discover these localization servers and tags, learn the API used and start retrieving the position data with a certain level of integration cost.

Although these four approaches use the same object at the tag, the relay object, they differ in the resources they expose. Therefore, a client that discovers the resources at the tag can understand which model is used. If the tag exposes a server URI resource linked to a position instance, but no tag ID, then it is based on the one-to-one approach. In case it exposes a tag ID, but no server URI, then the loosely-coupled approach is followed. Thirdly, if it exposes both resources, this means the N-to-one approach is implemented by the localization system. Finally, if the tag exposes a server URI linked to a localization server instance, then it is based on a proprietary API (a schematic sketch of this discovery logic is given below, alongside the device modeling of the case study). Optionally, the first three approaches can also expose the localization server object instance in order to advertise their existence or more information about the localization system by means of technology-related resources (5508 to 5515, 5552, 5701, etc.). The objects and necessary resources used for each approach are presented in Table 6. In this table, the number after '*' represents the number of instances of the given object that needs to be exposed by the corresponding device. In addition, N is the total number of tags, while k represents only a portion of these tags.

Table 6. LwM2M objects and resources exposed for each interaction model (column groups: Tag — Object ID, Server URI, Tag ID; Localization Server — Object ID, Tag ID).

Case Study: Hybrid Connected Warehouses
In the HyCoWare project [24], we target the interconnection of heterogeneous systems in warehouses, encompassing systems of multiple vendors, aiming to create IoT readiness for industrial warehouses. For that purpose, we aim to make use of open IoT technologies to ease the deployment and interconnection between different solutions for connected goods, transport systems and operators. Since the targeted use cases require location information in order to further automate the warehouse operations or increase the visibility of certain objects, localization technologies and their integration into upcoming IoT solutions are key in the HyCoWare project.
• Hybrid tag: The first target system encompasses the design of a hybrid tag, which will be used to improve the visibility of industrial trolleys. During warehouse operations, the transport trolleys, with tags attached, will be monitored (position, temperature, humidity, etc.) by a control unit, which is closely interconnected with other systems. For this purpose, the tag needs to be equipped with wireless communication and indoor/outdoor localization technologies.
• Connected operator: Another target product is the connected operator, which consists of an operator interface and navigation system to enable an operator to dynamically monitor and interact with other connected products. The location of the operators will also be tracked in order to navigate and direct them to the most relevant operations. Ideally, commercially-available PDAs or smart glasses are used as operator devices. Therefore, the operator needs to be located using technologies that are already part of these products, such as GPS, WiFi or BLE.
• Connected conveyor: The last system is the chain conveyor system, which enables a finer tracking of pallets, carts and roll containers within the warehouse, as well as active involvement of the connected operators in the material flow. For this purpose, the transported pallets and carts have to be tracked by a localization system in addition to the already existing RFID technology.

Generally, warehouses consist of various zones or areas with different purposes, features or characteristics that mandate different requirements, such as location accuracy, update rate and many others [25]. For instance, at warehouse gates or docks, local presence detection is sufficient for incoming or outgoing goods, whereas accurate localization is needed inside critical handling and storage zones. On the other hand, coarse localization can be used to track goods outdoors or to detect whether they are on or off the premises. This diversity of warehouse sections is represented with a sample floor plan in Figure 6, which includes various zones with different accuracy requirements (illustrated with color density) and the gates between these zones and at the entrance and the exit.

Overall System Architecture
Taking the application requirements and the characteristics of warehouses as the input, we came up with the system design and architecture, with various localization technologies, given in Figure 7. For the connected operator case, the targeted location technologies are BLE, WiFi and GPS, as these technologies are already available in many commercial PDAs. While GPS is going to provide the outdoor location, BLE- and WiFi-based localization technologies will be used in order to obtain the indoor location with different accuracies. For the hybrid tag, the combination of BLE (optionally UWB), LoRa and RFID localization technologies will be used in order to track trolleys inside and outside the venues of the target warehouse. For the chain conveyor system, UWB-based localization technology, jointly with RFID, will be used to track carts and pallets moving around the warehouse.

Modeling as LwM2M Devices
Thanks to the powerful LwM2M management functionalities, LwM2M-compliant applications will be able to access not only position data, but also other device- (e.g., security, connectivity) and application-related (e.g., temperature, humidity) resources exposed on the same interface. However, due to the focus of this paper, we only provide, in Table 7, the location-related LwM2M models used for the target devices described in the previous section. As is presented in this table, the connected operator devices expose a single instance of the position object for WiFi-based localization, an IPSO GPS location object for GPS data and, finally, a localization relay object for the BLE-based localization technology in order to expose a web link to the localization server, which holds a position object for the operator.
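As referenced in the interaction-model discussion above, the following short sketch illustrates how a client could infer which interaction model a localization system uses from the resources it discovers on a tag's relay object. This is not part of the paper's implementation: the function, its boolean flags and the returned labels are illustrative assumptions; in practice the flags would be derived from an LwM2M/CoAP discovery response and from resolving the 'server URI' link to either a position object (3360) or a localization server object (3362).

```python
def detect_interaction_model(has_server_uri: bool,
                             has_tag_id: bool,
                             uri_targets_server_object: bool = False) -> str:
    """Infer the interaction model from the resources a tag exposes on its
    localization relay object.

    The flags are hypothetical simplifications of a real discovery step."""
    if has_server_uri and uri_targets_server_object:
        return "proprietary API (localization server object 3362)"
    if has_server_uri and has_tag_id:
        return "N-to-one (single position instance at the server)"
    if has_server_uri:
        return "one-to-one (dedicated position instance per tag)"
    if has_tag_id:
        return "loosely coupled (match by tag ID across servers)"
    return "unknown / self-positioning tag"


# Example: a tag exposing only a 'tag ID' resource.
print(detect_interaction_model(has_server_uri=False, has_tag_id=True))
```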
Returning to the device modeling in Table 7: the hybrid tags hold three instances of the localization relay object, one for each of their localization technologies (BLE, LoRa and RFID). Finally, the connected conveyors expose a position object instance and a localization relay object instance for the UWB and RFID technologies, respectively. On the other side, as the location is calculated by a backend server for the LoRa-, BLE- and RFID-based localization technologies, the backend server of each technology exposes the necessary position object instances. For the LoRa- and RFID-based technologies, we applied the loosely-coupled infrastructure-backend interaction model; therefore, they expose several instances of the position object, while for BLE, the localization server uses the one-to-one interaction model and again exposes multiple position object instances for the tracked devices.

Implementation and System Realization
In order to demonstrate the targeted interoperability functionalities and realize the integrated ecosystem defined in the case study, we prepared a demonstration and validation setup along with the prototypes of the target products with localization capabilities. The resulting system architecture and design are provided in Figure 8. For the implementation of the LwM2M client, we used an open source, standard-compliant implementation, Anjay [26], that implements the LwM2M APIs for bootstrapping, registration, etc., as well as several standardized data models, and we extended this implementation with the custom object models (defined in this paper) related to location data. For the LwM2M and bootstrap server implementation, we used another open source server implementation in Java, Leshan [27], and extended it again with the mentioned location-related objects. The Warehouse Management System (WMS) and Device Management System (DMS) (for more information, refer to [28]) constitute the beating heart of the integrated warehouse ecosystem. This proprietary management system is able to discover and interact with all of the LwM2M-compliant devices by means of an embedded LwM2M server and eventually retrieve location or other application data. By combining location information with management capabilities, the WMS and DMS allow users to track and visualize actors and states, create tasks based on events and schedule them based on the state and location of actors, send commands and receive feedback from operators, calculate and visualize pathfinding data and navigate the actors accordingly. Regarding other members of this ecosystem, we used a newly-developed hybrid tag prototype, illustrated in Figure 9a (for more information, refer to [29]), in order to monitor the transport trolleys, with tags attached, during warehouse operations. As is illustrated in Figure 7, to integrate these hybrid tags, which cannot support LwM2M connectivity natively, we introduced the idea of device virtualization, where we create LwM2M-compliant APIs on their behalf. These virtualized devices are realized as Docker containers, which can be easily deployed on any operating system. Inside these containers, there are Anjay LwM2M clients running and exposing device- and location-related resources, which are received from a proprietary notification server and exposed to the LwM2M world. Similarly, for the localization servers (infrastructure backend localization systems), we deployed LwM2M clients for each technology, which also receive the location data from the same notification server and expose it as LwM2M resources (3360).
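To make the device-virtualization idea more tangible, the following is a minimal sketch assuming a proprietary notification server that pushes JSON position updates; it mirrors how a containerized bridge could republish such updates under LwM2M-style paths of the position object (3360). It deliberately does not use the real Anjay API (a C library); the resource IDs, the payload fields and the change callback are hypothetical placeholders.

```python
import json
from typing import Callable, Dict

# Hypothetical resource IDs inside the position object (3360); the real IDs are
# defined by the paper's object model and may differ.
RES_X, RES_Y, RES_TIMESTAMP = 5536, 5537, 5518

class VirtualPositionObject:
    """Keeps one LwM2M-style position instance per tracked tag and notifies a
    callback (standing in for an embedded LwM2M client) when a value changes."""

    def __init__(self, on_change: Callable[[str, float], None]):
        self.resources: Dict[str, float] = {}
        self.on_change = on_change  # would trigger a CoAP notify in a real client

    def handle_notification(self, instance_id: int, payload: str) -> None:
        """Translate a proprietary JSON update into position object resources."""
        update = json.loads(payload)  # e.g. {"x": 12.4, "y": 3.1, "ts": 1530000000}
        for res_id, key in ((RES_X, "x"), (RES_Y, "y"), (RES_TIMESTAMP, "ts")):
            path = f"/3360/{instance_id}/{res_id}"
            self.resources[path] = update[key]
            self.on_change(path, update[key])

# Example: print instead of sending an LwM2M notification.
bridge = VirtualPositionObject(on_change=lambda p, v: print("notify", p, v))
bridge.handle_notification(0, '{"x": 12.4, "y": 3.1, "ts": 1530000000}')
```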
For the connected operator, an operator interface is developed by means of a tablet and, alternatively, virtual reality glasses, as presented in Figure 9b (for more information, refer to [28]), which enables an operator to dynamically monitor and react to tasks, monitor and interact with other connected products. Finally, for connected conveyors, we developed a prototype, presented in Figure 9c, which can be used to track pallets and carts in an accurate way, by means of an attached Pozyx tag, and provide virtual commands and messages via an attached LED matrix. Within this integrated ecosystem, all of the applications (location, light, text display, etc.) and management (device, server, etc.) traffic is modeled and realized by means of LwM2M interconnectivity. The designed interconnected system and products are installed in a real flower auction warehouse, namely Euroveiling Flower Auction Center in Brussels, Belgium, and we demonstrated the interoperation functionalities with real warehouse scenarios, which include various zones and areas with different purposes, features or characteristics that mandate different localization requirements. The demonstration area is provided in Figure 10a, and two screenshots from location-aware warehouse operations are provided in Figure 10b,c, which includes position data from different technologies and devices. In Figure 10b, green circles represent the operators, blue shows the connected conveyor cart, red ones are transport trolleys and yellow circles are again transport trolleys, which require an action from the operators. Furthermore, the blue line is the calculated route for Operator 2 to navigate him to the next task (Trolley 19). Finally, Figure 10c represents the translated/mapped position data to GPS coordinates based on the reference point resources exposed by each localization technology on the top demonstration building. System Validation In order to validate the interoperation of localization technologies in the context of IoT, we present three use cases from the HycoWaRe project where we illustrate how the targeted functionalities can be achieved via the LwM2M interoperability model that we have defined. Use Case I: Fine Tracking for Trolleys In this use case, we face a scenario where mobile trolleys must be tracked inside the warehouse by means of attached hybrid tags and monitored by a control and monitoring application. As is described in the previous section, the hybrid tag supports three different localization technologies with different localization capabilities. Initially, RFID readers are used to detect the arrival or departure of the trolleys for a certain zone in the warehouse. LoRa-based localization is used for non-critical areas, which require relatively low localization accuracy and update rate. Finally, the BLE-based localization technology is only deployed inside the buffer zone, where sub-meter accuracy is required and achieved by means of angle-of-arrival BLE localization. We use the interaction and data models provided in the previous section for the hybrid tag. After commissioning a hybrid tag, the control application initially discovers the localization relay objects for each localization technology and the localization servers that expose the location data. After this discovery process, the application sends an observe request for the corresponding location resources and starts receiving position updates for all of the tags attached to the trolleys. 
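A minimal sketch of the discovery-then-observe workflow described above, using the generic CoAP library aiocoap as a stand-in for a full LwM2M client (LwM2M messaging is carried over CoAP). The host names, object/instance paths and plain-text payloads are assumptions made for illustration; a real deployment would additionally involve LwM2M registration, access control and content formats such as SenML or TLV.

```python
import asyncio
from aiocoap import Context, Message, GET

# Hypothetical endpoints: one position object instance per localization
# technology, exposed by the respective backend localization servers.
OBSERVED_RESOURCES = [
    "coap://lora-locsrv.example/3360/0",   # LoRa-based position for one tag
    "coap://ble-locsrv.example/3360/17",   # BLE-based position for the same tag
]

async def observe(uri: str) -> None:
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri=uri, observe=0)  # register as observer
    pr = ctx.request(request)
    first = await pr.response
    print(uri, "initial:", first.payload.decode())
    async for notification in pr.observation:        # subsequent position updates
        print(uri, "update:", notification.payload.decode())

async def main() -> None:
    await asyncio.gather(*(observe(uri) for uri in OBSERVED_RESOURCES))

if __name__ == "__main__":
    asyncio.run(main())
```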
We considered a scenario where a trolley is entering the warehouse and moves across the auction room, buffer zone and finally the distribution zone, with the resulting trajectory shown in Figure 11a. During this process, the trolley is being localized by all localization technologies, and these localization measurements are provided in Figure 11b. As this figure presents, the BLE-based localization measurements are only available in the buffer zone, while LoRa measurements are obtained for the whole warehouse, including the buffer zone. In the buffer zone, where trolleys are tracked by more than one localization technology, the cooperation of multiple localization technologies can improve location accuracy. In such a scenario, these localization technologies can cooperate and exchange data in order to improve location accuracy, overcome temporary failures or omit incorrect location data. Algorithm 1 roughly provides a simple algorithm, which can be used to combine the measurements of various localization technologies. The filtered and post-processed location data are provided in Figure 11c. Another useful feature in this scenario is that different localization technologies can activate or deactivate each other in order to save resources. For instance, an RFID reader at the gate of the buffer zone or the LoRa-based low power localization technology detects an object that starts moving in the buffer zone, subsequently enabling a more accurate localization system (BLE in our scenario) for more precise tracking. Use Case II: Position Translation and Mapping The second use case is targeting the position translation and coordinate mapping across multiple technologies. For this use case, we consider a scenario where a connected operator would like to monitor the location of other operators, connected conveyor carts and trolleys in the warehouse. However, as is described above, all of these devices use a variety of indoor and outdoor localization technologies, which probably have different coordinate systems. Therefore, in order to realize the target scenario, there is a need for a mapping and translation process not only between non-spatial and spatial data, but also between different coordinate systems. For instance, Figure 12 presents the output for three independent localization technologies. In this sense, we can define a reference coordinate system (or use one of the coordinate systems from any of the localization technologies) and translate all of the non-spatial or spatial location data into this coordinate system by using the provided resources in position object (latitude, longitude, altitude, compass direction, elevation direction) in order to specify the actual position of the reference point and the relative orientation of the measurement area with respect to this reference point. The pseudo algorithm for the coordinate translation between the two coordinate systems is provided in Algorithm 2. In this algorithm, initially, the great-circle distance based on the Haversine formula [30] and the bearing [31] between the two reference points are calculated, which are then translated into the x and y offset values. The compass direction difference between the two reference points is also calculated. After that, the mapped coordinate values are obtained by using the compass direction difference and offset coordinate values of the reference points. The outcome of a mapping operation for the three location data from Figure 12 is provided in Figure 13a. 
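As a runnable illustration of this mapping step (complementing the Algorithm 2 excerpt reproduced below), the sketch below computes the Haversine great-circle distance and a standard initial bearing between the two reference points, converts them into metric x/y offsets and rotates the local coordinates by the compass-direction difference. The reference values in the example call are dummy numbers, and the exact bearing formula used in [31] may differ from the standard one shown here.

```python
from math import radians, sin, cos, atan2, sqrt

EARTH_RADIUS_KM = 6371.0

def map_to_reference(x_new, y_new, base_ref, target_ref):
    """Map a local (x, y) position measured in the coordinate system anchored at
    `target_ref` into the system anchored at `base_ref`.

    Each reference is (latitude_deg, longitude_deg, compass_direction_deg),
    i.e. the reference-point resources exposed by each localization system."""
    lat_b, lon_b, theta_b = base_ref
    lat_r, lon_r, theta_r = target_ref
    dlat, dlon = radians(lat_r - lat_b), radians(lon_r - lon_b)

    # Haversine great-circle distance between the two reference points (metres).
    beta = sin(dlat / 2) ** 2 + cos(radians(lat_b)) * cos(radians(lat_r)) * sin(dlon / 2) ** 2
    c = 2 * atan2(sqrt(beta), sqrt(1 - beta))
    distance = EARTH_RADIUS_KM * c * 1000

    # Initial bearing from the base reference point towards the target reference point.
    bearing = atan2(sin(dlon) * cos(radians(lat_r)),
                    cos(radians(lat_b)) * sin(radians(lat_r))
                    - sin(radians(lat_b)) * cos(radians(lat_r)) * cos(dlon))
    x_off, y_off = distance * sin(bearing), distance * cos(bearing)

    # Rotate by the compass-direction difference, then translate by the offsets.
    dtheta = radians(theta_r - theta_b)
    x_mapped = x_new * cos(dtheta) - y_new * sin(dtheta) + x_off
    y_mapped = x_new * sin(dtheta) + y_new * cos(dtheta) + y_off
    return x_mapped, y_mapped

# Example: a UWB measurement mapped into another system's coordinate frame.
print(map_to_reference(2.0, 3.5, (50.867, 4.345, 0.0), (50.868, 4.346, 15.0)))
```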
Algorithm 2 (excerpt, lines 7–20):
7: ∆λ ← λ_ref · π/180 − λ_base · π/180
8: ∆γ ← γ_ref · π/180 − γ_base · π/180
9: β ← sin²(∆λ/2) + cos(λ_base · π/180) · cos(λ_ref · π/180) · sin²(∆γ/2)
10: c ← 2 · atan2(√β, √(1 − β))
11: distance ← earthRadius · c · 1000
12: bearing ← initialBearing(λ_base, γ_base, λ_ref, γ_ref) [31]
13: x_offset ← distance · sin(bearing)
14: y_offset ← distance · cos(bearing)
15: ∆θ ← θ_ref − θ_base
16: end if
17: x_mapped ← x_new · cos(∆θ) − y_new · sin(∆θ) + x_offset
18: y_mapped ← x_new · sin(∆θ) + y_new · cos(∆θ) + y_offset
19: plot(x_mapped, y_mapped) ▷ mapped/translated location data
20: return

The last use case is about the easy combination of position data with other application- and device-related semantic data. As mentioned before, the usage of LwM2M enables our applications and devices to access not only position data, but also other device- (e.g., security, connectivity, battery level) and application-related (e.g., temperature, humidity) resources exposed on the same interface. For instance, if a control application would like to know all of the operators in a certain zone, it can retrieve the device information and position information via LwM2M and filter only the operator devices within the target zone; or a monitoring unit can monitor the position, temperature and humidity of a transport trolley at the same time, without any need for an extra interface. A sample preview of the interconnected data plane is provided in Figure 13b.

Discussion about the Case Study
As mentioned, the objective of this case study is to enable fully-automated and smart warehouses by means of the interconnection of these and any other heterogeneous systems of multiple vendors. For that purpose, we make use of open IoT technologies to ease the deployment of and interconnection between different solutions and multiple localization technologies, which are able to track thousands of objects and also interoperate seamlessly and spontaneously. The outcome of the case study shows that such an integration and interoperation enables location-aware applications (in the context of IoT) to improve location accuracy, overcome temporary failures or omit incorrect location data, to translate several pieces of location information into a reference coordinate system and, finally, to combine LwM2M semantic capabilities with location information without any extra effort.

Conclusions
Despite their significant potential in IoT applications, the amount of research targeting the integration of indoor localization technologies in real-life IoT applications is limited. In this work, we investigated the semantic interoperation and integration of different positioning systems in the context of IoT and validated our approach based on a real IoT case study. In order to achieve the seamless and spontaneous interoperation of localization systems and their interconnection with minimized integration cost, we focused on open IoT technologies and defined interfaces for these products based on LwM2M/IPSO specifications. Our design is clean and able to support all interaction models we have encountered in today's localization systems. In addition, it is able to handle a wide variety of connected actors, as illustrated by our use case, as well as localization technologies, regardless of variations in their nature. Therefore, we can conclude that IoT protocols already offer powerful and efficient mechanisms in order to realize all of these functionalities.
We believe this work can provide a baseline, for the localization system developers, about how to use IoT protocols and platforms in order to integrate their products in IoT applications. It can also help IoT system providers to understand the characteristics and needs of different localization technologies in order to realize their semantic and structural interoperability in the context of IoT. Author Contributions: The work was realized with the collaboration of all of the authors. A.K. carried out the research study, prepared the proof of concept evaluation and wrote the paper. J.H. and I.M. organized the work, provided the funding and defined and supervised the research. P.S. and W.J. supported the work by their valuable feedback about the system design and also research outcome. Funding: This research was funded by the Flanders Innovation and Entrepreneurship Agency (VLAIO) Grant Number 150808.
11,824.4
2018-07-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Learning induces coordinated neuronal plasticity of metabolic demands and functional brain networks The neurobiological basis of learning is reflected in adaptations of brain structure, network organization and energy metabolism. However, it is still unknown how different neuroplastic mechanisms act together and if cognitive advancements relate to general or task-specific changes. Therefore, we tested how hierarchical network interactions contribute to improvements in the performance of a visuo-spatial processing task by employing simultaneous PET/MR neuroimaging before and after a 4-week learning period. We combined functional PET and metabolic connectivity mapping (MCM) to infer directional interactions across brain regions. Learning altered the top-down regulation of the salience network onto the occipital cortex, with increases in MCM at resting-state and decreases during task execution. Accordingly, a higher divergence between resting-state and task-specific effects was associated with better cognitive performance, indicating that these adaptations are complementary and both required for successful visuo-spatial skill learning. Simulations further showed that changes at resting-state were dependent on glucose metabolism, whereas those during task performance were driven by functional connectivity between salience and visual networks. Referring to previous work, we suggest that learning establishes a metabolically expensive skill engram at rest, whose retrieval serves for efficient task execution by minimizing prediction errors between neuronal representations of brain regions on different hierarchical levels.

The study is a logical next step for this research team in the investigation of the relationship between FDG PET and BOLD MRI during resting and activated states of brain functioning using their original approach. The basic methodology relies on performing a quantitative FDG PET scan (with arterial sampling for the input function) using continuous infusion of 18F-FDG in parallel with BOLD fMRI on a PET/MR scanner. The initial ~8 minutes of scanning are done at resting state (eyes open, fixated on a crosshair), followed by 4 periods (6 min each, 2 easy and 2 hard) of task (playing an adapted version of the video game "Tetris"), interleaved with resting-state periods (5 min each) following the tasks. Thus, the activation paradigm is more typical for fMRI studies, and it is applied to the FDG scan assuming (based on the previous findings) that the continuous infusion technique allows careful separation of resting and activated states within one FDG PET imaging session. The main strategy here was combining MRI-derived connectivity information with FDG-derived metabolic data to create "metabolic connectivity mapping". The focus of this project was to evaluate the effects of learning (4 weeks of actively playing the adapted Tetris game in the active group and not doing it in the control group) on the resting-state and activated (with the same Tetris game) metabolic connectivity. As the result of training, task performance improved (especially on the hard task) and even stayed improved 4 weeks after the end of training. Moreover, training improved performance on several cognitive tests involving mental rotation and visual search, but not spatial planning performance. Brain regions with increased CMRGlc, CBF (ASL) and BOLD during the task served as regions for the assessment of learning-induced changes in metabolic connectivity.
With occipital cortex as the target regions (the only one which was increasing in all modalities), learning-induced changes in metabolic connectivity were observed in the dorsal anterior cingulate cortex and insula (part of salience network). Post hoc analyses demonstrated that metabolic connectivity connections from dorsal anterior cingulate and insula toward occipital cortex increased at resting state, but decreased during execution of hard level of Tetris game in learning group compared to controls. These differences in metabolic connectivity values between the rest and hard task correlated with the Tetris score. Additional simulation analysis suggested that learning specific changes in metabolic connectivity in resting state were dependent on CMRGlc (and not BOLD), whereas in the hard task condition, they were driven by BOLD and not CMRGlc. Overall, the approaches and findings are of interest to the audience of the Communication Biology. The manuscript is well written, thoughtful and provide detailed information on the experiments and analyses, and exhaustive discussion. Findings are supported by proper illustrations and cited references from existing literature. Few comments. 1. Four hours of fasting before FDG scan is rather short (typically, these are 5-6 hours). It will be helpful to provide blood glucose levels at the beginning of FDG scan, and confirm that they were not substantially different at the end of FDG infusion. 2. "…we combined the imaging parameters of glucose metabolism (CMRGlu), blood flow (CBF) and the BOLD signal for a functional delineation of brain regions with increased metabolic demands during task performance". Why ASL (CBF) was used in addition to CMRGlc and BOLD? The whole story is about the relationship between CMRGlc and BOLD. Moreover, ASL was not involved in any of further analyses. Will the results be different if ASL is not used? 3. Intensive playing of video game (Tetris) improved performance of several mental tests including mental rotation and visual search, the effect stayed for 4 months after the end of training, and general resting and activated state estimates of brain metabolic activity and connectivity were modified. "Tetris" is a challenging game but not the only one, which some people play on the regular basis. Now younger adults spend a lot of time playing video games and some of these games are quite effortful. Did you check whether participants from control group do not play other mentally effortful games on the regular basis, which could change some psychometric, metabolic and functional MRI parameters? The control group did not improve on cognitive tests evaluated including mental rotation and visual search, however they could have other effects which may be important for brain functioning and metabolism. Moreover, one of the important next goals of this research is the investigation of effects of aging and neurodegenerative disorders. However, older adults (and especially symptomatic individuals) are usually not playing those "hard level" games. Please suggest on how this potential cohort effect could be controlled. 1.1. In this study, video game Tetris was used as a cognitive task for learning, and the brain region with mutual task-specific effects across imaging modalities was shown as the occipital cortex. These results may not appear in general for learning, but may appear only in learning specialized for visuospatial tasks. 
Please, discuss the limitations associated with this and incorporate them into your conclusions Response: We agree with the reviewer that this requires further attention. We have now adapted the relevant sections of the text and included this in the limitation as detailed below. Moreover, we elaborate this aspect in section 1.4, during the revision of the introduction and discussion. Abstract, page 3, lines 60-62 Accordingly, a higher divergence between resting-state and task-specific effects was associated with better cognitive performance, indicating that these adaptations are complementary, and both required for successful visuo-spatial skill learning. 1.2 In this study, brain regions with mutual task-specific effects across imaging modalities comprised the occipital cortex, intraparietal sulcus, and frontal eye field, but the intraparietal sulcus, and frontal eye field was not used as the target region in evaluating learning-induced adaptations in metabolic connectivity mapping. It is need to provide additional analyzes or discuss reasons for not analyzing them. Response: We thank the reviewer for highlighting this aspect. We have indeed evaluated learning effects with all three regions as targets, i.e., FEF, IPS and Occ, but the former two regions did not show any significant training effects. We have made several changes to the text to make this clear. 1.3. In this study, to evaluate learning-induced adaptations in metabolic connectivity mapping, network changes after practicing the same task by computing the association between CMRGlu and BOLD-derived FC were investigated. The rationale for this should be discussed in depth in the discussion section. Response: We apologize for the lack of clarity regarding the specific opportunities arising from the framework of MCM. To convey these aspects more clearly, we have adapted the text in several positions. For convenience, we have also indicated the text that was removed. However, we would also like to emphasize that an in-depth discussion for computing the association between BOLD-derived functional connectivity and glucose metabolism has already been provided in previous work from others [Riedl et al., 2016] and our own group [Hahn et al. 2020]. To avoid a repetition, we highlight the most compelling aspects in the introduction of the current manuscript and kindly refer the reader to previous work for more details. Introduction, pages 4-5, lines 85-125 However, most of the previous work only employed a single imaging modality at the same time, thus impeding to draw conclusions about how the different parameters of brain function act together in the process of learning. In addition, neuroplastic effects were investigated either in a general manner at resting state (e.g., gray and white matter structure 1 , network adaptations) or specifically during task execution (e.g., metabolic demands 6 , neuronal activation), while the direct comparison between the two states largely remains missing 7 . In sum, it is not clear whether intrinsic resting-state or task-related effects drive the improvement in cognitive performance after learning. Furthermore, the interaction between different indices of brain function and network adaptations is poorly understood. 9 . The underlying rationale is that the integration of metabolic information identifies the target region of a connection, since the majority of energy demands emerge post-synaptically 10-12 . 
The two imaging parameters are also tightly linked on a physiological basis through glutamate-mediated processes that occur upon neuronal activation. Glutamate release increases cerebral blood flow via neurovascular coupling 13,14, which in turn affects the blood oxygen level dependent (BOLD) signal used for the assessment of functional connectivity. On the other hand, glutamate release also triggers glucose uptake into neurons 15 and astrocytes 16, to meet increased energy demands for the reversal of ion gradients 17,11,18. MCM thus constitutes a validated framework to investigate the associations of glucose metabolism and functional connectivity and decipher hierarchical interactions across brain regions by assigning directionality to connections. For an in-depth discussion on the rationale and the underlying biological mechanisms of MCM the reader is referred to previous work 9,19. Furthermore, the use of functional PET (fPET) imaging allows to investigate metabolic demands at rest and during task execution in a single measurement 19. Using MCM, we have recently demonstrated that first-time performance of a cognitive task strengthened the interplay of functional connectivity and glucose metabolism, specifically for feedforward connections to higher-order cognitive processing areas 19. These data indicated that most of the metabolic cost originates from the switch from the resting-state to the task-related network interactions, which extended previous work showing that acute task performance itself leads to pronounced functional network reorganizations 20 and increases in metabolic demands 21. However, the corresponding effects induced by prolonged training of a task remain unknown. In the current work we aimed to address the open questions outlined above, namely i) the interaction of training-induced changes between functional connectivity and glucose metabolism, ii) the neurobiological contributions of resting-state and task-specific effects that drive improvements in cognitive performance and iii) the hierarchical interplay across brain regions involved in the learning process. We investigated learning-induced neuronal adaptations in functional brain networks and the underlying energy demands with MCM before and after healthy volunteers practiced a challenging visuo-spatial task for 4 weeks. Proceeding from the convergence of functional connectivity and glucose metabolism already during the first execution of a novel task 19, we expect that after continuous skill learning this task-specific association is consolidated also at resting-state.

The descriptions of the introduction and discussion are generally scattered and too much, making it difficult to understand the main research results and related issues. It seems that the main points need to be sorted out. Response: We thank the reviewer for the valuable remark that the main points of the introduction and the discussion need to be emphasized to improve comprehensibility and clarity. We therefore accentuated the main points, shortened explanations and rephrased the transitions between sections. Furthermore, we aimed to improve readability by moving paragraphs and clearly separating our interpretation of the underlying neurobiological mechanisms from the other sections. Again, text that was removed is shown in the response letter for convenience. Please see our response to issue 1.3. for detailed changes of the introduction.
Discussion, page 11, lines 238-242: We employed brain network analyses of simultaneous PET/MR imaging to investigate learning-induced neuroplastic changes in functional network reorganization and the underlying metabolic demands that relate to cognitive performance improvements. MCM served as a suitable multimodal approach to assess task-specific and restingstate adaptations, which earlier have been examined independently. Discussion, pages 11-12, lines 263-267: As part of the SN, the right anterior insula mediates switching between task-irrelevant networks and the activation of task-specific networks that convey externally oriented attention [27][28][29][30] Discussion, page 13, lines 296-299: We observed adaptations of MCM at both resting-state and task execution. Although these effects diverged they seem to constitute complementary adaptations of the learning process. We thereby provide a unified interpretation of skill learning as an advancement of neuronal representations of the task. The hierarchical interaction across brain regions as assessed with MCM revealed divergent yet complementary effects of the learning process at resting-state and during task execution. This approach further enabled us to provide a unified interpretation of skill learning as an advancement of regionally specific neuronal representations of the task. Discussion, page 13, lines 306-308: The initial learning stage comprises interactions with unknown task demands and high perceptional load, thus requiring adaptation of attention due to limited processing resources 49,50 . Response: Thank you for the attentive remarks. We have adapted the text accordingly, so that "learning group" now consistently reads "training group", and "passive control group", as "control group" only. Moreover, we have pointed out that M1 and M2 refer to first and second measurement and scanned the manuscript once again for typos. Methods, page 18, line 411: In this longitudinal study participants were randomly assigned to the training or passive control group […]. Reviewer #2: Remarks to the Author: The study is a logical next step for this research team in the investigation of relationship between FDG PET and BOLD MRI during resting and activated state of brain functioning using their original approach. The basic methodology relies on performing quantitative (with arterial sampling for input function) FDG PET scan using continuous infusion of 18F-FDG in parallel with BOLD fMRI on PET/MR scan. Initial ~8 minutes of scanning are done at resting state (eyes opened fixed at crosshair) followed by 4 periods (6 min each, 2 easy and 2 hard) of task (playing adopted version of video game "Tetris") intermittently with periods of resting state (5 min length each) following tasks. Thus, the activation paradigm is more typical for fMRI studies, and it is applied to FDG scan assuming (based on the previous findings) that continuous inhalation technique allows careful separation of resting and activated states within one FDG PET imaging session. The main strategy here was combining MRI-derived connectivity information with FDG-derived metabolic data to create "metabolic connectivity mapping". The focus of this project was to evaluate the effects of learning (4 months of active playing adopted Tetris game in active group and not doing it in the control group) on the resting state and activated (with the same Tetris game) metabolic connectivity. 
As the result of training, task performance improved (especially on hard task) and even stayed improved 4 weeks after the end of training. Moreover, training improved performance of several cognitive tests involving mental rotation and visual search, but not spatial planning performance. Brain regions with increased CMRGlc, CBF (ASL) and BOLD during task served as regions for the assessment of learning-induced changes in metabolic connectivity. With occipital cortex as the target regions (the only one which was increasing in all modalities), learning-induced changes in metabolic connectivity were observed in the dorsal anterior cingulate cortex and insula (part of salience network). Post hoc analyses demonstrated that metabolic connectivity connections from dorsal anterior cingulate and insula toward occipital cortex increased at resting state, but decreased during execution of hard level of Tetris game in learning group compared to controls. These differences in metabolic connectivity values between the rest and hard task correlated with the Tetris score. Additional simulation analysis suggested that learning specific changes in metabolic connectivity in resting state were dependent on CMRGlc (and not BOLD), whereas in the hard task condition, they were driven by BOLD and not CMRGlc. Overall, the approaches and findings are of interest to the audience of the Communication Biology. The manuscript is well written, thoughtful and provide detailed information on the experiments and analyses, and exhaustive discussion. Findings are supported by proper illustrations and cited references from existing literature. Response: We thank the reviewer for the thorough evaluation of our manuscript and the encouraging feedback on our work. 2.1. Four hours of fasting before FDG scan is rather short (typically, these are 5-6 hours). It will be helpful to provide blood glucose levels at the beginning of FDG scan, and confirm that they were not substantially different at the end of FDG infusion. Response: We agree with the reviewer that further details on blood glucose levels are required. The minimum fasting time of 4 hours was actually set to the arrival at the university hospital. Notably, until the application of the radioligand another 1.5 h passed by, which yields a minimum fasting time of 5.5 h. We have provided blood glucose levels before the PET/MRI scan in the manuscript, however, we did not obtain these values after the PET/MRI. Usually, this is not acquired in PET experiments, since the radioligand should not change blood glucose levels due to different metabolism and the low amount of injected radioligand (µg range). "…we combined the imaging parameters of glucose metabolism (CMRGlu), blood flow (CBF) and the BOLD signal for a functional delineation of brain regions with increased metabolic demands during task performance". Why ASL (CBF) was used in addition to CMRGlc and BOLD? The whole story is about the relationship between CMRGlc and BOLD. Moreover, ASL was not involved in any of further analyses. Will the results be different if ASL is not used? Response: We thank the reviewer for the interesting remark. The rationale to use all three imaging modalities was to ensure comparison to our previous work and to maximize the specificity regarding the functional definition of the target region. The BOLD signal represents a complex composite from several sources. On the other hand, ASL and fPET measurements provide more straightforward estimates of CBF and CMRGlu, respectively. 
The combination of BOLD, CBF and CMRGlu thus seems to provide the most robust functional delineation of target regions. We acknowledge that this approach may be conservative. Still, the trimodal combination ensures to include only voxels in the target region which are truly activated from as many neurobiological measurements as possibly obtained in the current study. To provide a full picture of our results we have repeated the calculations when defining the target region without CBF, as suggested by the reviewer. As expected from supplementary figure S2, the target region of the occipital cortex increased in size (by 29%), whereas the FEF and IPS were rather stable (increase by 4% and 0.2%, respectively). Importantly, this increased occipital target region did not change the main findings, i.e., interaction effects and post-hoc differences remained stable for the insula and dACC. This is now included in the manuscript and supplementary figure S3. Methods, page 25, lines 602-605: The three different indices of task-specific metabolic demands (CMRGlu, CBF, BOLD) were combined to obtain a robust estimate of regions involved in task processing. This approach was chosen to enable comparison to our previous work 19 and to maximize the specificity of the MCM target regions. Still, we also compute the main MCM training effects for the combination of CMRGlu and BOLD only (see supplementary figure S3). Results, pages 9-10, lines 208-210: Moreover, the results remained stable when defining the target region only from task-specific CMRGlu and BOLD changes (i.e., without CBF, supplementary figure S3). Intensive playing of video game (Tetris) improved performance of several mental tests including mental rotation and visual search , the effect stayed for 4 months after the end of training, and general resting and activated state estimates of brain metabolic activity and connectivity were modified. "Tetris" is a challenging game but not the only one, which some people play on the regular basis. Now younger adults spend a lot of time playing video games and some of these games are quite effortful. Did you check whether participants from control group do not play other mentally effortful games on the regular basis, which could change some psychometric, metabolic and functional MRI parameters? Response: We agree with the reviewer that Tetris® is one of many video games that young participants might play regularly. Being well aware of this critical point, we instructed participants not to play and especially not to learn any (new) video games while participating in the study. Furthermore, with the exclusion criteria, we also controlled for similar video games mainly involving visuospatial skills like "Candy Crush" and others within the last three years. Methods, page 21, lines 504-512: Exclusion criteria were current and previous somatic, neurological or psychiatric disorders (12 months), substance abuse or psychopharmacological medication (6 months), current pregnancy or breast feeding, previous study-related radiation exposure (10 years), body weight of more than 100 kg for reasons of radiation protection, MRI contraindications and previous experience with the video game Tetris® within the last 3 years. Experience with and regular playing of similar video games, specifically games primarily involving visuospatial skills like "Candy Crush," was another explicit exclusion criterium. 
Furthermore, participants of both groups were instructed not to play and especially not to learn any (new) video games while participating in the study. The control group did not improve on cognitive tests evaluated including mental rotation and visual search, however they could have other effects which may be important for brain functioning and metabolism. Response: We acknowledge the possibility that other effects might have evolved in the control group between the two PET/MRI scans. As Tetris® mainly involves visuospatial skills, we aimed to specifically test for the involved skill domains like mental rotation and visual search. Testing for other effects would have exceeded the scope of our study. We, therefore, included a corresponding remark in our limitations that the investigated effects were confined to the most relevant cognitive domains. Limitations, outlook and conclusions, page 16, lines 388-391: Although we have carefully assessed the cognitive domains relevant for the Tetris® task, we acknowledge the possibility that further aspects may have an effect when investigating brain function longitudinally. However, testing these would have exceeded the scope of this work. 2.5. Moreover, one of the important next goals of this research is the investigation of effects of aging and neurodegenerative disorders. However, older adults (and especially symptomatic individuals) are usually not playing those "hard level" games. Please suggest on how this potential cohort effect could be controlled. Response: We can follow the reviewers concerns that older individuals or patients suffering e.g., from neurodegenerative disorders may experience difficulties to perform cognitively challenging tasks such as Tetris®. However, easier tasks that involve similar cognitive skills are already available in the clinical routine for cognitive training of the specific patient cohorts. Often, even only one button is needed to complete these tasks. Regarding the use of more complex tasks like Tetris®, we would also consider it feasible to assess individual cognitive abilities before the scan and adapt the task according to the respective individual abilities. This would also allow comparison between the aged and younger individuals. Limitations, outlook and conclusions, page 16, lines 397-401: Disentangling the metabolic and functional requirements for neuroplasticity might prove beneficial to differentiate between different forms of neurodegenerative diseases and evaluate the severity of tissue damage in traumatic brain injury, feasibly with the use of cohort-specific or individually adapted tasks.
5,569.2
2021-11-12T00:00:00.000
[ "Psychology", "Biology", "Computer Science" ]
Analysis of Issues in the Informationization Process of Medium-sized and Small Enterprises and Their Countermeasures

The rapid development of information technology has made informationization a necessary means for medium-sized and small enterprises to improve their competitive advantages. After several years of effort, the establishment of informationization in medium-sized and small enterprises has achieved some results. However, because these enterprises started later, and because of their own disadvantages, the overall level of informationization application is still low. Through an analysis of the significance of implementing informationization in medium-sized and small enterprises, this article summarizes the primary issues existing in their informationization process and the causes of these issues, and on that basis proposes operable suggestions. The conclusions of this article are of considerable significance to the practice of informationization in medium-sized and small enterprises in China.

Introduction
Medium-sized and small enterprises are an important component of the national economy. With the development of computer, Internet and communication technology, informationization has become a necessary means for medium-sized and small enterprises to obtain and maintain their competitive advantages, and it is an important direction for the development of enterprises. However, at present, informationization in medium-sized and small enterprises encounters great hindrances in terms of capital, personnel, technology and management foundation, which causes the process of informationization to lag behind. In this article, the author makes an in-depth analysis of the causes of this phenomenon and puts forward corresponding countermeasures and suggestions.

Significance of implementation of informationization in medium-sized and small enterprises
Informationization means that enterprises spread and apply modern information technology in all aspects of production and operation, explore and employ internal and external information resources to the full, set up an organizational mode corresponding to these resources, and thereby improve the efficiency, level and operational benefit of production management, decision making and related processes, and strengthen their competitiveness. Implementation of informationization mainly has the following significance for the development of medium-sized and small enterprises.

Informationization enables costs of medium-sized and small enterprises to be reduced and efficiency of operation to be increased.
While informationization improves the operational efficiency of enterprises, it also reduces production, transaction and management costs by accelerating the flow of information and improving the utilization rate of information resources. Informationization enables the basic management of enterprises to be completed conveniently online and realizes a paperless office; it facilitates the simplification of organizational structures and business flows; it allows enterprises to obtain their internal inventory information in a timely, convenient and accurate way via the Internet, compress stocks and reduce the working capital tied up in inventory; and it helps enterprises acquire market information through the Internet so as to promote their own products and services and step over regional and spatial limitations in transacting with customers and suppliers.

Informationization may improve management in medium-sized and small enterprises.
The essence of informationization is to improve the competitive capacity of enterprises, and it requires enterprises to establish a set of operation and management systems in accordance with informationization. The application of information technology involves the economic activities of the whole enterprise and combines information technology with the management science of enterprises by means of the electronization of data and workflows. Reintegrating the current management flow of enterprises and improving their management systems can more effectively change the current state of incomplete management in the enterprises themselves.

2.3 Informationization may reinforce the rapid reaction capacity of medium-sized and small enterprises to the market.
Faced with the endless emergence of new products in the market and the individualization and diversification of customer demands, competition among enterprises becomes increasingly fierce. Informationization may break through temporal and spatial limitations, shorten the distance between enterprises and consumers and improve the ability of enterprises to react rapidly to market demands. Informationization can enable enterprises to collect effective market information in a timely, accurate and complete way, and help decision makers to make correct decisions on the basis of a scientific analysis of this information. At the same time, through the circulation of information within enterprises, these decisions can be rapidly decomposed and put into each aspect of the production and operation of the enterprises.

Informationization may help enterprises expand the room for subsistence and development
Under the current market circumstances, as a result of the limitations of their own technical resources and their dependence on external resources, medium-sized and small enterprises have to build flexible and extensive cooperative partnerships with external organizations. Through the Internet and computers, enterprises can conveniently communicate with external organizations, seek cooperative partners and maintain their relations with them. Informationization can enable enterprises to better participate in various forms of cooperation and operation, such as strategic alliances, Internet-based innovation and supply chain cooperation, resort to the cooperative system of social division of labor based on specialization, and fully employ internal and external resources, bringing opportunities for the development of new products in the enterprises.
The status quo of development of informationization in medium-sized and small enterprises
After several years of effort, informationization in China has made progress and attained achievements. Moreover, when international giants such as SAP and ORACLE declared that they would fully enter the informationization market of medium-sized and small enterprises in China, informationization of these enterprises entered a stage of rapid development. For the time being, a large majority of medium-sized and small enterprises have started to explore informationization, and their employment of information technology and equipment has increased greatly. Medium-sized and small enterprises have improved their recognition of informationization, and informationization in these enterprises has borne its initial fruit; it has brought, or is bringing, positive influences to enterprises. However, since informationization started later in China, together with the disadvantages of the enterprises themselves, the overall level of informationization application is still low. At present, the establishment of informationization in many medium-sized and small enterprises is still restricted to propaganda of the corporate image and information inquiry, and the application of computers still remains at the stages of office automation and labor and personnel management, such as word processing and financial management. Not only is internal network interworking not fulfilled, but computer assistance in business processes is seldom used. According to surveys, transactions conducted through the Internet in medium-sized and small enterprises account for less than 20% of the total, and fewer than 10% of enterprises have completely implemented computer-aided design systems, office automation systems and information management systems. Merely 2.9% of enterprises have employed ERP, the core system of informationization. Application of informationization involving collaborative commerce is still in a burgeoning stage, with a utilization ratio of less than 1%. What is more, almost half of the enterprises have not been equipped with computers at all. Compared with first-class enterprises in the world, the level of informationization in medium-sized and small enterprises in China lags behind by at least 10 to 20 years.
Primary factors that hinder informationization of medium-sized and small enterprises

The serious lag in informationization among medium-sized and small enterprises in China has many causes, stemming not only from the enterprises themselves but also from the market and the government. The main aspects are as follows.

Outdated ideas about informationization

Attitudes toward informationization determine the progress and level of its implementation. Informationization is a long-term, large-scale project that requires continuous investment; its risk is relatively high, benefits may not appear for some time, and the effort may fail for various reasons. Some entrepreneurs have no clear picture of the challenges and opportunities of an information society, lack a deep understanding of the significance of informationization, worry about its prospects, and fear that the payback period and returns on a costly IT system will be limited; some enterprises therefore take a wait-and-see attitude toward whether to pursue informationization at all.

Lack of systematic planning and blind imitation

Some enterprises proceed blindly, without clear informationization targets, which leads to widespread imitation. Some misunderstand the essence of informationization and equate it simply with buying computers, on which basis they set up an information management system or a corporate website. The tendency to emphasize equipment while ignoring information is severe: many enterprises invest heavily in basic network facilities while investing far too little in basic information databases and application systems, which limits the value of the hardware. In many medium-sized and small enterprises, informationization remains a formality; these enterprises do not choose systems suited to their own development based on a comprehensive, systematic analysis of their operating characteristics, strengths, and weaknesses, and they lack a feasible long-term plan, which wastes manpower, material, and financial resources and can even impose serious constraints and risks on their development.
Inadequate capital for informationization

Building an informationization system is a large undertaking with a long cycle and heavy investment. Capital is required from the purchase of hardware such as desktop and notebook computers, through network construction, to the purchase of large-scale management software. Beyond the initial purchase, subsequent operation and maintenance is itself a long-term project requiring continuous investment of manpower, material, and money. According to statistics, the annual maintenance expense of an enterprise information system amounts to 10% to 20% of the total construction cost of the system; a rough numerical illustration of this cost structure appears below. Medium-sized and small enterprises have low survival rates, low creditworthiness, and few assets to pledge, so financing from commercial banks is extremely difficult to obtain, and informationization spending relies mainly on internally accumulated funds. Because of their small scale and high operating risk, their earnings are limited, cash flow is generally tight, and equity funds are insufficient, so the capital they can devote to network infrastructure, system development, and maintenance is constrained. As a result, some medium-sized and small enterprises shrink back at the prospect of informationization, and even those that complete the initial investment may see it wasted when follow-up funding runs short.

Lack of technical talent and weak technical strength

Enterprise informationization is a complex, systematic project with high technical content. It calls for interdisciplinary talent proficient both in corporate management and in information technology to keep the information system running well. Professional IT personnel also need room to learn, exchange ideas, and apply their skills. At present, constrained by funds and management level, medium-sized and small enterprises lack effective mechanisms for retaining and training talent, scientific incentive mechanisms, and rational performance evaluation systems. On the one hand, it is difficult for them to attract high-level system and network administrators; on the other, even when such interdisciplinary talent is trained in-house, it is difficult to retain. The shortage of technical talent and weak technical capability lead to low efficiency in applying information systems and prevent comprehensive, in-depth use of information resources.
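The following sketch is not from the article; it is a rough, hypothetical illustration of the cost structure described above, using the cited figure that annual maintenance runs 10% to 20% of construction cost. The initial construction cost and time horizon are assumed placeholders.

```python
# Illustrative sketch (not from the article): estimate the multi-year cost of an
# enterprise informationization project, assuming flat annual maintenance of
# 10%-20% of the construction cost, as the article states.

def total_cost_of_ownership(construction_cost: float,
                            years: int,
                            maintenance_rate: float) -> float:
    """Construction cost plus flat annual maintenance over the given horizon."""
    return construction_cost * (1 + maintenance_rate * years)

if __name__ == "__main__":
    construction = 500_000.0      # hypothetical initial build-out cost (any currency)
    for rate in (0.10, 0.20):     # the 10%-20% maintenance range cited in the text
        tco = total_cost_of_ownership(construction, years=5, maintenance_rate=rate)
        print(f"maintenance {rate:.0%}: 5-year cost = {tco:,.0f}")
```

Under these assumptions, maintenance alone adds 50% to 100% of the original construction cost over five years, which is why follow-up funding matters as much as the initial purchase.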
Irregular internal processes

Enterprise informationization is not merely the application of information technology; it involves business and management processes, organizational structure, and management systems, and it requires that basic management rest on standardized, routinized procedures. Many medium-sized and small enterprises, however, evolved from family businesses, are rooted in social relationships, and are plainly managed by "rule of man". They lack scientific planning and systems relevant to informationization, so management is highly arbitrary, and their management systems, methods, and techniques remain at a traditional, less advanced stage that does not meet the requirements of informationization. As a result, informationization cannot be integrated with the enterprise's actual management mechanisms and is difficult to accomplish.

Unsound social supporting system

At present, many social intermediary services and supporting institutions in China, such as modern credit services and logistics distribution and delivery, are unsound and offer low service levels, and the economic order is not well regulated. All this makes it difficult for enterprises to sell, purchase, and settle payments online. In most enterprises, informationization is confined to publishing, collecting, and exchanging information, while the formal signing of orders and contracts, payment, and logistics delivery cannot be completed online. The public support environment for informationization lags behind the construction and development of application systems, and the databases of the relevant industries and departments are incomplete and outdated, which makes it difficult to push the informationization of medium-sized and small enterprises to a higher level.

Suggestions for informationization in medium-sized and small enterprises

In view of the issues above, informationization in medium-sized and small enterprises should proceed from the following aspects.

Update ideas, with participation by all employees

Eliminating the conceptual and human obstacles encountered in informationization will foster an atmosphere of, and a consensus on, active participation and will promote the rational and orderly development of the effort. Enterprises should strengthen the study of relevant knowledge, change their ideas and concepts, and correctly recognize the necessity and urgency of implementing informationization. This change of mindset must occur not only at the executive level but also among professional technical personnel and ordinary employees.
Clarify the target and plan rationally

The target of informationization is to improve the level of operation. Enterprises should plan their informationization according to their own needs and capacity and their long-term vision and goals. They should start by resolving their most prominent problems, analyze their current situation systematically, and draw up an overall plan that combines strategic vision with operability and fits their actual practice. During implementation they should act within their means, proceed in a proper sequence, and implement step by step, so that the pace and ordering of each system stay coordinated with production and operation. They must avoid buying large quantities of expensive software and hardware in one go.

Invest rationally and broaden financing channels

Enterprises must put their limited funds to the most appropriate uses, deciding clearly what to do and what not to do, and they may broaden their financing channels through means such as financial leasing. The use of existing funds must be based on rational planning: enterprises can purchase what they need as the need arises and invest in tranches. They may also develop new technologies through cooperation with others to introduce new informationization solutions, which reduces excessive spending and lets them explore approaches suited to their own circumstances. For instance, renting services from an ASP (application service provider) requires neither investment in infrastructure nor the development and maintenance of application software.

Strengthen the training of employees

People are the most active factor in an enterprise, and the quality of its personnel determines how effective any tool can be and how much efficiency it delivers. Medium-sized and small enterprises should establish sound mechanisms for recruiting and training informationization talent. In the process of informationization, training is a basic task that organically links "people" with "business processes". Through methods such as on-the-job and intensive training, enterprises should organize planned study and skill training in the basics of informationization for their employees, so as to raise the informationization competence and innovative awareness of the entire workforce.
Strengthen basic management and standardize business processes

Informationization should be combined with the reinforcement of basic management and built on solid management of information resources. Every department should strive for accurate, complete, objective, and timely data in each aspect of production and operation, providing a firm foundation for implementation. Along with informationization, enterprises should introduce modern management ideas, optimize their organizational structure, standardize the business processes behind management information, and vigorously pursue institutional, technological, and management innovation.

Improve the social supporting system

The responsible government departments should pool strengths from all sides to build a supporting system for enterprise informationization: actively develop modern professional intermediary services, establish a sound "expert service" system, vigorously foster newly emerging intermediary services, and intensify the transformation of traditional ones, so that enterprise informationization can be promoted better and faster.

Conclusion

This article has argued that informationization can reduce the costs of medium-sized and small enterprises, improve their operational efficiency and management, strengthen their ability to respond rapidly to the market, and help them expand their room for subsistence and development, all of which is of great practical significance. In the process of informationization, however, outdated ideas, lack of systematic planning, blind imitation, inadequate capital, shortages of technical personnel, weak technical strength, irregular internal processes, and an unsound social supporting system constrain its effective development. In response to these issues, the article has proposed corresponding countermeasures and suggestions.
4,347
2010-06-17T00:00:00.000
[ "Business", "Computer Science" ]
Pushing Photons

UC Santa Barbara researchers continue to push the boundaries of LED design a little further with a new method that could pave the way toward more efficient and versatile LED display and lighting technology. In a paper published in Nature Photonics, UCSB electrical and computer engineering professor Jonathan Schuller and collaborators describe this new approach, which could allow a wide variety of LED devices, from virtual reality headsets to automotive lighting, to become more sophisticated and sleeker at the same time.

"What we showed is a new kind of photonic architecture that not only allows you to extract more photons, but also to direct them where you want," said Schuller. This improved performance, he explained, is achieved without the external packaging components that are often used to manipulate the light emitted by LEDs.

Light in LEDs is generated in the semiconductor material when excited, negatively charged electrons traveling along the semiconductor's crystal lattice meet positively charged holes (an absence of electrons) and transition to a lower energy state, releasing a photon along the way. Over the course of their measurements, the researchers found that a significant number of these photons were being generated but were not making it out of the LED.

"We realized that if you looked at the angular distribution of the emitted photon before patterning, it tended to peak at a certain direction that would normally be trapped within the LED structure," Schuller said. "And so we realized that you could design around that normally trapped light using traditional metasurface concepts."

The design they settled upon consists of an array of 1.45-micrometer-long gallium nitride (GaN) nanorods on a sapphire substrate. Quantum wells of indium gallium nitride (InGaN) are embedded in the nanorods to confine electrons and holes and thus emit light. In addition to allowing more light to leave the semiconductor structure, the design polarizes the light, which co-lead author Prasad Iyer said "is critical for a lot of applications."

Nanoscale Antennae

The idea for the project came to Iyer a couple of years ago as he was completing his doctorate in Schuller's lab, where the research is focused on photonics technology and optical phenomena at subwavelength scales. Metasurfaces, engineered surfaces with nanoscale features that interact with light, were the focus of his research.

"A metasurface is essentially a subwavelength array of antennas," said Iyer, who previously was researching how to steer laser beams with metasurfaces. He understood that typical metasurfaces rely on the highly directional properties of the incoming laser beam to produce a highly directed outgoing beam. LEDs, on the other hand, emit spontaneous light, as opposed to the laser's stimulated, coherent light.
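The article does not report an emission wavelength, but the energy released when an electron and hole recombine in a quantum well sets the photon's wavelength through the standard relation E = hc/λ. The sketch below is a minimal illustration, not the paper's calculation; the 2.7 eV transition energy is an assumed, typical value for a blue InGaN emitter.

```python
# Minimal sketch (not from the paper): relate a quantum-well transition energy
# to the wavelength of the emitted photon via E = h*c / lambda.
# The 2.7 eV value is an assumed, typical InGaN blue-emitter energy.

H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
EV_TO_J = 1.602176634e-19

def emission_wavelength_nm(transition_energy_ev: float) -> float:
    """Wavelength (nm) of the photon released when an electron-hole pair
    recombines across the given transition energy."""
    energy_j = transition_energy_ev * EV_TO_J
    return H * C / energy_j * 1e9

if __name__ == "__main__":
    e_ev = 2.7  # hypothetical InGaN quantum-well transition energy
    print(f"{e_ev} eV -> ~{emission_wavelength_nm(e_ev):.0f} nm (blue light)")
```

With this assumed energy the emitted photon comes out near 460 nm, in the blue; the point of the nanorod metasurface is then to get such photons out of the crystal and headed in one direction rather than trapped inside.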
"Spontaneous emission samples all the possible ways the photon is allowed to go," Schuller explained, so the light appears as a spray of photons traveling in all possible directions. The question was whether, through careful nanoscale design and fabrication of the semiconductor surface, they could herd the generated photons in a desired direction. "People have done patterning of LEDs previously," Iyer said, but those efforts invariably split the light into multiple directions, with low efficiency. "Nobody had engineered a way to control the emission of light from an LED into a single direction."

Right Place, Right Time

It was a puzzle that would not have found a solution, Iyer said, without the help of a team of expert collaborators. GaN is exceptionally difficult to work with and requires specialized processes to produce high-quality crystals; only a few places in the world have the expertise to fabricate the material to such an exacting design.

Fortunately, UC Santa Barbara, home to the Solid State Lighting and Energy Electronics Center (SSLEEC), is one of those places. With the expertise at SSLEEC and the campus's world-class nanofabrication facility, the researchers designed and patterned the semiconductor surface to adapt the metasurface concept for spontaneous light emission.

"We were very fortunate to collaborate with the world experts in making these things," Schuller said.

About UC Santa Barbara

The University of California, Santa Barbara is a leading research institution that also provides a comprehensive liberal arts learning experience. Our academic community of faculty, students, and staff is characterized by a culture of interdisciplinary collaboration that is responsive to the needs of our multicultural and global society. All of this takes place within a living and learning environment like no other, as we draw inspiration from the beauty and resources of our extraordinary location at the edge of the Pacific Ocean.
1,117.2
2011-05-01T00:00:00.000
[ "Physics" ]