Evidence of extensive non-allelic gene conversion among LTR elements in the human genome

Long Terminal Repeats (LTRs) are nearly identical DNA sequences found at either end of Human Endogenous Retroviruses (HERVs). The high sequence similarity that exists among different LTRs suggests they could be substrates for ectopic gene conversion events. To understand the extent to which gene conversion occurs and to gain new insights into the evolutionary history of these elements in humans, we performed an intra-species phylogenetic study of 52 LTRs on different unrelated Y chromosomes. From this analysis, we obtained direct evidence demonstrating the occurrence of ectopic gene conversion in several LTRs, with donor sequences located on both sex chromosomes and autosomes. We also found that some of these elements are characterized by an extremely high density of polymorphisms, showing one of the highest nucleotide diversities in the human genome, as well as a complex patchwork of sequences derived from different LTRs. Finally, we highlighted the limits of current short-read NGS studies in the analysis of the genetic diversity of LTRs in the human genome. In conclusion, our comparative re-sequencing analysis revealed that ectopic gene conversion is a common event in the evolution of LTR elements, suggesting complex genetic links among LTRs from different chromosomes.

Human Endogenous Retroviruses (HERVs) are inherited DNA proviruses arising from retroviral infections of germ-line cells and subsequent integration into the host genome 1,2. After integration, the provirus may experience numerous amplifications. The human genome is estimated to contain thousands of copies of these interspersed repetitive elements. In fact, they make up a significant fraction (~8%) of its sequence content [3][4][5]. Many families of HERVs exhibit high transcriptional activity in different human tissues, and both beneficial [6][7][8] and detrimental effects [8][9][10] have been described. Similar to other endogenous proviruses, they consist of a 5-10 kb sequence which codes for viral proteins, flanked by two Long Terminal Repeats (LTRs) (0.3-1.6 kb in length) that contain regulatory elements for viral protein expression 11. Although the majority of HERV genes are highly defective due to several inactivating mutations in their coding sequences (ranging from single nucleotide changes to large deletions), LTR elements still retain their regulatory activity, participating in the regulation of cell-type-specific gene expression in mammals [12][13][14][15]. Furthermore, the LTRs flanking a HERV might recombine with each other, leading to the excision of the coding region and thus becoming solitary elements ("Solo LTRs") 16,17. Solo LTRs are quite common in the human genome and play important roles in genome evolution 18,19. In general, it has been hypothesized that LTRs may be important contributors to the genome plasticity of the host species due to their capability to undergo ectopic recombination events such as unequal crossing over 19. Recent data suggested that ectopic (non-allelic) gene conversion might have played a role in the evolution of flanking LTRs in humans 20.
Gene conversion mediates the transfer of genetic information from a "donor" sequence to a highly similar "acceptor" 21, and this can have two major effects: on one hand, it can act as a homogenizing force, increasing similarity between paralogous sequences, while on the other hand, it can generate an excess of genetic diversity among allelic copies of the "acceptor" sequence. Although episodes of non-allelic gene conversion between flanking LTRs have been previously hypothesized through an inter-species comparative approach, no direct evidence of this process has so far been reported in humans.

Results

Sequence diversity in LTR elements. To determine whether different LTRs (both "solo" elements and HERV-flanking ones) have been engaged in ectopic gene conversion events during recent human evolution, we studied the genetic diversity of the human Y chromosome in 52 LTRs from 14 different subfamilies (Supplementary Table S1). Overall, 61 kb of the MSY were re-sequenced, including LTR elements (about 52 kb) and their flanking regions (about 9 kb), in 16 Y chromosomes from different haplogroups of the Y phylogeny. We found a total of 131 SNPs and 3 indels (V350.2, V418 and V435) of 14, 19 and 120 bp, respectively (Supplementary Table S2). These variants do not include 2 nucleotide positions (28,828,795 in LTR 41 and 24,394,612 in LTR 51) that were invariant in the entire sample set but different from the reference sequence, a finding that can be interpreted as a consequence of reference-specific mutations (in both cases a G to A transition). The nucleotide diversity for LTR elements was estimated as π = 3.6 ± 0.4 × 10⁻⁴, which is significantly higher than both that observed for their surrounding regions (π = 2.2 ± 0.4 × 10⁻⁴) and that previously observed for portions of the X-degenerate region of the MSY (π = 1.5 ± 0.3 × 10⁻⁴) 24. This finding suggests that LTR elements might have had a peculiar evolutionary history, which has led to an increase in their genetic diversity. The 134 polymorphisms identified here did not appear to be equally distributed, with some LTRs showing an extremely high density of SNPs, while others (12/52) were found to be invariant. More specifically, about 26% (35 out of 134) of the variants were found in only two elements, LTR 2 and LTR 24 (27 and 8 polymorphisms, respectively). In order to evaluate the relative contribution of each sequenced LTR to the diversity of the global sample, we obtained estimates of the π value by considering all the elements separately (Fig. 1).

[Figure 1. Nucleotide diversity of the LTR elements analysed in the present study. Only LTRs with π > 0 are shown. The dotted line represents the mean value of π. The π value of the ARSDP was calculated from data reported in Trombetta et al. 24.]

The highest nucleotide diversity was observed for LTR 2 (π = 2.2 ± 0.5 × 10⁻³), a value that is about an order of magnitude higher than the average of the X-transposed region and comparable to the estimates obtained for other regions of the Y chromosome in which ectopic gene conversion has already been reported 22,24. In particular, three elements (LTR 2, LTR 12 and LTR 24) showed π values comparable to the most diverse ectopic gene conversion hotspot so far identified in the X-degenerate portion of the MSY (the ARSDP hotspot) 24 (Fig. 1). The high nucleotide diversity observed here among allelic copies could be the consequence of a high mutation rate due to ectopic gene conversion.

Occurrence of multiple, phylogenetically equivalent and closely spaced SNPs.
If gene conversion is operating between LTRs, we would expect to observe a higher number of co-converted sites organized as clustered SNPs (i.e. multiple phylogenetically equivalent SNPs at closely spaced positions) within LTR elements. As expected, the overall pattern of nucleotide diversity was dominated by "conventional" SNPs (i.e. solitary variants on different branches of the Y tree). Nevertheless, we also identified a total of 25 SNPs, grouped into 9 clusters (with each cluster made up of 2 or more mutations), occurring in the same phylogenetic context and at closely spaced positions (less than 50 bp) (Fig. 2). These clusters of SNPs may arise through two possible mechanisms: multiple random mutations, or a single ectopic gene conversion event in which a stretch of DNA is replaced by the sequence of a non-identical paralogous region, thus creating co-converted sites. Assuming a random distribution of the 134 variants identified here, there is a significant excess (P = 2.3 × 10⁻²⁰) of clustered SNPs that have arisen in the same phylogenetic context. Moreover, the proportion of clustered SNPs (25 out of 134, 19%) was significantly higher (P < 1 × 10⁻¹⁶, Fisher's exact test) than that obtained (about 0.8%) from two recent high-coverage NGS studies of the Y chromosome 28,29 (Table 1). The nine clusters of SNPs were restricted to 3 LTRs (LTR 2, LTR 19, LTR 24) covering a total of 4.9 kb (8% of the sequenced region) (Fig. 2). Overall, these findings (a high nucleotide diversity and the presence of clustered SNPs) suggest the involvement of ectopic gene conversion events in shaping the genetic landscape of at least some of the LTR elements analysed here. To further explore the intensity of ectopic gene conversion in shaping LTR sequence diversity, we re-sequenced the two elements that showed both the highest diversity and the highest number of clustered SNPs (LTR 2 and LTR 24) in a wider sample set. Re-sequencing LTR 2 (1550 bp) in 111 unrelated samples, we found 51 variants, about 70% of which (36/51) were organized in clusters (Fig. 3 and Table 2). We found a total of 13 clusters ranging from 2 bp to 85 bp, with a mean length of 23 bp. The longest cluster (cluster 1, 85 bp) was made up of six SNPs found in a single branch of the Y tree (E-V257*) (Fig. 3 and Table 2). Our analysis highlighted that this region was particularly rich in homoplasic variants. Indeed, we found that 14 SNPs (27%) showed evidence of recurrent mutations. The proportion of recurrent events is significantly higher (P < 1 × 10⁻⁴, Fisher's exact test) than that reported in Scozzari et al. (four out of 2386 positions, 0.2%) 30 (Fig. 3). Interestingly, a single cluster (cluster 3), which arose in three different branches of the phylogeny (E-M2, E-V2009 and E-V16), was entirely made up of homoplasic variants. By exploiting the Y phylogeny and taking into account both recurrent and non-recurrent mutations, we counted a total of 85 putative mutational events. Interestingly, the mutational events appeared to be unevenly distributed, with clustered SNPs and recurrent mutations mainly found in the regulatory region (U3) of the element (Fig. 4). The re-sequencing of LTR 24 (1828 bp) in 51 unrelated samples showed a similar although less extreme picture. We found a total of 23 SNPs (Fig. 5 and Supplementary Table S2), about 56% of which (13 out of 23) were in three clusters (Fig. 5 and Table 2). One of these (cluster 14) is composed of eight SNPs and covers a total of 63 bp in the I-M170* haplogroup.
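To make the clustering criterion used above concrete, the sketch below groups variants into clusters whenever consecutive SNPs lie on the same branch of the Y phylogeny and no more than 50 bp apart, which is the definition applied throughout this study. The input format, the tuple layout and the branch labels in the example are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): group phylogenetically
# equivalent SNPs that lie within 50 bp of each other into clusters.
from itertools import groupby

MAX_GAP = 50  # maximum distance (bp) between consecutive SNPs in a cluster

def find_clusters(snps, max_gap=MAX_GAP):
    """snps: iterable of (position, branch) tuples; returns clusters of >= 2 SNPs."""
    clusters = []
    # Consider each branch of the Y tree separately (phylogenetic equivalence).
    ordered = sorted(snps, key=lambda s: (s[1], s[0]))
    for branch, group in groupby(ordered, key=lambda s: s[1]):
        positions = [pos for pos, _ in group]
        current = [positions[0]]
        for pos in positions[1:]:
            if pos - current[-1] <= max_gap:
                current.append(pos)
            else:
                if len(current) >= 2:
                    clusters.append((branch, current))
                current = [pos]
        if len(current) >= 2:
            clusters.append((branch, current))
    return clusters

# Hypothetical example: three nearby SNPs on one branch, one isolated SNP elsewhere.
example = [(1000, "E-V257*"), (1012, "E-V257*"), (1040, "E-V257*"), (5000, "I-M170*")]
print(find_clusters(example))  # -> [('E-V257*', [1000, 1012, 1040])]
```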
About 8.6% (2 out of 23) of the variants were recurrent: one SNP (V460) back-mutated in the E-V257* branch and the other (V454) recurred in three different branches of the phylogeny (A0-V150, A1-M31 and I-M170*). In conclusion, the high number of clustered SNPs observed here cannot be simply explained by random mutations. In principle, the unusual mutational patterns observed here could be explained by double crossovers between paralogs with reciprocal exchange of slightly different segments of DNA. However, due to the short length of the observed sequence changes, the most parsimonious explanation remains ectopic gene conversion 21.

Donor sequences involved in ectopic gene conversion. Assuming that a cluster is due to ectopic gene conversion, it should be possible to identify donor sequences within the genome. Through whole-genome alignments of gene-converted tracts, we identified different donor sequences involved in the observed gene conversion events. In order to minimize the probability of observing a donor sequence by chance, we only analysed clusters of SNPs with length ≥ 3 bp, thus excluding cluster 12, which contains only two contiguous SNPs (V370 and V371). A genomic region was considered a putative "donor" if it showed 100% sequence identity with the derived state of each cluster. For about 67% of the clusters (10 out of 15), we were able to identify a single donor unambiguously, whereas for five clusters, multiple putative donors were identified (Table 2). The minimum observed tract length ranged from 3 bp to 86 bp, while the maximum gene-conversion tract, measured as the distance between the two nearest non-converted Paralogous Sequence Variants (PSVs) flanking the converted site, ranged from 14 to 450 bp, with an average length of 99 bp. In total, at least 14 different LTRs acted as donor sequences in gene conversion events involving LTR 2 and LTR 24 as acceptors. We observed both intra- and inter-chromosomal gene conversion events, and in at least two cases (clusters 4 and 9), the single donor sequence was located on an autosome (Table 2), which suggests that the MSY is able to recombine, in the form of gene conversion, not only with the X chromosome [23][24][25]27,32 but also with autosomes. Interestingly, the wide cluster 14 was generated by intra-LTR conversion, in which the transfer of genetic information occurred between two duplicated regions within the same element (LTR 24). In turn, clusters 15 and 16 were generated by gene conversion between two HERV-flanking elements: LTR 24 (acceptor sequence) and LTR 23 (donor sequence). In total, we found evidence for three kinds of gene conversion (GC), based on the nature of the donor sequence(s): (1) multiple-donor GC (LTR 2), where different LTR elements from the Y chromosome or autosomes acted as donor sequences; (2) single-donor GC between HERV-flanking LTRs (LTR 24); and (3) intra-LTR GC (LTR 24), where ectopic gene conversion occurred between duplicated elements within a single LTR.

Discussion

One of the most remarkable features of the human genome is that about one half of it is composed of interspersed repetitive elements. A portion of these elements are remnants of ancestral retroviruses, called Human Endogenous Retroviruses (HERVs), which originated through successive waves of ancient infection of germ cells and subsequent incorporation into the host's genome. Following integration, the provirus will be parasitically maintained within the genome and will be inherited vertically from parent to offspring in a Mendelian fashion.
The inserted provirus is an identical DNA copy of the RNA viral genome, with the exception that it is flanked by two identical Long Terminal Repeats (LTRs) containing transcriptional regulatory elements such as enhancers and promoters. These repeats play important roles in viral gene expression and, although most proviral genes are inactivated by several mutations, LTRs can preserve their regulatory functions, thereby imposing a regulatory activity upon host cell genes 33,34. Apart from affecting gene expression, LTRs could significantly influence genome evolution. Indeed, they can be involved in ectopic crossing over, generating structural changes within the host genome. A major effect of crossing over between the LTRs flanking a HERV is the excision of the viral genes and the establishment of a single element named a solo-LTR. By comparative inter-species analysis, it has been proposed that gene conversion among LTRs could have played a role in the sequence evolution of these elements [18][19][20], but no direct evidence of this has been reported in humans. In the present study, by exploiting the haploid nature of the MSY, we demonstrated the existence of several non-allelic gene conversion events among LTRs, highlighting that this molecular process can be highly effective in modulating the sequence landscape of these elements on the human Y chromosome. More specifically, using an intra-species phylogenetic study, we defined at least two LTRs (LTR 2 and LTR 24) as two new gene conversion hotspots characterized by an extremely high density of SNPs, as well as a complex patchwork of sequences derived from different LTRs spread across the entire genome. The nucleotide diversity of the LTRs analysed here was found to be higher than that calculated for their surrounding regions. In particular, LTR 2 showed one of the highest nucleotide diversities (π = 2.2 ± 0.5 × 10⁻³) in the human genome. Excluding the HLA locus 35, few genomic regions have been reported to show similar diversity values. Examples include the autosomal LHB/CGB loci (shaped by ectopic gene conversion) 26, the X-to-Y gene conversion hotspots found within the MSY 24 and a few other regions where gene conversion has been previously identified 22,36. The high values of diversity observed here provide a clear picture of the remarkable ability of gene conversion to increase diversity among allelic sequences that act as acceptor elements. Our analysis was mainly based on the identification of clusters of SNPs (i.e. groups of two or more SNPs occurring in close proximity and on the same branch of the Y phylogeny, thus indicating that they probably arose at the same time as a single event). Regarding the Y chromosome phylogeny, assuming no gene conversion and random mutations, one would expect different SNPs to be both randomly distributed along the Y chromosome tree and physically distant from each other. This is the situation observed in a recent high-coverage NGS study of the Y chromosome 28, where the analysis of several Mb in hundreds of Y chromosomes led to the identification of a very low frequency (0.8%) of clustered SNPs (Table 1). The opposite was observed for two of the LTR elements analysed here (LTR 2 and LTR 24), where 70% and 56% of SNPs, respectively, were found in clusters. The most parsimonious explanation for this observation is that recent gene conversion events have played a role in shaping the genetic diversity of these elements.
Indeed, if there are single nucleotide differences (physically close to one another) between the "donor" and the "acceptor" sequence, the main effect of the replacement of a stretch of DNA by a paralogous sequence (gene conversion) is to simultaneously create new polymorphisms that are close to each other and in the same phylogenetic context (clusters of SNPs) [23][24][25]. The uneven distribution of clusters among different elements suggests that the LTRs analysed here could behave differently as acceptor sequences of gene conversion, with some elements probably being more frequently involved. Gene conversion, together with its primary effect of generating clusters of SNPs, may generate homoplasies (in the form of recurrent mutations, back mutations or SNPs with more than two alleles) along the Y phylogeny 24,32,37. LTR 2 and LTR 24 were found to be significantly enriched in homoplasic variants compared to other regions of the Y chromosome. The observed excess of recurrent mutations within gene conversion hotspots raises important issues about the use of SNPs within the LTRs as stable markers in phylogenetic tree reconstruction and their potential use in forensic applications such as Ancestry Informative Markers 32,37-39. The use of these SNPs as stable variants could erroneously alter the structure of the tree, obscuring phylogenetic signals from other markers and making it difficult to understand the evolutionary relationships among different Y chromosomes. The identification of abundant gene conversion events among LTRs on the human Y chromosome means that care is needed when using these regions in phylogenetic studies and that all SNPs that may have arisen by gene conversion should be excluded from evolutionary analyses. Ectopic gene conversion always initiates with a double-strand break (DSB), which represents one of the most deleterious forms of damage to genetic material as it can lead to genome instability and cell death 40. DSBs are strong inducers of Homologous Recombination (HR), which will potentially lead to chromosomal aberrations if the template sequence is a paralogous region rather than an allelic one 41. However, 98% of the DSBs that are repaired by HR are resolved by gene conversion rather than crossing over 21. Recently, Trombetta et al. 25 highlighted an association, on the human Y chromosome, between gene conversion hotspots and regions of genomic instability that show a high frequency of DSBs 25. In this view, it is tempting to speculate that differences in the occurrence of gene conversion among the LTRs analysed here may reflect differences in the frequency of DSB formation, and that the two elements (LTR 2 and LTR 24) showing the strongest conversion signals could be the most fragile elements. Through whole-genome alignments of gene-converted tracts, we identified the donor sequences involved in the observed gene conversion events. However, it was not possible to determine a single donor sequence for all the events. Unlike other previously identified Y chromosome gene conversion hotspots 24,25,27, the LTR 2 region was found to be converted by multiple donor sequences. Intra-chromosomal Y-to-Y gene conversion was much more frequent than inter-chromosomal events, probably due to the linkage between donor and acceptor sequences. Indeed, it is known that the physical distance between acceptor and donor sequences is a factor affecting the rate of gene conversion.
More precisely, the frequency of gene conversion is inversely proportional to the distance between the interacting sequences in cis 42, and a higher rate of conversion has been observed between linked as opposed to unlinked paralogs in mice 43. In this view, since the two new hotspots identified here have potentially thousands of donors, the DSBs are probably mainly repaired by the potential donor sequences that are linked to them on the same chromosome. DSBs may also induce inter- and intra-chromosomal Ectopic Recombination (ER), generating several chromosomal rearrangements (such as transpositions, inversions or deletions) that may have a dramatic impact on the organization of the genome structure. Little is known about the quantitative relationship between the gene conversion rate and the ectopic recombination rate at the same locus, but the identification of abundant gene conversion in some LTRs leads us to speculate that these elements may also be ER hotspots, and therefore particularly important regions for a hypothetical structural re-organization of the human genome. To date, considering the location of the donor sequences, only two forms of non-allelic gene conversion have been found to be active on the MSY: Y-to-Y gene conversion 22,44,45 and X-to-Y gene conversion [23][24][25]27,32,46. In this study, through the identification of donor sequences located on different autosomes, we identified for the first time a third form of recombination for the MSY: autosome-to-Y gene conversion. Thus, we further complicate the evolutionary picture of the MSY by demonstrating that the sequence landscape of this genomic region could also be modulated by autosomal sequences. LTR 2 is a solo-LTR in which we identified 13 different clusters that originated by gene conversion with at least 12 different donors spread across the entire genome. The high number of clusters within LTR 2 does not necessarily reflect a more intense gene conversion activity; it could instead be the consequence of the involvement of a large number of different sequences acting as donors. The presence of different LTRs acting as donors can be a continuous source of variants and can greatly increase the allelic diversity of the acceptor. This is mainly because the donor sequences are free to mutate and accumulate single nucleotide differences compared to the acceptor, so that the dynamic balance between mutation and gene conversion will never lead to complete sequence identity among the interacting sequences. The main effect of the presence of many different donors is that LTR 2 was found to be a sequence patchwork of short segments of DNA derived from different elements. Conversely, when only one donor is involved (as in the case of the X-to-Y conversion hotspots described in refs 23 and 25), gene conversion events are expected to decrease the diversity between donor and acceptor sequences. As a consequence, a conversion-mutation dynamic equilibrium will be reached and the similarity between the interacting sequences will increase, up to the point at which conversion events will no longer be detectable due to the lack of differences between donor and acceptor. LTRs are composed of three separate domains: U3, R and U5. Interestingly, within LTR 2 the SNPs appeared to be unevenly distributed, and all the clustered SNPs generated by gene conversion were located within the U3 portion of the element (Fig. 4).
The U3 region contains several binding sites for different transcription factors with regulatory activities, which tend to be lost by the cell through a gradual accumulation of mutations 16,47,48. At the moment, it is not possible to determine why there is a preferential accumulation of mutations in this domain, but we cannot rule out a role for gene conversion in maintaining or silencing the functional activity of this regulatory region. Exploring the distribution of the human LTRs analysed here in the chimpanzee genome, we noted that in the region orthologous to the HERV-flanking elements LTR 23 and LTR 24, only one solo-LTR is present, which probably originated through a recombination event between the two elements. By aligning the human LTRs (LTR 23 and LTR 24) with the solo-LTR in the chimpanzee, we mapped the breakpoint of the crossing-over event leading to the formation of the solo-LTR in the chimpanzee lineage. We noticed that the breakpoint perfectly overlaps with cluster 16, suggesting that this region was a non-allelic recombination hotspot in both lineages. The existence of a non-allelic recombination hotspot shared between two different species suggests that it pre-dates the human-chimpanzee speciation. This finding adds to previous evidence in favour of a greater conservation in time of non-allelic recombination hotspots (NAHR) compared to allelic recombination hotspots (AHR). Many AHR hotspots that have been described for the human genome have no counterpart in the chimpanzee 49,50, indicating a fast turnover of these elements, whereas recent studies have shown that NAHR hotspots are often shared between evolutionarily distant species of primates 22,24,25,44,50. The identification of gene conversion among LTRs casts some doubt on the use of Next Generation Sequencing (NGS) to identify new polymorphisms within these elements. Indeed, NGS is based on the alignment to the reference genome of very short sequences (50-200 bp) called reads. Since the length of the gene conversion tracts identified here is comparable to the length of the reads generated by NGS, it is possible that a "converted" sequence can be confused with the donor sequence and wrongly aligned to the paralogous region. Alternatively, the reads could be discarded during the alignment process, or deep-sequencing-associated bioinformatics analyses may consider closely spaced SNPs (a common signal of gene conversion) as false positives and discard them. As a consequence, detecting true SNPs in gene-converted repetitive elements may be difficult when using current short-read next generation sequencing techniques, and the final result may be an underestimation of genetic variability in duplicated regions subjected to non-allelic gene conversion. To assess this issue, we explored the pattern of nucleotide diversity within LTR 2 in a recent paper in which 456 complete Y chromosomes belonging to different branches of the Y phylogeny were re-sequenced using NGS methodologies 28. We noticed that, for LTR 2, Karmin et al. 28 identified only 7 true SNPs, about 13% of the number we identified (51) by analysing less than a quarter of the samples (Supplementary Table 3). Interestingly, 15 variants identified by Karmin et al. 28 were discarded during the filtering phases. The observed discrepancy could be partially explained by the fact that some of the haplogroups we analysed were not included in the study by Karmin et al.
28, but it is probably largely due to problems during the alignment and/or the filtering of the SNPs caused by the confounding effect of gene conversion. This observation suggests that the real impact of non-allelic gene conversion on the mutability of the genome may be underestimated in NGS studies, principally because of the inadequacy of the instrumental and/or bioinformatics technologies currently in use. The problem of identifying SNP clusters by NGS analysis could be resolved through the introduction of deep sequencing based on "long reads" of a few thousand bases. To sum up, our results revise and expand previous research on LTR gene conversion in humans, clearly indicating a strong recent history of gene conversion among different elements and supporting the idea that the sequence landscape of the MSY could be modulated by the transfer of genetic information from different genomic loci.

Material and Methods

DNA Samples. DNA samples came from the collections of the authors, and human Y chromosomes were selected on the basis of their haplogroups, which had been determined in previous studies 29,[51][52][53][54][55][56]. Haplogroups selected for the analysis were chosen in order to maximize the coverage of the Y phylogeny. In most cases, DNA was prepared from fresh venous blood. The study was prospectively examined and approved by the "Policlinico Umberto I, Sapienza Università di Roma" ethical committee (document number 1158/13). The experiments were carried out in accordance with the guidelines approved for research on human samples, and informed consent was obtained from all participants.

Identification of SNPs on the Human Y Chromosome. Overall, 61 kb from the MSY was sequenced for the first sample set (16 samples), whereas a total of 111 samples and 51 samples were sequenced for LTR 2 and LTR 24, respectively. We designed PCR and sequencing primers (available on request) for each LTR on the basis of the Y chromosome sequence reported in the February 2009 assembly of the UCSC Genome Browser using Primer3 software (http://genome.ucsc.edu/; http://frodo.wi.mit.edu/primer3/). Sequencing templates were obtained through PCR in a 50 µl reaction containing 50 ng of genomic DNA, 200 µM of each deoxyribonucleotide, 2.5 mM MgCl2, 1 unit of Taq polymerase, and 10 pmol of each primer. A touch-down PCR program was used, with an annealing temperature decreasing from 62 °C to 55 °C over 14 cycles, followed by 30 cycles with an annealing temperature of 55 °C. Following DNA amplification, the PCR products were purified using the QIAquick PCR purification kit (Qiagen, Hilden, Germany). Cycle sequencing was performed using the BigDye Terminator Cycle Sequencing Kit with AmpliTaq DNA polymerase (Applied Biosystems, Foster City, CA) and an internal or PCR primer. Cycle sequencing products were purified by ethanol precipitation and run on an ABI Prism 3730XL DNA sequencer (Applied Biosystems). Chromatograms were aligned and analysed for mutations using Sequencher 4.8 (Gene Codes Corporation, Ann Arbor, MI).

Data Analysis. Gene conversion between LTRs can have two effects: on the one hand, it can increase their overall sequence similarity; on the other hand, when it involves single nucleotide differences between donor and acceptor sequences, it can generate an excess of genetic diversity among allelic copies 24. This is because the paralogous bases on the donor sequence will change the bases on the acceptor chromosome 24, and new Y chromosome SNPs will be observed in the population.
If gene conversion events among LTRs contribute to the variation of these elements, we expect to find an excess of multiple derived variants that are both physically neighbouring and phylogenetically equivalent (clusters of SNPs), where the Y-linked derived alleles are the same as the paralogous bases on the donor. The success of this analysis depends on the correct inference of the direction of the mutation and the identification of an ancestral donor sequence. The direction of the mutation for each Y-linked polymorphism was unambiguously determined by placing it in the context of the well-known intra-specific human Y chromosome phylogeny [28][29][30]55. The P value for the observed number of clustered SNPs was obtained by using a random permutation test. More specifically, we randomized the distribution of the 134 variants identified here within both the 61,165 bp sequenced and the 30 branches of the Y chromosome tree built with the 16 chromosomes analysed here (Fig. 2) (using the RANDBETWEEN function of Microsoft Excel). We then counted the number of times we obtained two or more SNPs within a distance of no more than 50 bp and in the same phylogenetic context (i.e. on the same branch of the tree). This process was replicated 1,000 times, which produced the null distribution of clusters of SNPs. We determined the P value of observing 25 out of 134 SNPs in clusters assuming that their null distribution follows a Poisson distribution. A cluster of SNPs is defined by the distance between its two outermost variants, with the distance between consecutive polymorphisms being no more than 50 bp. Nucleotide diversity (π) and its standard deviation (SD) were calculated according to ref. 57. Donor sequences were identified by aligning the derived state of each cluster against the reference sequence of the human genome (hg19/GRCh37) using the BLAT tool of the UCSC Genome Browser. In most cases, we found donors spanning the entire cluster length, while in other cases the cluster could have been generated by different gene conversion events or by gene conversion and mutation (Table 2). Moreover, in some cases we found shared SNPs among acceptors and donors (Table 2).
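As a concrete illustration of the permutation test described above, the sketch below re-assigns the observed variants at random to positions along the 61,165 bp sequenced and to the 30 branches of the tree, counts how many land in clusters under the same-branch / ≤ 50 bp criterion, and repeats this to build the null distribution against which the 25 observed clustered SNPs are compared. The counts and thresholds follow the text; the uniform-assignment simplification and all names are illustrative and not the authors' exact procedure.

```python
# Illustrative permutation test (simplified; not the authors' exact script).
import random

SEQ_LEN = 61_165      # total sequenced length (bp)
N_BRANCHES = 30       # branches of the Y tree built from the 16 chromosomes
N_VARIANTS = 134      # variants identified in the first sample set
MAX_GAP = 50          # clustering distance threshold (bp)
N_REPLICATES = 1_000

def clustered_snp_count(variants, max_gap=MAX_GAP):
    """Count SNPs sitting in clusters (>= 2 SNPs, same branch, <= max_gap apart)."""
    clustered = 0
    by_branch = {}
    for pos, branch in variants:
        by_branch.setdefault(branch, []).append(pos)
    for positions in by_branch.values():
        positions.sort()
        run = [positions[0]]
        for pos in positions[1:]:
            if pos - run[-1] <= max_gap:
                run.append(pos)
            else:
                if len(run) >= 2:
                    clustered += len(run)
                run = [pos]
        if len(run) >= 2:
            clustered += len(run)
    return clustered

random.seed(0)
null = []
for _ in range(N_REPLICATES):
    sim = [(random.randint(1, SEQ_LEN), random.randint(1, N_BRANCHES))
           for _ in range(N_VARIANTS)]
    null.append(clustered_snp_count(sim))

observed = 25
print("mean clustered SNPs under the null:", sum(null) / len(null))
print("replicates with >= observed clustered SNPs:", sum(c >= observed for c in null))
```

In practice the null mean is close to one clustered SNP per replicate, so no replicate approaches the 25 observed, which is consistent with the very small P value reported in the Results.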
On the Range of the Radon Transform on Zⁿ and the Related Volberg's Uncertainty Principle

We characterize the image of exponential type functions under the discrete Radon transform R on the lattice Zⁿ of the Euclidean space Rⁿ (n ≥ 2). We also establish the generalization of Volberg's uncertainty principle on Zⁿ, which is proved by means of this characterization. The techniques of which we make essential use in this paper are those of Diophantine integral geometry as well as Fourier analysis.

Introduction

First of all, we recall briefly that the uncertainty principle states, roughly speaking, that a nonzero function and its Fourier transform cannot both be sharply localized, which can be interpreted topologically by the fact that they cannot simultaneously have their supports in the same, too small, compact set (see the Heisenberg uncertainty principle in [1]). Considerable attention has been devoted to discovering different forms of the uncertainty principle in many settings, such as certain types of Lie groups and homogeneous trees. Several versions of the uncertainty principle have been established by many authors in the last few decades. Among the contributions dealing with this important topic, let us quote principally [1][2][3][4]. On the other hand, we note that the uncertainty principle is one of the major themes of classical Fourier analysis as well as its neighboring parts of mathematical analysis. In this paper, we are interested in developing the study of the restriction of the Radon transform Rf of f ∈ ℓ¹(Zⁿ) to G(1). By means of Theorem 1 stated below, considered here as the first main result, we succeeded in proving the second main result concerning the generalization of Volberg's UP on Zⁿ (see Theorem 2). We point out that G(1) is the most important subset of the discrete Grassmannian G for our study of both fundamental results (see Sections 3 and 4). The purpose of this paper is to study the characterization of the image of exponential type functions under R, as well as the generalization of Volberg's UP on Zⁿ. Our work is motivated by the fact that the uncertainty principle for the discrete Radon transform on Zⁿ plays a fundamental role in the field of physics, especially in quantum mechanics. Our paper is organized as follows. In Section 2, we fix, once and for all, some notation and also give certain properties of the discrete Radon transform on Zⁿ, which will be useful in the sequel of this paper. Moreover, we recall Volberg's theorem on Z in the same section. Section 3 deals with the characterization of the image of exponential type functions under R, which is given by the following main theorem (see Theorem 4). Theorem 1 (characterization of the image of exponential type functions under R). Let f be a positive function of ℓ¹(Zⁿ). Then (i) the following two conditions are equivalent: where > 1 is an absolute constant; (ii) the following two equivalences hold: where > 0 is an absolute constant, where > 0 is an absolute constant. Section 4 is devoted to establishing the generalization of Volberg's UP on the lattice Zⁿ (see Theorem 2 below). We make use here of the discrete Fourier transform F, which maps a function f ∈ ℓ¹(Zⁿ) to a function Ff on Tⁿ defined by where Tⁿ = Rⁿ/Zⁿ is the n-dimensional torus. Theorem 2 (generalization of Volberg's UP on the lattice Zⁿ).
Notations and Preliminaries

In this section, we fix some notation which will be useful in the sequel of this paper and recall certain properties of the discrete Radon transform on Zⁿ (n ≥ 2). We also introduce various functional spaces. For 1 ≤ p < +∞, let ℓᵖ(Zⁿ) (resp., ℓ∞(Zⁿ)) be the space of all complex-valued functions f defined on Zⁿ such that ∑_{x∈Zⁿ} |f(x)|ᵖ < +∞ (resp., sup_{x∈Zⁿ} |f(x)| < +∞). Let us denote by c₀(Zⁿ) the space of all complex-valued functions f defined on Zⁿ such that f(x) → 0 as ‖x‖ → +∞, for all x = (x₁, . . ., xₙ) ∈ Zⁿ. It is clear that, for 1 ≤ p < q < +∞, we have the following inclusions: For 1 ≤ p < +∞, we denote by ‖·‖ₚ the discrete norm on the space ℓᵖ(Zⁿ) defined by We define the discrete Radon transform R on Zⁿ as follows: for all (k, p) ∈ P × Z and f ∈ ℓ¹(Zⁿ), where H(k, p) is the hyperplane in Zⁿ defined by and d(k) being the greatest common divisor of the integers k₁, . . ., kₙ (see [5] for more details), and ⟨k, x⟩ denotes the usual inner product of k and x regarded as two vectors of the Euclidean space Rⁿ. The set N(k) = {y ∈ Zⁿ | ⟨y, k⟩/‖k‖² ∈ Z} can be written as follows: because, for all (p, q) ∈ Z² such that p ≠ q, we have We note that Indeed, for y ∈ Zⁿ \ {0}, we can take k = y/d(y) ∈ P; then ⟨y, k⟩/‖k‖² = d(y) ∈ Z. Moreover, 0 ∈ N(k) for all k ∈ P. For a function f ∈ ℓ¹(Zⁿ), we define its discrete Fourier transform Ff on the n-dimensional torus Tⁿ as follows: We define the discrete one-dimensional Fourier transform F₁ by where T is the one-dimensional torus. At the end of this section, we recall Volberg's theorem on Z.

Characterization of the Image of Exponential Type Functions under R

In this section, we study the characterization of the image of exponential type functions under the discrete Radon transform R on Zⁿ. More precisely, we state the following main theorem, which will be proved after introducing some intermediate lemmas. Theorem 4 (characterization of the image of exponential type functions under R). Let f be a positive function of ℓ¹(Zⁿ). Then (i) the following two conditions are equivalent: where > 1 is an absolute constant; (ii) the following two equivalences hold: where > 0 is an absolute constant, where > 0 is an absolute constant. In order to avoid a too long proof of Theorem 4 and to prove it clearly, we need the following useful lemmas. Lemma 5. Let a > 0 and f be the function defined on Zⁿ by f(x) = exp(−a‖x‖²), for all x ∈ Zⁿ. Then there exists a constant C > 0 (which depends only on a and n) such that, for all (k, p) ∈ P × Z, we have Proof. Let S(k, p) denote the sum appearing in (21). The left inequality of (21) is trivial, since a > 0 clearly implies that S(k, p) > 0. To show the right inequality of (21), it suffices to prove that S(k, p) < C for all (k, p) ∈ P × Z. For this, we distinguish two cases. Lemma 7. Let f ∈ ℓ¹(Zⁿ) satisfy the following condition: where c is a strictly positive absolute constant. Then f = 0. We now return to the proof of Theorem 4. We now give the proof of Theorem 10. Proof of Theorem 10. We prove this theorem in two steps, as follows.
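The displayed formulas in the section above were lost in extraction. For orientation only, the block below records one standard way the objects named in the text (the discrete Radon transform R acting on ℓ¹(Zⁿ), the hyperplanes it sums over, and the discrete Fourier transform F on the torus) are defined in the literature; the symbols P, H, d and the normalization of the exponential are assumptions and may not match the original paper's notation.

```latex
% Standard definitions consistent with the surrounding text; the notation
% (P, H, d, the 2*pi normalization) is assumed, not copied from the paper.
\[
  Rf(k,p) \;=\; \sum_{x \in H(k,p)} f(x),
  \qquad (k,p)\in \mathcal{P}\times\mathbb{Z},\quad f\in \ell^{1}(\mathbb{Z}^{n}),
\]
\[
  H(k,p) \;=\; \{\, x\in\mathbb{Z}^{n} : \langle k,x\rangle = p \,\},
  \qquad
  \mathcal{P} \;=\; \{\, k\in\mathbb{Z}^{n}\setminus\{0\} : \gcd(k_{1},\dots,k_{n})=1 \,\},
\]
\[
  \mathcal{F}f(\theta) \;=\; \sum_{x\in\mathbb{Z}^{n}} f(x)\, e^{-2\pi i\langle x,\theta\rangle},
  \qquad \theta\in\mathbb{T}^{n}=\mathbb{R}^{n}/\mathbb{Z}^{n}.
\]
```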
Mesoscale modeling of photoelectrochemical devices: light absorption and carrier collection in monolithic, tandem, Si | WO3 microwires

We analyze mesoscale light absorption and carrier collection in a tandem junction photoelectrochemical device using electromagnetic simulations. The tandem device consists of silicon (Eg,Si = 1.1 eV) and tungsten oxide (Eg,WO3 = 2.6 eV) as photocathode and photoanode materials, respectively. Specifically, we investigated Si microwires with lengths of 100 μm and diameters of 2 μm, on a 7 μm pitch, with the top 50 μm of each wire covered by WO3 with a thickness of 1 μm. Many geometrical variants of this prototypical tandem device were explored. For illumination with the AM 1.5G spectrum, the nominal design resulted in a short circuit current density, JSC, of 1 mA/cm², which is limited by the WO3 absorption. Geometrical optimization of the photoanode and photocathode shape and contact material selection enabled a three-fold increase in short circuit current density relative to the initial design via enhanced WO3 light absorption. These findings validate the usefulness of a mesoscale analysis for ascertaining optimum optoelectronic performance in photoelectrochemical devices. ©2014 Optical Society of America

OCIS codes: (220.0220) Optical design and fabrication; (220.2740) Geometric optical design; (230.0250) Optoelectronics; (040.5350) Photovoltaic; (160.6000) Semiconductor materials.

References and links
1. A. Bard and M. A. Fox, "Artificial photosynthesis: solar splitting of water to hydrogen and oxygen," Acc. Chem. Res. 28(3), 141–145 (1995).
2. F. E. Osterloh and B. A. Parkinson, "Recent developments in solar water-splitting photocatalysis," MRS Bull. 36(1), 17–22 (2011).
3. B. Masters, "Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo," Opt. Express 3(9), 332–338 (1998).
4. A. J. Nozik, "Photoelectrochemical cells," Philos. Trans. R. Soc. Lond. A 295(1414), 453–470 (1980).
5. D. G. Nocera, "The artificial leaf," Acc. Chem. Res. 45(5), 767–776 (2012).
6. S. Haussener, C. Xiang, J. M. Spurgeon, S. Ardo, N. S. Lewis, and A. Z. Weber, "Modeling, simulation, and design criteria for photoelectrochemical water-splitting systems," Energy Environ. Sci. 5(12), 9922–9935

Introduction

For solar power to be feasible at the terawatt scale, a method to store and dispatch solar energy must be developed. Solar-generated chemical fuels have potential for large-scale storage and distribution, and most routes to solar fuels begin with hydrogen formed via photoelectrolysis of water [1][2][3]. Currently, commercially available technologies carry out solar fuel generation in two steps: (1) solar to electric conversion using photovoltaics, and (2) water electrolysis to form hydrogen using an electrolyzer. Combining these two steps into one photoelectrochemical device can potentially reduce material usage and cost and increase efficiency by combining the photogeneration and electrochemical energy conversion steps [4].
The design and fabrication of an efficient and inexpensive photoelectrochemical (PEC) device is a complex task, with many issues that need to be addressed, extending from the molecular level all the way up to the module level [5,6]. This paper focuses on the mesoscale (10s of nm to ~100s of µm), where light absorption, charge carrier transport, catalysis, and transport of reactants and products through multiphase solutions and ion-conducting membranes are inherently integrated. Figure 1 illustrates this concept for the case of a schematic tandem photoelectrochemical device with one-dimensional transport pathways. Realistic two- and three-dimensional nano- and microstructures have somewhat different detailed absorption and transport characteristics but function in a broadly similar fashion. Material selection, electrolyte solution properties and geometric device design are coupled to these transport phenomena and greatly affect device performance. Understanding and optimizing device operation on the mesoscale is crucial to achieve high efficiency photoelectrochemical device performance. Because light absorption, electronic transport, interfacial redox reactions and ionic transport are coupled in photoelectrolysis, a solely empirically-based iterative device design approach is impractical and time-intensive. A simulation-driven design approach enables us to quickly identify and understand the limiting factors in photoelectrochemical device operation and use this knowledge to optimize device performance. Our report here focuses on optoelectronic effects manifest at the mesoscale, demonstrating how geometric design parameters and optical materials selection can greatly alter the absorption efficiency for a photoelectrochemical device for given photoabsorber materials. Specifically, we demonstrate that modest geometrical modifications to a microwire photoelectrochemical device design can result in a three-fold increase in light absorption and short circuit current relative to a nominal baseline design.

[Fig. 2. Diagram of a monolithic, tandem, microwire based PEC device, including photoanode, photocathode, Ohmic contact material between the two photoelectrodes, oxygen (OER) and hydrogen (HER) evolution catalysts, and ion-conducting membrane. Reactions are written for acidic conditions, and therefore, the membrane is labeled as a proton exchange membrane (PEM).]
We explore design aspects for a tandem, monolithic, microwire-based silicon (Si) | tungsten oxide (WO 3 ) photoelectrochemical device [Fig.2].Experimental fabrication of one variant of this design has been demonstrated [7].This design was inspired by the recent development of Si microwire synthesis processes for photovoltaic applications [8].Deposition of a series-connected photoanode material on Si microwires enables a tandem device to be realized, with an open circuit voltage suitable for water splitting, and enables greater use of the solar spectrum in a tandem photoabsorber [9,10].This tandem, microwire design affords many advantages including (i) the series addition of the photovoltages in both components of the tandem to exceed the minimum voltage required to split water, while simultaneously absorbing visible light [8], (ii) the orthogonalization of light absorption (in wire axial direction) and carrier transport (in wire radial direction), which is especially advantageous for indirect bandgap semiconductors, which have long absorption lengths [11], (iii) increased surface area for catalysis, (iv) an approximately 90% decrease in photoelectrode material usage relative to planar monolithic device designs, depending on wire pitch and diameter, (v) a source for radial strain relief, enabling the integration of lattice mismatched materials [12,13], (vi) a monolithic structure enabling straightforward incorporation of an ion-conducting membrane, and (vii) short electrolyte diffusion pathways for low solution resistance [6]. We explored two optoelectronic variations of microwire tandem absorbers (illustrated in Fig. 3) and investigated design features that increase light absorption without compromising carrier collection.The main difference between the two variations is the optical opacity or transparency of the contact material interconnecting the two photoelectrodes.An opaque contact design increases the photon path length, and thus absorption, in the photoanode, which is crucial for wide bandgap materials, while the transparent contact design reduces absorption losses in the contact layer.Opaque aluminum (Al) and transparent indium tin oxide (ITO) contacts were investigated because their electron affinities facilitate formation of an ohmic contact to WO 3 [14][15][16][17].The optical characteristics of these contact materials and of the Si photocathode and WO 3 photoanode motivated the different doping schemes shown.Si is an attractive choice for the photocathode material owing to the existing knowledge base for design and fabrication of Si microwire arrays with good fidelity and electronic quality [8,17].A buried p|n Si junction enables a larger open circuit voltage (V OC ) and more ideal behavior than the feasible alternative, i.e., a liquid junction between p-Si and the H + /H 2 redox couple [18].In both optoelectronic designs, the solution interface consists of degenerately doped n-Si to facilitate tunneling for nearly lossless charge transfer.Similarly, the contact interface with degenerately doped p-Si creates a tunneling ohmic contact.However it is desirable to locate the p|n junction in close proximity to the areas of greatest light absorption, viz. 
the bottom half of the wire in the opaque contact design and the top half of the wire in the transparent contact design. Despite its wide optical bandgap (Eg = 2.6 eV) [19], WO3 is an appropriate photoanode material because it is stable under oxidizing and acidic conditions [20], and is intrinsically n-type, enabling facile creation of a liquid junction with the H2O/O2 redox couple [21]. These properties enable experimental fabrication, and thus comparison of experiment and theory for this design [7].

Design approach

The complex dielectric function (n, k) data for each material and the tandem microwire geometric configuration (Fig. 4(a)) were used to render microstructures that were simulated with finite-difference time-domain (FDTD) numerical solutions to Maxwell's equations using Lumerical FDTD. The nominal design dimensions are defined by 7 µm period square microwire arrays consisting of 100 µm tall silicon microwires with 1 µm radii; the top 50 µm of the wires are conformally coated with 100 nm of either Al or ITO contact material and 1 µm of WO3. The nominal photoanode geometry mimicked the shape of experimental WO3 photoanodes (Fig. 3). Periodic arrays of the structure in the y-direction were defined via periodic Bloch boundary conditions. Perfectly-matched layer (PML) boundaries were used at the top and bottom of the simulation region to create boundaries with nearly 100% transmission. The bottom boundary was chosen to be metallic for designs incorporating a back reflector. The void space between microstructures was modeled as air in the results presented below, but insignificant changes in absorbed current (~1-2%) were observed when the structures were immersed in water, which does not absorb significantly in the UV-Vis range. Full-field (E, H) electromagnetic simulations performed in two dimensions were used to investigate light propagation and absorption in these microstructures at discrete wavelengths ranging from 350 to 1100 nm in 50 nm intervals [17]. Transverse electric (TE) polarization, defined as an electric field oscillating in the xy plane, was used to illuminate the structures. Partial spectral averaging was employed in all simulations to eliminate large fluctuations in reflection and transmission that occur at specific wavelengths in thin substrates due to interference phenomena such as Newton's rings [17]. By recording the spatially-resolved electric field, the normalized power absorption (Pabs) in each material was calculated using Eq. (1), where E is the electric field, ϵ is the material permittivity, and ω is angular frequency [17]. By spatially integrating this expression, the fraction of power absorbed in each material at a given wavelength was calculated. From these absorption calculations, the device's ideal short circuit current density (IQE = 1) was calculated using the AM1.5G spectrum. Performing this procedure at multiple angles of incidence (0 to 60°) and weighting appropriately allowed the calculation of the ideal day-integrated hydrogen production based on the current-limiting photoelectrode performance and the assumption of unity Faradaic efficiency. Ideal short circuit current density at normal incidence and day-integrated hydrogen production per unit area are used as figures of merit for the designs considered.
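A worked sketch of the ideal short-circuit-current bookkeeping just described may help: the absorbed photon flux is obtained by weighting the simulated absorption fraction with the AM1.5G spectrum and integrating over wavelength, assuming IQE = 1. The arrays below are placeholders (the real inputs would come from the FDTD absorption spectra and a tabulated AM1.5G file); this is an illustration of the calculation, not the authors' post-processing code.

```python
# Illustrative ideal-J_sc calculation (IQE = 1) from a simulated absorption
# spectrum; placeholder inputs, not the authors' post-processing code.
import numpy as np

Q = 1.602e-19   # elementary charge (C)
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)

# Wavelength grid used in the simulations: 350-1100 nm in 50 nm steps.
wl_nm = np.arange(350, 1101, 50)

# Placeholder data: fraction of incident power absorbed in the material of
# interest, and AM1.5G spectral irradiance (W m^-2 nm^-1) on the same grid.
absorption_fraction = np.full(wl_nm.shape, 0.3)   # hypothetical values
am15g_irradiance = np.full(wl_nm.shape, 1.3)      # hypothetical values

def ideal_jsc(wl_nm, abs_frac, irradiance):
    """Ideal short-circuit current density in mA/cm^2 (one carrier per absorbed photon)."""
    photon_energy = H * C / (wl_nm * 1e-9)          # J per photon
    photon_flux = irradiance / photon_energy        # photons m^-2 s^-1 nm^-1
    absorbed_flux = abs_frac * photon_flux
    d_lambda = wl_nm[1] - wl_nm[0]                  # uniform 50 nm spacing
    j_a_per_m2 = Q * absorbed_flux.sum() * d_lambda # A m^-2
    return j_a_per_m2 * 0.1                         # A m^-2 -> mA cm^-2

print(f"ideal J_sc ~ {ideal_jsc(wl_nm, absorption_fraction, am15g_irradiance):.1f} mA/cm^2")
```

The same routine applied material by material (Si, WO3, contact) and wavelength cut off at each band edge reproduces the kind of per-material absorbed current densities compared in Table 1.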
Carrier transport simulations were performed using Synopsys TCAD Sentaurus, which solves the Poisson equation and the charge carrier transport equations in a coupled fashion using a finite element (FE) method. Device microstructures were rendered to precisely match the discretization used in the full-field electromagnetic simulations in two dimensions using radial symmetry. To create a coupled optoelectronic simulation, spatially resolved carrier generation rates for normal incidence illumination were calculated from the electric field and complex index maps obtained from the Lumerical FDTD results [Eq. (2)] and used as the optical inputs for carrier transport simulations in Sentaurus. Charge carrier transport simulations were performed only for the WO3|liquid junction because it limits device performance. A diffusion length of 1 µm with Shockley-Read-Hall recombination and a donor doping concentration of 10¹⁵ cm⁻³ were used for WO3. The development of the WO3 material parameter file is detailed in a previous publication [6]. The WO3|liquid junction was modeled as a Schottky junction, using thermionic emission physics, with the contact work function equaling that of the oxygen evolution reaction redox potential under standard conditions (5.68 eV relative to vacuum). Short circuit current densities for each structure were determined by solving the coupled Poisson and charge carrier equations under illumination and no applied bias. Bulk and surface recombination currents were extracted from the software as well.
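The expressions referenced above as Eqs. (1) and (2) did not survive extraction. The relations below are the standard forms used for this kind of FDTD post-processing and are consistent with the quantities named in the text (electric field E, permittivity ε, angular frequency ω, and the absorbed power density); they are reconstructions, not verbatim copies of the paper's equations.

```latex
% Assumed standard forms of Eqs. (1) and (2); reconstructed, not verbatim.
\[
  P_{\mathrm{abs}}(\mathbf{r},\omega)
  \;=\; \tfrac{1}{2}\,\omega \,\mathrm{Im}\!\left[\varepsilon(\mathbf{r},\omega)\right]
  \left|\mathbf{E}(\mathbf{r},\omega)\right|^{2}
  \qquad (1)
\]
\[
  G(\mathbf{r})
  \;=\; \int \frac{P_{\mathrm{abs}}(\mathbf{r},\omega)}{\hbar\,\omega}\; d\omega
  \qquad (2)
\]
```

Under these assumed forms, Eq. (2) gives the spatially resolved photocarrier generation rate (one electron-hole pair per absorbed photon) that is fed to the Sentaurus transport model.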
Neither of the microstructured designs outperforms the equivalent planar device optically, mainly due to a longer, uninterrupted photon path length through the WO 3 in the planar case.However, when complete optoelectronic performance is taken into account, the transparent model approaches and the opaque contact model outperforms the planar equivalent device.Both of the microstructured devices have higher internal quantum efficiencies than the planar device because they have shorter carrier diffusion pathways thereby minimizing bulk recombination.In fact, the dominant loss in the microstructured designs is surface recombination at the Ohmic contact rather than the bulk recombination which dominates losses in the planar case.The higher internal quantum efficiencies of the two microstructured devices underscores the advantage of orthogonalizing the directions of light absorption and carrier collection for materials like WO 3 (reported L D = 200nm-1um) [22][23][24] that have indirect optical bandgaps and relatively low material quality. Loss in surface plasmon polariton modes In the opaque contact design, significant absorption losses result from contact absorption (16.72 mA/cm 2 ), reflection (18.07 mA/cm 2 ) and transmission (5.52 mA/cm 2 ).While losses to reflection and transmission can be minimized with an appropriate antireflective (AR) coating, a back reflector, and smaller pitch size, the losses to contact absorption, 0.78 mA/cm 2 of which lie within the WO 3 band edge, present a more complex optimization problem.Experimental measurements of optically-thick planar aluminum indicate 93% reflection and 7% absorption; therefore, a simple ray optics framework cannot explain the absorption results for aluminum in the microwire structure.Observation of the electric field propagation in time revealed that light couples into a lossy surface plasmon-polariton (SPP) mode at the WO 3 |Al interface.For ease of visual demonstration, Fig. 
At λ = 800 nm [Fig. 5(a)], the evanescent decay of the electric field away from the interface, a signature of an SPP mode [25], is readily evident. At λ = 200 nm [Fig. 5(b)], as the wavelength of light approaches the Al|air SPP resonance of 113 nm [26], the decay length decreases and the speed of the light trapped in the SPP mode slows notably in comparison to the plane wave, another signature of an SPP mode [25]. The surface wave disappeared at wavelengths shorter than the resonant wavelength. Similar observations were made for the WO3|Al interface within the complete microstructure. Given that the lossy SPP mode at the WO3|Al interface is the source of absorption in the aluminum, optimization of the opaque contact structure requires eliminating or, at a minimum, greatly reducing the amount of light that couples into this mode. A plethora of approaches were employed, including reduction of the aluminum, both in coverage area and thickness, alteration of the shape of the WO3 and the Al, and replacement of Al with Ag (silver). A detailed discussion of results for these structures is beyond the scope of this paper, but none of these approaches eliminated coupling into the SPP mode; this robust coupling capability of the microwire structure is attributed to the scattering properties of the wires. The most successful approach was the replacement of Al with Ag, resulting in a ~40% reduction in the contact absorption. Silver's red-shifted plasmon resonance and lower absorption coefficient throughout the visible, in comparison to aluminum, contribute to its lower absorption. Nevertheless, the 9.68 mA/cm^2 that is absorbed by Ag is large and places an inherent limit on the overall device performance. Despite the seemingly promising initial performance of the opaque contact design, these results motivate a more detailed analysis and optimization of the transparent contact design due to its greater potential for light absorption optimization.
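A rough numerical check of the SPP picture described above can be made with the textbook flat-interface dispersion relation. This is not part of the paper's analysis, and the aluminum permittivity used below is an illustrative placeholder rather than tabulated optical-constant data.

```python
import numpy as np

def spp_properties(eps_metal, eps_diel, wavelength):
    """Complex SPP wavevector, propagation length, and dielectric-side decay length
    for a flat metal|dielectric interface (textbook dispersion relation)."""
    k0 = 2 * np.pi / wavelength
    k_spp = k0 * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
    L_prop = 1.0 / (2.0 * np.imag(k_spp))              # 1/e intensity propagation length
    kz_d = np.sqrt(eps_diel * k0**2 - k_spp**2 + 0j)   # normal wavevector component in the dielectric
    delta_d = 1.0 / np.abs(np.imag(kz_d))              # evanescent decay length into the dielectric
    return k_spp, L_prop, delta_d

# Rough placeholder permittivity for Al near 800 nm (illustrative only)
eps_al = -63.0 + 47.0j
k_spp, L_prop, delta_air = spp_properties(eps_al, 1.0, 800e-9)
print(f"propagation length ~ {L_prop*1e6:.1f} um, decay into air ~ {delta_air*1e9:.0f} nm")
```

Even this crude estimate reproduces the qualitative behaviour in Fig. 5: a mode bound to the interface with a micron-scale evanescent tail at 800 nm that becomes more tightly confined as the wavelength approaches the resonance.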
Optimization

In the transparent contact design, the contact absorption is insignificant in the UV and visible region and minimal in the infrared region. Silicon absorbs over an order of magnitude more light than the WO3; thus, the following optimization focuses solely on improving WO3 optoelectronic performance, even at the expense of silicon absorption. As previously mentioned, the main sources of absorption loss for WO3 within its band edge are reflection, transmission and absorption in the silicon. Increasing the WO3 thickness is the simplest approach to increase its absorption, but this method is constrained by the electronic performance of the material. Based on preliminary experimental measurements, the diffusion length of WO3 is ~1 µm [22], which suggests that increasing the thickness of WO3 beyond its current 1 µm thick coverage would have diminishing returns. Thus, other alterations to the original design were made iteratively in order to find a structure optimized for light absorption that should have comparable electrical performance to the original design. The most logical alterations include the reduction of the array pitch (7 µm to 5 µm for the optimized structure) to increase the amount of light that first interacts with the WO3, the addition of an antireflective (AR) coating (100 nm of SiO2 coating the upward-facing portion of the WO3|air interface) to reduce reflection, and the addition of a back reflector (smooth metal surface) to eliminate transmission. Another alteration towards improving WO3 absorption was the flattening of the top of the WO3 in order to (i) reduce reflection at the WO3|air interface and (ii) decrease silicon absorption within the WO3 band edge by directing light downwards into the WO3 as opposed to bending light towards the silicon wire. The top half of the silicon wire was etched into a cone shape in order to promote scattering back into the WO3 at the ITO|WO3 interface. Schematics, absorption characteristics, and optoelectronic performance metrics for the original transparent model, a partially optimized structure, and the optimized structure, along with their planar-equivalent-thickness structures, are shown and compared in Fig. 6 and Table 2.
The partially optimized structure includes the conically shaped silicon and an AR coating; its planar equivalent (wSi = 4.3 µm, wWO3 = 7.6 µm, wITO = 0.2 µm) also includes an AR coating. The optimized structure includes all modifications discussed above; its planar equivalent (wSi = 8.4 µm, wWO3 = 14.6 µm, wITO = 0.4 µm) includes an AR coating and a back reflector. At normal incidence, the WO3 in the optimized structure absorbs 230% more light and collects 240% more current than in the original transparent contact model, and 130% more light and 140% more current than in the opaque contact model. Despite an increase in bulk recombination due to the addition of oxide material per structure as well as per unit area, the internal quantum efficiency of the optimized design is also slightly higher than that of the original because a smaller fraction of the minority carriers recombine at the Ohmic contact. The absorption enhancements of the optimized design extend over all angles of incidence, resulting in a 140% increase in hydrogen production over the original transparent contact model and a striking 350% increase over the opaque contact model. Additionally, the optimized model outperforms its planar equivalent device in both optical and optoelectronic efficiency, highlighting the advantages of the light guiding and scattering properties of the optimized microstructured devices.

Conclusion

This coupled electromagnetic simulation/carrier transport investigation of the optoelectronic performance of tandem Si|WO3 microwire arrays reveals that geometric design and contact material selection can significantly influence light absorption properties. Furthermore, opaque contacts are not a realistic choice for the interface between the two semiconductors because they absorb a substantial amount of light, thereby reducing the overall device efficiency. Also, antireflective coatings, back reflectors and reduction of the array period are powerful handles for increasing efficiency. We note that the Si|WO3 system is not an ideal material combination for tandem devices due to the excessively wide bandgap of WO3, but the optoelectronic results from this system are instructive for any system where the photoanode current density is limiting.

Additionally, this study reveals the value of full-wave electromagnetic simulations coupled to charge carrier transport simulations for mesoscale design of photoelectrochemical devices. Simulation tools can quantitatively predict optoelectronic performance for realistic geometries and identify absorption-limiting phenomena, such as the lossy SPP mode in the opaque contact design, more quickly and efficiently than via a large number of experiments. These observations would be difficult, if not impossible, to make experimentally. Device models were developed and optoelectronic performance analyzed for roughly 50 design variations over the span of approximately six months, demonstrating the speed at which model-based device optimization can occur.

Fig. 1. One-dimensional representation of a tandem photoelectrochemical device with a proton-conducting membrane, illustrating the integrated nature of light absorption, carrier transport, catalysis, and reactant and product transport through solution and membrane.

Fig. 3. Schematics for two proposed optoelectronic tandem PEC designs: (a) an opaque contact design with the p|n junction in the bottom half of the Si microwire; (b) a transparent contact design with the p|n junction in the top half of the Si microwire.
Fig. 5. Snapshots of the propagation of the electric field along an Al|air interface within an infinite 2D array of wires, indicating the coupling into a surface plasmon-polariton mode: (a) at λ = 800 nm, illustrating the evanescent decay of the electric field away from the interface; (b) at λ = 200 nm, illustrating the slower propagation of light at the interface.

Fig. 6. Schematic of (a) the original, (b) the partially optimized, and (c) the optimized microwire array designs with the transparent, indium tin oxide contact; (d) plot of WO3 absorption vs. wavelength for each structure and its planar equivalent, demonstrating the absorption enhancement.
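For readers who want to connect the short-circuit current densities quoted above to hydrogen output, the conversion is Faraday's law with two electrons per H2 molecule. The sketch below is generic, assumes 100% Faradaic efficiency, and uses an arbitrary example current density rather than a value from Tables 1 or 2.

```python
# Convert a photocurrent density into a hydrogen production rate via Faraday's law.
FARADAY = 96485.3  # C/mol

def h2_rate_from_current(j_ma_cm2):
    """Return H2 production in mol/(cm^2*s) and in umol/(cm^2*h) for a current density in mA/cm^2."""
    j = j_ma_cm2 * 1e-3              # A/cm^2
    rate = j / (2 * FARADAY)         # mol H2 per cm^2 per s (two electrons per molecule)
    return rate, rate * 3600 * 1e6   # also in umol per cm^2 per h

# Arbitrary illustrative value
rate_s, rate_h = h2_rate_from_current(5.0)
print(f"{rate_s:.2e} mol cm^-2 s^-1  ({rate_h:.1f} umol cm^-2 h^-1)")
```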
5,290.6
2014-10-20T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
New opportunities rising More fossil specimens and an eagerly awaited age for Homo naledi raise new questions and open fresh opportunities for paleoanthropologists. simplistic scenarios. Now, in the first of the three papers, Dirks et al. deliver a particularly bold dash of complexity, and report that the fossils are most likely between 236,000 and 335,000 years old . This date firmly places them within the late Middle Pleistocene, at a time when the earliest members of our own species were making their debut. Should paleoanthropologists be excited and surprised by this new information? Excited, yes, but perhaps not surprised given that the discovery of H. naledi has already been extraordinary in almost every sense. The Rising Star cave system is at the edge of the Cradle of Humankind, a 47,000-hectare World-Heritage-listed site in South Africa that has yielded more hominin fossils than anywhere else on Earth. However, Rising Star had somehow been bypassed by generations of paleoanthropologists until one small excavation three years ago unearthed more than 1500 specimens representing at least 15 individuals . The find was remarkable to say the least, given that hominin species are often first described from only a few fossils. The H. naledi fossils were also found deep inside the cave, unmixed with artifacts or the remains of any other animals (Dirks et al., 2015). Other sites in the Cradle of Humankind, however, contained fossils of many species and were found near accessible entrances; several sites also yielded stone tools. Perhaps most tellingly, and also unlike other Cradle hominins, the Rising Star specimens were largely intact and not heavily fossilized. In short, H. naledi was different from the beginning. Under the original narrative, by the late Middle Pleistocene, all representatives of Homo in Africa were quite evolved compared to early Homo (Rightmire, 2009). Why then did H. naledi seem to have more ancient characteristics, like a small brain, that placed it with early Homo, and other characteristics, like the shape of its wrist bones, that seemed to skim past a million years of evolution and be found in later species such as humans and Neanderthals? Now, in a second paper, and in light of the new dates reported in the first paper, Berger et al. put forward a scenario that challenges the original narrative . They propose that an incomplete and poorly dated record of Middle Pleistocene fossils has created opportunities for other discoveries to be assigned to the wrong species or time period. They argue that this has led to researchers making unwarranted inferences about how many, and what kinds of, hominin species inhabited Africa at any given time. Although these claims may seem as extraordinary as the findings themselves, the simple narrative across the arc of human evolution has been breaking down for some time. The current situation -with us as a lone, global species -has long been recognized as unusual, compared to much of human evolution in the past. There has also been controversy over how to define our genus Homo, and a growing certainty that the earliest archaeological record (e.g. stone tools) was left by many different species (Kimbel and Villmoare, 2016). There is genetic evidence that humans were cosmopolitan breeders, incorporating DNA from "archaic" hominins into our lineage both in Africa and in Eurasia (Hsieh et al., 2016). 
Instead of thinking about the advantages that one discrete species had over other species, we now need to consider the powerful biological advantages of hybridization . The new information from Rising Star reinforces growing evidence that different branches of human evolution explored many winding paths across Africa in the Middle Pleistocene (Stringer, 2016). It may even suggest that early members of our own species directly encountered H. naledi and other presently unknown hominins. The nature of those encounters in Africa remains to be discussed, as does the role our species may have played in the ultimate extinction of other hominins. All of this is excellent fuel for opening up a new area of research, though the next phase -that of filling in the nuance -is where the hardest work will be. In a third paper Hawks et al. report the discovery of more fossils of H. naledi -including a relatively complete cranium -from the Lesedi chamber in the Rising Star system . The first specimens were found in the Dinaledi chamber and the discovery of more specimens from a different cave chamber appears to remove the possibility that the sample from Dinaledi represented a one-time catastrophic event that killed a single group of H. naledi. This deepens the lingering mystery of how the fossils came to rest in such dark and inaccessible parts of the cave. Were the dead intentionally placed there after all? The new dates put H. naledi in a time and place where there is some of the earliest physical evidence for human culture, including regular controlled use of fire, complex stone tools, and ritualistic behavior with natural pigments (Watts et al., 2016;Wilkins et al., 2012). However, none of this evidence was found with H. naledi; it was instead found at other Middle Pleistocene sites in South Africa. Berger et al. suggest controversially that the possibility of a creature like H. naledi being responsible cannot currently be discounted . Here is a robust challenge that many archaeologists will be happy to undertake, as a century of work in Europe has shown quite clearly that even closely related hominin species do leave different cultural records (Wynn et al., 2016). The Eurasian record further shows that when modern humans encountered such species, they swapped genetic material in the few thousands of years before those other species went extinct. Did this happen to H. naledi? Did they themselves leave their dead in that cave, or did something else? Was H. naledi capable of making the kinds of stone tools found in the southern African Middle Pleistocene? Did they have a symbolic culture? Although these questions make tempting headlines, it must be emphasized that for every site that has complex stone tools and pigments, there are a thousand more that do not. Well-dated Middle Pleistocene sites are rare, and there are simply not enough datapoints to understand the timing and extent of cultural systems at this time, nor how they could potentially relate to biological populations ( Figure 1). The wealth of new data from these three papers is critical for our understanding of the biology, evolution and behavior of H. naledi, and its implications ripple across the discipline. However, not all of these ripples are entirely unexpected. Paleoanthropology is experiencing a crossroads in how the human past is -and should be -conceptualized. As a field that is only a few scientific generations old, it continues to mature in the midst of a persistent stream of new discoveries, opportunities and challenges. 
One of these challenges will be to confront the problem of how to reconcile the biological and archaeological records of a continent where a sparse Middle Pleistocene record has formerly offered few options for doing so. With this opportunity now open, there is much work ahead.

Jessica C Thompson is in the Department of Anthropology, Emory University, Atlanta, United States. jessica.thompson@emory.edu
1,666
2017-05-09T00:00:00.000
[ "Geology" ]
International Journal of Advanced Robotic Systems

Humanoid Introspection: a Practical Approach

Regular Paper

Abstract
We describe an approach to robot introspection based on self observation and communication. Self observation is what the robot should do in order to build, represent and understand its internal state. It is necessary to translate the state representation in order to build a suitable input to an ontology that supplies the meaning of the internal state. The ontology supports the linguistic level that is used to communicate information about the robot state to the human user.

Introduction
Interaction between man and robot can benefit from models of the human mind and its cognitive abilities. The reproduction, even if simplified, of known cognitive processes could lead to a reduction in the differences in behaviour between the robotic and human mind. Surely, cognitive architectures [2,29] are a useful tool in this direction of research, although many of them were focused primarily on aspects of reasoning and planning. We definitely need to improve models of the cognitive mechanisms that integrate the knowledge of sensory data arising from the environment in which the artificial agent acts [19,22] and bind them to its particular physical configuration. For example, humanoid robots [32], which are designed to be companions to humans, and to be an integral part of their socio-ecological system [10,16], may have a closer appearance and behaviour if they could have a unified model to represent themselves and the humans with whom they interact. One aspect that has not received enough attention is the mechanism that allows the robot to have a kind of self-awareness [8]. Some of the concepts associated with it have been widely discussed when it comes to consciousness [3], the robot's intentionality and perceived intentions [23,24], emotions [33,34,43] and so on, but it still lacks integration into a cognitive architecture. Self-awareness, including the body's physical parts and the management functions related to them, is often referred to as a priori knowledge and is usually implicitly integrated into the architecture. Recent studies on the functioning of the human brain, obtained using fMRI, assume the existence of resonance models [6], which make it possible to perceive external entities (objects, actions, environment, events, emotions, etc.) through entities that represent the agent itself and its internal states. For example, the mechanism of mirror neurons is used not only for the recognition of actions, but also to explain empathy, imitation, and so on [5,15,17,18]. This paper focuses on introspection, one of the basic mechanisms of the human mind, aiming to have an internal model which can, however, easily be related to external entities. The inspiration for the work presented is summarized by the following sentence from Sloman [39]: "non-trivial introspection involves: An architecture with self-observation subsystems running concurrently with others and using a meta-semantic ontology that refers to relatively high level (e.g. representational) states, events and processes in the system, expressed in enduring multi-purpose forms of representation, as opposed to transient, low-level contents of conditionals and selection procedures".
We will illustrate a possible approach for the realization of an introspective capacity (at present mainly oriented to the physical components of the robot) and we will show the potential that could be realized, with some practical examples of uses of this module: it is even possible to reach a linguistic level consistent with the internal representation of the robot's knowledge. Based on an empirical approach, our idea of introspection for the robot starts from the analysis of information obtained automatically by embedded software and its related documentation, in particular regarding the relationship (direct or explicit) between the physical components and software modules. This documentation is used to construct a representation of the hardware and software of the robot on a map similar to the human somatosensory map. Using a Self Organizing Map (SOM) [25] it is possible to organize documents [20]. This map structure can be used to quickly retrieve semantically related information [21]. Such an approach could be merged with the Latent Semantic Analysis (LSA) paradigm [27,28], which can be used to successfully simulate several psycholinguistic phenomena. From this map we can use different approaches to ascend to a higher level of abstraction. In particular, the approach used in this paper involves a simple association between labels arising from the data and ontology entities, trying to achieve expressiveness by drawing on Cyc, an enormous common-sense knowledge base [11]. The presented work thus aims to automatically obtain significant semantic links between entities linked to the running processes, software libraries and physical components of the robot. The approach is more interesting if we consider that the robot, motivated by curiosity, tries to fill the gaps of "self-knowledge" by making new hypotheses to be tested on its internal representation, with the aim of discovering new possible relationships. For example, the robot could find answers through introspection to questions about its own structure and functioning. Connecting modern semantic tools and the ability of introspection [39], the robot can acquire not only knowledge of its physical structure and functionality (see Figure 1), but it can also communicate this information to a human: it can talk about its perception and it could be able to describe its status to its human interlocutor (and partner) in natural language [9,36,40]. Moreover, this information could be integrated into a suitable cognitive architecture. Our analysis is based on the assumption that the robot's software modules, processes and hardware can contribute to the self-knowledge of the robot either directly or indirectly (for example, when derived through some correlated entities). This hypothesis is based on an analogy with a human being, for whom the internal organs and their characteristics are not directly accessible through experience, but can be deduced through specific sensations (pain, comfort, discomfort etc.). There are several works that indicate introspection as a key component of the relationship between self-awareness and consciousness, either implicitly or explicitly (see for example [35]). There are also other aspects, such as self-reflection, sense-of-self, self-recognition etc., which definitely should be taken into account for a complete discussion of this research topic.
In this work we have chosen a practical approach that allows us to acquire an implementation of an introspection system to perform experiments in real situations, using a specific humanoid robotics platform. The structure of the paper is as follows. Section 2 deals with the general architecture of the system introspection, Section 3 describes the components that make the selfobservation of the robot and Section 4 describes how the observation is projected to the semantic level discussed in the previous section. Section 5 describes the linguistic level and the use of Cyc. The Introspection Architecture System Our approach to introspection is based on self observation and communication. Figure 2 shows the proposed architecture for introspection. Considering the definition by Sloman [39], self observation is what the robot should do in order to build, represent and understand its internal state. In particular it is necessary to have a set of sub-systems dedicated to making a snapshot of the Nao robot state. Some systems are supplied by Nao system software, while others are developed ad-hoc. The data obtained are used to build a rich state representation that is, in some way, biologically inspired. This state representation should be supplied to an ontology that associates a meaning with the internal state. Our approach integrates static and dynamic information with robot operation. Static information is related mainly to robot hardware parts and software modules, like hardware drivers, or modules that supply some services like face tracking. But these parts can be active or not during robot operation and their state is part of the robot state itself. A simple list of these parts can be difficult to manage, considering that each part comes with a list of functioning parameters (for example, each arm movement has an attached angle, movement velocity etc.). In order to obtain a manageable state representation we decided to develop a map that collects all the robot parts and to highlight on this map all the parts that are involved in any robot operation. This is inspired by the somatosensory map in the human brain. Such a map can be obtained using the information contained in the robot documentation that is rich in hardware and software details. Dynamic information is also represented on the map but comes from robot operation. When the robot is operating, moving a hand for example, it is possible to highlight on the map the units corresponding to the active modules and the unit corresponding to the hand. This is a part of the robot state representation. According to Figure 2 this Nao State Representation is supplied to a Semantic Bridge that analyses the representation and gives as an output a set of information (semantic labels) that is used to activate the right concepts in the ontology. The Linguistic Level exploits these activations in order to perform a verbal interaction with the human user. In the present implementation the Semantic Bridge is constituted by a set of labels. 
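To make the role of this label-based bridge concrete, a toy sketch is given below. It is purely illustrative and not the authors' implementation: the unit coordinates and most concept names are invented (only #$RobotHand is a constant mentioned later in the text); the real system derives labels from the document SOM and maps them into the Cyc ontology.

```python
# Toy semantic bridge: map active SOM units to ontology concept labels.
# Coordinates and most labels are invented for illustration.
UNIT_TO_LABEL = {
    (3, 5): "hand",
    (3, 6): "arm",
    (10, 2): "camera",
    (12, 14): "walk",
}

LABEL_TO_CONCEPT = {
    "hand": "#$RobotHand",
    "arm": "#$RobotArm",        # hypothetical constant
    "camera": "#$VideoCamera",  # hypothetical constant
    "walk": "#$WalkingEvent",   # hypothetical constant
}

def bridge(active_units):
    """Translate the set of active map units into ontology concept names."""
    labels = {UNIT_TO_LABEL[u] for u in active_units if u in UNIT_TO_LABEL}
    return {LABEL_TO_CONCEPT[label] for label in labels}

print(bridge({(3, 5), (10, 2), (16, 16)}))   # -> {'#$RobotHand', '#$VideoCamera'}
```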
Figure 3 illustrates in more detail how the modules provided achieve the introspection when the robot is moving its hand:
- the introspective task module accounts for labels assigned by the programmer (e.g., slow_movement_right_hand.py) and allows queries on the documentation map to select units;
- the perceptual module (visual_recognition_visual_tracking__moving_object.py) identifies the salient points of the map involved in the recognition and tracking, by vision, of a moving object in a static scene;
- the inner state monitoring module contains the updated list of active modules at regular intervals of time (or even in real time). In other words, it represents the monitoring of the "brain" through activations on the NAO map, which the robot observes as a part of itself.

The collection of these states becomes a model to characterize the situation (slow movement, moving a physical part such as the right hand, visual recognition, visual tracking). The map derived from the available documentation of the NAO could also be generated using additional documentation from the web.

Self observation
The Nao state representation is obtained by using the information in the documentation, the list of processes running on the Nao system and the information from the sensor input. In the following subsections we show the structure of the modules involved in this process (see Figure 3).

Offline Processing Data
The main goal of this part of the work is to build a representation of the internal state of the robot, mixing together the information related to the robot hardware and the information available about the software modules running during robot operation. This representation should be rich enough to represent the perception of the robot body from sensors, to distinguish between different kinds of sensors and also to describe the list of active processes that control the robot movements and any other useful state component. The robot documentation collects information about the robot hardware and software, so it is a good starting point for this kind of representation. Nao documentation is based on a corpus of HTML pages and a few introductory pdf documents. The HTML pages represent the core documents and are divided into 3 sections: Green Docs, Blue Docs and Red Docs. The Green Docs are introductory documents and describe the software installation, robot connection to the computer and hardware devices. There are also descriptions of the Nao simulator and an introduction to Choreographer, the graphical programming environment. The Blue Docs are the documentation of the Nao APIs: each page presents a short module description with the list of available methods and their descriptions. The described modules are related to standard Nao behaviours, such as finding a marker and face detection, or related to hardware drivers for LEDs, lasers, infrared and Bluetooth. There is also documentation for other services, such as the file manager, robot pose and text to speech. The Red Docs are related to Naoqi, the programming framework and the SDK, together with a set of documents, mostly images, related to the hardware details of the robot parts. The whole document collection is made up of 422 HTML pages and three pdf documents. In order to obtain a suitable document representation, which is useful for clustering, a sequence of standard preprocessing techniques was applied.
Firstly, all the graphical details were stripped from the HTML pages, including details such as sidebars and footers that do not convey any information, then the stopwords were cancelled and the remaining words were stemmed using the Porter algorithm [37]. A vocabulary of about 4000 words was obtained after these processing steps. Several words were also deleted because they do not convey any information in this context (e.g., robot, Nao, or Aldebaran, which is the robot producer). The remaining words were ordered using their frequency and the 400 most common words were used to represent the documents using the tfidf (term frequency inverse document frequency) representation technique [4]. The output of this document vector representation is a set of vectors, one for each document, in a 400-dimension space where the distance between vectors is a function of the semantic distance among the documents. Each component of these vectors corresponds to one of the 400 keywords used to represent the documents. These vectors can be clustered using any suitable algorithm. Self Organizing Maps (SOMs) [25] are able to build a topographic map of vector distribution in a multidimensional space. In these maps objects are grouped according to their similarity and groups of similar objects are close on the map. SOM networks are already used to cluster documents [26,30] and we employ such an approach to organize the NAO document set. In particular, the SOM used is a square map made from 17x17 units, a dimension that represents a good trade-off between a large dimension and a reasonable training time. The other parameter values are: learning factor alpha=0.3, initial neighbourhood dimension N=17, and number of time steps t=100000, using the same symbol literals in [25]. After the learning stage, each document can be associated to the nearest neural unit and clusters of documents of the same topic can be identified on the SOM map. Figure 4 shows a representation of a section of the document SOM. Documents semantically related are associated to the same neural unit. Each neural unit on the SOM can be labelled using a word from the vocabulary because these words are associated with the component of the vector space that represents the documents. Neural units with the same maximum component are grouped together and labelled with the same label of the component. These labels also give an idea of the type of documents that are connected to each neural unit. Figure 5 shows the SOM labelled using the maximum component. The different shades of grey (different colours in the original picture) are used to highlight clusters of similar neural units. Processing sensorial inputs The vision module receives the visual input from the cameras (see Figure 6 and Figure 7) and filters the input to select the most promising areas for the downstream tasks. Considering that robot is moving, a relative motion between the head of the robot (therefore the camera) and the background is present. We subtract the background motion to identify the areas that are not moving consistently and we select the larger object that is considered to capture the focus of the attention. In particular the background motion is detected applying the RANSAC algorithm to pairs of frames. The RANSAC algorithm (RANdom SAmple Consensus) is an iterative algorithm that estimates a model when data contain outliers [14]. This module is used to detect the most reliable match between a pair of images, estimating the movement of the largest part of the image. 
This evaluation is based on the hypothesis that the background covers the largest part of the scene and the most interesting object covers a limited portion of the scene. By computing the motion in the scene with the RANSAC algorithm the movement will be associated with the larger area that is the background. On the other hand, the areas of the image that do not match this movement are considered as detached from the background and are related with the object in the foreground. Therefore, considering the difference between the current frame and the previous one, an image is produced where the darker areas are related to the portion following the main motion, while the lighter areas correspond to the portions where the motion is different. This assumption also holds in cases when there is no relative motion between the camera and the background and, in this simpler case, a simple difference between frames provides similar information. The main drawback of this approach is that regions with little disparity are considered as foreground portions. Also, among them regions with little dimension are detected as objects, while they donʹt have the characteristics of a relevant items in the scene. Filtering the areas through morphology functions with suitable structuring elements can filter the irrelevant sparse areas, preserving the larger homogeneous areas. The portion of the images that are captured by cameras are processed as previously described and are characterized with low level features to represent them in a suitable form for downstream processing. Low level visual features describe the image content with a sequence of values that can be interpreted as the projection of the image in a particular feature space. The most used features are based on colour representation with histograms in different colour spaces and a description of image texture with statistical descriptors. The feature values extracted from a large set of images are not evenly distributed but tend to have different densities in the vector space. The centroids of the regions, shaped by feature distribution, can be considered as forming a base for data representation and any image can be represented as a function of these points called visual terms [13]. These terms act as tokens that can be used to map the single feature values. The K-means algorithm has been widely used in automatic image annotation [13,17]. An alternative is to extract the visual terms by applying Vector Quantization to the entire set of the characteristic vectors. As an alternative approach, the codebooks are produced by the LBG algorithm [31], ensuring less computational cost and a limited quantization error [41] [42]. Since the extraction of the visual terms is data driven, it allows the representation to emerge from the data set and build generic sets of symbols with a representation power that is limited only by the coverage of the chosen training set. The visual terms calculated from multiple features from a training set can be combined allowing particular information aspects of the visual information to be captured according to multiple characteristics. Feature statistics in the image are dependent from the feature itself and are a function of its statistical occurrence in the image. A more powerful representation can be adopted to exploit the spatial distribution of the visual terms in the visual input. 
In this case not only the presence of a visual term is used to represent an image but also the presence in a spatial window of a pair of visual terms. This representation is analogue to the representation of bigrams where the words are given by the found visual terms [41]. The increased expressivity of the bigrams allows over performing of the results achieved with the simple visual terms (unigrams), although at the cost of an increased dimensionality for image representation. The representation with visual terms and visual bigrams allows the relevant visual information to be represented by a vector that is suitable for the downstream cognitive elaborations. Process list state The Nao software platform supplies a list of processes available on the Nao system. Figure 8 shows the web page that contains the active processes. This page is composed of many sections: the first one contains the information related to the Modules loaded in the memory; each module name is directly linked to the corresponding documentation page. This makes it easy to obtain a representation of the available NAO modules on the document SOM, a representation that is integrated with the others in the State Generation block in Figure 9. This background information is merged to the behaviours section of the same page, where all the scripts that are running on the Nao system are listed. These scripts are also mapped in the document SOM using the available set of keywords (see Section 3.1). Finally a third part of the Nao state is obtained using some custom scripts that output the list of processes running on the Nao system. This list depends on the behaviours that are required by the robot. The Semantic Bridge The trained map can be used as a search tool: if a query text can be represented in the same space as the training documents (i.e., contains at least one of the 400 keywords used to represent the documents) it is possible to calculate the activation of the SOM neural units as a normalized distance from the query text; the nearest units are the most active, while other units have a lower activation according to their distance. If we set an activation threshold and we convert the activation in a level of grey scale, it is possible to obtain an image as a representation of the response to the input. Figure 10 shows the activation pattern for the queries "hand", "arm", "jointʺ and "leg". As Figure 4 shows, it is possible to identify the position of a single file on the document map. Using a tool that queries the robot system and retrieves the list of running modules it is possible to represent on the map the set of running processes in the robot, since each process has a documentation page. The obtained representation is similar to the one illustrated in Figure 10. These representations are mixed with the representation of the scripts used to move the robot parts and with the activation of the sensory inputs traced on the map using suitable keywords like "vision" or "eye" or "camera". Figure 10 shows how the system combines this information to build the NAO State Representation. Moreover, each one of the fingerprint images is also linked to a vector of parameter values used to quantify some details (for example, the list of the angle values of joints during a movement). The semantic bridge subsystem should obtain a set of suitable labels from the Nao internal state representation that can be considered as a set of images, as previously described. 
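A minimal end-to-end sketch of the document map and its use as a search tool, as described above, is given here. It is not the authors' code: it assumes scikit-learn for the tf-idf step and the third-party minisom package for the SOM, the corpus is a placeholder standing in for the 422 documentation pages, and the activation/threshold scheme is a plausible reading of the "normalized distance" description rather than the exact implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from minisom import MiniSom   # third-party package: pip install minisom

# Stand-in corpus for the preprocessed Nao documentation pages
docs = [
    "right hand joint stiffness actuator",
    "left arm shoulder elbow joint",
    "camera image resolution vision",
    "walk gait leg joint angle",
    "text to speech audio volume",
]

# tf-idf document vectors (the paper keeps the 400 most frequent terms)
vec = TfidfVectorizer(max_features=400)
X = vec.fit_transform(docs).toarray()

# 17x17 SOM, learning rate 0.3, trained for 100000 steps as reported in the text
som = MiniSom(17, 17, X.shape[1], sigma=3.0, learning_rate=0.3, random_seed=0)
som.train_random(X, 100000)

def activation_image(query, threshold=0.5):
    """Activation of every map unit for a query term, as a normalized-distance image."""
    q = vec.transform([query]).toarray()[0]
    d = np.linalg.norm(som.get_weights() - q, axis=2)        # distance of each unit to the query
    act = 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-12)  # 1 = nearest unit, 0 = farthest
    return np.where(act > threshold, act, 0.0)               # threshold into an "image" of active units

print(activation_image("hand").nonzero())   # coordinates of the units activated by "hand"
```

Queries such as "hand" or "camera" then yield grey-scale activation images of the kind shown in Figure 10, which are the inputs handed to the semantic bridge.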
In the system presented here the implementation of the semantic bridge uses a look-up table that connects the map state images to the labels in the ontology concepts. This is an easy shortcut that exploits the labels in the document map shown in Figure 5. A particular Nao internal state highlights some units (see Figure) together with the corresponding keywords. The look-up table is a preliminary implementation that builds rigid mapping between the two subsystems. We plan to study a more refined solution for this multidimensional mapping problem. One suitable replacement can be a multilayer perceptron, but we are still investigating the performances of this solution. Communicative Skills The Nao robot has an internal knowledge of its physical structure and functionalities. This knowledge is derived from the state representation and can be exploited to support direct communication on the perception capabilities of the robot and to describe his state to a human interlocutor by using natural language. Moreover, modern semantic tools and introspection capability can be exploited together to improve and support direct communication to the robot perception mechanisms. Ontology by CYC Knowledge Base The Cyc knowledge base (KB) at present appears to be an effective tool for automated logical inference aimed at supporting knowledge-based reasoning applications [11]. Besides, it is the largest and most complete general KB equipped with a performing inference engine. The Cyc knowledge base is composed of particular collections of concepts and facts concerning a specific domain. These collections are called Microtheories (Mt). A set of Java APIs facilitates the interaction with Cyc. It supports interoperability among software applications, it is extensible, it provides a common vocabulary and it is well-suited for mapping to/from other ontologies. The Cyc technology has been chosen to be embedded into the Nao introspective system because of its suitability for common sense representation. Such a feature, together with a more abstract ontology, makes it possible to extend inferences to more generic facts. All the information regarding the structure of the Nao robot, has therefore been coded into an ontology that we have implemented into the ResearchCyc Commonsense Knowledge Base. We have coded relations, concepts, constraints and rules regarding the Nao robot domain. These concepts have been organized in order to fulfil the self-observation task. The produced ontology involves concepts and relations related to processes running in the robot, its physical parts and its capabilities. We have tried to exploit as the existing concepts and predicates of ResearchCyc as much as possible in carrying out this task. The domain complexity requires the creation of suitable relations between the concepts in order to implement a high-level introspection mechanism. As an example we report in the following some of the most common predicates and concepts, among the ResearchCyc predefined items that we have used:  #$genls predicate: relates a given collection to those collections that subsume it  #$isa predicate: relates things of any kind to the collections of which they are instances. It is the most commonly used predicate in the Cyc Knowledge Base and is one of the relations most fundamental to the Cyc ontology  #$Robot: a device which operates autonomously, moving about or manipulating physical objects  #$RobotHand: the collection of mechanical devices that can manipulate objects and/or materials automatically. 
In the following we report a sample of the items (constants and predicates) that we have defined and inserted into the ResearchCyc KB in order to fulfil our goal:
- #$Nao-TheRobot: the collection of the Nao robots;
- #$IcarNao: an instance of #$Nao-TheRobot; it is the robot that we have used for the experiments in our lab.

An example of the assertions that we have inserted into the ResearchCyc KB shows that different results for one single query may be obtained, and each one may reference different constants and predicates, exploiting the high potential of the Cyc KB.

Linguistic Level
The linguistic level is aimed at interpreting natural language queries given by the user. This level exploits a classical pattern-matching technique enhanced with the Cyc ontology inference capability. This feature is obtained by transforming natural language requests into symbolic queries, expressed in the ontology language. Such commands are forwarded to the ontology engine, which computes the appropriate inferences and gives results in a symbolic form. The symbolic answers are then transformed by the linguistic module into natural language sentences that are finally shown to the user. The linguistic level has been implemented by using A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) technology [1]. The interaction with the user is based on algorithms for automatic detection of patterns in the dialogue data. The A.L.I.C.E chat-bot's knowledge base is composed of question-answer modules, named categories and structured with AIML (Artificial Intelligence Mark-up Language), an XML-like language. The question of the user is described by a pattern tag while the answer is described by a template tag. Other tags allow the management of more complex answers. In the following we illustrate the simplest form of category constituting the A.L.I.C.E knowledge base. Every time the user says "Hi", the system answers with "Hello there! What's your name?"

<category> <pattern>HI</pattern> <template>Hello there! What's your name?</template> </category>

To exploit the ontological knowledge base we have used the CYN tool [12], which constitutes a bridge between an AIML interpreter and Cyc. This tool grants an easy interface between the AIML language and Cyc in order to answer the questions of the user by querying the Cyc inference engine. In CYN new tags were created in order to interact with the Cyc KB, to achieve maximum flexibility with minimum overheads. As an example, <cycterm> translates an English word/phrase into a Cyc symbol, <cycsystem> executes a CycL statement and returns the result, <cycrandom> executes a CycL query and returns one response at random, and so on. The following example shows a very simple AIML category concerning the interaction with Cyc:

<category> <pattern> WHAT IS A ROBOT </pattern> <template> A robot is a <cycsystem>(cyc-query '(#$isa #$Robot ?X)) </cycsystem>. </template> </category>

Every time the user queries the Nao robot with the question "What is a robot?" the ResearchCyc inference engine is activated by using the construct "(#$isa #$Robot ?X)" and the answer is wrapped as a natural language sentence.

Verbal Interaction examples
The following are two examples of possible dialogue between a human user and the robot Nao. The first, based on the ontology that uses Cyc, has already been implemented and will be improved in the future.
The second example is based on a case envisaged in the design of the introspection system, which is still under refinement, but which is important because it shows some of the objectives that we intend to achieve. The information gathered through introspection enables the robot to communicate to a human user important information regarding the modules involved in performing a given task and the relevant data stored, and to indicate the documents that contain further details. Three elements compose the information obtained from introspection: the first is derived from the execution of a movement (configuration of the joints, speed, kinematic chains involved), or motor information; the second detects the parts of the robot involved (visual bi-grams) through visual information; and the third is derived from the knowledge of the active modules running at any given time, or information concerning the internal state. Similarly, the robot will be able to account for the details of each one of these components, describing the functions involved to a user who wants to learn something about the NAO. In Step (1), the words "move", "hand", and "right" belong to the reference dictionary and are recognized by the introspective system; in this case they correspond exactly to the labels of an introspective task already performed. In the rest of the dialogue all the words in italics are items from the dictionary. In Step (2) the NAO shows everything that can be said about the movement of its right hand, according to the introspective module and the relevant knowledge related to it.

Conclusions
In this paper we have described an approach for a humanoid robot to understand its internal state. The method is based on self observation and communication with the external world, according to the idea of introspection given by Sloman. Robot introspection arises from information about physical components and software modules. This information is translated into a spatial representation of the hardware and software components of the robot through a SOM. We have shown that the SOM can be an effective way to realize a semantic bridge that links the state representation of the robot with a high-level representation given by an ontology. The ontology is furthermore linked to a linguistic module that makes the interaction with human beings possible through a conversational agent architecture. We have shown a proof-of-concept procedure that illustrates the effectiveness of the approach. In particular, we have demonstrated that it is possible to obtain semantic links among entities related to the running processes, the libraries used and the physical components of the robot. Furthermore, we have also shown that the robot is capable of autonomously filling the "gaps of knowledge of itself", creating new hypotheses to be tested with the aim of discovering new possible relationships.

Acknowledgments
Financial support of the research is partially given by Regione Sicilia, PO
7,417.2
2013-05-01T00:00:00.000
[ "Computer Science", "Engineering" ]
The Stability of Metallic MoS2 Nanosheets and Their Property Change by Annealing Highly pure 1T MoS2 nanosheets were grown at 200 °C by a hydrothermal process. The effects of mild annealing on the structural and physical properties of the MoS2 were studied by heating the nanosheets in air and vacuum up to 350 °C. It was found that the annealing leads to an increase in resistivity for the nanosheets by 3 orders of magnitude, the appearance of two absorption bands in the visible range, and a hydrophilic to hydrophobic change in the surface wetting properties. Monitoring of the annealing process by Raman spectroscopy indicates that the material property changes are associated with a 1T to 2H MoS2 phase transition, with activation energies of 517 meV in air and 260 meV in vacuum. This study provides another way to control the electrical, optical, and surface properties of MoS2 nanosheets for fulfilling the needs of various applications. Introduction The most widely studied 2D material is graphene, a single or few layer graphite sheet, which has potential applications for transparent electrodes, energy storage and catalysis due to its high electrical conductivity, large specific surface area, and other striking characteristics [1][2][3][4]. However, field-effect transistors built from graphene cannot effectively function as electronic switches due to the zero-band gap. To overcome this deficiency of graphene, extensive research has been focused on transition metal dichalcogenides (TMDs), which represent a large family of layered 2D materials. Among them, MoS 2 has drawn much attention in the last few years due to its unique properties applicable to hydrogen evolution and electronic and optoelectronic devices [5,6]. Depending on the relative alignment of the two atom sheets within a single S-Mo-S sandwiched layer, MoS 2 shows two different polymorph structures: Thermodynamically stable hexagonal semiconducting 2H MoS 2 and metastable octahedral metallic 1T MoS 2 [6]. Both 2H MoS 2 and 1T MoS 2 show potential applications in various areas. For example, single layer 2H MoS 2 has a direct band gap of 1.8 eV [7] and is suitable for a wide range of devices such as Field-Effect Transistors (FETs) [8,9], memory devices [10], solar cells [11][12][13], and phototransistors [14][15][16]. Metallic 1T MoS 2 nanosheets possess high conductivity and active edge sites, which can be used as a promising alternative catalyst for the replacement of expensive Pt in an efficient and low-cost Hydrogen Evolution Reaction (HER) [17][18][19]. A recent study also shows that 1T MoS 2 can be used as an electrode material for superfast supercapacitors for energy storage [20]. We have recently realized highly pure 1T MoS 2 nanosheets by a hydrothermal process, which is promising for energy applications [17]. In this study, the effect of annealing on the 1T MoS 2 nanosheets was investigated by heating the samples in air and in vacuum up to 350 • C. Following annealing, significant changes in the physical property of the nanosheets were identified by a Hall effect measurement, UV-vis absorption, and water contact angle measurements. A Raman study revealed that the property changes are associated with the phase transition of the nanosheets from 1T to 2H. The underlying mechanism for the phase transition is discussed. The results obtained in this investigation enable one to better understand the materials and to tailor their properties to meet specific applications. 
Experimental Details
The synthesis of 1T MoS2 involves a similar process to that used in our previous report [17]. Three chemicals are used for 1T MoS2 growth: MoO3, thioacetamide, and urea. MoO3 powder (CAS number 1313-…) was purchased from the Sigma-Aldrich Company; thioacetamide (CAS number 62-55-5) was purchased from the Acros Organics Company; urea (CAS number 57-13-6) was purchased from the Fisher Chemical Company. Octahedral MoO3 was used as the starting material and thioacetamide was chosen as the sulfur source. Moreover, urea, a weak reducing agent, played a key role in the formation of 1T MoS2, as it can precisely and effectively reduce MoO3 to 1T MoS2 [17]. Twelve mg of MoO3, 14 mg of thioacetamide, and 120 mg of urea were dissolved in 10 mL of deionized water. Then, the solution was placed in an autoclave of volume 25 mL and stirred for 2 h. The autoclave was loaded into an oven heated to a temperature of 200 °C. The temperature of the oven was maintained at 200 °C for 12 h. The reaction was terminated by removing the autoclave from the oven and placing it under running water to cool the solution to room temperature. Black precipitates were first retrieved from the solution and then washed with deionized water, followed by centrifugation and sonication. Then, the MoS2 sample was stored in deionized water or alcohol for later use. Similar processing and conditions were used to grow 2H MoS2, except for the growth temperature, which was set to 240 °C. To study the electrical and surface wetting properties, thin films of MoS2 nanosheets were deposited by an airbrush method. Alcohol was chosen as the solvent to enable the deposits to dry out quickly. An air blower was also used to blow on the substrate for quick removal of the solvent. During the deposition, the substrates were mounted onto a hot plate set to a temperature of 50 °C. Each spraying step resulted in a thin layer of deposited material, with the thickness of the films controlled by the number of sprays. The obtained MoS2 nanosheets were annealed for 10 min at different temperatures in air and vacuum. The annealing in vacuum was performed in a vacuum chamber at a pressure of 10^-3 Torr. After each annealing step, material properties including the optical, electrical, and surface properties were studied at room temperature using a Raman microscope, a UV-vis absorption spectrometer, Hall effect measurements, and contact angle measurements. The Raman spectroscopy was performed using a 532 nm laser with a spot size of about 5 µm and a laser power of 2 mW.

Results and Discussion
The morphologies of as-prepared 2H MoS2 and 1T MoS2 synthesized by the hydrothermal process are shown in Figure 1. These samples were sonicated for up to 10 min at room temperature to separate the nanosheets from each other. It can be seen that the 2H MoS2 forms clusters even after sonication, which is similar to the morphology of as-grown samples reported previously [17]. The 2H MoS2 nanosheets are approximately 100 nm in size, with a thickness of a few nanometers. Without extended sonication, the 1T MoS2 nanosheets show a similar morphology to that found for the 2H MoS2 [17]. However, extended sonication separates the 1T MoS2 nanosheets from each other, which results in well dispersed sheets in solution that tend to form a uniform film when dried out on a substrate. This outcome leads to a different microscopic morphology, making it difficult to identify the size and thickness of individual 1T MoS2 nanosheets.
The difference between 1T and 2H MoS2 films can also be observed by the naked eye. The 2H MoS2 shows a rough and inhomogeneous surface with a dark color, whereas the 1T MoS2 surface is smooth and uniform with a metallic luster.

The annealing of the 1T MoS2 nanosheets at different temperatures in vacuum and in air was performed, and their properties were studied after each annealing step. Figure 2 shows the electrical property of the 1T MoS2 after annealing as measured by the van der Pauw method. The experiments were repeated for four 1T MoS2 nanosheet thin films annealed in air at different temperatures (referred to as samples #1, #2, #3, and #4), with one sample annealed in vacuum (referred to as sample #5). The measured resistivity for the annealed samples is plotted as a function of annealing temperature in Figure 2. All the samples showed a similar trend in resistivity change after annealing in air and vacuum. At temperatures below 225 °C, the resistivity showed only a slight increase with increasing annealing temperature. However, the resistivity increased exponentially at annealing temperatures higher than 225 °C. At an annealing temperature of 350 °C, the resistivity was approximately 3 orders of magnitude higher than that for the non-annealed sample.
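The resistivity values above come from the van der Pauw configuration. As a side note, a minimal numerical sketch of the standard van der Pauw relation is given below; it is not the authors' analysis code, and the resistances and film thickness are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(R_A, R_B):
    """Solve the van der Pauw equation exp(-pi*R_A/Rs) + exp(-pi*R_B/Rs) = 1 for Rs (ohm/sq)."""
    f = lambda Rs: np.exp(-np.pi * R_A / Rs) + np.exp(-np.pi * R_B / Rs) - 1.0
    lo = 0.1 * min(R_A, R_B)     # f(lo) < 0
    hi = 100.0 * max(R_A, R_B)   # f(hi) > 0
    return brentq(f, lo, hi)

# Illustrative (made-up) four-point resistances and film thickness
R_A, R_B = 1.2e3, 1.5e3          # ohms, two van der Pauw configurations
t = 200e-9 * 1e2                 # 200 nm film thickness expressed in cm
Rs = sheet_resistance(R_A, R_B)
print(f"sheet resistance ~ {Rs:.3e} ohm/sq, resistivity ~ {Rs * t:.3e} ohm*cm")
```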
Together with the change in electrical property, the optical absorption of 1T MoS2 also showed a significant change after annealing. Figure 3 shows the UV-vis absorption spectra for as-deposited 1T MoS2 and samples annealed at different temperatures. The as-prepared 1T MoS2 nanosheet film deposited at 100 °C exhibited the typical absorption spectrum expected for metallic MoS2, with no salient absorption bands but for a monotonic curve [17]. As the annealing temperature increased, the absorption bands at approximately 450 nm and 600-700 nm gradually developed. Two typical absorption peaks located at 613 and 660 nm were clearly observed after the sample was annealed at 275 °C and 300 °C, which are associated with 2H MoS2. These absorption peaks are associated with the energy splitting due to the valence band spin-orbital coupling in 2H MoS2 with large lateral dimensions. These signature absorption bands are intrinsic, which excludes any contribution from surface adsorbates or residual chemical reagents. The intensities of these two absorption bands, after background subtraction, showed no obvious change after annealing at 275 and 300 °C.
The surface property change was monitored for the annealed samples. Figure 4 shows the wetting properties for an as-deposited 1T MoS2 sample and for samples annealed in vacuum and in air, measured using water contact angle measurements. For comparison, a freshly prepared semiconducting 2H MoS2 film is also included in Figure 4b, which shows a hydrophobic surface with a water contact angle of 125°. The original 1T MoS2 film exhibits a hydrophilic surface, with a contact angle of 49.15° (Figure 4a). After annealing at 250 °C in vacuum or in air, the surface becomes hydrophobic, with a contact angle of approximately 129°, similar to that measured for the as-grown 2H MoS2. Therefore, the 1T MoS2 shows a hydrophilic surface, which can turn into a hydrophobic surface by annealing in air or vacuum. To understand the change in properties of the 1T MoS2 nanosheets due to annealing, Raman spectroscopy was used to monitor the samples during the annealing process, as shown in Figure 5. Raman spectroscopy has been established as a powerful tool for distinguishing the various phases of TMDs through the distinct Raman intensities and frequencies of each phase [21].
The Raman spectrum of 2H MoS2 has two prominent peaks: an in-plane (E2g) mode located around 383 cm−1 and an out-of-plane (A1g) mode at 407 cm−1 [22,23], as the magenta traces in Figure 5c demonstrate. Compared to the 2H phase, 1T MoS2 has relatively lower in-plane symmetry. The 1T phase shows three Raman active modes that are not present in the trigonal prismatic 2H MoS2 polytype [24]. These additional Raman peaks show up at 150, 226, and 330 cm−1, denoted as J1, J2, and J3 [21][22][23][24]. The appearance of these characteristic peaks is attributed to the longitudinal acoustic phonon modes of the 1T phase [22]. Therefore, Raman spectroscopy can be used to distinguish the 2H and 1T phases of MoS2. In order to reduce the measurement errors in Raman intensity, three spots were selected on each sample by a predefined mark so that the same spots were measured after each annealing step. Our results showed that post-annealed 1T MoS2 exhibited Raman features completely different from those of as-grown 1T MoS2 at room temperature. One can see from Figure 5a that the as-prepared 1T MoS2 (black trace) shows signature Raman peaks at 146 (J1) and 330 cm−1 (J3) for the metallic phase of MoS2. The broadening of the A1g and E2g modes, as well as the suppressed intensity of E2g, are also typical of the 1T phase. After annealing at 100 °C in air, the intensity of the peak at 146 cm−1 was significantly reduced while another peak at 378 cm−1 (E2g) emerged. The changes observed in the Raman spectra upon annealing imply that the samples were 1T-2H hybridized MoS2 at this stage. Following a further increase in the annealing temperature, the Raman peak due to the metallic phase gradually disappeared and the peaks at 378 (E2g) and 403 cm−1 (A1g) due to 2H MoS2 became stronger. Samples after the post-annealing treatment at 250 °C in air (magenta trace in Figure 5c) showed clear A1g and E2g vibrational modes corresponding to the 2H phase.
The Raman spectra for the samples annealed in vacuum showed a similar trend as that found for the samples annealed in air. However, a slight difference is observed in terms of the peak intensity change, i.e., it changes faster in air than in vacuum. After annealing at 250 °C, the 1T MoS2 peak in the Raman spectra became difficult to observe while the 2H MoS2 peaks became significant. These experimental data clearly indicate that annealing at mild temperatures induces a transition in the MoS2 from the 1T to the 2H phase, which explains the related property changes. As shown by the Raman measurements, the phase transition occurred strongly at approximately 150 °C in air and 200 °C in vacuum. However, a change in the electrical resistivity was not obvious at this temperature. We believe this discrepancy is caused by the significant difference in the electrical contribution from the two phases. The initial phase transition from metal to semiconductor does not have a significant impact on the resistivity of the film because the electrical transport is still dominated by the metallic phase. Only when a significant amount of the metallic phase becomes semiconducting will the resistivity show an obvious change. In contrast, the change in the Raman signal directly reflects the amount of phase transition. The phase transition occurs due to thermal energy at high temperature during annealing. In terms of kinetic theory, the rate of reaction (phase change) is proportional to the probability of reaching another state, i.e., the reaction rate constant can be described using the Arrhenius equation k(T) = A exp(−Ea/(kB T)), where k(T) is the reaction rate constant in units of s−1 at a given temperature T, Ea is the activation energy, A is the frequency factor, which varies with different reaction conditions, and kB is the Boltzmann constant of 8.617 × 10−5 eV/K. The Raman spectral change can be used to calculate the activation energy for the phase transition by considering the intensity of the 146 cm−1 Raman peak as an indicator of the amount of metallic phase in the material. In this case, the reaction rate constant k is reflected by the rate of change in the Raman intensity. Assuming that the phase transition is a first-order thermally activated process at a constant temperature T, the intensity of the peak at 146 cm−1 can be described as a function of annealing time t by I(t) = I(0) e−k(T)t [25,26]. Note that the rate constant k is a function of temperature and constant only at a given temperature. By plotting ln(k) as a function of 1/T, the activation energy can be calculated from the slope, as shown in Figure 6. The activation energies for the metallic to semiconducting MoS2 phase transition are determined to be 517 meV in vacuum and 260 meV in air. The activation energies obtained fall in the range of the value of 400 ± 60 meV reported by Guo et al. for exfoliated samples [27]. Though the calculated activation energy may have significant errors of up to 139 meV based on the error bars shown in Figure 6, it is obvious that the phase transition depends on the annealing atmosphere. The different activation energies in air and in vacuum may be related to the presence of oxygen during annealing in air, which assists in the phase transition.
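To make the activation-energy extraction described above concrete, the following short Python sketch reproduces the fitting procedure under stated assumptions: the input arrays are placeholders rather than data from this study, and each rate constant k is assumed to have already been obtained by fitting I(t) = I(0) exp(−kt) to the 146 cm−1 Raman intensity at one annealing temperature.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Placeholder inputs (NOT data from this study): annealing temperatures in
# kelvin and the rate constants k (1/s) obtained by fitting
# I(t) = I(0) * exp(-k * t) to the 146 cm^-1 Raman intensity at each temperature.
T = np.array([423.0, 473.0, 523.0, 573.0])       # roughly 150-300 degrees C
k = np.array([2.0e-4, 1.1e-3, 4.5e-3, 1.6e-2])   # illustrative values only

# Arrhenius analysis: ln(k) = ln(A) - Ea / (K_B * T), so a straight-line fit
# of ln(k) against 1/T gives the activation energy from the slope.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
E_a = -slope * K_B        # activation energy in eV
A = np.exp(intercept)     # frequency factor in 1/s

print(f"Activation energy: {E_a * 1e3:.0f} meV, frequency factor: {A:.2e} s^-1")
```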
As reported in previous studies, the Raman scattering at 146 cm−1 is the signature of the metallic phase of MoS2, while the double peaks at 378 and 403 cm−1 are the fingerprints of the semiconducting phase [21][22][23][24]. This provides an easy and nondestructive way to directly monitor the transition process of MoS2. Several groups have confirmed the formation of 1T MoS2 from the emergence of new Raman shifts associated with the phonon modes of 1T MoS2 [21,24,28-30]. Therefore, the Raman data clearly provide the key evidence of the phase transition in MoS2 nanosheets. Although other approaches such as X-ray diffraction could also be used to study the phase transition in layered structures, no obvious differences were observed between the metallic and semiconducting phases. The activation energy obtained in this study is well above the thermal energy of 25 meV at room temperature, which suggests that at room temperature 1T MoS2 should be relatively stable even though it is supposed to be metastable. Figure 7 shows Raman spectra measured for 1T MoS2 nanosheets stored in air for up to 16 days. One can see that there is a slight reduction in the intensity of the Raman peak at 146 cm−1 for longer storage times, with the 2H MoS2 peak at 378 cm−1 not obviously developed after 16 days, confirming that the 1T MoS2 is relatively stable at room temperature. The phase transition in MoS2 is a process which largely depends on the electron donor as well as the thermal environment [5,31]. In other words, the actual dynamical process of the transition involves intra- and inter-layer atomic plane gliding, which is caused by atom displacement and extra thermal energy. The 2H MoS2 has a hexagonal lattice with a threefold symmetry and an atomic stacking sequence (S-Mo-S') of ABA [5]. Each Mo atom in the 2H phase lies in a center prismatically coordinated by six surrounding S atoms, with the S atoms in the upper layer lying directly above those of the lower layer. In contrast, the Mo atom in the 1T phase is octahedrally coordinated to six neighboring S atoms, with an atomic stacking sequence of S-Mo-S' as ABC, where the bottom S' plane occupies the hollow center (HC) of the top S lattice. Theoretical calculations showed an energy of 0.18 eV/fu (fu = formula unit) for interlayer S/S gliding and 1.8 eV/fu for intralayer Mo/S gliding within MoS2 crystals, clearly indicating that interlayer gliding is the favored mechanism in MoS2 (trigonal structure) [6]. The most extensively discussed method for realizing this phase transition is through chemical exfoliation [5,27,32-34].
It has been reported that during an electrochemical process, charges mainly accumulate on the S atoms and are depleted in the area between the S and Mo atoms. The extra electrons weaken the Mo−S bonds and promote atomic gliding towards the less atomically concentrated area [5,34]. Meanwhile, the extra electrons transferred into the nonbonding d-orbitals of 2H MoS2 make it isoelectronic with group 7 TMDs with metallic character and decrease the relative energy of 1T MoS2. Therefore, ion intercalation can destabilize the 2H phase and reduce the barrier for the 2H to 1T phase transition to some extent [34]. This explains why 1T MoS2 nanosheets are obtained after intercalation. The transition of chemically exfoliated 1T MoS2 to 2H MoS2 by annealing was recently studied [27,32]. Several mechanisms for explaining the phase transition and stability of 1T MoS2 have been proposed, including a doping effect, ion intercalation, and surface functionalization [35,36]. The annealing-induced phase transition reported in this study is associated with the destabilization of 1T MoS2 at moderate temperatures (150-200 °C). The basic process involves sulfur atomic plane gliding to hollow center sites due to thermal energy [5], but no detailed mechanism of the restoration process has been revealed so far. It is likely that the 1T to 2H phase transition is driven by the extra thermal energy obtained by 1T MoS2 at high temperature, which leads to sulfur atom gliding towards the hexagonal structure of the 2H phase. This outcome is consistent with the observations by Eda et al. [32] and Guo et al. [27], indicating that the Raman features due to 2H MoS2 appear after annealing. There are several possibilities to stabilize 1T MoS2 despite pure 1T MoS2 being metastable. By using scanning transmission electron microscopy, Lin and colleagues directly observed the phase transition process of single-layered MoS2 that was controllable by an electron beam [5]. Similarly, Enyashin et al. observed that the stabilization of 1T MoS2 can be realized by re-doping, as was shown by high resolution transmission electron microscopy and density functional calculations [35]. In a recent calculation, Tang and Jiang found that 1T MoS2 has a strong affinity for functional groups, which is closely related to its metallicity and partially filled Mo 4d states [36]. Interestingly, they also found that 1T MoS2 is metastable when un-functionalized but becomes a stable phase beyond a crossover coverage of functionalization. In our hydrothermal growth of 1T MoS2 nanosheets in solution, the surface of the nanosheets is likely functionalized by other functional groups, which help stabilize the 1T MoS2 nanosheet material. Further study is needed to identify the details of the functional groups, including comparison with theoretical calculations, to better understand the stability of the 1T MoS2 nanosheets observed in this study. Conclusions In summary, the effect of annealing on the properties of 1T MoS2 nanosheets grown by a hydrothermal process was investigated. Annealing 1T MoS2 at mild temperatures in air or in vacuum causes significant changes in the electrical transport, optical absorption, and surface wetting properties. After annealing, the resistivity of the 1T MoS2 nanosheets increases from 10−1 to 103 Ohms per square, the hydrophilic surface becomes hydrophobic, and typical semiconducting bands appear in the optical absorption spectrum at 613 and 660 nm.
Raman spectroscopy measurements indicate that these property changes are associated with a temperature-induced phase transition from 1T to 2H MoS2, with an activation energy of 517 meV in vacuum and 260 meV in air. The properties of 1T MoS2 are thus controllable by annealing, which may be potentially useful for device fabrication.
Fabrication and Analysis of Denture Plate Using Single Point Incremental Sheet Forming Incremental sheet forming (ISF) is a metal forming technology in which small incremental deformations determine the final shape. The sheet is deformed by a hemispherical tool that follows the required shape contour to deform the sheet into the desired geometry. In this study, single point incremental sheet forming (SPIF) has been implemented in dentistry to manufacture a denture plate using two types of stainless steel, 304 and 316L, with initial thicknesses of 0.5mm and 0.8mm, respectively. Stainless steel was selected due to its biocompatibility and reasonable cost. A three-dimensional (3D) analysis procedure was conducted to evaluate the manufactured part's geometrical accuracy and thickness distribution. The obtained results confirm the capability of SPIF to produce thin-walled biomedical components with satisfactory dimensional accuracy, as geometrical deviations between the developed and the actual models are predominantly in the range of ±0.25mm. Introduction Incremental sheet forming is a flexible and innovative process that can be used in rapid prototyping and batch production; it requires no dies and no special tools and can be performed to produce customized and tailored products [1]. The process uses a hemispherical tool that deforms the sheet into the desired geometry. The final shape (3D model) is accomplished by dividing it into 2D levels so that the tool proceeds to the next contour after finishing the present contour [2]. The main limitations of this process are the lengthy manufacturing time and the poor geometrical accuracy compared to other manufacturing processes. The poor geometrical accuracy can be attributed to sheet bending and spring back. Furthermore, the pillow effect (a protrusion originating at the center of the SPIF-made part) also accounts for the poor accuracy [3]. A schematic diagram that summarizes single point incremental sheet forming is shown in Figure 1, in which the sheet is fixed on a clamping frame and supported by a backing plate to reduce geometrical inaccuracies caused by sheet bending. The sheet is formed by the repeated application of a minor, controlled force at a single point. The forming tool is moved incrementally along the sheet, applying force at each point and gradually forming the sheet metal into the desired shape [4]. The most significant areas in which single point incremental sheet forming has been employed are the automobile and aerospace industries [5]. SPIF is considered a relatively new technology in the manufacturing of biomedical implants. The first application of SPIF in the medical field was the fabrication of an ankle support, which was successfully manufactured without any die and with acceptable geometrical deviation [6]. The most common application of SPIF in the biomedical field is the production of cranial implants. Cranial prostheses have been fabricated using biocompatible materials such as polymeric sheets [7] and titanium [8]. Other applications of SPIF regarding biomedical implants are the manufacturing of knee prostheses [9], facial implants [10], and clavicle implants [11]. A denture plate is a major dental prosthetic of a complete denture that replaces missing teeth and is positioned in the roof of the mouth in the oral cavity [12]. The most common material used in the fabrication of dentures is polymethyl methacrylate (PMMA).
However, due to several disadvantages, such as brittleness which leads to fracture [13], it is crucial to strengthen the denture with materials that impede rupture, and one of the most effective ways to ensure a steady performance is to incorporate a metal plate or wire [14]. The most common technique used to fabricate the denture plate is the lost wax technique (conventional method). In addition, in the last twenty years, several technologies have been developed to produce dental components, such as rapid laser forming [15], additive manufacturing [16], and selective laser melting [17]. However, the main problem in the field of dentistry is that there is a particular demand for new technologies to replace conventional techniques that are based on casting technology, as they require a high degree of skill, high equipment costs and high processing times, and SPIF is a promising technology that can be applied in these types of applications. As opposed to the medical field, research is scarce concerning SPIF technology in the dental field. M. Sbayti et al. [18] applied single point incremental sheet forming to fabricate a titanium denture plate with satisfactory geometry by providing an optimization strategy based on numerical simulation and finite element analysis. In addition, M. Milutinovic et al. [19] used low-carbon steel (EN DC04) and manufactured a denture base with acceptable geometry. In our study, we used two types of stainless steel. The type 316L is one of the most widely used materials in biomedical applications; however, there has never been a study of this material in incremental sheet forming applications. Taking the previously mentioned into consideration, the main purpose of this study is to investigate the ability of SPIF to be applied in the field of dental restorations by evaluating the geometrical accuracy and thickness distribution of the fabricated parts to determine if SPIF is capable of producing a geometrically correct dental plate using stainless steel sheets. Material and Mechanical Properties Stainless steel is one of the most used metals in biomedical implants for cardiovascular, dentistry, craniofacial, and orthopedic applications [20]. This is due to its good mechanical properties, acceptable biocompatibility, and reasonable cost compared to titanium, cobalt, and vanadium alloys [21]. The most common stainless steel used in dentistry is the austenitic series 300, which has been utilized widely in the manufacturing of crowns, dental bridges, and orthodontic wires [22], especially the 316L stainless steel type. However, stainless steel implants can only be used for the short term due to their poor corrosion resistance, which increases the infection risk [23]. The chemical compositions of 304 and 316L stainless steel are listed in Table 1. The mechanical properties of the materials were obtained using the tensile test. The tensile test provides information about the material's ductility and flow stress, as these properties are key factors that affect the ISF process. The ductility is used as a formability indicator for the materials during the forming process to ensure the successful implementation of ISF. At the same time, flow stress (the initial yielding and tensile stress) determines the forming load and the contact pressure between the tool-sheet interface. Using a united universal hydraulic testing machine, the tests were executed for 316L and 304 stainless steel sheets of 0.8 and 0.5mm.
Three tensile test specimens having different orientations with respect to the sheet rolling direction were cut from each sheet: 0° to the rolling direction, 45° diagonal to the rolling direction, and 90° transverse to the rolling direction. The tensile test specimens were fabricated according to the standard for the traction testing of metallic materials, SR EN 10002-1: 2002. The mechanical properties are listed in Table 2. Concerning material ductility, both materials exhibited good elongation characteristics, and considering the increased formability that ISF offers compared to the tensile test, it is reasonable to assume that both materials have good potential to be used in ISF processes. Process Design This section consists of three parts. The first part focuses on the computer-aided design (CAD) modeling aspect of the denture plate, whereas the second and third sections focus on the computer-aided manufacturing (CAM) modeling and experimental work. The methodology used for manufacturing the denture plate is presented in Figure 3. CAD modeling of denture base The manufacturing process of a denture base starts with the creation of a denture base CAD model. By applying the reverse engineering concept, the CAD model of a denture base can be acquired either by using medical imaging techniques such as magnetic resonance imaging (MRI) or a computer tomography (CT) scan of the patient's jaw, or by digitizing a master cast (a cast that replicates the entire oral structure) of the patient's mouth. In this research, a gypsum working cast was scanned using a desktop scanner (Medit T710). Multiple overlapping scans were taken to produce the cast in the standard triangulation language file format (STL) to achieve maximum scanning accuracy. The acquired geometrical data was then imported into CAD software (EXOCAD) to create the denture plate model. The CAD software was chosen due to its high capability of designing dental prostheses and implants. The CAD modeling process of the denture plate is presented in Figure 4. Tool Path Generation After developing the denture base CAD model, the second step is generating the tool path required to manufacture the denture plate. The tool path is a significant step in incremental sheet forming since it strongly influences manufacturing time, surface finish, thickness distribution, and geometrical accuracy. The tool path consists of points representing the contact between the tool and the sheet along a programmed and confined space. The SPIF process is similar to the milling process, which enables commercial CAM software to generate the tool path and NC code for the milling machine. In this research, Autodesk Fusion 360 was used to generate the tool trajectories in SPIF. The tool path strategy selected to fabricate the denture plate is contour milling, and it was programmed to manufacture two denture bases simultaneously. Since the implant perimeter is open, a region must be constructed to join the perimeter and create a closed space for the tool to move. The process parameters were selected to achieve a high-quality surface finish. They were chosen according to the findings in [24] and [25] and, for both materials (304 and 316L), were: a step-down of 0.02 mm, a tool diameter of 10 mm, a spindle speed of 110 rpm, and a feed rate of 1000 mm/min. After performing the simulation, the computer numerical controlled (CNC) machine codes had to be post-processed to create a G-code specific to the CNC machine where the denture plate was fabricated.
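As an illustration of the tool-path generation step, the sketch below emits simple Z-level contour G-code in Python. The part geometry function is hypothetical (a real implementation would slice the denture-plate CAD model, as done here with Autodesk Fusion 360), while the step-down and feed rate mirror the values quoted above; the G-code dialect is deliberately minimal and is not the post-processed output used in this study.

```python
import math

def contour_at_depth(z, cx=0.0, cy=0.0, r0=30.0, wall_angle_deg=45.0, n_points=180):
    """Hypothetical part geometry: return the closed contour (list of XY points)
    of a cone-like cavity at depth z, shrinking with the wall angle.
    A real implementation would slice the denture-plate CAD model instead."""
    r = r0 - abs(z) / math.tan(math.radians(wall_angle_deg))
    return [(cx + r * math.cos(2 * math.pi * i / n_points),
             cy + r * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

def generate_contour_toolpath(total_depth=10.0, step_down=0.02, feed=1000.0):
    """Emit simple G-code for a Z-level (contour) SPIF tool path:
    at each depth increment the tool traces the full part contour."""
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    z = 0.0
    while z > -total_depth:
        z -= step_down
        contour = contour_at_depth(z)
        x0, y0 = contour[0]
        lines.append(f"G01 X{x0:.3f} Y{y0:.3f} Z{z:.3f} F{feed:.0f}")
        for x, y in contour[1:] + [contour[0]]:
            lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed:.0f}")
    return lines

if __name__ == "__main__":
    toolpath = generate_contour_toolpath()
    print("\n".join(toolpath[:10]))  # preview the first few G-code lines
```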
The CAM modeling process is shown in Figure 5. Experimental setup The manufacturing process was carried out using a C-tek three-axis (KM-80D) CNC vertical milling machine, as shown in Figure 6 (a), equipped with a maximum spindle speed and feed rate of 6000 rpm and 10000 mm/min. The forming tool was made from high-speed steel (molybdenum-MAX cobalt high-speed steel, M42) and designed with a hemispherical tip (10 mm diameter), as shown in Figure 7. The forming fixture frame was built and designed by [26] and is mounted on the CNC machine to fix the sheet during the manufacturing process. The initial blank dimensions were 225mm×225mm, with a thickness of 0.5 and 0.8 mm for 304 and 316L, respectively. The frame consists of a square blank holder and a square backing plate. The backing plate was made from mild steel with dimensions of 225mm×225mm×5mm with a profile-specific hole, as shown in Figure 6 (b). The profile-specific hole represents the CAD model's top view (x-y) plane to ensure maximum accuracy and correlation with the tool path and minimize geometrical deviation caused by sheet bending and material spring back. The hole was made with an offset of 2mm to account for sheet thickness and tool material. Lubrication was applied between the tool and the sheet to reduce friction and tool wear. The manufacturing process of the denture plate is shown in Figure 8. Measurement procedure A three-dimensional (3D) comparison analysis to evaluate the geometrical discrepancies between the manufactured parts and the reference CAD model was conducted using the GOM Inspect 3D data analysis software. The scanned blank dimensions were 80×60mm, as the sheets' excess material was removed for the blank to be scanned. The external surfaces of the 304 and 316L denture bases were scanned using the Medit T710 lab scanner, as shown in Figure 9. The lab scanner consists of four 5.0 MP camera systems and employs blue light scanning technology, ensuring high scanning accuracy (<4 μm). The scanned data (point clouds) were reconstructed into an STL model to be imported into the GOM Inspect software. The analysis starts by defining the scanned data (STL) as the actual model and the designed CAD model as the reference model. The scanned data and the CAD model were aligned (as shown in Figure 10) using the pre-alignment strategy to detect dimensional discrepancies and generate the deviation map. The outcome of this analysis is represented graphically, as shown in Figures 11 and 12. Colors represent different geometrical deviations along the part in mm. Negative values show that the produced part has failed to reach the desired profile. In contrast, positive values indicate that the produced part has exceeded the designed profile. Geometrical accuracy Denture fit is an essential aspect that determines the quality of the produced denture, as it ensures the patient's comfort with wear and reduces or prevents traumatic ulcers in the mouth. Thus, manufacturing a denture plate with satisfactory dimensional accuracy is essential to ensuring the patient's comfort [27]. In our study, both stainless steel prototypes (304 and 316L) showed similar results, as shown in Figures 11 and 12. The deviation between the developed and actual models primarily ranged between ±0.25 mm. The obtained results are considered within the clinically acceptable tolerance of 0.311mm [28]. Therefore, the obtained results are judged to be satisfactory.
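As a rough illustration of how a deviation map of this kind can be computed, the sketch below estimates per-point deviations between an aligned scan and a reference point cloud using a nearest-neighbour search. This is only a simplified stand-in for the signed surface deviations reported by GOM Inspect, and the data in the example are synthetic, not measurements from this study.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(scan_points, reference_points):
    """Approximate the geometric deviation of each scanned point as the distance
    to its nearest neighbour on the (already aligned) reference point cloud.
    GOM Inspect computes signed surface deviations; this unsigned nearest-point
    distance is only a simple stand-in for illustration."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points)
    return distances

# Illustrative synthetic data (not measurements from this study):
rng = np.random.default_rng(0)
reference = rng.uniform(0, 80, size=(5000, 3))           # reference model points (mm)
scan = reference[:3000] + rng.normal(0, 0.1, (3000, 3))  # "scan" with ~0.1 mm noise

dev = deviation_map(scan, reference)
print(f"mean deviation {dev.mean():.3f} mm, 95th percentile {np.percentile(dev, 95):.3f} mm")
```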
Furthermore, in comparison to the traditional method (lost wax technique), the results are similar to those obtained by [29], who measured the average dimensional gaps between the conventionally fabricated models and the reference model to be between 0.15 and 0.28mm. In addition, the results are also comparable to those of the most common CAD-CAM techniques that are currently being used in the fabrication of denture prosthetic bases, such as the subtractive milling process (primarily ranging around 0.2 mm) and the additive manufacturing method (3D printing) (primarily ranging around 0.16mm) [30]. Taking these facts into account, it can be considered that SPIF technology has the potential to be applied in dentistry to fabricate metal dental prostheses. In addition, the manufacturing time is highly reduced (1 hour) compared to the milling process (5 hours) and 3D printing (1.3 hours) [31]. The deviation map shows that the maximum errors are located at the edge of the parts in the regions closest to the clamping plate, which can be ascribed to the sheet bending during the forming process, as these regions are under excessive bending stress since they are the nearest to the backing plate and the farthest from the tool. Geometrical inaccuracies caused by the bending effect during the forming process are one of the primary origins of geometrical errors in the SPIF process. However, from a dental perspective, high dimensional error in this region is not of significant concern since artificial teeth and acrylic resin will further pad it [19]. One possible way to minimize the bending effect and improve the overall geometrical accuracy is the application of double-sided incremental forming (DSIF), in which an additional supporting tool is utilized during the forming process. In contrast, regions located at the bottom part and the walls of the denture plate have a slight and uniform geometrical deviation. As geometrical deviations in these regions are commonly affected by the pillow effect, it can be concluded that pillow generation was minimized and did not produce significant inaccuracies. Furthermore, the dimensional discrepancies detected in the prototype made from 0.5mm 304 stainless steel showed a slight improvement in dimensional accuracy compared to the 0.8 mm 316L prototype, which can be attributed to the lower sheet thickness, which exhibits less spring-back. Thickness distribution One of the significant drawbacks of incremental sheet forming is sheet thinning, which has an adverse effect on the formed parts in terms of geometrical accuracy, formability, and strength. In fact, when the formed part reaches its forming limit, thinning can lead to cracking. Thus, it is crucial to evaluate the thickness distribution of the formed components. In this research, the thickness distribution was measured by scanning the entire part (internal and external surfaces of the formed component). The thickness distribution was measured using the GOM Inspect software. The results showed that the thickness remained unchanged at the bottom of the plate for both types of stainless steel (304 and 316L) with initial sheet thicknesses of 0.5mm and 0.8mm. The most affected zones are located along the lateral and front walls of the plate, with a maximum thickness reduction of 50%, as thinning intensity increased at regions with high wall angles. From a dental point of view, this is highly desirable as it ensures mass reduction, which, as a result, increases the comfort of wear for the patients [32].
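The roughly 50% thinning observed at steeply inclined walls is consistent with the sine law commonly used as a first-order estimate of wall thickness in incremental forming; the relation below is quoted as standard background rather than a measurement from this study:

```latex
t_f = t_0 \sin(90^\circ - \alpha) = t_0 \cos\alpha ,
\qquad
\frac{t_f}{t_0} = 0.5 \;\;\text{at}\;\; \alpha = 60^\circ ,
```

where $t_0$ is the initial sheet thickness, $t_f$ the wall thickness after forming, and $\alpha$ the wall angle measured from the horizontal sheet plane.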
However, additional tests are required to evaluate the strength of the parts to verify whether this mass reduction affects the performance of the plate. The early results are promising; Milutinovic et al. [25] performed mechanical tests (stiffness and flexural strength) to evaluate the performance of a SPIF-made denture plate from stainless steel 430 with 0.5mm thickness, in which a thickness reduction of about 60% was recorded. The study's results revealed that the denture plate produced using SPIF technology had satisfactory mechanical properties and even exhibited similar performance to a conventionally produced denture plate (using the lost wax technique) with a thickness of 0.8mm. In conclusion, additional tests are still necessary to assess the performance of the produced denture plate, especially in the long term. Thickness distribution for the 304 and 316L parts is presented graphically in Figures 13 and 15. Conclusions Single point incremental sheet forming has been successfully implemented to produce a denture plate from two types of stainless steel, 304 and 316L, with initial thicknesses of 0.5mm and 0.8mm, respectively. The reported results demonstrate that the process is feasible and can be used in real-life medical applications. Compared to the traditional way of impression taking and model waxing, which requires a high degree of skill, high equipment costs, and high processing times, this process can minimize the working hours of dental clinicians and cut production costs significantly. The process was performed using a universal CNC vertical milling machine and required no preparation of shape-specific tools. The main conclusions of this work can be summarized as follows: 1. The process can be realized by developing a 3D model of the biomedical component, tool path generation, G-code transfer to the CNC machine, and the preparation of a customized backing plate to reduce the geometrical deviation. 2. Geometrical deviations between the developed CAD model and the actual model are predominantly in the range of ±0.25mm. The highest geometrical deviation was located at the upper zone of the deformed sheet. This geometrical shift can be attributed to the bending effect during the forming process at regions bordering the backing plate. 3. Thickness distribution was analyzed for both stainless steel plates. The thickness was almost unchanged at the bottom of the plates and around the edges. The most critical zones with extensive thinning were located at the lateral and front walls. Future work and recommendations 1. Similar work can be implemented using biocompatible polymeric materials such as polyether ether ketone (PEEK) and polymethyl methacrylate (PMMA), as polymer processing using SPIF is a relatively new technology, and research is scarce regarding these materials. 2. The implementation of double-sided incremental sheet forming (DSIF), as this technology can greatly reduce the geometrical deviations caused by the bending effect of the sheet. 3. A comparison between the SPIF-made denture plate and the commercial one regarding performance, cost, biocompatibility, and accuracy. 4. More work is required concentrating on the technical parameters used during the manufacturing process to improve the obtained accuracy.
Single-shot single-gate RF spin readout in silicon For solid-state spin qubits, single-gate RF readout can help minimise the number of gates required for scale-up to many qubits since the readout sensor can integrate into the existing gates required to manipulate the qubits (Veldhorst 2017, Pakkiam 2018). However, a key requirement for a scalable quantum computer is that we must be capable of resolving the qubit state in a single shot, that is, a single measurement (DiVincenzo 2000). Here we demonstrate single-gate, single-shot readout of a singlet-triplet spin state in silicon, with an average readout fidelity of $82.9\%$ at a $3.3~\text{kHz}$ measurement bandwidth. We use this technique to measure a triplet $T_-$ to singlet $S_0$ relaxation time of $0.62~\text{ms}$ in precision donor quantum dots in silicon. We also show that the use of RF readout does not impact the maximum readout time at zero detuning limited by the $S_0$ to $T_-$ decay, which remained at approximately $2~\text{ms}$. This establishes single-gate sensing as a viable readout method for spin qubits. I. INTRODUCTION Semiconductor quantum dots show great potential for scalable quantum information processors [4][5][6][7]. Singlet-triplet qubits, formed by taking the subspace of the two-electron spin states singlet S0 = (|↑↓⟩ − |↓↑⟩)/√2 and triplet T0 = (|↑↓⟩ + |↓↑⟩)/√2 under an energy gradient (such as a magnetic field gradient across two quantum dots), have enabled all-electrical control of qubit rotations while demonstrating immunity to common-mode magnetic field noise [8,9]. The singlet-triplet subspace spanned by S0 and the triplet T− = |↓↓⟩ can also be used to read out single electron spins. Here, by loading a spin-down electron onto the dot with the lower spin-down ground state, RF readout can be used to measure the spin state of the target electron on the other dot [1]. If the target electron on the other dot is spin-down, Pauli blockade prevents the target electron from tunnelling across the dots and yields no RF response, while a spin-up electron will form a singlet state with the other electron, giving a non-zero RF response. One of the challenges in scaling up to many qubits is the space real-estate needed for the spin sensors required to read out and initialise the individual qubits. It has been suggested that an optimal solution is to use the mandatory gates assigned for qubit control and manipulation as single-gate RF sensors [1,2,10-12]. To date, however, the sensitivity of such single-gate sensors has not been high enough to achieve single-shot readout. Single-shot qubit readout is a necessary requirement for running error correction codes where time-correlated measurements are required between many qubits [3,13]. Phosphorus donor quantum dots have previously exhibited large (∼8 meV) singlet-triplet splittings [14], with independent readout of double quantum dot systems using three-lead single electron transistor (SET) sensors [15,16]. When replacing the three-lead sensors with a single-gate sensor, long S0 to T− = |↓↓⟩ relaxation times of 2 ms have been achieved, enabling sufficient integration times to perform spin-state readout [2]. However, the sensitivity of the resonator circuit was limited by the low quality factor of its Coilcraft 1206CS-821XJE chip inductor. Superconducting inductors have recently demonstrated effective quality factors of up to 800, subsequently increasing the sensitivity of the readout circuit [17][18][19].
II. METHOD In this paper we integrate a superconducting inductor into a single-gate donor-based quantum dot architecture in silicon for single-shot readout. The device shown in Fig. 1a (previously measured in [2]) was fabricated in silicon with the leads and dots defined by atomically placed phosphorus donors using hydrogen-resist scanning tunnelling microscope (STM) lithography [20]. Two pairs of quantum dots (D1L, D1U) and (D2L, D2U), each consisting of approximately 3-4 donors, are each manipulated by two leads: a reservoir to load electrons and a gate to tune the singlet-triplet state. Single-shot readout was performed on a singlet-triplet state hosted across the dots D2L and D2U, using the resonator connected to reservoir R2. A global tunnel junction charge sensor TJ was patterned at the side and connected to a chip-inductor resonator to help locate a singlet-triplet charge transition. The resonators were connected to a frequency multiplexed line [2,21,22]. We have incorporated a 100 nm thick NbTiN (on Si substrate) superconducting spiral inductor on the single-gate sensor R2 to increase the quality factor for maximal readout signal (as shown in Fig. 1b). This inductor was a 14-turn spiral, 78 mm in length, 10 µm in width, with a 30 µm gap between turns. The inductor was found to retain its effective quality factor in parallel magnetic fields of ∼3.3 T. These large fields are necessary both for operating singlet-triplet qubits (to break the triplet degeneracy) and for performing RF readout, where the S0-T− energy anti-crossing should not intersect the RF tone as this would cause the qubit state to change during the measurement due to singlet-triplet T− mixing [23]. Fig. 1c shows the frequency response of the inductor when connected to R2. The internal and external quality factors of this inductor when wire-bonded to the device (75 mK) were approximately 800 and 400, respectively. [Figure 1 caption: The STM image shows the silicon surface lithography where the lighter regions have been desorbed from the lithographic hydrogen mask. These areas are dosed with phosphorus to form the dots (D1U, D1L, D2U and D2L) and metallic electrodes [24]. A standard chip-inductor resonator connects to the tunnel junction charge sensor TJ. Reservoir R2 is used to load dots D2L and D2U with electrons while the gates G1 and G2 are used to manipulate the singlet-triplet detuning of the dot pairs. B indicates the in-plane magnetic field during millikelvin measurements. (b) The superconducting resonator is added to the frequency multiplexed line, connected to R2, and measures the singlet-triplet state across D2U and D2L. (c) The reflected (sending and receiving the RF tone via the multiplexed line) frequency response of the superconducting inductor when connected to R2 at zero magnetic field.] The resonator's frequency was 339.6 MHz at zero magnetic field and 335.2 MHz at 2.75 T. III. RESULTS In Fig. 2a we plot the differential response from the tunnel junction charge sensor TJ as we sweep the gates G1 and G2 at the (3, 3) to (2, 4) inter-dot charge crossing. This charge configuration is equivalent spin-wise to a (1, 1)-(0, 2) singlet-triplet crossing where we observe a clear inter-dot transition due to the tunnelling of a single electron. We measure the tunnel coupling at the (3, 3) to (2, 4) transition as 39±6 GHz by plotting the dependence of the inter-dot transition at different applied magnetic fields [2].
Since the tunnel coupling is much larger than the driving frequency of the resonator (335.2 MHz), this inter-dot transition forms a good candidate for single-gate readout as the RF drive will ensure that the electron adiabatically oscillates between the two dots. When performing readout, the electrons oscillate between the dots, giving rise to a measurable quantum capacitance [2,10-12]. Triplet states cannot shuttle electrons due to Pauli blockade. Singlet states adiabatically move one electron between the dots, between the S0(1, 1) and S0(0, 2) states, shown by the red branch in Fig. 2b. The optimal point for maximal electron shuttling is at zero detuning ∆ = 0, since here the RF tone moves the electron the greatest distance, pushing it equally into both dots. We set the magnetic field to 2.75 T as this moves the S0-T− anti-crossing (the overlap of the red and blue lines) far enough away from the zero-detuning point such that the RF tone does not intercept this anti-crossing. We test the response of the single-gate sensor on R2 using the multi-purpose fast-pulse gate, G2. Here we send the RF tone with an amplitude ∆RF (grey dotted lines) through G2 while the superconducting resonator on R2 captures the resulting RF response. The response was fed into a lock-in amplifier, which modulated the amplitude of the input RF tone at 21.361 kHz, to filter out noise originating from the room-temperature apparatus. The overall measurement bandwidth was approximately 3.3 kHz. If we consider Fig. 2a, we can load a singlet S0(0, 2) state by pulsing from the (3, 4) state at point L to the (2, 4) state at point M. It is important that we wait ∼4 ms at point L before moving to the measurement point M so that the spin relaxes to the singlet ground state, as discussed later. The energy diagram describing the two electrons across the two quantum dots (Fig. 2b) shows that at the zero-detuning readout point M, the triplet T− (blue line) is the ground state. Thus, the singlet S0 will eventually decay into the triplet T− state (inset Fig. 2e) during measurement, and this sets an upper bound on the overall measurement time. To find the optimal detuning point for maximal response, we measure the RF response at different points in detuning, as shown in Fig. 2c, taking an average of 10,000 individual time traces at each point. On moving point M from negative detuning to zero detuning, a non-zero response is observed, indicating the presence of a singlet state. This signal decays again at positive detuning, where the singlet population decreases as the RF tone oscillates past the T−-S0(0, 2) anti-crossing [23]. We fit these decay events at different points in detuning ∆ to an exponential distribution, giving the amplitudes and time constants of the decay events in Fig. 2d (in black and orange, respectively). It is clear that the optimal RF response occurs at zero detuning as the electrons are maximally shuttled between the S0(1, 1) and S0(0, 2) states. Finally, we test whether the use of the RF tone itself shortens the S0-T− lifetime (for example, due to spin-orbit coupling). This would be undesirable since we do not want the detector to affect the population dynamics of the singlet-triplet state during measurement. To measure the bare S0-T− lifetime when no RF tone is applied, we started with the RF tone turned off, waited at point L for over 4.1 ms (to load a singlet as before) and then moved to zero detuning.
We only switched on the RF tone to measure the singlet population after waiting for different time periods at point M, as shown in Fig. 2e. When fitting to the resulting exponential decay of the singlet population, we found that the decay time remained the same (2 ms) as that when the RF tone remained switched on during the whole experiment (Fig. 2d). Thus, we conclude that the RF excitation does not play a major role in the S0-T− decay, as the singlet lifetime remains unaffected by the RF measurement tone. We perform single-shot measurements of spin-up electrons (equivalent to the singlet state) by waiting at point L for 4.1 ms and then pulsing to point M at zero detuning for maximum RF response. [Figure 3 caption: (b) A histogram (a probability density function (PDF) from 10,000 traces) of the maximum value of the RF response when waiting at point L for 0 s and 4.1 ms, shown in blue and red respectively. The dashed line shows the selected threshold that maximises the readout fidelity at 82.9%. (c) 500 individual time traces of the RF response. The first 250 were taken after waiting at L for 0.7 ms to partially load singlet states and the second 250 traces after waiting at L for 4.1 ms to fully load singlet states. The high RF response signifies the presence of singlet states that stochastically decay into triplet T− states. The shorter wait time highlights the lower singlet population as insufficient time was given for the T− state to decay into the S0 state before measurement. (d) To observe this dependence, using the optimal readout threshold, 1000 single-shot traces were taken to measure the singlet population on varying the time spent at point L. This probes the T− to S0 relaxation time at ∆ ∼ 1 meV of 0.62 ms.] Here we leave the RF tone switched on and measure the resulting response over time in Fig. 3a. Five such single-shot time traces are shown in red, demonstrating that we can detect a singlet state. Here, when moving to point M, the signal (dotted lines) clearly departs from the background level (dashed lines) and, after stochastic relaxation, returns back to the same level, highlighting real-time single-spin detection. The finite lifetime of the non-zero signal can be attributed to the singlet state S0 decaying into the triplet T− ground state. To distinguish a singlet state from a triplet state, these traces were compared against traces (shown in blue) taken when waiting 0 s at point L (that is, always reading a triplet T−), where the signal remains at the background level throughout the measurement. To quantify the fidelity of the single-spin readout, we must discriminate between a fully null signal (triplet) and one with a non-zero signal (singlet). The singlet signal on average follows an exponential decay function (as shown in Fig. 2e). Therefore, the actual signal is concentrated around the beginning of the measurement and diminishes during the course of the measurement. Thus, we applied an exponential window over the portion of the signal where the measurement begins (after we have moved to zero detuning at point M) and then compiled a histogram of the maximum values of each trace [25]. The histogram shown in Fig. 3b was created from 10,000 traces taken after waiting at point L for 4.1 ms and 0 s to measure the distributions for singlets and triplets, respectively. We took a threshold (shown by the dotted line in Fig. 3b) to partition the distributions such that values above were considered to be singlet states and values below were considered to be triplet states.
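As an illustration of how such a threshold and fidelity can be extracted from the two histograms, the following Python sketch scans candidate thresholds and reports the one maximising the average readout fidelity. The distributions used here are synthetic stand-ins for the exponentially windowed trace maxima, not the measured data.

```python
import numpy as np

def optimal_threshold(singlet_max, triplet_max, n_grid=1000):
    """Scan candidate thresholds and return the one maximising the average
    readout fidelity, defined as the mean of the singlet fidelity
    P(singlet signal > threshold) and the triplet fidelity
    P(triplet signal < threshold)."""
    lo = min(singlet_max.min(), triplet_max.min())
    hi = max(singlet_max.max(), triplet_max.max())
    best = (None, 0.0)
    for thr in np.linspace(lo, hi, n_grid):
        f_singlet = np.mean(singlet_max > thr)
        f_triplet = np.mean(triplet_max < thr)
        avg = 0.5 * (f_singlet + f_triplet)
        if avg > best[1]:
            best = (thr, avg)
    return best

# Synthetic stand-ins for the windowed maxima of 10,000 traces (illustrative only):
rng = np.random.default_rng(1)
triplet = rng.normal(0.10, 0.03, 10_000)              # background-only traces
singlet = np.maximum(rng.normal(0.10, 0.03, 10_000),
                     rng.normal(0.22, 0.05, 10_000))  # traces with a decaying signal

thr, fidelity = optimal_threshold(singlet, triplet)
print(f"threshold {thr:.3f}, average readout fidelity {fidelity:.1%}")
```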
We determined that the optimal threshold to maximise the readout fidelity was 0.157 [26,27]. This yielded an average single-spin readout fidelity, for the single-gate RF sensor, of 82.9% (where the singlet and triplet readout fidelities were 78.2% and 87.6%, respectively). With the ability to perform single-shot, single-spin readout, we measure the T− to S0(0, 2) decay (at ∼1 meV from zero detuning) by varying the time the pulse spends at point L. After every measurement, the electrons will have decayed into the triplet T− state. On initially pulsing to point L, the electrons (residing in separate dots) remain in T− instead of entering the (3, 4) charge state. This is because the tunnel rate of electrons from the reservoir R2 to D2U is slower, since D2U is further from R2 than D2L is. Thus, the electrons cannot move to the (3, 4) charge state until the electron in D2L moves to D2U to enter the singlet S0 state [2,28]. Fig. 3c shows 250 traces taken when waiting 0.7 ms at point L and 250 traces taken when waiting 4.1 ms at point L. The RF response starts to appear when measuring at point M. The lengths of each non-zero signal are exponentially distributed with a time constant of 2 ms and represent singlet states decaying into triplet T− states. When waiting a shorter time at L, there is clearly a smaller proportion of singlet states. Fig. 3d shows the singlet counts over 1000 traces taken at different wait times at point L. When viewing the singlet counts as a function of the wait time at point L and fitting the resulting exponential rise in the singlet counts, the decay time was measured to be 0.62 ms. This corresponds to the upper bound on the measurement time when using a charge sensor to distinguish between the T−(1, 1) and S0(0, 2) states [9]. IV. CONCLUSION In summary, we demonstrated single-shot readout of an electron spin using the singlet-triplet basis in silicon with a single-gate RF sensor. The reduction in gate density using a single-gate sensor simplifies architectures for large arrays of solid-state qubits [1,2]. We demonstrated that the S0 to T− relaxation time, which limits the qubit measurement time, was 2 ms and unaffected by the presence of the RF tone. The single-gate RF sensor gave an average measurement fidelity of 82.9%. The fidelity can be further improved by using resonators with higher internal quality factors along with better matching networks to the transmission line for greater signal strength. For example, in this experiment, if the internal and external quality factors were matched at 1600, the expected readout fidelity would exceed 99%.
3,890.6
2018-09-06T00:00:00.000
[ "Physics" ]
Classification of age-related macular degeneration using convolutional-neural-network-based transfer learning Background To diagnose key pathologies of age-related macular degeneration (AMD) and diabetic macular edema (DME) quickly and accurately, researchers attempted to develop effective artificial intelligence methods by using medical images. Results A convolutional neural network (CNN) with transfer learning capability is proposed and appropriate hyperparameters are selected for classifying optical coherence tomography (OCT) images of AMD and DME. To perform transfer learning, a pre-trained CNN model is used as the starting point for a new CNN model for solving related problems. The hyperparameters (parameters that have set values before the learning process begins) in this study were algorithm hyperparameters that affect learning speed and quality. During training, different CNN-based models require different algorithm hyperparameters (e.g., optimizer, learning rate, and mini-batch size). Experiments showed that, after transfer learning, the CNN models (8-layer Alexnet, 22-layer Googlenet, 16-layer VGG, 19-layer VGG, 18-layer Resnet, 50-layer Resnet, and a 101-layer Resnet) successfully classified OCT images of AMD and DME. Conclusions The experimental results further showed that, after transfer learning, the VGG19, Resnet101, and Resnet50 models with appropriate algorithm hyperparameters had excellent capability and performance in classifying OCT images of AMD and DME. . Additionally, nearly 750,000 individuals aged 40 or older suffer from diabetic macular edema (DME) [3], a vision-threatening form of diabetic retinopathy that causes fluid accumulation in the central retina. Many researchers have attempted to develop effective artificial intelligence algorithms by using medical images to diagnose key pathologies of AMD and DME quickly and accurately. Naz et al. [4] addressed the problem of automatically classifying optical coherence tomography (OCT) images to identify DME. They proposed a practical and relatively simple approach to using OCT image information and coherent tensors for robust classification of DME. The features extracted from thickness profiles and cysts were tested using 55 diseased and 53 normal OCT scans in the Duke Dataset. Comparisons revealed that the support vector machine with leave-one-out had the highest accuracy of 79.65%. For identifying DME, however, acceptable accuracy (78.7%) was achieved by using a simple threshold based on the variation in OCT layer thickness. Najeeb et al. [5] used a computationally inexpensive single layer convolutional neural network (CNN) structure to classify retinal abnormalities in retinal OCT scans. After training using an open-source retinal OCT dataset containing 83,484 images from patients, the model achieved acceptable classification accuracy. In a multi-class comparison (choroidal neovascularization (CNV), DME, Drusen, and Normal), the model achieved 95.66% accuracy. Nugroho [6] used various methods, including histogram of oriented gradient (HOG), local binary pattern (LBP), DenseNet-169, and ResNet-50, to extract features from OCT images and compared the effectiveness of handcrafted and deep neural network features. The evaluated dataset contained 32,339 instances distributed in four classes (CNV, DME, Drusen, and Normal). 
The accuracy values for the deep neural network-based methods (88% and 89% for DenseNet-169 and ResNet-50, respectively) were superior to those for the non-automatic feature models (50% and 42% for HOG and LBP, respectively). The deep neural network-based methods also obtained better results in the underrepresented class. In Kermany et al. [7], a diagnostic tool based on a deep-learning framework was used to screen patients with common treatable blinding retinal diseases. By using transfer learning, the deep-learning framework could train a neural network with a fraction of the data required in conventional approaches. When an OCT image dataset was used to train the neural network, accuracy in classifying AMD and DME was comparable to that of human experts. In a multi-class comparison among CNV, DME, Drusen, and Normal, the framework achieved 96.1% accuracy. In Perdomo et al. [8], an OCT-NET model based on CNN was used for automatically classifying OCT volumes. The OCT-NET model was evaluated using a dataset of OCT volumes for DME diagnosis using a leave-one-out cross-validation strategy. Accuracy, sensitivity, and specificity all equaled 93.75%. The above results of research in AMD indicate that automatic classification accuracy needs further improvement. Therefore, the motivation of this study was to find CNN-based models and their appropriate hyperparameters that use transfer learning to classify OCT images of AMD and DME. The CNN-based models used for transfer learning included an 8-layer Alexnet model [9], a 22-layer Googlenet model [10], 16- and 19-layer VGG models (VGG16 and VGG19, respectively; [11]), and 18-, 50- and 101-layer Resnet models (Resnet18, Resnet50, and Resnet101, respectively; [12]). The algorithm hyperparameters included optimizer, mini-batch size, max-epochs, and initial learning rate. The experiments showed that, after transfer learning, the VGG19, Resnet101, and Resnet50 models with their appropriate algorithm hyperparameters had excellent performance and capability in classifying OCT images of AMD and DME. This paper is organized as follows. The research problem is described in Sect. 2. Section 3 describes the research methods and steps. Section 4 presents and discusses the results of experiments performed to evaluate performance in classifying OCT images of AMD and DME. Finally, Sect. 5 concludes the study. AMD and DME The macula, which is located in the center of the retina, is essential for clear visualization of nearby objects such as faces and text. Various eye problems can degrade the macula and, if left untreated, can even cause loss of vision. Age-related macular degeneration is a medical condition that can cause blurred vision or loss of vision in the center of the visual field. Early stages of AMD are often asymptomatic. Over time, however, gradual loss of vision in one or both eyes may occur. Loss of central vision does not cause complete blindness but can impair performance of daily life activities such as recognizing faces, driving, and reading. Macular degeneration typically occurs in older people. The classifications of AMD are early, intermediate, and late. The late type is further classified as "dry" and "wet" [13]. In the "dry" type, which comprises 90% of AMD cases, retinal deterioration is associated with formation of small yellow deposits, known as Drusen, under the macula. In the "wet" AMD type, abnormal blood vessel growth (i.e., CNV) occurs under the retina and macula.
Bleeding and fluid leakage from these new blood vessels can then cause the macula to bulge or lift up from its normally flat position, thus distorting or destroying central vision. Under these circumstances, vision loss may be rapid and severe. DME is characterized by breakdown of blood vessel walls in the retina resulting in accumulation of fluid and proteins in the retina. The resulting distortion of the macula then causes visual impairment or loss of visual acuity. One precursor of DME is diabetic retinopathy, in which blood vessel damage in the retina causes visual impairment [5]. OCT images of AMD and DME In this study, all OCT images of AMD and DME used in the experiments were obtained from Kermany et al. [14]. The images were divided into four classes: CNV, DME, Drusen, and Normal. Figure 1 shows representative images of the four OCT classes. [Fig. 1 caption (panels: CNV, DME, Drusen, Normal): Representative optical coherence tomography images of the CNV, DME, Drusen, and Normal classes. CNV choroidal neovascularization, DME diabetic macular edema.] Considered problem The considered problem was how to classify large numbers of different OCT images of CNV, DME, Drusen, and Normal efficiently and accurately. Since OCT images of CNV, DME, Drusen, and Normal can differ even for the same illness, a specialist or machine learning is needed to assist the physician in classifying the images. Methods The research methods and steps were collecting data, processing OCT images of AMD and DME, selecting a pre-trained network for transfer learning, classifying OCT images of AMD and DME by CNN-based transfer learning, comparing performance among different CNN-based transfer learning approaches, and comparing performance with other approaches in classifying OCT images of AMD and DME. The detailed steps were as follows. Collecting data and processing OCT images of AMD and DME The OCT images of AMD and DME in Kermany et al. [14] were split into a training set and a testing set of images. The training set had 83,484 images, including 37,205 CNV images, 11,348 DME images, 8,616 Drusen images, and 26,315 images of a normal eye condition. The testing set used for network performance benchmarking contained 968 images, 242 images per class. To maintain compatibility with the CNN-based architecture, each OCT image was processed as a 224 × 224 × 3 image, where 3 is the number of color channels. Selecting pre-trained network for transfer learning Transfer learning is a machine learning method in which a model developed for a task is reused as the starting point for a model developed for another task. In transfer learning, pre-trained models are used as the starting point for performing computer vision and natural language processing tasks. Transfer learning is widely used because it reduces the computation time, the computational resources, and the expertise needed to develop neural network models for solving these problems [15]. In his NeurIPS 2016 tutorial, Ng [16] highlighted the potential uses of transfer learning and predicted that, after supervised learning, transfer learning will be the next major commercial application of machine learning. In transfer learning, a pre-trained model is used to construct a predictive model. Thus, the first step is to select a pre-trained source model from available models. The pool of candidate models may include models developed by research institutions and trained using large and complex datasets. The second step is to reuse the model.
The pre-trained model can then be used as the starting point for a model used to perform the second task of interest. This may involve using all or parts of the model, depending on the modeling technique used. The third step is to tune the model. Depending on the input-output pair data available for the task of interest, the user may consider further modification or refinement of the model. The widely used commercial software program Matlab R2019 by MathWorks has been validated as effective for pre-training neural networks for deep learning. The starting point for learning a new task was pretraining, in which the image classification network was pretrained to extract powerful and informative features from natural images. Most pre-trained networks were trained with a subset of the ImageNet database [17] used in the ImageNet Large-Scale Visual Recognition Challenge [18]. After training on more than 1 million images, the networks could classify images into 1000 object categories, e.g., keyboard, coffee mug, pencil, and various animals. Transfer learning in a network with pre-training is typically much faster compared to a network without pre-training. Classifying OCT images of AMD and DME by CNN-based transfer learning Fine-tuning a pre-trained CNN with transfer learning is often faster and easier than constructing and training a new CNN. Although a pre-trained CNN has already learned a rich set of image features, it can be fine-tuned to learn features specific to a new dataset, in this case, OCT images of AMD and DME. Fine-tuning a network is slower and requires more effort than simple feature extraction. However, since the network can learn to extract a different feature set, the final network is often more accurate. The starting point for fine-tuning deeper layers of the pre-trained CNNs for transfer learning (i.e., Alexnet, Googlenet, VGG16, VGG19, Resnet18, Resnet50, and Resnet101) was training the networks with a new data set of OCT images of AMD and DME. Figure 2 is a flowchart of the CNN-based transfer learning procedure. Classification performance in comparison with other approaches The accuracy, precision, recall (i.e., sensitivity), specificity, and F1-score values were used to compare performance with other approaches. Precision was assessed by positive predictive value (number of true positives over number of true positives plus number of false positives). Recall (sensitivity) was assessed by true positive rate (number of true positives over the number of true positives plus the number of false negatives). Specificity was measured by true negative rate (number of true negatives over the number of false positives plus the number of true negatives). The F1-score, a function of precision and recall, was used to measure prediction accuracy when classes were very imbalanced. In information retrieval, precision is a measure of the relevance of results while recall is a measure of the number of truly relevant results returned. The formula for the F1-score is F1 = 2 × (precision × recall)/(precision + recall). Results The proposed CNN-based transfer learning method with appropriate hyperparameters was experimentally used to classify OCT images of AMD and DME. The OCT images in Kermany et al. [14] were used to train models and to test their performance. The experimental environment was Matlab R2019 and its toolboxes developed by The MathWorks.
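The study performed transfer learning in Matlab; as a rough analogue of the workflow described above (load an ImageNet pre-trained backbone, replace the final layer for the four OCT classes, and fine-tune with the reported hyperparameters), a hedged PyTorch sketch is shown below. The folder path, the choice of ResNet-50, and the momentum value of 0.9 are illustrative assumptions; the batch size of 40, learning rate of 10^-4, 5 epochs, and 224 × 224 × 3 inputs follow the text.

```python
# Illustrative PyTorch analogue of the described Matlab transfer-learning workflow;
# not the authors' code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # match the 224 x 224 x 3 input size
    transforms.ToTensor(),
])

# Hypothetical folder layout: one sub-folder per class (CNV, DME, DRUSEN, NORMAL)
train_set = datasets.ImageFolder("OCT2017/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=40, shuffle=True)  # MiniBatchSize 40

model = models.resnet50(weights="IMAGENET1K_V1")    # ImageNet pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 4)       # new head for the 4 OCT classes

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # 'sgdm'

for epoch in range(5):               # MaxEpochs = 5
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```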
The network training options were the options available in the Matlab toolbox for CNN-based transfer learning with algorithm hyperparameters, i.e., 'Optimizer', 'MiniBatchSize', 'MaxEpochs' (maximum number of epochs), and 'InitialLearnRate'. The experimental data for OCT images of AMD and DME included a training set and a testing set. To maintain compatibility with the CNN-based architecture, each OCT image was processed as a 224 × 224 × 3 image, where 3 is the number of color channels. Table 1 shows the training and testing sets of OCT images of AMD and DME. For training, different CNN-based models require different algorithm hyperparameters. The hyperparameter values are set before the learning process begins. Table 2 shows the selected CNN-based models with algorithm hyperparameters. The training option was use of 'sgdm', a stochastic gradient descent with momentum optimizer. MiniBatchSize used a mini-batch with 40 observations at each iteration. MaxEpochs set the maximum number of epochs for training. InitialLearnRate set the initial learning rate used during training. For each CNN-based model, Table 3 shows the accuracy in each experiment, the average accuracy for all experiments, and the standard deviation (SD) in accuracy in classifying OCT images of AMD and DME. Data are shown for five independent runs of the experiments performed in the training set and in the testing set. Table 3 shows that the average accuracy in the testing set ranged from 0.9750 to 0.9942 when using the CNN-based models with appropriate hyperparameters for transfer learning. For the testing set, the VGG19, Resnet101, and Resnet50 models had average accuracies of 0.9942, 0.9919, and 0.9909, respectively, which were all very high (all exceeded 0.99). Moreover, the SDs in accuracy obtained by VGG19 and Resnet101 were both 0.0005. That is, the VGG19 and Resnet101 had the most robust performance in classifying OCT images of AMD and DME. Figure 3 shows how model training progressively improved accuracy for VGG19: the blue line shows the progressive improvement in accuracy for the training set, and the black line shows the progressive improvement in accuracy for the testing set. Figures 4 and 5 show how model training progressively improved accuracy in Resnet101 and Resnet50, respectively. The training option was the sgdm optimizer. MiniBatchSize used 40 observations at each iteration. Iterations per epoch were 2087. MaxEpochs were set to 5. Therefore, the maximum iterations were 10,435 (= 2087 × 5). The blue line shows the progressive improvement in accuracy when using the training set, and the black line shows the progressive improvement in accuracy when using the testing set. The accuracy metric was used to measure the transfer learning performance of the CNN-based models. Precision, recall, specificity, and F1-score were further used to validate classification performance. The results were depicted by creating a confusion matrix of the predicted labels versus the true labels for the respective disease classes. Tables 4, 5 and 6 show the confusion matrices used in multi-class comparisons of Normal, CNV, DME, and Drusen for VGG19, Resnet101, and Resnet50 for the testing data. Table 4 shows that, in Experiment #5, VGG19 achieved an accuracy of 0.9948 with an average precision of 0.9949, an average recall of 0.9948, an average specificity of 0.9983, and an average F1-score of 0.9948. Table 5 shows that, in Experiment #5, Resnet101 achieved an accuracy of 0.9928 with an average precision of 0.9928, an average recall of 0.9928, an average specificity of 0.9976, and an average F1-score of 0.9928.
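The per-class precision, recall, specificity, and F1 values reported in Tables 4-6 can be derived directly from a confusion matrix and macro-averaged. A small sketch with a hypothetical 4-class matrix (not the paper's actual counts):

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged metrics from a square confusion matrix
    (rows = true classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": tp.sum() / cm.sum(),
        "precision": precision.mean(),
        "recall": recall.mean(),
        "specificity": specificity.mean(),
        "f1": f1.mean(),
    }

# Hypothetical confusion matrix for (CNV, DME, Drusen, Normal), 242 test images per class
cm = [[240, 1, 1, 0],
      [1, 241, 0, 0],
      [2, 0, 239, 1],
      [0, 0, 1, 241]]
print(macro_metrics(cm))
```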
Table 6 indicates that, in Experiment #5, Resnet50 achieved an accuracy of 0.9917 with an average precision of 0.9918, an average recall of 0.9917, an average specificity of 0.9972, and an average F1-score of 0.9917. Next, the performance of the proposed CNN-based transfer learning approach in classifying OCT images of AMD and DME was compared with the results reported in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. Table 7 shows the confusion matrix for Normal, CNV, DME, and Drusen obtained by Kermany et al. [7]. The model in Kermany et al. [7] achieved an accuracy of 0.9610 with an average precision of 0.9610, an average recall of 0.9613, an average specificity of 0.9870, and an average F1-score of 0.9610. Table 8 shows the confusion matrix for Normal, CNV, DME, and Drusen obtained by Najeeb et al. [5]. The model in Najeeb et al. [5] achieved an accuracy of 0.9566 with an average precision of 0.9592, an average recall of 0.9566, an average specificity of 0.9855, and an average F1-score of 0.9563. For the testing set, Table 9 shows the classifier accuracy, average precision, average recall/sensitivity, average specificity, and average F1-score obtained by the different CNN-based models. When the testing set was used in Experiment #5, the accuracies obtained by VGG19, Resnet101, and Resnet50 were 0.9948, 0.9928, and 0.9917, respectively, which are all very high and were superior to the accuracies obtained by the models in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. In Experiment #5, other measures (i.e., average precision, average recall/sensitivity, average specificity, and average F1-score) obtained by VGG19, Resnet101, and Resnet50 were higher than those obtained by the models in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. That is, by using transfer learning with appropriate algorithm hyperparameters, the proposed CNN-based models outperformed the previously reported approaches. Discussions In this study, the appropriate algorithm hyperparameters for CNN-based transfer learning were very important for classifying OCT images of AMD and DME. This phenomenon was demonstrated by experiments in which the VGG19, Resnet50, and Resnet101 models achieved a classification accuracy exceeding 99%. If an inappropriate combination of algorithm hyperparameters is used, the classification accuracy will be reduced. For example, the algorithm hyperparameters for Googlenet transfer learning and the corresponding results are shown in Table 10, which indicates that an appropriate set of hyperparameters can provide good performance for transfer learning, where the Optimizer of sgdm and InitialLearnRate of 10^-4 are identical across cases. Therefore, the combination of algorithm hyperparameters of the third case (i.e., Optimizer of sgdm, MiniBatchSize of 40, MaxEpochs of 5, and InitialLearnRate of 10^-4) was selected for the study because it achieved high accuracy in the training and testing sets. Tables 11 and 12 show the algorithm hyperparameters for Resnet50 and Resnet101 transfer learning and their respective results. Tables 11 and 12 show that, if all other hyperparameters are identical (Optimizer of sgdm, MiniBatchSize of 40, and InitialLearnRate of 10^-4), changing MaxEpochs from 3 to 5 improves accuracy for the test set to more than 0.99. Therefore, this combination of algorithm hyperparameters (i.e., Optimizer of sgdm, MiniBatchSize of 40, MaxEpochs of 5, and InitialLearnRate of 10^-4) was selected for Resnet50 and Resnet101 transfer learning in classifying OCT images of AMD and DME. Figure 6 displays four sample images with predicted labels and the predicted probabilities of images with those labels.
The predicted labels for the four randomly selected sample images matched their true categories, and the predicted probabilities approached 100%, indicating that the model established by CNN-based transfer learning had high classification ability. Presently, CNN-based transfer learning is very efficient and stable [19,20]. The key to successful image classification is ensuring that the original images are correctly classified. This phenomenon was demonstrated by experiments in this study in which the CNN-based model achieved a classification accuracy exceeding 99%. Therefore, CNN-based transfer learning with appropriate hyperparameters has the best performance in classifying OCT images of AMD and DME. Conclusions This study used CNN-based transfer learning with appropriate algorithm hyperparameters for effectively classifying OCT images of AMD and DME. The main contribution of this study is the confirmation that suitable CNN-based models with their algorithm hyperparameters can use transfer learning to classify OCT images of AMD and DME.
4,716.6
2021-11-01T00:00:00.000
[ "Computer Science" ]
DO IMPACT INVESTING OPPORTUNITIES EXIST IN PUBLIC EQUITY? AN EMPIRICAL EXAMINATION Even though impact investing increasingly establishes a presence in public equity, research confirming that this asset class is feasible for impact investments is lacking (Phillips & Johnson, 2021). This has resulted in queries about unrealistic assumptions of achieving positive social and environmental impact, alongside financial returns, in a public equity setting (Bernal et al., 2021; Boscia et al., 2019). Resultingly, the public equity approach to impact investing has been accused of being the first step towards a total dilution of the industry's original mission of attaining goals that are not feasible through either pure philanthropic grants or conventional investments. Aimed at bridging the current research gap within the literature of impact investing, this paper examines whether impact investing opportunities exist in public equity. Based on an empirical foundation of 163 publicly listed companies, which are the target of impact investments made through impact funds, it is found that impact investing opportunities exist in public equity when evaluated based on long-term measures of shareholder value creation. Theoretical implications suggest that the concept of impact investing does not need to be redefined in a public equity setting and that the field could advance from discussing the fundamental assumptions to start defining the boundaries of impact investing in public equity. INTRODUCTION Impact investing has developed as one of the most visionary and promising areas of the social finance movement (Bernal et al., 2021; Jackson, 2013; Lehner & Nicholls, 2014; Lehner et al., 2019). Within impact investing, the goal is to create positive social or environmental impact and financial returns simultaneously (Boscia et al., 2019; Kölbel et al., 2020; Nicholls et al., 2015), whereby it can attain goals that are not feasible through either pure philanthropic grants or conventional investments (Roundy, 2020; Weber, 2016). Notwithstanding the expeditious and noteworthy momentum of impact investing (Ormiston et al., 2015), research has not kept pace with the growing practitioner interest (Agrawal & Hockerts, 2021). The amount of academic publications on impact investing remains limited (Kölbel et al., 2020; Wendt, 2019). The chief part of the literature currently originates from industry-based reports (Hebb, 2013), resulting in a gap between scholarly and practitioner bases of knowledge (Clarkin & Cangioni, 2016). The current landscape of industry-based reports, although attempting to increase the transparency of this emergent industry, focuses mainly on the financial return of impact investments (Hehenberger & Harling, 2018). However, the problem is that financial data disconnected from impact fail to account for the double-bottom line (Wilburn & Wilburn, 2014) and, consequently, the complex reality faced by impact investors (Hehenberger & Harling, 2018).
As a result, the rapid growth of the impact investing industry has been followed by queries about potentially unrealistic assumptions of the ability to achieve social and environmental impact alongside financial return (Phillips & Johnson, 2021; Born & Brest, 2013). In this context, academic research has a distinctive contribution to make in developing the impact investing industry, in questioning its underlying assumptions based on empirical evidence (Daggers & Nicholls, 2016). At this nascent stage of formation, the field needs to manage market expectations through a comprehensive and transparent assessment of the ability to simultaneously generate financial return and impact (Evans, 2013). Consequently, scholars have continued to highlight the need for studies that include both impact data and data on financial return (Reisman et al., 2018; Hehenberger & Harling, 2018). At its broadest, this research aims to close this gap by empirically assessing the relationship between impact and financial return among impact investments. Impact investing is starting to venture out of its private market origins and to establish a presence in public market investments, which has generated both renewed interest and new investors (Roundy et al., 2017). Among repeat respondents of the Global Impact Investing Network's (GIIN) 2022 Annual Impact Investor survey, the highest growth of impact investing activity per asset class allocation has occurred in public equity. Public equity has grown by a 33% compound annual growth rate from 2015 to 2019 (GIIN, 2022). These findings are supported by a recent report published by Blackrock (Rice & Tran, 2022); they claim that public equities can take an important "complementary role in the impact investment ecosystem, offering solutions that private markets cannot and allowing more investors to participate in a space long available only to high-net-worth and institutional investors". This trend persists even though some scholars argue that the public equity approach to impact investing is a move towards a total dilution of the industry's original mission (Born & Brest, 2013; Balbo, 2016), and O'Donohoe et al. (2010) anticipate that more publicly traded investment opportunities will emerge. However, it is postulated that a process like greenwashing (Delmas & Burbano, 2011) might be occurring in the impact investing industry as it gradually mainstreams (Born & Brest, 2013), threatening the legitimacy of the entire concept (Findlay & Moran, 2019). Ultimately, this paper contributes to the literature by studying the relationship between impact and financial return in a public equity setting.
The value of longitudinal studies that analyse impact investments becomes progressively essential as a tool for establishing integrity and legitimacy (Suchman, 1995) and increasing investor confidence (Clarkin & Cangioni, 2016).If impact investments in public markets are adopted by interests seeking to use the term merely to inflate market share, this might have disadvantageous implications for the field's general development (Findlay & Moran, 2019).Consequently, scholars have called for future research to conduct longitudinal studies to bring further insights and causal inferences into the relationship between impact and financial performance in public equity (Urban & George, 2018).Aimed at contributing to the empirical knowledge of impact investing in public equity and derived from the above, applying 163 publicly listed companies, the problem statement of this research takes its point of departure from the identified research gap and can be summarized as: RQ: Do impact investing opportunities exist in public equity? Through the empirical examination of this research, it is impossible to falsify that impact investing opportunities exist for long-term impact investors, when impact investments are made into publicly listed companies, which are the target of impact funds.The findings have important theoretical implications for the field of impact investing, as they indicate that the concept of impact investing does not need to be redefined in a public equity setting when evaluated based on market-based measures of shareholder value creation.Thus, the findings suggest that the field can continue its progression and wide-ranging adoption, as impact realization can be attained pari passu with shareholder value creation in a public equity setting.Further, findings suggest that the field could advance from discussing the fundamental assumptions and start defining the boundaries of impact investing in public equity. This paper is structured as follows: Section 2 outlines the theoretical background and provides hypotheses for the research.Then there is a description of the research methodology in Section 3 and presentation of the results in Section 4. Thereafter, a discussion on the research's implications is proposed in Section 5. Finally, a conclusion of the findings and directions for future research are presented in Section 6. 
Secondly, the lack of lucidity potentially impedes the progression and wide-ranging adoption of impact investing (Höchstäder & Scheck, 2015), as it obstructs the possibility that conventional investors understand and take a stance towards it (Sandberg et al., 2009). An impact investing industry deprived of concord on what constitutes impact and what data must be collected is intrinsically inhibited, making it less accessible for conventional investors who might procure additional capital were the field properly defined (Sardy & Lewin, 2016). As a result, the knowledge of the dynamics of impact investing lies with a few competent players (Höchstäder & Scheck, 2015). Furthermore, without a detailed framing of the impact investing industry, governments, policymakers, and regulatory authorities have a hard time building the required market infrastructure (Brandstetter & Lehner, 2014). Thirdly, scholarly research needs clear definitional parameters of the concept to allow for accurate discussions (Sandberg et al., 2009). Without conceptual, terminological, and definitional transparency, it becomes difficult for the impact investing industry to gain legitimacy and for corresponding theories to advance (Höchstäder & Scheck, 2015). LITERATURE AND HYPOTHESES Despite the diverse conceptualizations, all definitions of impact investing share the achievement of societal or environmental alteration through capital investment (Urban & George, 2018). Further, like conventional investing (Höchstäder & Scheck, 2015), impact investing implicates the provision of financial resources for a financial return (Grabenwarter & Liechtenstein, 2011; Brandstetter & Lehner, 2014). Ultimately, even though the majority of scholars emphasize the absence of definitional homogeneity (Daggers & Nicholls, 2016; Agrawal & Hockerts, 2021), at the most general level, the field appears to agree on the fundamental definitional components around which impact investing is generally defined. Here, the prevailing definition of impact investing is based on the two core dimensions of financial return and social or environmental impact (Höchstäder & Scheck, 2015). Financial return Concerning the financial dimension of impact investing, the return of the invested principal seems to be the minimum requirement to classify as an impact investment (Freireich & Fulton, 2009; Trelstad, 2016; Nilsson & Robinson, 2018). In their inductive study, Roundy et al. (2017) find that there is a variance among impact investors in terms of the priority they put on achieving financial return and that individual impact investors have different financial targets. These findings support the claims of Hehenberger and Harling (2018), who find that it makes no sense to study impact investing from a financial perspective, decoupled from the social and environmental components of impact investing.
According to Freireich and Fulton (2009), impact investors can be generally categorized into two groups based on their primary objective: impact-first and finance-first investors. To impact-first investors, the aim is to optimize social or environmental impact with a floor for financial return (Freireich & Fulton, 2009; Glänzel & Scheuerle, 2016). Hebb (2013) suggests that the impact-first investor seeks financial returns ranging from repayment of the invested principal up to a risk-adjusted market return. On the other hand, finance-first investors prioritize financial returns and understand impact investing as investments intended to create social and environmental returns in addition to financial returns but with a floor for social or environmental impact (Freireich & Fulton, 2009; O'Donohoe et al., 2010). This implies that impact investors can select from various investment approaches that offer different blends of financial return and impact to fit the rationality of their investments (Nicholls, 2010). However, both components must have a positive relationship to separate impact investing from traditional investing and philanthropy (Grabenwarter & Liechtenstein, 2011). Impact The notion of impact is the central component of impact investing that sets it apart from related concepts within the broader social finance paradigm (Nicholls et al., 2015). A comprehensive definition of impact is lacking in the current impact investing literature (Reeder & Colantonio, 2013), and the standards for impact remain largely subjective without any defined criteria for judging the impact hurdle that an investment must pass to qualify as an impact investment (Höchstäder & Scheck, 2015). Resultingly, one of the most disputed conditions remains the question of what types of impact are adequate to distinguish an impact investment (Kölbel et al., 2020; Svedova et al., 2014). However, as long as the standards for what constitutes impact are defined by the individual impact investor, Bugg-Levine and Emerson (2011) argue that almost all investments can qualify as impact investments. In the literature, impact is defined mainly as social or environmental impact (Boscia et al., 2019; Ashta, 2012). Another regular definition of impact is centred around the focus on solving thematic issues (Wendt, 2019), which is why impact investing is tightly linked to social and environmental problems and challenges (Spiess-Knafl & Scheck, 2017). Nicholls et al. (2015) argue that all companies create economic, social, and environmental value. However, most companies are not managed to optimize their long-term social and environmental value creation (Emerson, 2003).
However, Trelstad (2016) argues that intent is not significantly important as long as the investor manages to find an investment capable of delivering the desired impact, since everyone does not need to share the same intentions around a specific impact for it to be realized. Countering this argument, Born and Brest (2013) maintain that while social or environmental impact is possible to achieve unintentionally, it does not imply that intention is insignificant, as investors are more likely to achieve what they intentionally seek. This is supported by Dadush (2015), who finds that the less an investor is concerned with realizing a positive environmental or social impact, the more unlikely the investor is to proffer relevant assistance to the investee in accomplishing its environmental or social undertaking. Moreover, the less an investor cares for positive environmental or social impact, the higher the likelihood that the investor will pressure the investee to prioritize financial performance over impact realization. Embedded in this line of reasoning, Brown and Swerky (2012) add to the definition of impact investing that the intended impact must be clearly defined a priori. Thus, positive externalities in the form of incidental side-effects of commercial deals are not enough to qualify as an impact investment. Kölbel et al. (2020) argue for a greater investor impact where a lack of capabilities and financial constraints are evident. Ultimately, an impact investor must demonstrate an intention to cause both a positive environmental or social impact and a financial return (Born & Brest, 2013; Barber et al., 2021). Ultimately, the definition of impact investing is based on the two core principles of blended value (Emerson, 2003) and financial return (Weber, 2016). In this context, the blended value principle claims that impact investing can attain both financial returns as well as social and environmental impact (Emerson & Cabaj, 2000), whereas the principle of financial return assures the lasting viability of such investments (Geobey & Weber, 2013). A prerequisite to qualify the deployment of capital for social and environmental impact in public equity as impact investing is that there must be a positive relationship between the scale of social and environmental impact achieved and the ability of the investee to generate shareholder value. Consequently, the first hypothesis can be summarized as follows: H1: There is a positive linear relationship between social and environmental impact and shareholder value creation. The most essential condition is the existence of a causal connection between the impact achieved and the generation of financial return (Grabenwarter & Liechtenstein, 2011). However, scholars point out that the existing empirical literature has not effectively dealt with the inherent causality issues of impact investing (Aguinis & Glavas, 2012). If impact investing opportunities are to exist in public equity, the causal relationship between social and environmental impact and shareholder value creation must be bidirectional in order to deliver blended value (Emerson, 2003) and financial return (Weber, 2016), which are the two core principles that set impact investing apart from philanthropy and conventional investing. Ultimately, this gives rise to the second set of hypotheses: H2a: Higher levels of social and environmental impact cause higher levels of shareholder value creation.
H2b: Higher levels of shareholder value creation cause higher levels of social and environmental impact. Intentionality Oleksiak et al. (2015) specify the deliberate structuring of investments to provide positive social or environmental impact alongside financial return, where social and environmental externalities are more than a by-product of financial value creation, as a critical trait of impact investing. Thus, unintentionally realizing a social or environmental impact in the course of searching for a financial return does not qualify as an impact investment (Brandstetter & Lehner, 2014). The intentionality criterion also implies that investments in sectors associated with positive externalities but driven by a pure for-profit motive do not classify as impact investments (Barber et al., 2021). Born and Brest (2013) argue that if an impact investor is unwilling to make a financial sacrifice, which they assume is not the case when investments go to publicly traded large-cap markets, the impact investment cannot contribute anything the market would not have achieved anyway. Measurability The core of impact measurement is identifying a causal relationship between an investment and its impact while attributing both the negative and positive effects to the investment (White, 2010). The measurement and assessment of impact is a way for impact investors to mitigate the risk of mission drift and exploitation of impact investees, which Spiess-Knafl and Scheck (2017) consider legitimate concerns for impact investing. Nicholls et al. (2015) claim that the more impact investing is accepted within the traditional financial markets, the graver this problem will become. Reeder et al. (2015) raise the concern that if impact investing should continue to burgeon, more robust quantifications of the broader effects of impact investing are needed. However, demonstrating impact is multifaceted and impeded by methodological complications like collecting and measuring sometimes intangible effects (Reeder & Colantonio, 2013).
Data collection and sampling The general population to which the results are meant to apply consists of publicly listed companies that are the target of impact investments. However, it is acknowledged that no certified database exists on impact investing activities (Urban & George, 2018). Consequently, the population is reduced to a target population that is redefined to account for publicly listed companies that are the target of impact investments made through impact funds investing in public equity. This paper draws upon existing lists from the GIIN, as the GIIN remains the sole actor within the impact investing industry that methodologically reports on the dynamics of impact investing (Balbo, 2016). First, an initial list was compiled based on all current members of the GIIN, consisting of 98 asset owners, 171 asset managers, and 64 service providers. Then, for all 51 GIIN research publications, the individual lists of participants and survey respondents for each publication are assessed and sorted into asset owners, asset managers, and service providers. Hereupon, duplicates and service providers are removed. A manual screening process is performed on the list of asset managers, where any impact fund investing in public equity is sorted into an individual list. Further, for each asset owner, this research follows the money invested in order to identify any impact fund investing in public equity, which is then added to the list of impact funds investing in public equity. However, the final list of impact funds is not perfect at excluding impact funds investing in publicly listed companies that do not belong to the target population. Membership of the GIIN is obtainable for asset owners and asset managers that are interested in deepening their engagement with the impact investing industry. Thus, they might not all be active in impact investing at the point of sampling. As a result, there is a risk that the list might include funds that do not belong to the target population. In order to mitigate the risk of this type of sampling frame error, which may contribute to bias (Zhengdong, 2011), this research performs a second round of manual screening. Here, each impact fund investing in publicly listed companies is screened for the four inclusion criteria derived from the literature review: 1) financial return, 2) impact, 3) intentionality, and 4) measurability. Ultimately, the final sampling frame consists of 45 impact investing funds. The literature review reveals that impact investors can select from a variation of approaches that differ in terms of financial return and impact realization (Nicholls, 2010). Thus, intentionality is the most significant defining characteristic of impact investing (Barber et al., 2021), which is why this sample frame is considered the strongest possible. Only the top ten holdings for each of the 45 impact funds investing in public equity are accessible through DataStream. Consequently, the publicly listed companies included among the top ten holdings are extracted for the final sample, which, after removing duplicates and checking for data availability, consists of 163 publicly listed companies.
The impact investment market is characterized by a scarcity of publicly available data (Saltuk et al., 2013), especially when it comes to impact data (Reisman et al., 2018). Looking into the limited amount of research on impact investing, it is revealed that in the few studies where data is used, it is sourced from internal sources or based on anecdotal evidence (Glänzel & Scheuerle, 2016). This paper will rely on external archival data from DataStream in the form of ASSET4 environmental, social, and governance (ESG) indicators. The data collected is two-dimensional, as it combines both cross-sectional and time series data, where data is collected for the same company on a yearly basis repeatedly between 2013 and 2018. Measures and variables While methods for measuring financial returns are largely perceived as systematic and robust (Reisman et al., 2018), the parallel task with respect to the measurement of social and environmental impact lacks such historical pedigree (Reeder et al., 2015). Most often, the impact assessment conducted by evaluators is tailor-made to the individual evaluand, for which they have wide discretion when choosing indicators and methods (Ruff & Olsen, 2018). The resultant heterogeneity of approaches to impact measurement makes it difficult to compare impact across companies (Rawhouser et al., 2019). At the most fundamental level, the assessment of social and environmental impact can be distinguished between standardized and company-specific measurements. Given that the sample includes cross-sectional data, the standardized measurement approach is favored for the purpose of this paper. One of the main challenges associated with measuring social and environmental impact, following the standardized approach, is to assure comparability across companies (Kroeger & Weber, 2014). As a result, a proxy for social and environmental impact is developed based on Porter and Kramer's (2011) creating shared value (CSV) framework and the three constructs of CSV. Table 1 below outlines the six categories constituting the theoretically derived proxy for social and environmental impact. Here, all six categories are numeric variables of continuous nature that have a positive scaling ranging from 0 to 100. The categories include:
- Emissions score: measures a company's commitment and effectiveness towards reducing environmental emissions in the production and operational processes.
- Resource use score: reflects a company's performance and capacity to reduce the use of materials, energy or water, and to find more eco-efficient solutions by improving supply chain management.
- Workforce score (TRESGSOWOS): measures a company's effectiveness towards job satisfaction, a healthy and safe workplace, maintaining diversity and equal opportunities, and development opportunities for its workforce.
- Community score (TRESGSOCOS), under the CSV construct "Enabling local cluster development": measures the company's commitment towards being a good citizen, protecting public health and respecting business ethics.
Source: Authors' conceptualization based on categories and descriptions adopted from DataStream and the CSV framework developed by Porter and Kramer (2011).
The literature, to date, has not managed to pinpoint a theoretically derived ranking of importance for the various sources of environmental and social impact as a guide for empirical work (Ioannou & Serafeim, 2012). Consequently, the three constructs of CSV are equally weighted in the construction of the proxy for social and environmental impact, as illustrated in the following calculation: The forthcoming analyses are performed on both dimensions of shareholder value creation. This is because the impact investing field has only recently begun to engage in confirmatory studies (Agrawal & Hockerts, 2021), with this paper being the first to study the concept in public equity. Adopting a multidimensional approach further increases the robustness of the forthcoming statistical modelling, as it enables the comparison of different summary measures to see if these are sensitive to disturbance before inferences are drawn to the target population. Based on the established literature, this research adopts return on equity (ROE) and earnings per share (EPS) as accounting-based measures for shareholder value creation (Griffin & Mahon, 1997; Hall, 2016). As market-based measures of shareholder value creation, market value added (MVA) and market capitalization (MRK) are adopted. Acknowledging that the assumptions underlying whether impact investing opportunities exist in public equity should be evaluated comparatively and unconditional of exogenous industry-specific factors, a categorical variable controlling for industry affiliation is included in the analyses. This is done by grouping the publicly listed companies based on their Thomson Reuters business classification (TRBC) industry group. The literature related to the relationship between corporate social performance and financial performance disagrees whether company size constitutes a significant confounding variable for the relationship between the two (Van Beurden & Gossling, 2008). Acknowledging this ambiguity, this paper will adopt two different measures of company size, whereupon backward selection will be used to drop the least significant of the two to assure the most well-fitted model. FINDINGS H1 is tested by relying on multiple linear regression. The following baseline specification is adopted, which will be performed on each measure of shareholder value creation individually: In order to test H2a and H2b, the causal link between social and environmental impact and shareholder value creation is addressed in the context of Granger causality (Granger, 1969). However, as the Granger (1969) causality test is designed for time series data, this paper relies on the extension provided by Dumitrescu and Hurlin (2012) specifically developed to detect causality in panel data. The following equations will be adopted and performed on all four measures of shareholder value creation individually: Analysis I: Correlation between impact and shareholder value creation Analysis I relies on multiple linear regression analysis to examine the relationship between environmental and social impact and shareholder value creation, as specified in Eq. (2). To ensure that the appropriate panel data technique is adopted, the Hausman specification test is performed to compare whether fixed effect or random effect models provide the best representation of the data.
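A plausible form of the proxy calculation and of the specifications referenced above (the "following calculation", Eq. (2), and Eqs. (3)-(4)), assuming each CSV construct is the mean of its two ASSET4 category scores and a standard fixed-effects and panel Granger setup, could read as follows; the exact functional forms used by the authors may differ.

```latex
\begin{align*}
\text{Impact}_{it} &= \frac{1}{3}\sum_{c=1}^{3}\Big(\frac{1}{2}\sum_{j \in c}\text{Score}_{ijt}\Big)
  && \text{(equally weighted CSV constructs)}\\
SVC_{it} &= \alpha_i + \beta\,\text{Impact}_{it} + \gamma\,\text{Size}_{it} + \varepsilon_{it}
  && \text{(baseline specification, Eq.~(2); } SVC \in \{ROE, EPS, MVA, MRK\})\\
SVC_{it} &= \alpha_i + \sum_{k=1}^{K}\lambda_i^{(k)}\,SVC_{i,t-k}
             + \sum_{k=1}^{K}\delta_i^{(k)}\,\text{Impact}_{i,t-k} + \nu_{it}
  && \text{(Dumitrescu--Hurlin regression, Eq.~(3); Eq.~(4) swaps the two variables)}
\end{align*}
```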
The results for each measure of the dependent variable are summarized in Table 2. As can be deduced from Table 2, the null hypothesis that the differences in coefficients are not systematic can be rejected at a significance level of less than 0.01 for all four measures of shareholder value creation. Hence, the estimations of the models with fixed effects and random effects are systematically different, suggesting that estimations with fixed effects are preferred, as the regressors are not orthogonal with the random effects. Table 3 presents the estimates of the multiple regressions with fixed effects, controlling for company size and industry affiliation, for each of the four measures adopted for shareholder value creation. Here, the log net sales variable, controlling for company size, is excluded to avoid problems with multicollinearity. Further, the industry affiliation controls are automatically dropped because of adopting the fixed effect models, as these are perfectly correlated with the company-level fixed effects. The results indicate that social and environmental impact has a contemporaneous, significant and positive effect on EPS, MVA and MRK. For these three measures of shareholder value creation, the effect is significant at the significance level of 0.05. Contrary to the aforementioned measures of shareholder value creation, the relationship between the impact proxy and ROE is not statistically significant. The respective fits of the two models adopting market-based measures of shareholder value creation are high. Here, the model adopting MVA as a measure of shareholder value creation explains 96.8% of the variance in MVA and the model adopting MRK explains 98.5%. On the contrary, the model fits for EPS and ROE are low, with an overall R² of less than five percent. Specifically, these findings suggest that social and environmental impact has a relevant effect on MVA and MRK, whereas it does not affect EPS and ROE. In other words, social and environmental impact has little to no power in explaining ROE and EPS, based on their low R² of 0.0034 and 0.0204, respectively. This suggests that social and environmental impact has a long-term rather than a short-term effect on shareholder value creation. The values of the estimated coefficients for the models adopting market-based measures suggest that a 10% increase in social and environmental impact results in an average increase of 4.056% in MVA and an average increase in MRK of 2.528%. Analysis II: Causality between impact and shareholder value creation The concept of Granger causality entails that a variable X is said to Granger cause variable Y if former values of X help in explaining Y, even after controlling for the lagged values of Y.
Dumitrescu and Hurlin (2012) provide an extension of the Granger (1969) methodology that is designed to detect causality in panel data, as the approach accounts for the heterogeneity in the data while estimating pairwise causal relationships. However, it requires variables to be stationary. Analysis II sets out by testing the stationarity of all the variables included in the Granger causality tests, based on Harris-Tzavalis tests. Table 4 presents the results of the Harris-Tzavalis tests. The results of the Harris-Tzavalis tests suggest that the variables are not stationary, as it is impossible to reject the null hypothesis that the panels contain unit roots. Consequently, the Dumitrescu and Hurlin (2012) test is performed based on variables in first differences. The variables in first differences are found by taking the change from one year to the next, ultimately removing the unit roots and ensuring the stationarity of variables (please see Table 6). It does so by proposing two null hypotheses: that impact does not Granger cause shareholder value creation on the one hand, and that shareholder value creation does not Granger cause impact on the other. Additionally, the Dumitrescu and Hurlin (2012) test requires panels to be balanced, which is why unbalanced observations are excluded from the analysis. Further, it requires T > 5 + 3 * k, where T denotes the number of years and k denotes the number of lags. If this assumption is not met, the Z-bar and Z-tilde statistics will not converge to the asymptotic standard distributions. However, due to data unavailability, this paper only has observed data from 2013 to 2018. Thus, the observed data for the companies included in the sample is extrapolated to the years 2009, 2010, 2011 and 2012, using the average differences of all measures of shareholder value creation and the impact proxy. Ultimately, the time index t in Eq. (3) and Eq. (4) starts in 2009 (t = 2009, 2010, ..., 2018). Specifically, four years of data are extrapolated as this paper relies on both variables in levels and in first differences. Taking the first differences implies one year of lost data; for the models adopting variables in differences, this means T = 10 years of data are needed for running the tests with one lag (k = 1) to meet the assumption of T > 5 + 3 * k. Ideally, a lag-order selection test would have been conducted to identify the suitable number of lags. However, since four years of data are extrapolated to perform the Dumitrescu and Hurlin (2012) test, this paper is limited to conducting the tests based on one lag, to limit manipulation of the results. The results of the tests based on one lag are presented in Table 5. The results suggest that a bidirectional Granger causal relationship exists between social and environmental impact and all of the adopted measures of shareholder value creation, as the null hypothesis of no Granger causality can be rejected at a significance level of less than 0.01 for all cases. This suggests a bidirectional relationship between shareholder value creation and social and environmental impact.
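The Dumitrescu-Hurlin procedure described above (individual Granger regressions per company on first-differenced data, averaged into a W-bar statistic and standardised into Z-bar) can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' implementation; the asymptotic Z-bar formula and a single lag (K = 1) are assumed.

```python
import numpy as np

def wald_stat(y, x, K=1):
    """Individual Wald statistic for H0: lags of x do not help explain y."""
    T = len(y)
    Y = y[K:]
    lags_y = np.column_stack([y[K - k - 1:T - k - 1] for k in range(K)])
    lags_x = np.column_stack([x[K - k - 1:T - k - 1] for k in range(K)])
    X_u = np.column_stack([np.ones(T - K), lags_y, lags_x])   # unrestricted model
    X_r = np.column_stack([np.ones(T - K), lags_y])           # restricted model
    rss_u = np.sum((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    dof = (T - K) - (2 * K + 1)
    return (rss_r - rss_u) / (rss_u / dof)

def dumitrescu_hurlin(panel_y, panel_x, K=1):
    """panel_y, panel_x: arrays of shape (N companies, T periods), balanced panel.
    Returns the average Wald statistic W-bar and the asymptotic Z-bar statistic."""
    stats = [wald_stat(panel_y[i], panel_x[i], K) for i in range(panel_y.shape[0])]
    w_bar = float(np.mean(stats))
    N = panel_y.shape[0]
    z_bar = np.sqrt(N / (2.0 * K)) * (w_bar - K)   # compare with N(0, 1)
    return w_bar, z_bar

# Example: does impact Granger-cause MVA? (first differences, hypothetical data)
rng = np.random.default_rng(42)
impact = rng.normal(size=(163, 9))                  # N = 163 companies, T = 9 diffs
mva = 0.4 * np.roll(impact, 1, axis=1) + rng.normal(size=(163, 9))
print(dumitrescu_hurlin(mva, impact, K=1))
```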
To test whether the one-lag is the most appropriate lag-order selection for the data, this paper tests the lag-order selection based on the coefficient of determination, with the results presented in Table 6 below.Based on the coefficients of determination, it is concluded that the optimal lag-order selection would be to apply third-order lags for ROE, EPS and MRK, whereas for MVA, the first-order panel is the preferred model.This implies that, for the cases of ROE, EPS and MRK, there is a risk that the analysis does not properly capture the dependence in the data.Based on the results of the Dumitrescu and Hurlin (2012) test adopting MVA as a measure for shareholder value creation, the null hypothesis that impact does not Granger cause shareholder value creation as well as the null hypothesis that shareholder value creation does not Granger cause impact is rejected. DISCUSSION Aimed at bridging the identified research gap in the literature on impact investing, the purpose of this paper has been to empirically examine whether impact investing opportunities exist in public equity.The findings suggest that impact investing opportunities exist in public equity for the MVA measure of shareholder value creation.This is based on the existence of a positive causal relationship between impact and shareholder value creation for the publicly listed companies included in the sample.These findings have important theoretical implications for impact investing, as they indicate that the concept of impact investing does not need to be redefined in a public equity setting when evaluated based on market-based measures of shareholder value creation.Thus, the findings suggest that the field can continue its progression and wide-ranging adoption, as impact realization can be attained pari passu with shareholder value creation in a public equity setting.Further, these findings suggest that the field could advance from discussing the fundamental assumptions and start defining the boundaries of impact investing in public equity. However, this relationship is found to be highly sensitive to the adopted measures of shareholder value creation.This suggests that impact investing in public equity is a suitable strategy for long-term impact investors, but not for short-term impact investors.Having said that, Roundy et al. (2017) find that impact investors generally take what Tasch (2010) defines as a slow money approach to investments, suggesting that impact investors take a longer view on investments.The findings suggest that impact investors can rely on public equity as part of an asset allocation strategy.However, it remains questionable whether the impact realized by publicly listed companies is sufficient to meet the respective impact objectives of impact-and finance-first-impact investors.Thus, based on the findings, it seems reasonable to assume that impact investing in public equity could be part of a portfolio of impact investments spanning different asset classes.In that way, impact investing in public equity could be a way to ensure the maximization of shareholder value creation for finance-first investors.In contrast, it could be a strategy for impact-first investors, aimed at securing their floor of financial performance. 
Future research would be well served to investigate whether the impact realized by publicly listed companies is sufficient to meet the respective impact objectives of both impact-and finance-first investors.A similar avenue of future research lies in investigating whether publicly listed companies that are the target of impact investments realize the social and environmental impact that is significantly higher compared to that of their respective industry peers. What sets impact investing apart from earlier types of socially responsible investment strategies is that the intention behind the investment is to have a positive impact on society (Nicholls et al., 2015), as opposed to simply avoiding or minimizing negative effects (Verrinder et al., 2018).Whereas the literature review reveals that investor intent is a defining characteristic of impact investing, the intent of the fund manager chosen by the impact investor is not (Johnson & Lee, 2013).This implies that an investment could be an impact investment even if the fund manager to which decision-making of the investment is delegated is indifferent to social or environmental impact.Additionally, facilitated by the intentionality of clients, Bugg-Levine and Emerson (2011) argue that asset managers could undertake the original mission of impact investing by organizing investment products that seem to generate positive impact, but fail to generate more than nice narratives.Chiappini (2017) studies whether or not funds, identified as impact investment-oriented, comply with the definition of social impact investments suggested by the Organisation for Economic Co-operation and Development (OECD).She finds that none of the 156 funds included in the sample respect all of the features fixed by the OECD.However, to identify the publicly listed companies that are the target of impact investments, this paper assumes that the adoption of impact funds as a sampling frame ensures the intentionality criterion of impact investors.In the impact investing industry, like in any financial market, impact funds are intermediaries crucial in managing the relationship between those asking for capital and those providing it (Lehner et al., 2019).Considering this critical role in greater detail, impact investing intermediaries can potentially undermine impact investments irrespective of the impact investor's impact objective (Lehner et al., 2019).Thus, it might be the case that the publicly listed companies included in the sample are not really the target of intentional impact investing.Instead, they might be greenwashing investments branded to appear intentionally impact oriented (Delmas & Burbano, 2011).However, when investing in a fund, the impact investor is interested in the aggregate impact and financial return generated (Hehenberger & Harling, 2018).Considering this thought to what has been coined as a modern portfolio theory (Markowitz, 1952), it might be the case that the funds constituting the sampling frame of this paper do not exclusively include companies based on positive screening.Given that this paper can only identify the top ten holdings of each fund, such a scenario would imply that the publicly listed companies included in the sample are at risk of not meeting the defining characteristics of impact investments.Thus, another avenue of potential research lies in identifying publicly listed companies that meet the impact investing criteria of impact-and finance-first-impact investors.Based on this, studies evaluating the absolute performance of 
impact investing in public equity could be initiated.This could explain why the average impact proxy score of the sample of publicly listed companies is 66.18.The standards for what constitute sufficient impact is defined by the individual impact investor (Höchstäder & Scheck, 2015), why Bugg-Levine and Emerson (2011) argue that almost all investments are capable of qualifying as impact investments.Having said that, it still seems reasonable to assume that at least impact-first investors, who aim to optimize their impact (Freireich & Fulton, 2009;Glänzel & Scheuerle, 2016), would find an impact proxy score that is only 16.18 points higher than the average score of their industry peers to be insufficient to qualify as impact investing.Thus, even though this research is not able to falsify the existence of a positive causal relationship between impact and shareholder value creation, which is derived in the literature review as the prerequisite for impact investing opportunities to exist in public equity, it does not imply that all impact investors would consider the impact generated sufficient to qualify as impact investing. CONCLUSION Scholars agree that the current state of impact measurement is far from satisfactory (Grabenwarter & Liechtenstein, 2011;Reeder & Colantonio, 2013;Reeder et al., 2015).Based on existing theory and previous literature, this paper has constructed a proxy of impact, which is why impact is specified to include social and environmental impact.Social scientists generally express the concern that standardized measurements and proxies for impact risk neglecting or misrepresenting essential dimensions of social and environmental impact (Brandenburg, 2010).More precise findings are expected to be attainable once the industry matures further, with more comprehensive impact data on the levels of investments and companies (Vecchi et al., 2016). In terms of practical implications, it is suggested that impact investors can rely on public equity as part of an asset allocation strategy.However, it is acknowledged that even though this research cannot falsify that impact investing opportunities exist in public equity, the findings add to the practical implications for impact investors.Specifically, it is questionable whether the impact realized by publicly listed companies is sufficient to meet the respective impact objectives of impact-and finance-first-impact investors.However, based on the findings of this research, it seems reasonable to assume that impact investing in public equity could be part of a portfolio of impact investments spanning asset classes.In that way, impact investing in public equity could be a way to ensure the maximization of shareholder value creation for finance-first investors.In contrast, it could be a strategy for impact-first investors, aimed at securing their floor of financial performance.Future research would be well served to investigate whether the impact realized by publicly listed companies is sufficient to meet the respective impact objectives of both impact-and finance-first-impact investors.Another avenue of future research lies in investigating whether publicly listed companies that are the target of impact investments realize the social and environmental impact that is significantly higher compared to that of their respective industry peers. 
No uniform and soundly based definition of impact investing has reached definitive status (Bernal et al., 2021; Reeder & Colantonio, 2013). Correspondingly, definitions of impact investing share the achievement of societal or environmental change through the investment of capital (Urban & George, 2018). Although this lack of conceptual, terminological and definitional accuracy is understandable, since it is representative of nascent research domains (Dinneen & Beach, 2018; Wendt, 2019), it remains problematic for multiple reasons. Firstly, an ambiguous definition threatens the reliability of the entire impact investing industry, along with the credibility of associated organizations and stakeholders (Erickson, 2011). Further, it facilitates what Findlay and Moran (2019) describe as purpose washing, a term that conceptualizes what Harji and Jackson (2012) originally coined as impact washing. Accounting-based measures are treated as a reflection of short-term financial performance and market-based measures as a reflection of long-term financial performance (Gentry & Shen, 2010). 
Eq. (3): Impact_(i,t) = α_i + Σ_(j=1..K) γ_j Impact_(i,t-j) + Σ_(j=1..K) β_j SVC_(i,t-j) + ε_(i,t) 
Eq. (4): SVC_(i,t) = α_i + Σ_(j=1..K) γ_j SVC_(i,t-j) + Σ_(j=1..K) β_j Impact_(i,t-j) + ε_(i,t) 
Eq. (3) tests whether shareholder value creation Granger causes social and environmental impact, and Eq. (4) tests whether social and environmental impact Granger causes shareholder value creation. For both equations, SVC_(i,t) measures the shareholder value creation as either ROE, EPS, MVA or MRK for company i in year t, and Impact_(i,t) is the social and environmental impact of company i in year t. Further, the coefficients γ_j and β_j weight the effect of the lags of shareholder value creation and social and environmental impact, respectively. 
In the panel regressions, the effect of social and environmental impact on shareholder value creation is measured by the coefficient β_(1,j) for j lags. The effect of company size (β_2 Size_(i,t)) on shareholder value creation is accounted for, capturing systematic errors in the regression. Further, control factors for industry affiliation are included in the matrix X_(i,t) and weighted by the vector β. Lastly, α_i captures the heterogeneity across the i companies and ε_(i,t) is an independently and identically distributed perturbation term, capturing random errors. 
Table 1. CSV constructs and categories for impact conceptualization 
Table 2. Results of Hausman tests for each measure of the dependent variable 
Table 3. Panel regression results with fixed effects. Source: Authors' findings. 
Table 4. Estimates of lagged effects of social and environmental impact. Source: Authors' findings. 
Table 5. Results of Fisher-ADF and Harris-Tzavalis tests 
Table 6. Results of Dumitrescu and Hurlin tests 
Table 7. Coefficients of determination
9,829.4
2024-01-01T00:00:00.000
[ "Economics", "Business", "Environmental Science" ]
Decision Support Model using FIM Sugeno for Assessing the Academic Performance A R T I C L E I N F O A B S T R A C T Article history: Received: 06 November, 2020 Accepted: 16 January, 2021 Online: 30 January, 2021 Assessing academic performance is a common way of evaluating and assessing the abilities of students in tertiary institutions. Usually it is practically performed based on the cumulative grade point average (GPA) at the end of each semester passed. Unwittingly there are many factors that are able to influence student performance results apart from GPA as a performance measure; i.e. gender, hometown, sibling, family status, residence, father education, mother education, family income, motivation, mileage, traveled time, transportation, scholarship, community, social media, and hang-out. Academic performance assessment is proposed through the decision support model (DSM) applying the fuzzy logic (FL) Sugeno technique. The model output generates a decision value (linear or constant equation) for academic performance based on the calculation of the measured fuzzy parameter value (ax) and conventional parameter value (bx). The DSM with the FL Sugeno method is able to provide sharp output in assessing student academic performance. In this case, the model is able to be applied then to assist academics in higher education in determining educational strategies for students with poor academic performance results. Introduction The academic performance is one indicator of the education quality in universities. It generally is able to be measured with the value achievements record of the grade point average (GPA) [1]- [3]. The GPA is greatly influenced by various factors. Those factors are such as the social, family, economic and educational environment of each individual [1], [4], [5]. The assessment process is a part of the evaluation process toward student academic performance which is done by higher education academics. Academic performance appraisal is one of the effective solutions to detect student failure problems. Analyzing stored student data can help provide important information in their academic performance appraisal process. The new models to support objectives in the academic performance assessment process are still being investigated and researched scientifically. One of them is a study related to decision support models (DSM). The research has been carried out by a number of researchers with DSM as the main issue. Creating DSM to solve difficulties related to mosque rebuilding. Fuzzy method was operated to determine the priority value of eleven parameters [6]. Also, eco-DSM for treating medical waste was constructed by involving the fuzzy method as the main part of the model [7]. Then, the model for determining the amount of production, especially in the business world, functioned the fuzzy Sugeno technique [8]. In addition, [9] created DSM to measure the performance of logistical companies based on various indicators performance. In [10], the author created a model to calculate the exact number of goods ordered, this affects the level of inventory and the sales of goods. The determination of the venue for the national multi-sport event or called the national sport week (NSW). The determination of the best location was carried out in Indonesia with thirty-four provinces that had various distinctive features and cultural uniqueness [11]. 
This paper develops a DSM for the assessment of student academic performance in higher education. The model is implemented exclusively through calculations based on the fuzzy logic method and uses data from 100 students at the Faculty of Teacher Training and Education (FKIP) at the University of Mulawarman, East Kalimantan. Fuzzy logic (FL) has been widely used in various fields in the real world. FL is technically a principal method for mathematical calculation and is used to assess uncertainty in various fields. FL plays an important role in handling complex problem symptoms that cannot easily be translated into mathematical models, with the aim of providing the best possible solution approach [12]. The assessment results can provide an early warning to academics, prompting them to take quick steps to improve the academic performance of students in poor condition. 
Related Works 
The concept of fuzzy logic is a popular method used in decision-making support. In [13], the author applied the fuzzy method to engineering asset management (EAM); checking the condition of assets is an important aspect of EAM because it can identify symptoms of potential failure and suggest corrective actions before operational disruptions. In [14], the author created a conceptual model for evaluating the performance of social sustainability, which has been tested for later implementation in automotive component manufacturing organizations in India. In addition, various researchers have, year after year, taken up academic performance (AP) appraisal as the main topic of their research [15]-[17]. In [15], the authors observed data for twenty students based on three characteristics in one academic period: exam 1, exam 2 (theoretical) and exam 3 (practical). The results of the study were compared with traditional evaluation methods and gave similar results, leading the authors to conclude that the proposed approach could be a practical method for evaluating AP. In [16], the author introduced a new fuzzy expert system (NFES) for evaluating student AP based on the concept of FL, considering two parameters, semester 1 and semester 2 scores, in order to make decisions about learning in the next period. In [17], the author applied the fuzzy method to the evaluation of the AP of twenty students in the engineering laboratory of the Department of Electricity Education, Faculty of Engineering Education, Marmara University. The semester 1 and semester 2 grades were used as the inputs, and the author concluded that the evaluation results vary when compared with classical evaluation methods. The fuzzy method provided the advantage of flexibility in AP evaluation and offers many evaluation options. 
Research Method 
As mentioned in the introduction, the assessment is part of a larger task in the process of evaluating academic performance. Figure 1 describes the study step by step. The first step is to study the real case (in this case, academic performance measurement), supported by a review of several scientific manuscripts. These two stages enrich our knowledge of the case at hand. 
Furthermore, we define the parameters related to the assessment of academic performance based on the related research literature, involving 17 validated parameters (Table 1) that are taken into account as influencing the AP assessment. The parameters are then configured, which includes parameter grouping and parameter weighting. Once the preparation of the parameters is carried out, data collection is performed to prepare the dataset for this study. Then, the model is constructed. The model is built with the FL method as the main method, in addition to the conventional method used. The FL concept (fuzzification, inference, defuzzification) begins with building membership functions (MF) and linguistic variables (LV) as calculation scales. The LV definition of a fuzzy parameter describes the input value in terms of the condition level of each academically calculated fuzzy parameter. The MFs are then designed as fuzzy triangle-trapezium functions for each fuzzy parameter. Next, the fuzzy ground rules are prepared as the inference-engine rules for determining the results, yielding 3,456 basic rules. The final stage of FL is defuzzification: the result of the FL decision is determined using the centroid method of the Sugeno technique. 
Data Collection 
The research dataset combines data collected through two methods: 1) an online questionnaire used to collect actual information from student personal data, and 2) a list of GPA scores for the five initial learning periods, downloaded from the academic database for the 100 FKIP students studied. The collected data are prepared for research: the dataset is combined and cleaned by eliminating defective data samples and by manually adding missing pieces of data to the data sample. 
Grouped Parameters 
The seventeen parameters are configured so that the entire parameter set can be used in the model. Parameter configuration is done in two stages: grouping and weighting. The first configuration groups the parameters based on the calculation operation to be used in each parameter group. The grouping forms fuzzy groups (ax) and conventional groups (bx); the resulting parameter grouping is described in Table 2. Some researchers have stated that several parameters in the bx section have only a small effect on AP achievement. According to [18], the gender parameter has no significant effect on student AP results. The study in [19] stated that three other parameters also had a small effect on the AP results: number of siblings, mileage from the faculty, and gender. In bx, each parameter attribute contains a nominal value: in the conventional group concept, the effect of a parameter attribute is stated in integer form (0 and 1), represented by conventional values (CV) and shown in Table 3. 
Weighted Parameters 
The second configuration of the parameters is weighting. The weighting technique is carried out in two stages: 1) the initial stage uses the RapidMiner tool to determine the weight level of each parameter in terms of its effect on student academic performance, based on the case data used in this study; 2) the next stage normalizes the initial weighting results. 
The normalization process aims to avoid systematic technical effects arising from large gaps in the data between parameters, ensuring that technical bias has minimal impact on the results [20]. The normalization uses the formula in equation (1), where ax refers to the fuzzy group, bx refers to the conventional group, and W is the weight value of a parameter. The normalized weights are shown in Table 4. The total weight value for ax is represented by ∑W Fuzzy and the total weight value for bx by ∑W Conventional. The total weight of ax is greater than that of the conventional parameter group, so the fuzzy group has the larger influence on the final AP calculation results. 
Fuzzy Logic Model 
After the data are collected, we create the fuzzy sets by determining the LV of each fuzzy parameter (FP), as shown in Table 5, where FP is a fuzzy parameter of the academic performance parameters, LV is a linguistic variable, and the MF domains are the membership-function domains based on the LV of each FP. Once the LVs are determined, the MFs are designed. The degree of membership is obtained by first drawing a graph for each selected FP. The parameters for number of siblings, father and mother education, social media, and traveling have 2 fuzzy sets: low and high. The parameters of family income, mileage, and travel time have 3 LV: low, medium and high. The CGPA parameter has 4 LV: poor, good, very good, and excellent. All MFs are shown in Figure 2 to Figure 10. The fuzzy rule base (Table 6) is used to obtain the FAP value. 
Calculation of Academic Performance 
In building the model, we apply two separate calculation operations, with the Sugeno-technique fuzzy method as the main operation, specifically to produce the decision value for the final academic performance score. The FL Sugeno method provides a systematic approach to generating fuzzy rules from a given input to the output. To calculate the output, the Sugeno technique uses a weighted average, so the final result is not a fuzzy set but a linear or constant equation [21], [22]. The total weight values of ax and bx found earlier are used as multiplier indices for the calculation operation of each parameter group. The value of ∑W Fuzzy is used as a multiplier index for the total value of the FAP calculation, producing the total effect of ax on academic performance, ∑(ax), which is written mathematically in equation (2); the value of ∑W Conventional is used as a multiplier index for the total of the simple CV addition, producing the total effect of bx on academic performance, ∑(bx), written in equation (3). In the final calculation stage, equation (4) is used to compute the academic performance decision: the results of ∑(ax) and ∑(bx) are added up to obtain the final academic performance score. The final assessment result is a decision value (DV), a crisp value used to assess the student's academic performance. 
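A minimal sketch of this calculation chain may clarify how the FAP value and the decision value are obtained. The membership-function breakpoints, rule consequents, FAP input and CV values below are illustrative placeholders rather than the paper's actual rule base; only the group weight totals (0.881 and 0.120) correspond to values reported later in the paper.

def triangular(x, a, b, c):
    # triangle MF: rises from a to a peak at b, then falls to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    # trapezium MF: degree 1 on the plateau between b and c
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Degrees of membership for one crisp input (e.g. family income on a 0-5 scale)
income = 2.4
mu_low = trapezoidal(income, 0, 0, 1, 2.5)
mu_med = triangular(income, 1, 2.5, 4)
mu_high = trapezoidal(income, 2.5, 4, 5, 5)

# Zero-order Sugeno: each fired rule has a firing strength w and a constant
# consequent z; FAP is the weighted average sum(w*z) / sum(w)
fired_rules = [(mu_low, 50.0), (mu_med, 65.0), (mu_high, 80.0)]
fap = sum(w * z for w, z in fired_rules) / sum(w for w, _ in fired_rules)

# Final aggregation following the operations described for Eqs. (2)-(4)
w_fuzzy_total, w_conv_total = 0.881, 0.120   # reported group weight totals
cv_values = [1, 0, 1, 1, 0, 1, 0, 1]         # illustrative 0/1 conventional values
sum_ax = fap * w_fuzzy_total                 # Eq. (2): effect of the fuzzy group
sum_bx = sum(cv_values) * w_conv_total       # Eq. (3): effect of the conventional group
dv = sum_ax + sum_bx                         # Eq. (4): decision value (DV)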
Experiment Results The DSM model built for academic performance appraisal using the FL method Sugeno technique which is a popular method in the DSM field. It is also quite elastic in dealing with complex problems in the real world in the form of linear or nonlinear systems. It works by mapping problems into the form of fuzzy reasoning to obtain a decision. The complete flow of activity in the model is depicted in Figure 11. The initial activity begins by configuring all the parameters used as model input. The configuration at the initial stage is done by grouping the existing parameters into fuzzy groups (ax) and conventional groups (bx). Furthermore, the configuration for weighting is carried out to identify each weight value (W) of each parameter and the total weight value for axis represented by (∑WFuzzy) and the total weight value for bx is represented by (∑W Conventional). Furthermore, the counting operation for bx is carried out separately from the FL Sugeno method. The calculation is done by multiplying the number of CV against the (∑W Conventional) value to produce the total effect value of bx on academic performance (∑(bx)). The counting operation using the FL Sugeno method for ax is carried out in the next step. The main concept of the FL method includes the "fuzzy -inference -defuzzy" stage which is carried out sequentially based on the fuzzy rules created. The rules then produce a FAP value based on fuzzy input parameters. At the end of the Sugeno FL method, the FAP value is used to multiply the (∑WFuzzy) value to produce the total effect of on academic performance (∑(ax)). The final process of assessing academic performance is carried out by adding up the respective values of (∑(ax)) and (∑(bx)) that were successfully obtained in the previous stage. The final result of the academic performance appraisal is formed as a DV which is in the form of a linear or constant equation. DV based on the determination of the index value for the performance criteria is determined. Where the good performance criteria have an index value of 80 while the bad performance criteria have an index value of 50. The sample results from our calculations are shown in Figure 12. The bar graph in Figure 11, represents the 10 data cases used. The visualization on the bar graph in Figure 11 shows that our 5 test data resulted in good academic performance scores with the highest score at 71.08, 2 data are poor academic performance scores with the lowest score at 44.29, with an average of 62.03 results for the assessment of academic performance that we have done. Discussion The measurement of AP results in the classical method is usually expressed in numerical form (GPA) obtained at the end of the learning period [15]. Therefore, it can be said that the classical method is a form of presentation based on the comparison of student performance results with the predetermined standard performance value category. The academic performance assessment of FKIP students at Mulawarman University focuses more on the indication of the achievement of GPA. Policymakers tend to focus more on the GPA score. That makes various policies focused on increasing the GPA over a period of time. One shortcoming is found for the classical method used in the current AP assessment. Where it only focuses on increasing the GPA score in a certain period without any criteria instead of using the GPA indicator for the final result [16]. While the GPA indicator is an assessment of university accreditation as well. 
However, the supporting factors behind GPA achievement are not used, such as: gender, hometown, sibling, family status, residence, father education, mother education, family income, motivation, mileage, traveled time, transportation, scholarship, community, social media, and hang-out. The results of the model's AP assessment and of the conventional GPA-based method are compared in Table 7. It is shown that the AP produced by the constructed DSM does not have the same ranking (priority) as the AP measured based on GPA. Table 7 shows that the academic performance assessment produced by the model we built can be a solution that can be used in practice, and it gives academics, as policymakers, the freedom to support the academic performance evaluation process with the results of an effective approach. The final assessment results are produced using various parameters beyond educational factors, namely the family, economic, social, and environmental factors that we calculate in the model. 
We identified the influence of each parameter from the fuzzy group (ax) and the conventional group (bx) on our AP assessment (Figure 12); the parameter symbols refer to Table 1. The Traveled Time parameter (P11) has a significant effect, being the second largest influencer (0.27696) in ax after the CGPA parameter (P17), which has an effect of 0.32969 and is simultaneously the main indicator in the AP assessment, while the social media parameter (P15) is the smallest one (0.2308). In bx, no parameter appears to have much influence on the AP assessment, with the Scholarship parameter (P13) as the biggest influencer (0.03398) and the Residence parameter (P5) giving an effect of 0.00026. Figure 12 shows that the AP assessment process is heavily influenced by the parameters in ax, with a total influence of 0.88100, while the parameters in bx have little effect (0.12000) on the AP assessment. 
The parameters we use relate to academic performance through various factors such as the social, family, economic, and educational environments. Many studies related to academic performance have instead proposed parameters derived from educational factors themselves. According to the research in [16], [17], and [18], the assessment of academic performance is influenced by factors originating from the educational side, which include the record of student semester scores. [8] stated that using the semester-score parameter is relevant in conducting performance appraisals, where the semester scores in learning period 1 affect 60% of the result. 
Conclusion 
The process of assessing student academic performance results at universities generally uses the classical method (based on performance categories). The DSM built here with the FL Sugeno technique is a practical solution that can be used in education to assess academic performance as part of the AP evaluation process. The model using the FL Sugeno method offers the advantage of elasticity: it can be adjusted to the needs of the study while still providing sharp outputs as an effective decision approach, based on a series of fuzzy rules from the given input to the output. 
The academic performance appraisal process is carried out using seventeen parameters relevant to the performance appraisal, based on previously validated research. The parameters are grouped into fuzzy groups (ax) and conventional groups (bx). The fuzzy group (ax) is identified as having a significant effect on the assessment of academic performance, whereas the conventional group (bx) has only a slight influence on the assessment process.
4,776.8
2021-01-01T00:00:00.000
[ "Computer Science", "Education", "Mathematics" ]
A CNN-transformer fusion network for COVID-19 CXR image classification The global health crisis due to the fast spread of coronavirus disease (Covid-19) has caused great danger to all aspects of healthcare, economy, and other aspects. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. The early and rapid detection of Covid-19 is an effective way to reduce the spread of Covid-19. However, detecting Covid-19 accurately and quickly in large populations remains to be a major challenge worldwide. In this study, A CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-ray. This framework includes two parts: data processing and image classification. The data processing stage is to eliminate the differences between data from different medical institutions so that they have the same storage format; in the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, including feature extraction, feature focus, and feature classification sub-networks. Feature extraction subnetworks extract the shallow features of the image and interact with the information through the convolution and transformer modules. Both the local and global features are extracted by the convolution module and transformer module of feature-focus subnetworks, and are classified by the feature classification subnetworks. The proposed network could decide whether or not a patient has pneumonia, and differentiate between Covid-19 and bacterial pneumonia. This network was implemented on the collected benchmark datasets and the result shows that accuracy, precision, recall, and F1 score are 97.09%, 97.16%, 96.93%, and 97.04%, respectively. Our network was compared with other researchers’ proposed methods and achieved better results in terms of accuracy, precision, and F1 score, proving that it is superior for Covid-19 detection. With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19. Introduction Covid-19 is a pulmonary disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in 2019 and is highly infectious, mutagenic, and Covid- 19 strains such as delta, omicron, and omicron XE variant have become pandemic worldwide [1,2]. On 30 January 2020, the World Health Organization (WHO) recognized the outbreak as a public health emergency of international concern (PHEIC) [3] and identified it as a pandemic on 11 March 2020 [4]. According to the WHO weekly epidemiological update on COVID-19 (12 April 2022), as of 10 April 2022, over 496 million confirmed cases and over 6 million deaths have been reported globally [5]. Therefore, detecting Covid-19 positive patients in the population at an early stage is not only important to curb the virus transmission and mutation, but also crucial to make disease staging and present treatment plans. Currently, the main method for testing Covid-19 patients worldwide is Reverse transcription polymerase chain reaction (RT-PCR) [6]. RT-PCR is the gold standard for detecting viral Ribonucleic Acid (RNA), however, in some cases, the sensitivity of RT-PCR appears to be lower than computed tomography (CT), with 71% vs 98% according to the reports. The lower sensitivity is caused by the possible inadequate supply of reagents, the lack of expertise required for the testing, low viral load in patients, and the long testing cycles [7]. 
Unfortunately, if Covid-19 mutates during transmission, the epidemic will likely spread rapidly, with large numbers of cases appearing instantly. As a result, the task of rapidly detecting Covid-19 in a large population poses a great challenge to medical institutions worldwide. Medical imaging and deep learning (DL) can play an important role in pre-detection efforts to combat disease. In recent years, researchers have used deep neural networks to achieve remarkable results in a variety of fields. Recent advances in DL show that computers can extract more information from images, more reliably, and more accurately than ever before [8,9]. However, further developing and optimizing DL techniques for the characteristics of medical images and medical data remains important but challenging research [10]. For example, ground-glass opacities are evident on chest X-ray or CT images for patients with Covid-19 [11,12]. Thus, a chest radiology-based system could be an effective way to detect, quantify and track Covid-19 cases. Furthermore, nature-inspired and heuristic optimization algorithms have been successfully adopted for various applications of medical images. For example, the use of a heuristic red fox heuristic optimization algorithm (RFOA) for medical image segmentation [13]. Nowadays, DL is increasingly applied to medical image classification, object detection, segmentation, and other tasks, and is replacing traditional machine learning methods in medical imaging [14]. Convolutional neural networks (CNN) [15][16][17][18][19][20][21] expansion-projection-extension (PEPX) design, which can enhance the representation capability greatly and reduce the computational complexity, achieving better classification results [28]. Khan et al proposed a CNN-LSTM and improved max value features optimization framework to address the issue of multisource fusion and redundant features [29] and they also proposed a deep learning and explainable AI technique to select the best features for the diagnosis and classification of COVID-19 [30]. Arias-Garzón et al. proposed a new approach using existing DL models, which focuses on enhancing pre-processing stage to obtain accurate and reliable classification results. The pre-processing stage consists of a projection-based filtering network to divide the data into frontal or lateral, a segmentation model to extract lung regions containing relevant information, and a migration learning VGG classification model for classification with an accuracy of 97% [31]. Ahmed et al. studied four classification methods based on X-ray images and CT from three aspects: pre-processing, feature extraction, and classification, and proposed the use of Convid-Net to classify Covid-19 with an accuracy of 97.99% [32]. Islam et al. proposed a detection system for Covid-19 based on the combination of LSTM (Long Short-Term Memory) and CNN, where CNN was used for deep feature extraction and LSTM was used to classify the extracted features, with an accuracy of 99.4% [33]. Recently, the application of transformer [34][35][36][37][38][39][40][41][42][43] to the area of computer vision tasks increasingly demonstrates unique advantages. Vision transformer (ViT) uses a combination of a self-attention mechanism and a multi-layer perceptron (MLP), which reflects complex spatial transformations and long-range feature dependencies. Unlike that the CNN pays attention to local features, the transformer focuses on the global representation of images. 
Inspired by the transformer's success in natural language processing (NLP), Dosovitskiy et al applied a standard transformer directly to images, with the fewest possible modifications. To do so, an image is split into patches and the sequence of linear embeddings of these patches is provided as input to a transformer. Image patches are treated the same way as tokens (words) in an NLP application [35]. 
For the detection of Covid-19 caused by SARS-CoV-2, this study proposes a classification network with CNN and transformer fusion, which automatically classifies the chest radiographs acquired during medical examinations. This network could assist doctors in judging whether a patient has contracted pneumonia and, furthermore, in detecting Covid-19 or bacterial pneumonia. Datasets are collected from different medical institutions to enhance the applicability and robustness of this model. Data differences between medical institutions are reduced in the data processing stage, and a CNN-transformer fusion network is utilized for classification. The main contributions of this work are as follows. 1. A fusion network of CNN and transformer is presented for COVID-19 CXR image classification. 2. Both local and global features are obtained and fed into two branches for feature extraction and finally fused for classification. 
Data processing 
The data processing stage includes data transformation, data augmentation and adaptive normalization. 
Data transformation. Datasets collected in this study come from different medical institutions, in image or DICOM file formats. Thus, the data are converted to tensors of the same resolution to ensure their uniformity. 
Data augmentation. There are two problems regarding model universality in DL: the large amount of data required to train the model and the imbalance of data categories. The number of training samples varies greatly from category to category, which causes problems in the learning process of the classification task. To address these issues, image translation and rotation are used in this work to augment the dataset and balance the categories. Image translation: the chest X-ray image is randomly translated horizontally and vertically; (Δx, Δy) is the amount of random translation and is determined by the image resolution. Image rotation: the chest X-ray image is rotated randomly clockwise or counterclockwise around the geometric centre; θ is the angle of random rotation and is also determined by the image resolution. 
Normalization. Medical images differ significantly from natural images in terms of dynamic range: natural images have a dynamic range of 3×255, while medical images can have a dynamic range of several thousand (such as CT) and may even store floating-point data (such as X-rays). The adaptive normalization maps the image I to the interval [a, b] according to Eq (1): 
I_N = (I - I_Min) / (I_Max - I_Min) × (b - a) + a    (1) 
In Eq (1), I is the original image, and I_Max and I_Min are the maximum and minimum values extracted from I. In this study, b and a are specified as 1 and 0 respectively, and the normalized image I_N is calculated by this equation. 
CNN and transformer network 
CNN module. In DL, a CNN is a class of artificial neural network (ANN) commonly applied to image processing. CNNs, thought to be shift-invariant and space-invariant, are based on the shared-weight architecture of convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. 
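As an illustration of the augmentation and normalization steps described in the data-processing stage above, the following sketch applies a random shift and rotation and then the min-max normalization of Eq (1) with a = 0 and b = 1. The shift and angle ranges are assumptions for illustration, not the values used by the authors.

import numpy as np
from scipy.ndimage import rotate, shift

def augment(img, max_shift=0.05, max_angle=10.0):
    # random translation proportional to the image resolution, then random rotation
    h, w = img.shape
    dy, dx = np.random.uniform(-max_shift, max_shift, 2) * (h, w)
    theta = np.random.uniform(-max_angle, max_angle)
    img = shift(img, (dy, dx), mode="nearest")
    return rotate(img, theta, reshape=False, mode="nearest")

def normalise(img, a=0.0, b=1.0):
    # Eq (1): map the image intensities to the interval [a, b]
    i_min, i_max = img.min(), img.max()
    return (img - i_min) / (i_max - i_min) * (b - a) + a

x = normalise(augment(np.random.rand(224, 224).astype(np.float32)))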
CNN uses multiple convolutional kernels at different levels to collect local features of images for representation, and it has a unique advantage in extracting local features. A CNN can enrich the extraction of hierarchical features and enhance their representation as the depth of the network increases. The residual structure in ResNet can effectively address the network degradation that arises as the depth of the network increases [18]. Fig 4(A) shows a basicblock, in which batch normalization follows the down-sampled 3×3 spatial convolution and the 3×3 spatial convolution, and the identity mapping by shortcuts connects the basicblock input to the convolution output. The convolution module in this work contains L (L > 1) basicblocks. 
Transformer module. The transformer, an architecture consisting of self-attention and MLP, uses the multi-head attention mechanism to obtain spatial transformations and long-range feature dependencies of an image, extracting its global features. Self-attention is the core of the transformer: it maps the input to a query q, a key k and a value v, and forms a mapping from these to the output. The output can be seen as a weighted sum of the values, where the weights are derived from self-attention. Through the self-attention mechanism, the inputs x_1 and x_2 are transformed into z_1 and z_2, according to: 
q_i = x_i W^Q,  k_i = x_i W^K,  v_i = x_i W^V 
where W^Q, W^K and W^V are three weight matrices. x_1 and x_2 share the same weight matrices W, and through this operation information is exchanged between the vectors x_1 and x_2. z_1 and z_2 are obtained as linear combinations of v_1 and v_2, with θ the combination weights: 
Attention(Q, K, V) = softmax(QK^T / √d_k) V 
The encoder part of the transformer module is often used for image classification tasks in ViT, as shown in Fig 4(B). Each transformer encoder contains a multi-head self-attention module and an MLP module, with LayerNorm placed before both the multi-head self-attention module and the MLP module. The embedded-patches input is connected to the transformer encoder using residuals. 
Proposed network 
The features of medical images include obvious local lesion features and scattered global features. Thus, this study proposes a three-stage image classification network based on CNN-transformer, which consists of feature extraction, feature focus, and feature classification sub-networks. In this model, the local features of the image are extracted by the convolution module and the global features are extracted by the transformer, and the two are fused. This fusion obtains lesion features with both local and global information and yields better classification results. 
Feature extraction sub-network. The structure of the feature extraction sub-network is shown in Fig 5. The image tensor is convolved by a 5×5 convolution kernel with a stride of 2 and a 3×3 convolution kernel with a stride of 2. We measured the effect of different convolution fields in Section 4, and the extraction of local features with the 5×5 convolution kernel is better. Batch normalization (BN) and a rectified linear unit (ReLU) follow each convolution layer. However, the feature map extracted by the CNN does not match the feature dimension of the transformer. 
In detail, the feature vector extracted by the CNN is H × W × C (H, W and C are height, width, and channel, respectively), while the encoded shape after the transformer is (N + 1) × D (N, 1, D are the number of patches, category labels, and output dimension, respectively). In this study, feature maps extracted by the convolution layer are converted into 7×7 patches by the customized convolution and transformer feature interaction module, then the downsample of patches through a linear layer is the same as embedding patch in dimension, and is added to embedding patch for feature fusion. The global features of the multi-head self-attention concerns are converted into a 56×56 tensor, and the tensor is projected using the 1×1 convolution layer and added to the tensor using 3×3 convolution layer for feature fusion. This makes it possible to realize information interaction of the local features with the global features. Feature focus sub-network. The feature focus sub-network is made up of two modules, the CNN and the transformer. The CNN branch consists of 3 different basicblock modules, the first consisting of 2 with a step size of 1 and a convolutional kernel of 64, the second consisting of 2 with a step size of 2 and a convolutional kernel of 128, and the third consisting of 2 with a step size of 2 and a convolutional kernel of 256. The transformer branch consists of 8 transformer modules. As Fig 6 shows, the local features by the convolutional extraction are becoming more and more complex and abstract, while the global features are aggregated through the self-attention mechanism of the transformer. Feature classification sub-network. Fig 7 shows the structure of the feature classification sub-network, where the features extracted from different modules of the feature focus sub-network are fused to obtain local and global features, and the final feature vectors are generated by the Average Pool and linear layers and are outputted through the softmax layer to predict the categories of Covid-19, normal and bacteria pneumonia. Experiment and analysis This section is concerned with the evaluation of the proposed model. To begin with, the data set and the parameters setting are specified to start the experiment. Next, the proposed model is compared with some DL-based models on this dataset, then with some other models regarding the detection of Covid-19. Dataset The data in this study come from three medical institutions: Guangzhou Women and Children Medical Centre dataset [44], MIDRC-RICORD [45][46][47], COVIDx CXR dataset [28]. The categories of collected data are Covid-19, bacterial pneumonia, and normal. Data pre-processing was performed on the collected data and the distribution of the data after pre-processing is shown in Table 1. Experimental setup Accuracy, Precision, Recall, and F1 score were used as the evaluation metrics. The experiments were carried out on the 64-bit operating system of Red Hat 4.8.5-28. The 4-card parallel training was conducted on Intel(R) Xeon(R) E5-2630 and Tesla M60 GPU, and each graphics card was executed on a server with a storage capacity of 8 GB memory. Under Pytorch version 1.9.1, CUDA 10.2 and CUDNN 7.6, the model was built and trained, with the training parameters in Table 2. Table 3 shows the evaluation metrics of the different models on the present dataset. Table 4 illustrates the proposed model in comparison to some other models regarding the detection of Covid-19. 
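The reported evaluation metrics can be reproduced with standard tooling; the snippet below is an illustrative sketch rather than the authors' evaluation script, the labels are dummy values, and macro-averaging over the three classes is an assumption.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2]   # 0 = normal, 1 = bacterial pneumonia, 2 = Covid-19
y_pred = [0, 1, 2, 1, 1, 0, 2]   # model predictions for the same samples

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")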
The results show that the proposed model is more suitable than other models for classifying Covid-19 images. This may be because local and global features of the lesion are equally important for the diagnosis of Covid-19. In previous studies, researchers tended to focus more on local features and realized lesion classification by aggregating local features. The network proposed in this study focuses both on local lesion features and on scattered global features in Covid-19 images. The fusion of the two kinds of features solves the problem caused by paying more attention to local features than to global features when characterizing lesions, thus achieving better results. 
Discussion 
This section compares the effects of possible module combinations, different convolutional kernel sizes, and mutual versus one-way fusion. 
Possible module combinations 
As is well known, local features become progressively more abstract as the convolution layers get deeper, and global features progressively diminish as they are extracted by the transformer. Our proposed network can extract different features and fuse features from different branches through mutual feature fusion, reducing the loss of useful feature information. To examine the fusion of local and global features, we conducted experiments to test the possible module combinations. As shown in Table 5, the fusion of Transformer_block1 and Conv_block1 gives the best results. 
Different convolution kernel sizes 
CNN acquires local features through convolutional kernels, which output different feature maps. The feature extraction sub-network extracts local features and the transformer extracts global features for fusion, achieving good results on the experimental dataset. To verify the performance of the proposed model, this study also explores the fusion effect of feature maps obtained with different convolutional kernel sizes and of the global information. From Table 6, we can see that the feature extraction sub-network achieves better results when the convolutional kernel size is 5. Compared with other kernel-size settings, this sub-network can extract more accurate detailed features and better complement the local information missing from the global features, thus achieving better results. 
Mutual fusion and one-way fusion 
Our network demonstrates that the fusion of global and local features gives excellent results in Covid-19 classification. The comparisons include one-way fusion from the CNN branch to the transformer branch, one-way fusion from the transformer branch to the CNN branch, and mutual fusion. Table 7 compares the effect of the three fusion methods, and the results show that mutual fusion works best. 
Conclusions 
In this study, a CNN-transformer fusion network is proposed for Covid-19 image classification. This network makes the best of the different feature extraction capabilities of the CNN and the transformer: the CNN module and the transformer module extract the local features and the global features of medical images, respectively. Therefore, this network fuses local lesion features and scattered global features to achieve classification, focusing on both the global and the local features. The experiments show that the proposed network performs better than other DL-based models for the classification of Covid-19, bacterial pneumonia, and normal cases. The comparison of the proposed model with other models concerning Covid-19 reveals that our model is good at detecting Covid-19 in CXR images and achieves superior results compared to other models. 
There is still room for improvement in the following areas. Decentralized data from different institutions is required to improve the classification of the model. The model remains to be refined to distinguish between more types of pneumonia disease, and further developed as a computer-aided diagnosis system for pneumonia.
4,515
2022-10-27T00:00:00.000
[ "Computer Science", "Medicine" ]
Evidence-Based Policymaking during the COVID-19 Crisis: Regulatory Impact Assessments and the Polish COVID-19 Restrictions Abstract The COVID-19 pandemic transformed our understanding of the state’s role during a public health crisis and introduced an array of unprecedented policy tools: ever-stricter travel restrictions, lockdowns and closures of whole branches of the economy. Evidence-based policymaking seems to be the gold standard of such high-stakes policy interventions. This article presents an empirical investigation into the regulatory impact assessments accompanying sixty-four executive acts (regulations) introducing anti-pandemic restrictions in Poland over the first year of the pandemic. To this end, the study utilises the so-called scorecard methodology, which is popular in regulatory impact assessment research. This methodology highlights the shallowness of these documents and the accompanying processes, with an absence not only of a sound evidence base behind specific anti-pandemic measures or estimates of their economic impacts, but even of the comparative data on restrictions introduced in other European Union/Organisation for Economic Co-operation and Development (OECD) countries. Overall, the collected data support the hypothesis that the ad hoc pandemic management process crowded out the law-making process through tools such as regulatory impact assessments and consultations. In other words, the genuine decision-making occurred elsewhere (with the exact process being largely invisible to public opinion and scholars) and drafting legal texts simply codified these decisions, with the law-making process becoming mere window-dressing. I. Introduction The COVID-19 pandemic was neither the first nor the deadliest pandemic faced by modernage humanity. However, it was the one that transformed our understanding of the state's role during a public health crisis. It also introduced an array of unprecedented policy tools: ever-stricter travel restrictions, lockdowns and closures of whole branches of the economy. Together with these new goals and tools, one could expect appropriate reflection on their effective deployment. On the one hand, evidence-based policymaking seems to be the gold standard of such high-stakes policy interventions. On the other hand, the dynamic of the pandemicas well as the intertwined processes of scientific research and organisational learningcomplicated efforts to rationalise anti-pandemic policies. Moreover, countries differ in terms of the decision-making processes they implemented and their legal frameworks governing anti-pandemic policy implementations. At one extreme, Sweden conformed to the pre-COVID-19 legal framework, exemplified by famous remark from its epidemiologist-in-chief, A. Tegnell: "The Swedish laws on communicable diseases are mostly based on voluntary measures : : : This is the core we started from because there is not much legal possibility to close down cities." 1 At the other extreme, Austria implemented perhaps the first COVID-19-related constitutional amendment. Poland, a gradually illiberalising European Union (EU) Member State 2 (1) firmly refused to evoke the constitutional state of emergency, (2) passed an ad hoc COVID-19 law of 2 March 2020the backbone of its crisis managementand (3) passed numerous COVID-19-labelled amendments to other laws (while the state of emergency is, by definition, temporary, these changes are not). 
To gauge how this response squared with the principle of legal certainty, it is enough to say that just four months since its passing of the COVID-19 law of 2 March 2020, the law already includes Article "15zzzzzze" (pasted between Articles 15 and 16, with subsequent letters denoting articles added such as 15a, 15b : : : 15za, 15zb : : : 15zza and so on). What is moreand crucially for this articlemost of the restrictions binding citizens and businesses were not introduced by the COVID-19 law itself but by almost weekly executive act amendments. This article analyses the regulatory impact assessments (RIAs) accompanying sixty-four executive acts introducing anti-pandemic restrictions in Poland's first year of the pandemic (March 2020-March 2021. To this end, the study utilises the so-called scorecard methodology, which is popular in RIA research. At this point, the following hypothesis can be set forth: the ad hoc pandemic management process crowded out the law-making process and its evidence-based policy tools. In other words, genuine decision-making moved outside of bureaucratic law-making (including RIAs), rendering it mere window-dressing. The rest of the article is organized as follows: Section II briefly presents the Polish antipandemic policy mix as compared to other EU jurisdictions. Section III introduces the specific crisis-response governance framework set up in early 2020 and links it to the RIA process in Poland. Section IV outlines the criteria behind RIAs evaluation and describes the utilized scorecard. Section V summarizes results while section VI concludes. II. Severity of the Polish COVID-19 response as compared with those of other EU jurisdictions To provide a background to the analysis of the RIAs accompanying Polish executive acts introducing anti-pandemic restrictions, it is useful to illustrate the general severity of the Polish COVID-19 response as compared with the typical response of other EU Member States. To this end, the Oxford University COVID-19 Government Response Stringency Index 3 for the EU-27 is plotted in Figure 1. Although the timing of the initial COVID-19 outbreakas well as subsequent wavesdiffered from country to country, three general observations can be noted. First, during the first wave (February-May 2020), the pace and severity of restrictions introduced in Poland broadly followed the response of the majority of EU countries (for details on the regulatory actions introduced in Poland during the early phase of the pandemic, see Gruszczyński et al 4 ). Second, the presidential elections due in May 2020 (and postponed to late June and mid-July 2020) had been associated with a substantial loosening of restrictions, separating Poland from the EU "mainstream" and placing it in the low-stringency group of the EU Member States (from 18 September to 9 October 2020 Poland had the lowest COVID-19 Government Response Stringency Index score across the EU-27). Third, during the second wave of the pandemic in October 2020, Poland introduced a relatively severe lockdown, moving it into the high-stringency group (and it maintained this policy throughout the winter). Therefore, it can be concluded that, during some periods, the Polish response was characterised by relatively high stringency, while during others it was characterised by relatively low stringency. As such, an amount of discretionary power had been wielded, which ought to have been based on a sound evidence basewhich, in turn, ought to be present in the examined RIAs. III. 
Decision-making processes and the regulatory impact assessments As the Polish crisis response had been influenced by domestic decision-making (instead of merely following EU trends), it is important to examine the process underlying it. Specifically, one has to distinguish between (1) the ad hoc process established to counter the pandemic and coordinate the economic crisis management and (2) the pre-pandemic, well-established law-making process of issuing executive acts, including public consultations, intra-cabinet consultations and RIAs. The ad hoc process had been conducted in the Chancellery of the Prime Minister and included the so-called Medical Council advising the Prime Minister. 5 Numerous accounts (including press conferences, during which new restrictions had been routinely Fifteen "typical" EU Member States are the 25th-75th percentilesnamely, the range between the seventh and twenty-first Member Statesas ordered from the lowest to the highest stringency index level. Note that the timing of subsequent COVID-19 waves differed from country to country, which is not reflected in this figure. Therefore, the presented data should be interpreted with caution. Source: Hale et al, supra, note 3. announced) suggest that at least to some extent the decisions had been supported by the data on newly confirmed cases and their rolling weakly averages, as dramatic weekend effects manifested. 6 In particular, in early October 2020, on the eve of the second wave of the pandemic, the Polish Prime Minister embarked "on forward guidance", 7 linking the rolling weekly average of newly confirmed cases to the colour-coded restrictions packages. Such a rule-based approach had been advocated on the grounds of introducing some predictability into the anti-pandemic policy, thereby reducing business uncertainty. However, as is plotted in Figure 2, this guidance quickly went out of fashion. On the one hand, abandoning a rigid approach based on a singleand admittedly flawed time series could be praised as escaping McNamara's fallacy. 8 Indeed, administratively confirmed cases (1) represent only a tiny fraction of all cases, (2) are sensitive to the number of tests carried out and the pre-testing selection routines used and (3) are affected by behavioural patterns; for example, people with mild syndromes might be less likely to report them as the associated quarantine burden increases. On the other hand, it can be seen that these decisions were based on some sort of discretionary process prone to cognitive biases (eg groupthink) and political pressures, 9 while various metrics were used for support rather than illuminationas in the case of the so-called "National Quarantine" 10 announced on 17 December 2020 to begin 28 December 2020 and last until 17 January 2021 (which includedamong other thingsa ban on leaving home on New Year's Eve). 11 Consequently, when the newly reported cases rule suggested loosening restrictions, cross-country comparisons or healthcare capacity data (admittedly a much more appropriate metric) had been invoked. According to the cabinet communications, the Medical Council should provide its expert judgment 12 and the Council of the Ministers should decide on the scope of the restrictions. As indicated in Section I, the Polish ruling elite refused to invoke a 6 As weekends had been associated with sudden drops in cases due to reporting process deficiencies. 7 An analogy to the unconventional monetary policy aimed at reducing uncertainty over the future interest rates path. 
8 This term was coined after the 1960s' US Secretary of Defense R. McNamara's focus on "body count" numbers in managing the Vietnam War, as defined by D Yankelovich: "The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured doesn't exist. This is suicide"; D Yankelovich, "Corporate Priorities: A continuing study of the new demands on business" (1972), as cited in P Ward, JM Schraagen and E Roth, The Oxford Handbook of Expertise (Oxford, Oxford University Press 2019) p 402. 9 Perhaps the most memorable example was offered by the executive act of 1 December 2020 (Journal of Laws ref. no. 2020.2132) that lifted the ban on the core operations of ski resorts, which had been reintroduced in the executive act of 21 December 2020 (Journal of Laws ref. no. Dz.U.2020.2316). The Deputy Prime Minister -Minister of Development J Gowinrevealed on 30 November 2020 that it was the President of the Republic -A Dudawho "called Gowin and said, that he will not accept closure of the ski facilities", see <http://www.polsatnews.pl/ wiadomosc/2020-11-30/wicepremier-jaroslaw-gowin-w-gosciu-wydarzen-ogladaj-od-godz-1920/>. 10 See, for example, "Będzie narodowa kwarantanna [There will be National Quarantine]", Rzeczpospolita daily newspaper, 18 December 2020. Also other names had been involved; see, for example, "the responsibility stage": <http:// www.gov.pl/web/koronawirus/przedluzamy-etap-odpowiedzialnosci-i-wprowadzamy-dodatkowe-ograniczenia>. 11 See the executive act of 21 December 2020 (Journal of Laws reference no. Dz.U.2020.2316). See also "Narodowy chaos prawny", Gazeta Wyborcza daily newspaper, 19 December 2020. 12 The media recorded both instances of disagreement within the Council and Cabinet departures from its advice; see, for example, the interview with Prof A Horban -Chairman of the Medical Council -"Pojutrze możemy zamknąć Polskę [The day after tomorrow we could lock-down Poland]", Rzeczpospolita daily newspaper, 26 March 2021 < https://www.rp.pl/diagnostyka-i-terapie/art8635101-prof-andrzej-horban-pojutrze-mozemyzamknac-polske>. constitutionally prescribed state of emergency. Instead, an ad hoc legal framework including one-off legislation as well as permanent amendments had been set up. 13 The Polish constitution of 1997 introduces the hierarchy of universally binding laws, 14 including (1) the Constitution itself, (2) ratified international treaties, (3) statutory legislation (passed by the parliament and signed into law by the president; in Polish, ustawa) and (4) executive acts, issued by the Cabinet or individual cabinet ministers based on explicit statutory authorisation (in Polish, rozporządzenie). According to the constitutional principles, the rights and obligations of citizens ought to be regulated at least on the level of statutory legislation, with executive acts being appropriate to regulate technical issues regarding the implementation of statutory legislation. However, since the outbreak of the pandemic it was decided to introduce key restrictions via executive acts (on their timing and the stringency of the implemented measures, see Figure 3)first issued by the Minister of Health and then by the Council of Ministers. 
This decision had been advocated based on the requirement for flexibility (which is hardly convincing given the speed of statutory law-making practised in Poland after 2015, such as during so-called judiciary reforms 15 ). However, this plain unconstitutionality backfired, as enforcement actions had been challenged before the courts. 16 Statutory provisions relating 13 An explanation of this pattern exceeds the scope of this paper; however, Varieties of Democracy (V-Dem) reports on autocratic backsliding and research on the third wave of autocratisation (Lührmann and Lindberg, supra, note 2) seem to provide the departure point for such research. 14 The phrase "universally binding law" is simply a translation of the Polish constitutional provision distinguishing law addressed to all subjects from so-called "internal law acts" such as Parliament or Cabinet bylaws (binding members of parliament and cabinet members, respectively). According to Art 87 of the Constitution of the Republic of Poland of 2 April 1997: "The sources of universally binding law of the Republic of Poland shall be: the Constitution, statutes, ratified international agreements, and regulations." 15 to the obligation to wear surgical masks had been introduced as late as 29 November 2020. 17 Leaving constitutionality issues aside, the law-making process of issuing executive acts had been well-established and regulated long before the outbreak of COVID-19. 18 It was designed to facilitate collegiality (intra-cabinet consultation procedures) and to achieve better regulation guidance on evidence-based policymaking (public consultations and RIAs). Typically, the process is initiated by the appropriate minister submitting a draft of the normative text, an explanatory memorandum and a RIA to the consultations (public and internal). The agreed draft (or list of disagreements) is then submitted to the Permanent Committee of the Council of the Ministers (the so-called "little cabinet") and, if appropriate, to other committees. Finally, the worked-out project is submitted to the Council of the Ministers. Notably, the so-called Government Law-Making Process is transparent (ie all submitted documents ought to be recorded and published on the Internet 19 ). The RIAs of the executive acts introducing anti-pandemic restrictions examined in this article represent part of this law-making process. This brings us back to the distinction between (1) the ad hoc pandemic management and (2) the law-making process itself. In practice, these processes could be either integrated, parallel or one of them could dominate the other. IV. Regulatory impact assessment quality measurement: methodology To examine the RIAs accompanying the executive acts introducing anti-pandemic restrictions, a "compliance test" assessing the RIAs' formal compliance with requirements stated in laws, bylaws or guidelines 20 was applied. Specifically, the "scorecard" methodology was used. Its application included two steps. First, the scorecard was designed (ie the set of questions regarding the RIAs' contents), drawing upon the literature, the Polish RIA template and the unique context of the anti-pandemic policy response. Second, each of the sixty-four RIAs was assessed against such a benchmark. 1. The scorecard method of regulatory impact assessment quality measurement The so-called scorecard method is by far the most popular approach in the RIA quality literature. 
Introduced by Hahn et al, it attempts to quantify the "quality" of RIAs, defined in terms of compliance with formal requirements and best practices guiding such analyses. 21 The scorecard method assumes that it is plausible to develop a list of specific questions addressing the content of a RIA and its compliance with predefined requirements. Then, such a questionnaire (or scorecard) is applied to scrutinise specific RIAs, question by question, by assigning them points. 22 The "scorecard" methodology had been evaluated by the Office of Information and Regulatory Affairs (OIRA)the US RIA scrutiny body located within the Office of Management and Budgetto determine its suitability for the institutionalised oversight of RIA quality. The results of this evaluation indicate the main strengths and weaknesses of this scorecard method. The OIRA report concluded that "for such a scorecard to be effective, the metrics should be both objective and meaningful, which is challenging. Objective metrics can measure whether an agency performed a particular type of analysis, but may not indicate how well the agency performed this analysis. In addition, the metrics may be too broad to reflect agency compliance with specific guidance on technical matters (e.g., how to conduct an underlying contingent valuation study that provides key information to a regulatory analysis)." Peer reviewers commenting on the OIRA report pointed out that the selection of scorecard items should reflect the "ultimate goal of encouraging agencies to improve regulatory impact analyses to aid decision-making", rather than "creating a perfect RIA". 23 Essentially, these comments spelt out concerns with the scorecard design and its application for the institutionalised oversight of RIA quality. 24 Indeed, over the yearsas various scholars applied scorecards to examine RIA qualitythe scope of the included questions widened. However, they could be broadly classified into what are called the "accessibility block", the "diagnosis block", the "expected impacts block" and the "implementation/evaluation block". In line with this classification, the review of selected scorecards is summarised in Appendix 1. 25 20 P Ladegaard, "Measuring RIA quality and performance" in C Kirkpatrick and D Parker, Regulatory Impact Assessment: In addition, the RIA coding procedure has evolved. For example, Ellig and McLaughlin proposed a "qualitative evaluation of how well the analysis was performed [on the 0-5 scale], rather than an objective 'yes/no' checklist of analytical issues and approaches covered". 26 Although such an approach would fine-tune the quantitative analysis, it also raises concerns over the subjectivity and replicability of the results (to overcome these concerns, Ellig and McLaughlin proposed a specific evaluation protocol). Designing a scorecard for anti-pandemic restriction regulatory impact assessments To design a scorecard to evaluate the RIAs accompanying sixty-four executive acts introducing anti-pandemic restrictions, the insights from the literature review were combined with (1) a Polish RIA template 27 and (2) the specific context of the anti-pandemic policy development. The proposed scorecard was organised into five blocks: "Consultations"; "Problem identification"; "Evidence base"; "Estimated impact of the imposed measures" and "Evaluation". A summary of "Consultations" is required by the Polish RIA template. 
This constitutes the mandatory stage in the typical law-making process, as defined in the Cabinet Bylaw, 28 although it can be skipped under the "urgent" procedure. Given the scope and depth of the implemented policy interventions (involving closures of whole economic sectors), one would expect a sound policy formulation process to involve some sort of consultation with stakeholders, although the urgency of the pandemic response might affect the mode of such consultations. 29 The second block of the proposed scorecard, "Problem identification", refers to the very first box of the Polish RIA template: prominence in evidence-based policymaking. On the one hand, during the pandemic outbreak, this process was likely to be neglected, as "the problem" appeared to be quite obvious. Consequently, the issue of defining operational goals seemed to boil down to "saving lives". 30 That seemed also to be the case for cost-benefit analysis and zero-option considerations. Although life valuation techniques are well-developed 31 and routinely practised in industrial regulation (say, transportation security), it is difficult to document their usage in COVID-19 pandemic management across the developed world. Just as Stanley Kubrick's President Muffley rejected General "Buck" Turgidson suggesting "two admittedly regrettable, but 'distinguishable', postwar environments: one where you got twenty million people killed, and the other where you got a hundred and fifty million people killed", 32 contemporary politicians largely refused to monetise pandemic deaths (or at least go public with that). Nevertheless, one would expect a sound policy formulation process to be based on some systematic collection and analysis of the data, which should be indicated in this section of the scorecard. The third block of the proposed scorecard referred to the "Evidence base" supporting the introduced restrictions. This is consistent with the second box of the Polish RIA template. 27 The Polish-language.doc files can be accessed at <https://www.gov.pl/attachment/5ab00536-cd07-4f80-bacd-48fc8f8665f8> and <https://www.gov.pl/web/premier/ocena-wplywu-w-rzadowym-procesielegislacyjnym>. 28 Given the "evidence-based policymaking" paradigm, this aspect is undoubtedly important but again problematic in the pandemic environment. First, running rigorous studies (eg randomised controlled trials) takes time, and the dynamics of a pandemic put pressure on taking quick and bold action. As was explained by Swedish epidemiologist-in-chief A. Tegnell: "It is difficult to talk about the scientific basis of a strategy with this type of disease because we do not know much about it and we are learning as we go. Lockdown, closing bordersnothing has a historical scientific basis, in my view." 33 Such a context would favour the precautionary principle: it is better to err on the side of too severe than too lenient action. Second, even a year into the pandemic, the evidence base regarding the effectiveness of surgical masks 34 or other non-pharmaceutical interventions 35 was not necessarily clear. 
Keeping in mind these practical difficulties in pandemic policymaking, we proposed two categories of evidence base: (1) strongreferring directly to the scientific evidence favouring specific policies, such as studies of the virus transmission risks in different economic sectors that could inform restrictions and closures; and (2) weak -comparative analyses arguing in favour of specific tools that had already been implemented in other (presumably more advanced in terms of analytical and governance capacity) countries. As the Polish RIA template includes specific boxes on (1) "proposed tools of intervention" and (2) a summary of the "EU/OECD comparative analysis", the coding of these items is fairly straightforward. The fourth block of the proposed scorecard addressed the core of the "impact assessment" idea through the "Estimated impact of the imposed measures". The Polish RIA template requires the presentation of: (1) the monetised public finance (state and local budget) impact over a ten-year horizon; (2) the private-sector impacteither monetised or descriptive; 36 and (3) the labour market impact. Given the unprecedented scope and depth of the policy interventions (including closures of whole sectors of the economy such as shopping malls, beauty services, pubs and entertainment), this aspect seems particularly important. On the other hand, the very magnitude of the interventions makes sensible economic forecasting a daunting task. Nevertheless, one would expect a sound policy formulation process to deliver some data that, falling short of an explicit cost-benefit analysis, would inform decision-makers on the scale of expected side effects. Finally, in line with the "evidence-based policymaking" paradigm, the Polish RIA template includes an evaluation box that is coded in the last block of the proposed scorecard. It is expected to provide quantitative metrics and relevant targets that will be applied to assess the effectiveness of the implemented policies. As such, it conceptually refers to the "Problem identification" box. Specifically, the goals of the policies should be stated and indicators facilitating their evaluation provided, thereby enabling the evaluation of whether the problem was solved or adjustments to the policy were required. In the context of the anti-pandemic restrictions, one would expect a sound policy formulation process to at least specify ad hoc quantitative metrics appropriate to determining the success or failure of particular restrictionsand perhaps some ex ante decision decision rules to maintain or alleviate them (see the discussion in Section III of the attempt to link newly confirmed COVID-19 cases to the colour-coded restriction packages). The constructed scorecard is summarised in Table 1. 36 Specifically, three levels of analysis had been envisioned: qualitative description (immeasurable), quantitative and monetised (measurable). V. Results In applying the proposed scorecard to the sixty-four executive acts regulating Poland's COVID-19 restrictions, it turns out that almost all of the coded RIAs scored 0 (on a 0-2 scale) on all coded items. Therefore, the summary of findings willby necessitybe qualitative. The exceptions will be discussed below, ordered by coded item. First, as far as problem identification is concerned, vague descriptions dominate. 
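For readers who want to reproduce this kind of scorecard exercise, the following is a minimal sketch of how the coding protocol described in Section IV might be recorded and tallied. The item names, the uniform 0-2 range applied to every item (some items in Table 1 actually use a 0-1 range) and the per-RIA scores are a simplified reconstruction for illustration, not the authors' actual coding sheet.

```python
# Minimal sketch of applying a 0-2 scorecard to a set of RIAs.
# Item names follow the blocks described in Section IV; the example coding
# below is an illustrative reconstruction, not data from the study.
from statistics import mean

SCORECARD_ITEMS = [
    "consultations",
    "problem_identification",
    "scientific_evidence",
    "eu_oecd_comparison",
    "public_finance_impact",
    "private_sector_impact",
    "labour_market_impact",
    "evaluation_indicators",
]

def score_ria(ria_scores: dict) -> dict:
    """Check that every item is coded on the 0-2 scale and return the totals."""
    for item in SCORECARD_ITEMS:
        value = ria_scores.get(item, 0)
        if value not in (0, 1, 2):
            raise ValueError(f"{item}: scores must be 0, 1 or 2 (got {value})")
    total = sum(ria_scores.get(item, 0) for item in SCORECARD_ITEMS)
    return {"total": total, "max": 2 * len(SCORECARD_ITEMS)}

# Hypothetical coding of two of the analysed acts (all unspecified items set to 0).
coded = {
    "Dz.U.2020.433": {item: 0 for item in SCORECARD_ITEMS},
    "Dz.U.2021.512": {**{item: 0 for item in SCORECARD_ITEMS},
                      "problem_identification": 2},
}

per_item_means = {
    item: mean(scores.get(item, 0) for scores in coded.values())
    for item in SCORECARD_ITEMS
}
print({ria: score_ria(s)["total"] for ria, s in coded.items()})
print(per_item_means)
```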
The very first analysed act, issued by the Minister of Health and proclaiming an "epidemic risk condition" (Dz.U.2020.433), cites the "growing risk of SARS-CoV-2 infection" and already "identified COVID-19 cases", and it observes that "prophylactic measures are necessary". The very same phrase had been invoked in the RIA accompanying the Minister of Health act proclaiming the "epidemic condition" (Dz.U.2020.491). This phrase had also been used in the amendment of this act strengthening the restrictions (Dz.U.2020.522); this time "further prophylactic measures" had been "necessarily" implemented (the linguistic habit of providing categorical conclusions, always in the passive voice, without referring to any evidence base is typical of the analysed RIAs).
Table 1. Scorecard applied to examine the regulatory impact assessments accompanying the executive acts introducing anti-pandemic restrictions.
Block: Consultations. Item: Consultations. Coding: 0 = no consultations whatsoever; 1 = information on consultations, without any details; 2 = full report indicating opinions submitted and the reactions to them.
Block: Problem identification. Item: Problem identification. Coding: 0 = no specific information (or obvious statements such as the occurrence of the pandemic); 1 = some information; 2 = information containing references or data sources.
Block: Evidence base. Item: Scientific evidence. Coding: 0 = no reference; 1 = a reference to studies, reports or papers.
Another major policy implemented by the Council of Ministers via executive act was the implementation of "red" and "yellow" zones (Dz.U.2020.1356) depending on the "higher risk of SARS-CoV-2 infection", and this was not accompanied by any reference to the quantitative indicators of this risk (in light of the public communication reviewed in Section III, it seems that "risk of infection" had been measured by the number of confirmed COVID-19 cases per 100,000 inhabitants; however, this fact, and the rationale behind selecting this particular metric, is not discussed in the RIA). As zones were updated, one amendment's RIA (Dz.U.2020.1425) finally explicitly cited this metric, 37 and this was repeated in seven successive RIAs. Another RIA (Dz.U.2020.1972) cited "data indicating that social mobility substantially increased", but failed even to describe the sort of data being referred to (perhaps these were the Community Mobility Reports provided by Google). In yet another RIA (Dz.U.2020.2353), one can read that the "analysis of the specific sectors of state activities and the economy, carried out by respective ministries, proved the necessity of" doing exactly what was done through this legal act, without any further elaboration in the original RIA document. In line with the coding protocol devised in the previous section, all of these examples had been assigned a score of 0, as they failed to accurately refer to any evidence base or even data sources. The first RIA that scored 1 on the 0-2 scale modified the "red" and "yellow" zones (Dz.U.2021.436) and explicitly referred to the "number of inhabitants infected with SARS-CoV-2 per 100 000" (this metric was also cited in a subsequent amendment, Dz.U.2021.446). The same act regulated that people who had recovered from COVID-19 could be vaccinated after three months with a single vaccine dose, citing "clinical reasons".
The only RIA that scored 2 on the 0-2 scale on problem identification (Dz.U.2021.512) introduced the metrics upon which restriction decisions would be based, specifically (1) the average daily number of cases per 100,000 inhabitants, (2) the share of (public) hospital beds occupied and (3) the share of critical care beds (ventilators) occupied in public hospitals. The regulation had been issued on 19 March 2021 amid a declining number of new COVID-19 cases (the previous indicator), which was interpreted as justifying lifting the restrictions. Second, as far as best practices are concerned, just four of the examined RIAs provide actionable comparative analyses. The first three referred to soccer games in the UK, Germany, Spain and Czechia (Dz.U.2020.750, Dz.U.2020.820 and Dz.U.2020, with the third one also cited lifting the border quarantine for Baltic states). The fourth (Dz.U.2020.904)scoring 2 on the 0-2 scalediscussed travel restrictions in Sweden, Czechia and South Koreano in-depth explanation was provided for the selection of these particular cases (although Sweden seems to offer the best-known European example of light-touch pandemic management, whereas South Korea had been widely praised for its application of a contact-tracking system to combat the pandemic). Third, as far as the public finance impact is concerned (traditionally one of the strongest points of Polish RIAs 38 ), the analysed RIAs were poorly informative. The first of themintroducing the "epidemic risk condition" (Dz.U.2020.433)proclaimed that it is impossible to estimate the "potential impact" on the "central budget". That phrase was repeated in three subsequent RIAs (Dz.U.2020.491, Dz.U.2020.522 and Dz.U.2020.531). In the case of another four RIAs, estimates of the fiscal impact were discussed explicitly, and they scored 37 In Polish: "wskaźnik zapadalności na COVID-19". 1 on the 0-2 scale. They dealt with rehabilitation holidays for servicemen and veterans (Dz.U.2020.1161), anti-stress holidays for civilians (Dz.U.2020.1356), COVID-19 testing in care homes (Dz.U.2020.1505) and rehabilitation holidays for disabled people and their custodians (Dz.U.2020.1758). Fourth, the examined RIAs neglected to discuss private-sector impacts, even in descriptive terms (as opposed to efforts to monetise these impacts to facilitate cost-benefit analyses). Only two of them provided any specific information: first, on the costs of introducing mandatory temperature measurements for aeroplane passengers and of sanitising aeroplanes (unit cost of PLN 3000); and second, proclaiming (without reference) that allowing aqua-parks to operate at 50% capacity would allow them to reach a break-even point (therefore no longer generating loses). Fifth, as far as monetisation of the private-sector impact is concerned, only one RIA (Dz.U.2020.792) scored positively, as it provided a monetary assessment of the Ekstraklasa soccer team revenues lost in the case of (1) maintaining game bans (PLN 192 million) and (2) a proposed lifting of some restrictions (limiting revenues lost to PLN 65 million). Sixth, the labour market impactone of the most important side effects of the lockdownbased pandemic management strategywas neglected in the analysed RIAs. In the case of imposing restrictions, the RIAs proclaimed that the proposed regulations "will have" or "could have" an impact on the labour market (particularly for the services in which restrictions would be imposed). 
Two RIAs proclaimed a positive impact on the labour market as far as the sports sector is concerned (Dz.U.2020.792 and Dz.U.2020.820); these are the same RIAs as those that discussed best practices in the resumption of soccer games. Finally, not a single one of the examined RIAs provided any guidance regarding the evaluation process, let alone indicators that should be applied to examine the effectiveness of the adopted restrictions or decision rules (milestones, holistic criteria) regarding lifting/imposing these restrictions or rules in the future. VI. Conclusions The reported qualitative results, and, above all, the fact that the application of the proposed scorecard to the sixty-four executive acts failed to deliver any meaningful quantitative results (as virtually all of them scored 0 on all items), provide strong support for the conclusion that the RIA process of these acts failed to support the policymaking process with the sort of evidence base envisioned when compulsory RIAs had been introduced. The documents were shallow, not only failing to provide a sound evidence base for the specific anti-pandemic measures (indeed, a demanding threshold during the "learning by doing" of the first year of the pandemic) or estimates of their economic impacts (a daunting task for economic modellers and forecasters due to the unprecedented character of the introduced lockdown measures), but even failing to provide comparative data on the restrictions introduced in other EU/OECD countries. Moreover, we must stress the lack of any evaluation information in these documents (whether gauging the effectiveness of the introduced measures or providing a roadmap for introducing new ones). In addition, the lack of consultations and the rapid pace of draft development highlight the extraordinary hurry that this process involved and the neglect of the interests and knowledge available to non-cabinet actors. Overall, the collected data demonstrate that the law-making process, and its "evidence-based policymaking" tools such as the RIAs and public consultations, proved to be mere window-dressing. This observation could be interpreted in two (not necessarily mutually exclusive) ways. First, one could hypothesise that the ad hoc pandemic management process crowded out the law-making process and its tools such as RIAs and consultations. In other words, the genuine decision-making occurred elsewhere (with the exact process being largely invisible to public opinion and scholars, but plausibly fulfilling the requirements of rationality and evidence-based decision-making), and drafting legal texts simply codified decisions that had already been made, perhaps with in-depth consideration of the input from the Medical Council. Notably, beyond the timespan covered in this article, in mid-January 2022, "13 of the 17 members of the COVID-19 Medical Council explained their reasons for leaving the body. Complaining of a lack of cooperation from the government, … [they also] criticised 'the discrepancy between scientific and medical reasoning and practice' in the government's approach to past and current waves of infection." 39 Second, an admittedly more pessimistic interpretation suggests that the process simply had not been evidence-based, perhaps due to the scale of uncertainties, the pressure of time and the stakes involved. As a consequence, ad hoc politics and uncoordinated ideas from various centres of power (including specific cabinet ministers) could dominate sound decision-making.
In this context, one could refer to the quixotic response of the Chancellery of the Prime Minister to the freedom of information request of the non-governmental organisation Citizen Network regarding the background of the surprising decision to close cemeteries 40 before All Saints' Day in 2020, a holiday that is traditionally widely celebrated in Poland. The official denial included the following explanation: I hereby inform you that the reconstruction of the thought processes associated with this particular decision of the Prime Minister, and the identification of whose advice had been helpful or crucial in making this decision, is practically impossible … It occurred over one year ago, and tracing back the path to the abovementioned decision in a precise manner is impossible even for the decision-maker himself. The consultations lacked formal character and minutes had not been prepared. 41 Unfortunately, given the lack of reliable and publicly accessible evidence, as well as the level of political polarisation and the erosion of democratic practice that can be observed in contemporary Poland, differentiating between these two alternatives would require some sort of official inquiry, following the example of the UK House of Commons Science and Technology Committee and Health and Social Care Committee report entitled "Coronavirus: Lessons Learned to Date", which examined the initial UK response to the pandemic. 42
8,002.4
2022-10-13T00:00:00.000
[ "Economics" ]
An Automated Profile-Likelihood-Based Algorithm for Fast Computation of the Maximum Likelihood Estimate in a Statistical Model for Crash Data Numerical computation of maximum likelihood estimates (MLE) is one of the most common problems encountered in applied statistics. Even if many existing algorithms are considered to perform well, they can fall short in some cases on one or more of the following criteria: global convergence (the capacity of an algorithm to converge to the true unknown solution from all starting guesses), numerical stability (the ascent property), implementation feasibility (for example, algorithms requiring matrix inversion cannot be implemented when the involved matrices are not invertible), low computation time, low computational complexity, and the capacity to handle high-dimensional problems. The reality is that, in practice, no algorithm is perfect, and for each problem it is necessary to find the best-performing of all existing algorithms or even to develop new ones. In this paper, we consider the computation of the maximum likelihood estimate of the vector parameter of a statistical model of crash frequencies. We split the parameter vector, and we develop a new estimation algorithm using the profile likelihood principle. We provide an automatic starting guess for which convergence and numerical stability are guaranteed. We study the performance of our new algorithm on simulated data by comparing it to some of the most famous and modern optimization algorithms. The results suggest that our proposed algorithm outperforms these algorithms. Introduction Let $\ell(\theta)$ be a log-likelihood function where $\theta \in \mathbb{R}^d$ is a parameter vector whose structure will be specified later. In computing the maximum likelihood estimate (MLE), which is the point $\hat{\theta}$ at which $\ell(\theta)$ attains its maximum, the very first algorithm one can try is the Newton-Raphson (NR) algorithm, which needs no introduction. The success of this algorithm comes from an enviable and unequalled property: that of quadratic convergence when the starting guess is well chosen (i.e., close to the unknown solution). However, it also has drawbacks: it is not globally convergent (i.e., its success depends on the starting guess), and it can be costly or impossible to implement in high-dimensional problems because it requires inverting the Hessian matrix at each iteration. Many other algorithms can be used whenever NR cannot. We refer the reader to [1][2][3][4] for a comprehensive review of these algorithms. Among them, we can mention quasi-Newton methods (which use approximations of the inverse of the Hessian matrix rather than inverting it), Fisher scoring (a purely statistical method which consists in replacing the Hessian matrix with its mathematical expectation in order to ensure the ascent property and therefore numerical stability), block optimization, and derivative-free optimization algorithms. We can also mention minorization-maximization (MM) algorithms [5,6], which have made an extraordinary breakthrough in computational statistics and are increasingly used. The MM philosophy for maximizing a function $\ell(\theta)$ is to define, in the first M step, a minorizing function for the objective function, and to maximize, in the second M step, the minorizing function with respect to the parameter vector $\theta$.
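To make the MM idea concrete, the following is a minimal toy sketch (not the crash-frequency model studied in this paper): it maximizes $\ell(\theta) = -\sum_i |x_i - \theta|$, whose maximizer is the sample median, using the quadratic minorizer obtained from $|u| \le u^2/(2c) + c/2$ with $c = |x_i - \theta^{(m)}|$. The function name mm_median and all numerical values are illustrative choices.

```python
# Toy illustration of the minorize-maximize (MM) idea on a one-dimensional
# problem: maximize l(theta) = -sum_i |x_i - theta|, whose maximizer is the
# sample median. The surrogate uses |u| <= u^2/(2c) + c/2 (equality at |u| = c),
# which yields a quadratic minorizer of l touching it at the current iterate.
import numpy as np

def mm_median(x, theta0=0.0, n_iter=50, eps=1e-12):
    theta = float(theta0)
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x - theta), eps)   # minorizer weights
        theta = float(np.sum(w * x) / np.sum(w))       # maximize the surrogate
    return theta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=101)
print(mm_median(x), np.median(x))   # the two values should essentially agree
```

Each update maximizes the surrogate in closed form (a weighted mean), and the tangency of the surrogate at the current iterate is what guarantees the ascent property emphasized in the MM literature.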
Even if all these algorithms are considered as performing, they can suffer in some cases for one or many of the following criteria: global convergence (capacity of an algorithm to converge to the true unknown solution from all starting guesses), numerical stability (ascent property), implementation feasibility (for example, algorithms requiring matrix inversion cannot be implemented when the involved matrices are not invertible), low computation time, low computational complexity, and capacity to handle high dimensional problems. The reality is that, in practice, no algorithm is perfect, and for each problem, it is necessary to find the most performing of all existing algorithms or even develop new ones. In this paper, we consider the computing of the maximum likelihood estimate (MLE) of the vector parameter of a statistical model of crash frequencies proposed by [7]. The vector parameter has the form θ = ðα, β T Þ T where α > 0 is the parameter of interest and β is a vector of secondary parameters. Although secondary, the subvector β can contain an important number of components (up to several hundreds) depending on the structure of the data. Thus, the classical algorithms mentioned above may need a great number of iterations (and thus have a slow convergence) or simply fail to converge. For the model considered in this paper, Newton-Raphson and Minorization-Maximization (MM) algorithms have been implemented by [7,8], but the numerical study of these algorithms has been limited to simple cases. The NR algorithm is known to be fast in simple cases and inefficient (or even impossible to implement) for high-dimensional problems. Moreover, its convergence depends on the starting guess (initial solution θ ð0Þ from which the algorithm starts). The MM algorithm, on the other hand, often requires a large number of iterations before converging to the solution. The main contribution of this paper is to build an estimation algorithm which converges faster than the other algorithms and whose convergence is guaranteed and is not affected by the dimension (the number of components of θ). For this purpose, we exploit the splitting of the parameter vector into two blocks, and we apply the profile likelihood principle (see for example [9], p. 231) to reduce the search of θ to only the search of the parameter of interest α. Then, we develop a new estimation algorithm for which we provide an automatic starting guess from which convergence and numerical stability (ascent property) are guaranteed. This automatization of the starting guess reveals to be a true advantage for our algorithm over the others because it allows to circumvent the difficulty of finding an adequate starting guess. We show using simulated data that our algorithm outperforms Minorization-Maximization (MM) and Newton-Raphson algorithms which are two of the most famous and modern optimization algorithms. The remainder of this paper is organized as follows. In Section 2, we describe the data and the estimation problem. The statistical model and the constrained maximum likelihood estimation of the parameters are also presented. In Section 3, we present the profile likelihood principle and then use it to design our new profile-likelihood-based algorithm (PLBA) for computing the MLE of θ. We also provide an automatic starting guess, and we prove that it guarantees convergence of the proposed algorithm. In Section 4, we prove that the proposed algorithm satisfies the ascent property. 
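As a rough illustration of the estimation pattern just described (profile out the secondary parameters in closed form, then solve a one-dimensional problem for the parameter of interest with an automatic starting guess), the following sketch applies the same recipe to a deliberately simple Gaussian toy model rather than to the multinomial crash model of Section 2; the data-generating values, the choice $\alpha^{(0)} = 0$ and all function names are illustrative assumptions.

```python
# Generic illustration of the estimation pattern used in this paper, on a toy
# Gaussian model: theta = (alpha, beta), where the nuisance parameter beta has
# a closed-form MLE beta_hat(alpha); the profile log-likelihood is then
# maximized over alpha alone with a one-dimensional Newton iteration started
# from the automatic guess alpha(0) = 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=200)
y = 1.5 * x + 0.7 + rng.normal(size=200)           # true alpha = 1.5, beta = 0.7

def beta_hat(alpha):                               # closed-form nuisance MLE
    return np.mean(y - alpha * x)

def profile_score(alpha):                          # d/dalpha of the profile log-likelihood
    r = y - alpha * x - beta_hat(alpha)
    return np.sum(r * x)

def profile_hessian(alpha):                        # second derivative (constant here)
    xc = x - np.mean(x)
    return -np.sum(xc * x)

alpha = 0.0                                        # automatic starting guess
for _ in range(25):
    alpha_new = alpha - profile_score(alpha) / profile_hessian(alpha)
    if abs(alpha_new - alpha) < 1e-10:
        alpha = alpha_new
        break
    alpha = alpha_new

print(alpha, beta_hat(alpha))                      # should be close to (1.5, 0.7)
```

In this toy case the profile score is linear in $\alpha$, so Newton converges in a single step; in the crash model the same structure (a closed-form $\hat{\beta}(\alpha)$ combined with a one-dimensional Newton iteration on the profile likelihood) is what Section 3 develops rigorously.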
In Section 5, we report the results of the comparison of the PLBA with MM and NR algorithms. The paper finishes with some discussions and concluding remarks in Section 6. Data, Statistical Model, and Problem Setup The framework of this paper is the statistical analysis of crash data before and after the implementation of a road safety measure at s (s > 0) experimental sites (called treated sites) where crashes are classified by severity in r (r > 0) levels. This analysis is aimed at estimating the mean effect α > 0 of the safety measure simultaneously on all the s sites. This mean effect must be understood in the multiplicative sense and therefore must be compared to 1. A value α < 1 indicates a positive effect of the measure (an average reduction of 100 × ð1 − αÞ% in the number of crashes) while a value α > 1 indicates a negative effect of the measure (an average increase of 100 × ðα − 1Þ% in the number of crashes) and α = 1 indicates that the measure had no significant influence on the number of crashes. Let ð1Þ be a matrix of order s × ð2rÞ where x 1jk (respectively x 2jk ) is the number of accidents of severity level j (j = 1, ⋯, r) which occurred at site k in the period before (respectively, after) the implementation of the measure. Also let ð2Þ be a matrix of order s × r where z jk (j = 1, ⋯, r, k = 1, ⋯, s) is the ratio of the number of accidents of severity j in the "after" period to the number of accidents of the same severity in the "before" period on a control site (a site where the measure has not been implemented) paired with treated site k. Let S r−1 = fðp 1 , ⋯, p r Þ ∈ ½0, 1 r , ∑ r j=1 p j = 1g and n k = ∑ 2 i=1 ∑ r j=1 x ijk be the total number of accidents observed on the treated site k. N'Guessan et al. [7] proposed the following multinomial-based probability distribution of parameter vector θ for x: 2 Journal of Applied Mathematics where θ = ðα, β T Þ T is the parameter vector, α > 0 is the mean where x •jk = x 1jk + x 2jk (see [7] for more details). In this paper, we are interested in computing the maximum likelihood estimate (MLE) b θ of the unknown vector parameter θ defined by b θ = argmax θ∈ℝ * In the case s = 1, [10] built an algorithm to solve Equation (6). In the next section, we present a profilelikelihood-based algorithm (PLBA) for computing the MLE in the general case s ≥ 1. Our proposed PLBA is automated since we provide an automatic starting guess which guarantees convergence. The Automated Profile-Likelihood-Based Algorithm (PLBA) for Computing the MLE 3.1. A Brief Reminder on Profile Likelihood. Let us rewrite the log-likelihood as ℓðθÞ = ℓðα, βÞ. If, for a given α, the MLE b β of β may be written as a function b βðαÞ of α, that is, then the profile likelihood function (see for example [9], p. 231) is expressed as a function of α only. The maximization of ℓ p ðαÞ is equivalent to that of ℓðα, βÞ [11]. Computation of b β in Closed Form Proof. Given α > 0, [7] proved that for all k = 1, ⋯, s, the components of b β k satisfy the following system of r equations: Thus, for all k = 1, ⋯, s, we apply Theorem 3.5 of [12] to Equation (10) and get where is a block-defined square matrix of order r + 1, is the Schur complement of Δ α,k in M α,k (see for example [13] (p. 34) for a reminder on the use of Schur complement for inverting a block matrix). The proof is thus completed. Profile Likelihood Theorem 2. The profile likelihood function is defined, up to an additive constant independent of α, by where Proof. 
Expression (5) is equivalent to Journal of Applied Mathematics For all k = 1, ⋯, s, Equation (9) yields and the relationships (9) and (16) enable us to write After some manipulations on the first, second and fourth terms, we get: Removing the third and sixth terms and the constants (first and fifth terms), we get (14). Design of the PLBA where Proof. (i) On the one hand, function F is a one-to-one decreasing function (it is continuous and its derivative F ′ðuÞ is strictly negative for all u ≥ 0) and Since −x 1•• < 0 < x 2•• , Equation FðαÞ = 0 has a unique solution. (ii) On the other hand, the MLE b α, if it exists, is solution to the optimization problem The profile log-likelihood ℓ p ðαÞ being differentiable for every α > 0, the MLE b α is then solution to Equation and ∑ s k=1 ∑ r j=1 x •jk = x 1•• + x 2•• . Thus, Equation ℓ p ′ ðb αÞ = 0 is equivalent to Fðb αÞ = 0, and b α is the unique root of F. Equation FðuÞ = 0 seems fairly complicated to solve in closed form for any s > 1 and must therefore be solved numerically. Obviously, there are many root-finding algorithms (see for example [14] (Chapter 3) or [15] (Chapter 3)). Here, we propose a numerical approximation of b α using the following one-dimensional Newton-Raphson (NR) rootfinding algorithm: where the starting guess α ð0Þ should be chosen in ℝ + by the user. Our choice of NR algorithm is motivated by the fact that it converges quadratically to the solution if α ð0Þ is chosen near the unknown solution. To overcome the difficulty of the choice of α ð0Þ , we prove that, if we set α ð0Þ = 0 as an automatic starting guess, then, the convergence of NR iterations (23) is always guaranteed. 4 Journal of Applied Mathematics To prove Theorem 4, we need the following Lemma 5. Proof of Lemma 5. (i) For all u ≥ 0, we have Therefore, φ is differentiable and where for all u ∈ ½0,+∞½, So the sign of φ′ðuÞ is the same as the one of FðuÞ. As F is a decreasing function and Fðb αÞ = 0, we have It means that φ is increasing on ½0, b α and decreasing on ½b α, +∞½. (ii) Equation φðuÞ = u is equivalent to FðuÞ = 0, where F is defined by Formula (19). By Lemma 3, this equation has b α as unique solution; hence, b α is the unique fixed point of φ. We may now prove Theorem 4. Proof of Theorem 4. Assume that α ð0Þ = 0. Then, α ð0Þ ≤ b α and by item (iii) and (iv) of Lemma 5, α ð0Þ ≤ φðα ð0Þ Þ = α ð1Þ ≤ b α. It can be easily proved by induction that Thus, the sequence ðα ðmÞ Þ is increasing and bounded; hence, it converges to the unique fixed point of φ which is b α. Remark 6. Actually, from the proof of Theorem 4, it is clear that any starting value α ð0Þ ∈ ½0, b α will guarantee convergence of the sequence ðα ðmÞ Þ generated by NR iterations (23). However, since b α is unknown, it is difficult to find a value other than α ð0Þ = 0. We therefore propose an algorithm (see Algorithm 1) starting from α ð0Þ = 0. The MLE b α is computed using NR iterations (23); afterwards, b β is computed from b α. Ascent Property The ascent property of Algorithm 1 (the fact that the profile log-likelihood is increased monotonically by the algorithm) is given by Theorem 7. Theorem 7. The sequence ðα ðmÞ Þ generated by Algorithm 1 increases monotonically the profile log-likelihood ℓ p ðαÞ, that is Proof. From (22) and (29), we deduce that, for all α > 0, Hence, the 5 Journal of Applied Mathematics function ℓ p is increasing on the interval 0, b α and decreasing on ½b α, +∞½. From (30), we know that the sequence ðα ðmÞ Þ is increasing and still belongs to the interval 0, b α. 
Then, for all iteration m, we have α ðmÞ ≤ α ðm+1Þ and ℓ p ðα ðmÞ Þ ≤ ℓ p ðα ðm+1Þ Þ because ℓ p is increasing on 0, b α. The proof of Theorem 7 is then completed. Simulation Study We compare the performance of our PLBA with that of Newton-Raphson (NR) and MM algorithms in R software [16]. We implemented the NR algorithm using the R package pracma [17] and the MM algorithm using Theorem 3.3 of [8]. The choice of these two comparison algorithms is motivated by the following factors: (a) MM and NR algorithms are two of the most used algorithms in statistics for parameter estimation; (b) other algorithms such as quasi-Newton and derivative-free algorithms showed very low convergence proportions and their results are not reported here; (c) the results obtained for the particular case s = 1 in [10] suggest that MM and NR algorithms are much more efficient than quasi-Newton and derivative-free algorithms. For these different scenarios, the number of parameters (1 + sr) is given by Table 1. For the n k 's (k = 1, ⋯, s), we have chosen two common values: a low value (n = 50) and a great value (n = 5000). Except for the proposed algorithm whose starting guess is automated, the other algorithms were given randomly generated starting guesses. Results . Tables 2-7 present the average results obtained for the different scenarios over 1000 replications (i.e., 1000 repetitions of the data generation and computation of b θ). In these tables, CPU times are given in seconds and CPU time ratios are calculated as the ratio of the mean CPU time of a given algorithm to the CPU time of the PLBA. Thus, the CPU time ratio of the PLBA is always equal to 1. The Mean Square Error (MSE) is defined as Due to the increasing number of parameters, the MLE b θ has only been included for Scenario 1 ( Table 2). From these tables, it can be seen that PLBA and MM algorithms have always converged while the convergence proportion of NR algorithm decreases (from 55.9% to 19.6%) with the number of parameters (see Figure 1). As Journal of Applied Mathematics far as the MSE is concerned, the trend for all the algorithms is that the MSE decreases when the sample size n increases. By taking a look at the CPU times ratios, we notice that the CPU time ratios of the MM and NR algorithms are all well above 1. This means that the PLBA is significantly faster than these two algorithms. As shown on Figures 2 and 3, the CPU time ratios of MM and NR algorithms increase with the number of parameters. The PLBA is on average 8 to 50 times faster than MM and 6 to 74 times quicker than NR. Analysis of the Results. It appears that our proposed PLBA outperforms the Minorization-Maximization (MM) and the full Newton-Raphson (NR) algorithms. It always converges; it requires a low number of iterations and little computation time. The number of iterations varies slightly and seems to stabilize around six iterations whatever the starting guess and the number of parameters to be estimated. The MM algorithm also has a convergence rate of 100%, but its number of iterations is quite high, and this may indicate some sensitivity to the starting guess. The NR algorithm has Journal of Applied Mathematics a convergence proportion at most slightly higher than 50% and this proportion decreases when the number of parameters increases. This is not surprising because at each iteration of the NR algorithm, a square matrix of order 1 + sr is numerically inverted. 
This numerical inversion can be complicated or impossible when the matrix to be inverted is ill-conditioned or singular. The fact that NR does not converge for all the replications is also not surprising since it is well known that NR may converge to bad values or may not converge at all if the starting guess is far from the true parameter vector. The good results obtained by our proposed algorithm PLBA can be explained by several factors. First, the principle of profile likelihood enables to reduce the search for 1 + sr solutions to that of the single solution b α from which the remaining sr estimates b β jk (j = 1, ⋯, r, k = 1, ⋯, s) can be obtained by a simple formula. This also enables to reduce the computation time required by our proposed PLBA to converge. Secondly, the fact that the estimation of α relies on a one-dimensional NR enables our algorithm to enjoy the quadratic convergence of the NR algorithm. Thirdly, the definition of the automated starting guess ensures the convergence of our proposed algorithm PLBA, and the ascension property ensures its numerical stability. Conclusion In this article, we have built a profile-likelihood-based algorithm (PLBA) to compute, under constraints, the maximum likelihood estimate (MLE) of the parameter vector of a statistical model used in the analysis of a road safety measure applied to s sites presenting in total r accident severity levels. The parameter vector of the model is of the form θ = ðα, β T Þ T , where α is the parameter of interest and β is a vector of sr secondary parameters. Using the likelihood equations, we obtained the closed-form expression of the components of β as a function of the main parameter α and then used the principle of profile likelihood to express the log-likelihood only as a function of α. We then built an algorithm mixing a one-dimensional Newton method to compute the estimate of the parameter of interest α and the computation of the secondary parameters from their closed-form expressions. The starting guess of our proposed algorithm is automated in such a way as to guarantee its convergence towards the MLE b θ. The numerical studies suggest that our PLBA outperforms the Minorization-Maximization (MM) and the full Newton-Raphson (NR) algorithms in terms of computation time. Our PLBA converges to estimates close to the true values of the parameter vector even for small sample sizes. They also suggest that the problem addressed in this paper is difficult to tackle for the full NR algorithm (the latter having a convergence proportion at most slightly higher than 50%). Data Availability This study uses simulated data, and the data generation process is described in the paper. Conflicts of Interest The author declares that he/she has no conflicts of interest.
4,943.4
2022-10-26T00:00:00.000
[ "Computer Science", "Mathematics" ]
Giant Shear Displacement by Light-Induced Raman Force in Bilayer Graphene Coherent excitation of shear phonons in van der Waals layered materials is a non-destructive mechanism to fine-tune the electronic state of the system. We develop a diagrammatic theory for the displacive Raman force and apply it to the shear phonon's dynamics. We obtain a rectified Raman force density in bilayer graphene of the order of ${\cal F}\sim 10{\rm nN/nm^2}$ leading to a giant shear displacement $Q_0 \sim 50$pm for an intense infrared laser. We discuss both circular and linear displacive Raman forces. We show that the laser frequency and polarization can effectively tune $Q_0$ in different electronic doping, temperature, and scattering rates. We reveal that the finite $Q_0$ induces a Dirac crossing pair in the low-energy dispersion that photoemission spectroscopy can probe. Our finding provides a systematic pathway to simulate and analyze the coherent manipulation of staking order in the heterostructures of layered materials by laser irradiation. Introduction.-In van der Waals (vdW) layered materials, e.g. the family of graphene and transition metal dichalcogenides (TMDs), the stacking order of layers is crucial for the ground state properties. The properties of an AB stack bilayer graphene (A-sublattice on top of Bsublattice-see Fig. 1) are different from that of AA-stack one [1][2][3][4][5][6][7][8]. In trilayer graphene, two common structures are with ABA and ABC stacking with and without center of symmetry, respectively [9][10][11][12][13][14][15]. In twisted 2D materials with an asymmetric layer rotation, e.g., twisted bilayer and trilayer graphene [4,[16][17][18][19][20], the ground state strongly depends on the twist owing to the flat-band formation at magic twist angles [4]. The relative lateral layer shift in a small twist-angle incommensurate bilayers, with a large moiré lattice constant, can be gauged away by a unitary transformation [4]. However, its impact is significant for a large twist-angle. A rigid relative displacement along armchair direction by one C-C bond length a 0 ∼ 0.14nm or a 60 • relative layer twist can switch between AA and AB prototypes. The relative lateral displacement of top and bottom layers versus the middle one in twisted trilayer graphene can drastically change the density of states and superconducting critical temperature [15,21,22]. Although there is a surge of interest in twisted multilayers, the impact of the relative lateral shift in vdW layered materials is less highlighted in the literature. This work aims to fill this gap. In this paper, we study coherent shear phonon dynamics employing a diagrammatic framework in vdW layered materials. Collective macroscopic oscillation of atoms in a crystalline solid, i.e. coherent phonons, facilitates a non-destructive control of physical properties by irradiating ultrashort laser pulses and employing transient optical spectroscopy [39][40][41][42][43][44][45][46][47][48][49][50]. We provide a theory for the displacive Raman force and implement it to excite coherent shear phonon in bilayer graphene (BLG). The excitation of coherent shear modes can change the electronic structure and trigger a structural transition to another quasi-equilibrium state. We obtain highly efficient tunability of Raman force by altering electronic doping and temperature as well as laser frequency, polarization , and power. 
The harmonic equation of motion of the coherent phonon displacement Q is ρ(∂_t² + Ω_0²)Q(t) = F(t) [Eq. (1)], where ρ stands for the mass density of the 2D material and F(t) is the light-induced force density. For an ultrashort δ-function pulse, the Raman force density can have impulsive F_I(t) ∼ Fδ(t) and displacive F_D(t) ∼ FΘ(t) character, where Θ(t) = ∫_{−∞}^{t} dt′ δ(t′) is the Heaviside step function [44,[48][49][50]. The impulsive force leads to the coherent vibration of ions around their equilibrium positions, while under a displacive force the ions shift away from the equilibrium positions to a new local equilibrium and then vibrate around the new equilibrium positions (see Fig. 1a). The real and imaginary parts of the Raman susceptibility contribute to the impulsive and displacive forces, respectively [48][49][50]. For a laser frequency larger than the optical transition edge, the imaginary part is finite and thus the displacive Raman force is non-zero [50]. In what follows, we evaluate the displacive Raman force and demonstrate its relevance for the light-induced shear displacement in BLG. Model.-The dipole moment of a Raman-active phonon, µ_a = α_ab E_b, is linearly proportional to the light electric field E_b, where the polarisability tensor α_ab depends on the phonon displacement vector Q. The electromagnetic potential energy then follows U = −µ_a E_a = −α_ab E_a E_b. The corresponding Raman force thus reads F_a = −∂U/∂Q_a = (∂α_bc/∂Q_a) E_b E_c [32,33]. We adopt the Einstein convention for summation over repeated indices. Although the lowest-order Raman effect is a second-order nonlinear optical process, it does not require inversion symmetry breaking. This is because the Raman-active mode in a centrosymmetric system is even under parity [33]. The displacive (rectified) force will displace the ions to a new equilibrium position Q_0 ≈ F_D/(ρΩ_0²). The ions then vibrate around the new equilibrium with the phonon frequency Ω_0, see Fig. 1. Apparently, the rigid displacement is more pronounced for soft shear phonons, whose frequency is shallow relative to other energy scales such as temperature and the electronic chemical potential. The displacive force is governed by the rectification process F^D_a = ∫dt F_a(t), which is determined by the Raman response function χ^R_abc evaluated at the light frequency ω; χ^R_abc is the correlation function of the electron-phonon and light-matter couplings. Having the in-plane displacement Q^(ℓ)(r) of the two layers ℓ = 1, 2, the shear phonon displacement is the antisymmetric component Q ∝ Q^(1) − Q^(2); spatial inversion exchanges the layers, P{Q^(1), Q^(2)}P^{−1} = {−Q^(2), −Q^(1)}, which leads to PQP^{−1} = Q. Therefore, it is a Raman-active and IR-inactive phonon. The second-quantised form of the shear phonon displacement is written in terms of the phonon operators b̂_λ,q, where S stands for the area of the 2D material. The leading Hamiltonian of a finite-q phonon b̂_λ,q interacting with the electron spinor fields Ψ̂_k contains the electron-phonon coupling ĝ_λ(k, q), given in terms of the matrix element M̂_λ. [FIG. 1. Schematic of the coherent shear phonon in bilayer graphene. (a) Impulsive (blue curve) and displacive (red curve) coherent phonon. (b) Shear phonon mode in bilayer graphene, with the fundamental hopping mechanisms denoted by γ_i; the intralayer carbon-carbon bond length is a_0 ≈ 0.14 nm and the interlayer distance is c ≈ 0.34 nm.] We arrive at the classical equation of motion for the coherent phonon Q_λ,q(t) = ⟨Q̂_λ,q⟩(t). The coherent phonon equation of motion follows Eq. (1) for q = 0. 
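The distinction between impulsive and displacive driving in Eq. (1) can be made concrete with the textbook solutions of the undamped driven oscillator. The short sketch below uses arbitrary illustrative units rather than parameters from the paper: under a step force the oscillation centre shifts to Q_0 = F/(ρΩ_0²), while under an impulse the ions oscillate about the original equilibrium.

```python
import numpy as np

# Illustrative (assumed) parameters in arbitrary units, not values from the paper.
rho, omega0, F = 1.0, 1.0, 1.0
t = np.linspace(0.0, 20.0, 2001)

Q0 = F / (rho * omega0**2)                                 # new equilibrium under a step force
Q_displacive = Q0 * (1.0 - np.cos(omega0 * t))             # response to F * Theta(t)
Q_impulsive = (F / (rho * omega0)) * np.sin(omega0 * t)    # response to F * delta(t)

print(f"time-averaged displacement (displacive): {Q_displacive.mean():.3f} ~ Q0 = {Q0:.3f}")
print(f"time-averaged displacement (impulsive):  {Q_impulsive.mean():.3f} ~ 0")
```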
The Raman force density F_λ,q is given as the expectation value of the electron-phonon coupling. This force is related to the excitation density, see also Ref. [47], where the light-induced electron density generates a force acting on the ions. For normal incidence of light, only the q = 0 phonon is a Raman-active mode. The light-matter coupling is incorporated by the minimal-coupling transformation k → k + eA(t)/ℏ, using a homogeneous dynamical vector potential A(t). The corresponding electric field reads E(t) = −∂_t A(t). The light-matter interaction Hamiltonian consists of two parts, photon-electron terms and photon-electron-phonon terms, where ĵ_a(k) is called the paramagnetic current operator and γ̂_ab(k) is known as the diamagnetic current operator as well as the Raman vertex in the effective mass approximation [51]. The photon-electron-phonon interaction couplings are parametrised by Θ̂_ab(k) and Δ̂_abc(k). The photon-electron-phonon couplings originate from the minimal-coupling transformation in the electron-phonon matrix element M̂_λ(k + eA(t)/ℏ, q), which is then expanded up to second order in the light field. We follow standard many-body perturbation theory and utilize a diagrammatic framework [52][53][54][55]. Accordingly, the Raman force response function χ^R_abc consists of four diagrams, as shown in Fig. 2. Having defined the key aspects of the model, we calculate the displacive Raman force and the resulting shear displacement in bilayer graphene. Shear phonon in bilayer graphene.-Bilayer graphene consists of two single layers of graphene offset from each other in the xy-plane. The low-energy quasiparticles in BLG follow a two-band Hamiltonian Ĥ_p around the corners of the hexagonal Brillouin zone [56], where p = ℏk is the momentum vector, τ = ± stands for the two valley points K and K′, the identity matrix Î and Pauli matrices σ̂_x,y are in the layer pseudospin basis, and µ is the chemical potential. The x-direction indicates the zigzag orientation of the hexagonal crystal in our convention [57]. The effective mass is given by 1/2m ≈ v²/γ_1 with v = 3γ_0a_0/(2ℏ) ∼ 10⁶ m/s. Note that the γ_i are hopping energies in the lattice model illustrated in Fig. 1b. We neglect trigonal warping and effective-mass asymmetry in the energy dispersion of the chiral fermions in BLG. In pristine BLG, two degenerate shear modes correspond to the sliding motion along the two Cartesian directions. In order to evaluate the Raman force, we consider the couplings of electrons to one and two photons, given by ĵ_α = −e∂_{p_α}Ĥ_p and γ̂_αβ = −e²∂_{p_α}∂_{p_β}Ĥ_p, respectively. In the low-energy model, the couplings of electrons to shear phonons are obtained using a four-band tight-binding model, following the approach given in Refs. [58][59][60]; see the Supplemental Material for a detailed discussion of the electron coupling to shear phonons in BLG. After neglecting the electron momentum p, we obtain the leading electron-phonon matrix element. We set the Grüneisen parameter β_3 ∼ 2. The vertical hopping derivative ∂γ_1/∂c does not contribute to the leading-order electron-phonon interaction. The photon-electron-phonon coupling is likewise obtained after neglecting the electron momentum p. Results and discussion.-In analogy to the linear and circular photogalvanic currents [61], we decompose the displacive Raman force into linear and circular components. 
The displacive Raman force densities are thus formally given in terms of Raman response functions (see Supplemental Material). For an electric field polarization in the xy-plane, these are the linear displacive Raman (LDR) and circular displacive Raman (CDR) response functions. For the circular case, we consider a generic elliptical polarization of the incident laser field, E(t) = E_0{cos(ϑ)x̂ ± i sin(ϑ)ŷ}e^{−iωt}. An elliptical polarization contains both linear and circular counterparts: [iE(ω) × E*(ω)]_z = ±sin(2ϑ) and Re[E_b(ω)E*_c(ω)] = δ_bc(δ_bx cos²ϑ + δ_by sin²ϑ). Considering the inversion and rotational symmetries of the low-energy model, we find the non-vanishing tensor elements Λ_yyy = Λ_xxy = Λ_xyx = −Λ_yxx. Accordingly, the symmetry implies that Λ_axy is either zero (a = y) or real valued (a = x), leading to a vanishing circular displacive Raman force in BLG: F_CDR = 0. In order to have a finite CDR force, we need to break the rotational symmetry, for instance by applying a uniaxial strain. The LDR force contribution owing to an elliptically polarized incident laser vanishes for circularly polarized light, ϑ = ±π/4. Note that F_0 = N_f M(eE_0)²/(8πµ²) with N_f = 4 for the spin-valley degeneracy. For µ = 0.2 eV and E_0 = 1 V/nm, we obtain the force density unit F_0 ≈ 0.8 nN/nm². For linearly polarized incident light, E(t) = E_0{cos(θ)x̂ + sin(θ)ŷ}e^{−iωt}, the Raman force follows the polarization angle as F^D ∝ (sin(2θ), −cos(2θ)); as seen from this relation, the force vector is not necessarily parallel to the driving electric field. In Fig. 3a, the orientation of the Raman force is compared to the incident laser linear polarization. The frequency dependence of the Raman force is captured by the dimensionless function Λ(ω̃_1, ω̃_2). There are topologically distinct contributions to the Raman force, which are illustrated diagrammatically in Fig. 2. Using the equilibrium Green's function method, we analytically calculate the Raman response functions at zero electronic temperature, T_e = 0 (see Supplemental Material). Note that ω̃_j = (ℏω_j + iΓ_e)/|µ|, where Γ_e stands for the phenomenological scattering rate of the electrons. The result stems from the triangular diagram and a bubble diagram, as shown in Fig. 2a and Fig. 2b, respectively. The contribution from the Feynman diagram depicted in Fig. 2c vanishes in our low-energy model analysis. The last diagram, shown in Fig. 2d, is frequency independent, and its value is fixed by enforcing gauge invariance: the response to a static homogeneous gauge potential must vanish, i.e. χ^R_abc(0, 0) = 0. For the displacive force, we set ω_1 = −ω_2 = ω, and therefore the force scales as F ∼ 1/Γ_e. In Fig. 3b, we illustrate the magnitude of the displacive Raman force versus frequency ℏω/|µ| for different values of the phenomenological scattering rate Γ_e. As expected, the displacive force is finite in the interband regime, when ℏω > 2|µ|, and at large frequency the force density scales as F_D = M(eE_0)²/(πℏωΓ_e). Since the displacive force and interband optical absorption occur coincidentally, the effect of the electronic temperature cannot be neglected. For an intense incident laser, the photo-excited electrons in a metal can reach a quasi-equilibrium state with a very high electronic temperature (e.g. T_e ∼ 1000 K) [62]. The results at finite electronic temperature T_e are depicted in Fig. 3c,d, which show a non-zero displacive force even in the intraband regime, with a robust Drude-like tail emerging at low frequency. 
Although the electronic temperature depends on the optical absorption, we model it as an independent parameter to evaluate the leading-order impact of hot electrons. Using Q_0 ≈ F_D/(ρΩ_0²) and ρ = cρ_gr for the mass density of BLG, with ρ_gr ≈ 2.267 g/cm³ being the three-dimensional graphite density, we estimate the strength of the displacive shear displacement as depicted in Fig. 3d. This displacement is robust and tuneable by altering the Fermi energy, the incident laser frequency, and the laser intensity. We find giant values for Q_0 in our leading-order theory considering realistic experimental values for the laser intensity and carrier doping. The saturation value of the shear phonon displacement was measured to be around Q_max ∼ 8 pm in layered WTe_2 using an intense infrared laser with electric field strength E_0 ∼ 7.5 MV/cm [34] and E_0 ∼ 10 MV/cm [38]. Although the earlier experiment [34] does not support a Raman mechanism, the latter [38] measures a linear power dependence of the shear displacement consistent with the Raman force F_D ∝ E_0². Such a linear power dependence is also reported in Ref. [35]. For a realistic doping µ = 200 meV, electronic temperature T_e = 1000 K, and scattering rate Γ_e = 20 meV in bilayer graphene, we find Q_0 ∼ 0.13a_0 ∼ 12 pm for E_0 ∼ 5 MV/cm and ω = 25 THz, which can grow up to Q_0 ∼ 50 pm for E_0 ∼ 10 MV/cm. This value is clearly immense, and it suggests the need for higher-order corrections in the electric field and for phonon anharmonicity, leading to the saturation of the displacement. The rigid shear displacement (frozen shear phonon) induces a perturbation to the electronic Hamiltonian according to Eq. (3), which, for instance at the K-point (τ = +), reads δĤ = M(Q_{0,x}σ̂_y + Q_{0,y}σ̂_x). As a result of this frozen shear phonon, the band touching point at p = 0 splits by ∆ = 2MQ_0, and a Dirac crossing pair forms at p_0 = ±p_0(cos φ_0, sin φ_0) in Cartesian coordinates (see Fig. 3e). For a linearly polarized incident laser, the rectified shear displacement is Q_0 = Q_0(sin(2θ), −cos(2θ)), with θ being the incident laser polarization angle. Therefore, we find φ_0 = −θ, which implies that the new Dirac crossing points are aligned with the incident laser polarization. The energy splitting ∆ and the position of the crossing point p_0 depend on the rigid shear displacement, as illustrated in Fig. 3f. In principle, this change in the dispersion is large enough to be measured experimentally using angle-resolved photoemission spectroscopy (ARPES). The proposed mechanism of the displacive Raman force can strongly impact the Raman-active shear phonon dynamics in twisted systems, particularly in trilayer graphene. The theory can be generalized to investigate the displacive coherent dynamics of other collective modes, such as magnons [63] and the superconducting Higgs mode [55], driven by a rectified light-induced force field. In future studies, we will develop a higher-order Raman force mechanism to manipulate chiral valley phonons [64,65] in hexagonal 2D materials. In a separate study, we will discuss the saturation of the rigid shear displacement using a higher-order Raman force scheme.
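The splitting ∆ = 2MQ_0 and the emergence of a Dirac crossing pair can be checked numerically. The sketch below assumes the standard quadratic two-band model of AB bilayer graphene near the K point (the paper's explicit Hamiltonian is not reproduced in this excerpt), and the effective mass m and deformation coupling M used here are purely illustrative placeholders.

```python
import numpy as np

# Illustrative check of the frozen-shear-phonon effect on a two-band model.
eV = 1.602176634e-19            # J
m_e = 9.1093837e-31             # kg
m = 0.033 * m_e                 # assumed BLG effective mass (illustrative)
M = 1.0 * eV / 1e-10            # assumed electron-shear coupling, energy per length (illustrative)
Q0 = 12e-12                     # frozen shear displacement of 12 pm (value quoted in the text)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H(px, py, Qx, Qy):
    """Quadratic two-band kinetic term plus the shear perturbation M(Qx*sy + Qy*sx)."""
    kinetic = -((px**2 - py**2) * sx + 2.0 * px * py * sy) / (2.0 * m)
    return kinetic + M * (Qx * sy + Qy * sx)

# The former band-touching point at p = 0 splits by Delta = 2*M*Q0.
E0 = np.linalg.eigvalsh(H(0.0, 0.0, 0.0, Q0))
print("splitting at p=0:", (E0[1] - E0[0]) / eV, "eV; expected", 2 * M * Q0 / eV, "eV")

# For Q0 along y, a Dirac crossing pair re-forms at p = +/- sqrt(2*m*M*Q0) along x.
p0 = np.sqrt(2.0 * m * M * Q0)
Ec = np.linalg.eigvalsh(H(p0, 0.0, 0.0, Q0))
print("residual gap at the predicted crossing:", (Ec[1] - Ec[0]) / eV, "eV")
```

Under these assumptions the crossings sit along the direction set by Q_0, consistent with the statement above that the new Dirac points track the incident laser polarization.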
3,915.8
2022-04-17T00:00:00.000
[ "Physics" ]
Measuring human capital: a systematic analysis of 195 countries and territories, 1990–2016 Background Human capital is recognised as the level of education and health in a population and is considered an important determinant of economic growth. The World Bank has called for measurement and annual reporting of human capital to track and motivate investments in health and education and enhance productivity. We aim to provide a new comprehensive measure of human capital across countries globally. Methods We generated a period measure of expected human capital, defined for each birth cohort as the expected years lived from age 20 to 64 years and adjusted for educational attainment, learning or education quality, and functional health status using rates specific to each time period, age, and sex for 195 countries from 1990 to 2016. We estimated educational attainment using 2522 censuses and household surveys; we based learning estimates on 1894 tests among school-aged children; and we based functional health status on the prevalence of seven health conditions, which were taken from the Global Burden of Diseases, Injuries, and Risk Factors Study 2016 (GBD 2016). Mortality rates specific to location, age, and sex were also taken from GBD 2016. Findings In 2016, Finland had the highest level of expected human capital of 28·4 health, education, and learning-adjusted expected years lived between age 20 and 64 years (95% uncertainty interval 27·5–29·2); Niger had the lowest expected human capital of less than 1·6 years (0·98–2·6). In 2016, 44 countries had already achieved more than 20 years of expected human capital; 68 countries had expected human capital of less than 10 years. Of 195 countries, the ten most populous countries in 2016 for expected human capital were ranked: China at 44, India at 158, USA at 27, Indonesia at 131, Brazil at 71, Pakistan at 164, Nigeria at 171, Bangladesh at 161, Russia at 49, and Mexico at 104. Assessment of change in expected human capital from 1990 to 2016 shows marked variation from less than 2 years of progress in 18 countries to more than 5 years of progress in 35 countries. Larger improvements in expected human capital appear to be associated with faster economic growth. The top quartile of countries in terms of absolute change in human capital from 1990 to 2016 had a median annualised growth in gross domestic product of 2·60% (IQR 1·85–3·69) compared with 1·45% (0·18–2·19) for countries in the bottom quartile. Interpretation Countries vary widely in the rate of human capital formation. Monitoring the production of human capital can facilitate a mechanism to hold governments and donors accountable for investments in health and education. Funding Institute for Health Metrics and Evaluation. Introduction Human capital refers to the attributes of a population that, along with physical capital such as buildings, equip ment, and other tangible assets, contribute to economic productivity. 1 Human capital is characterised as the aggregate levels of education, training, skills, and health in a population, 2 affecting the rate at which technologies can be developed, adopted, and employed to increase productivity. 3 The World Bank has brought new attention to this topic through its recently introduced Human Capital Project, 4 which aims to "understand the link between investing in people and economic growth, and to accelerate financing for human capital invest ments." 
A basic input needed for this aim to be fulfilled is an internationally comparable index of human capital, which currently does not exist. This study seeks to fill this global measurement gap. 3 Although evidence supports human capital as a driver of growth, the World Bank has argued that invest ments in human capital are too low in lowincome and middleincome countries. 5 Much of the World Bank's investments focus on physical rather than human capital. 5 Only 1·5% of the World Bank International Development Association concessional grants are for health and 1·9% are for education. 6 As countries graduate to borrowing from the nonconcessional International Bank for Reconstruction and Development framework, the shares for health increase to 4·2% and to 5·2% for education. 6 A focus on building physical assets might also be driven by time horizons; such projects can yield returns sooner than investing in children's health and education, and the political process in many nations might reward shortrun returns. 6 Despite the inclusive scope of the theory of human capital, much of the initial research has focused on the average number of years of completed schooling, 7 found to be associated with subsequent economic growth, 8,9 although the association is not consistent. 3 Research that uses the distribution of education has found that it might explain more variation in economic growth than a simple average. 10 In the past 5-10 years, analyses of around 50 countries 11-13 that further take into account the quality of education or learning, with the use of performance on international student assessments, find this measure is even more predictive of economic growth. Efforts to expand the measurement of human capital to also encompass functional health have been far fewer 14,15 but suggest that health could also be important for under standing economic growth. 3 Underinvestment in people might also be driven by a paucity of data; presently, regular and comparable reports on the rates of formation of human capital across all countries do not exist. 5 Monitoring the expected forma tion of human capital in the next generation, as a measure of the effect of nearterm investments in health and education, could facilitate a mechanism to hold countries and donors accountable to their populations for these investments. 5 Building on past efforts, we have produced a measure of human capital that incorporates educational attainment, education quality or learning, functional health, and survival for 195 countries, by age and sex, from 1990 to 2016. For each country, we estimated the expected years of human capital, defined for each birth cohort as the expected years lived from 20 to 64 years of age and adjusted for educational attainment, learning, and functional health, if exposed to periodspecific, age specific, and sexspecific rates of mortality, educational attainment, learning, and functional health status. Overview We did a systematic analysis of available data for 195 countries from 1990 to 2016 to measure educational attainment, by sex and 5year age groups (from 5 to 64 years) for the inschool and workingage population, and learning, as measured by performance on stand ardised tests of mathematics, reading, and science by 5year age groups (from 5 to 19 years) for schoolaged children. 
We constructed a measure of functional health status using the prevalence, by 5year age groups, of seven health conditions for which evidence suggests a link to economic productivity using estimates from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2016. 16 We also used mortality rates specific to location, age, sex, and year from GBD 2016. 17 Research in context Evidence before this study Previous studies have examined the association between a range of dimensions of human capital and economic growth. These studies have shown that the average number of years of completed schooling is associated with subsequent economic growth and that incorporation of measures of the distribution of education might explain more of this variation. More recent analyses from the past 5-10 years that use performance on international student assessments as a measure of educational quality or learning find it to be a more predictive measure of economic growth than attainment alone. Far fewer efforts have been made to expand the measurement of human capital so that it also encompasses health; however, these studies suggest that an expanded measurement might also be important for understanding economic growth. Despite the accumulated evidence of the associations between the core dimensions of human capital-education and health-and economic growth, no comprehensive measure presently exists for all countries globally. Added value of this study This study provides a new measure of expected human capital for 195 countries, consisting of four components: educational attainment, learning, health, and survival, based on a systematic analysis of all available data. This measure, in units of health, education, and learning-adjusted expected years lived between age 20 and 64 years, is estimated each year from 1990 to 2016 and can be updated annually. Compared with existing metrics of human capital, this more comprehensive measure provides a detailed characterisation of these differences across countries and over time, revealing marked variations in expected human capital for children born in different countries and differential progress in the improvement of expected human capital over the past 25 years. An inconsistent gender differential exists-for countries below approximately 10 years of expected human capital, this tends to be higher in males; for countries above this level, it is higher in females. Implications of all the available evidence Human capital is an important factor in economic development that requires improved metrics and regular monitoring. The systematic analysis of data on four componentseducational attainment, learning, health, and survivalestablishes the feasibility of an annual measurement of expected human capital, providing a means to monitor and assess investments in health and education. This more comprehensive measure of human capital has revealed variability across countries in building human capital that is independent of baseline levels of health and education, suggesting that building human capital is amenable to policy intervention. Using these four dimensions-educational attainment, learning, functional health, and survival-we constructed an indicator of expected human capital that is sensitive to recent investments in health and education. 
Expected human capital is defined as the expected years lived from age 20 to 64 years and adjusted for educational attainment, learning, and functional health, measured in units of health, education, and learningadjusted expected years lived between age 20 and 64 years. Expected human capital is calculated by exposing a hypothetical birth cohort to educational attainment, learning, functional health status, and mortality rates specific to time period, age, and sex. The measure is analogous to healthadjusted life expectancy. Expected human capital was calculated as follows: where nL xt is the expected years lived in an age group x, for year t, in which age groups are defined as birth-6 days, 7-27 days, 28 days-1 year, 1-4 years, and 5year age groups thereafter; FH xt is the functional health status in an age group x, in year t, transformed to a 0-1 scale; l 0 is the starting birth cohort; Edu xt is the years of education attained during an age group x, for year t; and Learn xt is the average standardised test score in an age group x, for year t, transformed to a 0-1 scale. In other words, for a birth cohort born, for example, in the year 2000, we exposed the birth cohort to age and sex specific mortality rates for the year 2000 from birth to 64 years. For each 5year group from 20 to 64 years, we adjusted years lived by the cohort in each interval for age specific and sexspecific functional health status and calculated the number of adjusted years lived from 20 to 64 years. From 5 to 24 years, we computed the expected number of learningadjusted years of education by exposing the cohort to agespecific and sexspecific educational attainment rates adjusted for learning estimated for the year 2000. We summed and divided these estimates by the maximum possible learning adjusted years of education; we used 18 years, which is the commonly used maximum for educational attainment data. 18 We used the subsequent ratio to adjust the health adjusted years lived from 20 to 64 years to produce the measure of expected human capital. We did a sensitivity analysis (appendix) in which we took the mean instead of the product of learningadjusted educational attainment and functional health when computing expected human capital. Educational attainment Estimates of average years of education were based on a compilation of 2522 censuses and household surveys. These data and the methods hereafter build on an approach used to produce a previously published dataset of international educational attainment. 19 All data were topcoded to 18 years of education based on the practices of a common data provider. 18 Each data source included information on the distribution of educational attainment by country, year, sex, and 5year or 10year age group. When years of schooling data were available only for multiyear bins-eg, the fraction of the population with between 6 and 9 years of completed education-we used a database of 1792 sources reporting single years of completed schooling to split these binned data into singleyear distributions from 0 to 18 years on the basis of the average of the 12 closest distributions in terms of geographical proximity and year. From each of the subsequent data sources, we calculated the mean years of schooling by age and sex. In the next step, we used age-cohort imputation to project observed cohorts through time, exploiting the relative constancy of education levels after 25 years of age. 
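The verbal description of the index lends itself to a small worked sketch. The routine below is a simplified reading of that description rather than the authors' published formula (which is not reproduced in this excerpt): health-adjusted person-years lived between ages 20 and 64 are scaled by the ratio of survival-weighted, learning-adjusted educational attainment to the 18-year maximum, and all input values are hypothetical.

```python
def expected_human_capital(nLx, FH, surv, Edu, Learn, l0=1.0, edu_max=18.0):
    """Simplified sketch of the expected human capital index.

    nLx   : person-years lived by the synthetic cohort in each 5-year group, 20-64 (nL_x)
    FH    : functional health status on a 0-1 scale for the same groups (FH_x)
    surv  : probability of surviving to each education age group, ages 5-24
    Edu   : years of education attained within each age group, ages 5-24 (Edu_x)
    Learn : standardised test score on a 0-1 scale for the same groups (Learn_x)
    """
    # Health-adjusted expected years lived between ages 20 and 64.
    health_adjusted_years = sum(nLx[a] * FH[a] for a in FH) / l0
    # Survival-weighted, learning-adjusted expected years of education (ages 5-24),
    # expressed as a share of the 18-year maximum.
    learning_adjusted_edu = sum(surv[a] * Edu[a] * Learn[a] for a in Edu)
    return health_adjusted_years * min(learning_adjusted_edu / edu_max, 1.0)

# Hypothetical inputs for a high-performing country.
adult = [f"{a}-{a + 4}" for a in range(20, 65, 5)]
school = ["5-9", "10-14", "15-19", "20-24"]
nLx = {a: 4.9 for a in adult}                          # near-complete survival per interval
FH = {a: 0.95 for a in adult}                          # high functional health
surv = {a: 0.99 for a in school}
Edu = {"5-9": 3, "10-14": 5, "15-19": 5, "20-24": 3}   # 16 attained years overall
Learn = {a: 0.75 for a in school}

print(round(expected_human_capital(nLx, FH, surv, Edu, Learn), 1), "expected years")
```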
For any datapoint representing a cohort aged 25 years or older, we extrapolated the data forward and backward so that it was represented in all year-age combinations for that cohort. For example, a datapoint reflecting a cohort aged 35-39 years in 2000 was projected forward for people aged 40-44 years in 2005, aged 45-49 years in 2010, and so on. It was also projected backward for people aged 30-34 years in 1995 and people aged 25-29 years in 1990. After imputation, we fitted age-period models on all original input data and the imputed cohort data to estimate a complete singleyear series of educational attainment from 1950 through to 2016 by age, sex, and location. We separately calculated for each sex and GBD region the mean level of educational attainment of the country, age, sex, and year specific population (Edu c,a,s,t ), which was estimated as: where Edu maxa is the maximum mean educational attain ment for each age group, defined as three for ages 5-9 years, eight for ages 10-14 years, 13 for ages 15-19 years, and 18 for all age groups 20-24 years and older; β s,r is a sexspecific and regionspecific intercept; δ s,r captures the linear secular trend for each sex and region; I s,r is a natural spline on age to capture the nonlinear age pattern by sex and region, with knots at 15 and 25 years; and α c,s is a countrysexspecific random intercept. Finally, we used Gaussian process regression (GPR) to smooth the residuals from the age-period model, accounting for uncertainty in each datapoint. GPR also synthesises both data and model uncertainty to estimate uncertainty intervals. Learning Our estimates of learning or education quality are based on a systematic analysis of student testing data from major international assessments and national continuing 22 and several tests from the International Association for the Evaluation of Educational Achievement. 23,24,25 In addition to these programmes, we also used regional testing programmes, including the Southern and Eastern Africa Consortium for Monitoring Educational Quality, 26 the Latin American Laboratory for Assess ment of the Quality of Education, 27 and the Programme d'Analyse des Systèmes Educatifs de la Confem; 28 national standardised testing programmes, such as the US National Assessment of Education Progress, 29 and the India National Achievement Survey; 30 and repre sentative studies measuring in telligence quotient (IQ) in schoolaged children that largely included the Wechsler Intelligence Scale for Children, 31 the Raven's Standard Progressive Matrices, 32 and the Peabody Picture Vocabu lary test. 33 This database provides the most extensive geographical distribution and compilation of longterm temporal trends to date. Unlike several other studies, 34-38 which used similar data, we kept scores in different school subjects (ie, mathematics, reading, and science) separate. We also maintained data on the year the tests were done to understand trends through time and included demographic information such as grade level (for implied age) and sex. To generate comparable measures from these different tests, we rescaled subjectspecific test scores to a common reference test scale using linear regression, building on previous approaches. 39 We used TIMSS mathematics and science tests and PIRLS reading tests as the reference scale because they are large, international tests that cover most geographical regions and all three major testing subjects, and are already standardised to each other. 
40 We implemented the rescale using all available data matched by country and approximate year for the reference tests and alternative tests. To estimate test scores for all countries, years, and ages (5year age groups from 5 to 19 years), we used spatiotemporal Gaussian process regression 41 using per capita mean years of education as a predictor (β=5·7, p=0 for boys; β=5·8, p=0 for girls), and with maths, science and reading test scores given equal weight to generate a combined learning measure ranging from 0 to 1000. This method draws strength across space, time, and age, incorporates both data and model uncertainty, and produces a fulltime series of estimates for all geographies with the use of covariate relationships and spatial and temporal patterns in residuals. Finally, we rescaled this measure to a 0-1 scale, with 1 set to one SD above the mean score (a score of 600) on the original TIMSS exam, 40 approximately the highest estimated average test score in any country. Functional health status For functional health status relevant to economic productivity, we used the prevalence of seven diseases and impairments identified in policy trials or observational studies to be related to learning or productivity (appendix). These include wasting, measured as the proportion of the population younger than 5 years below two SDs of the reference mean weight for height; 42 stunting, measured as the proportion of the population younger than 5 years below two SDs of the reference height for age; 42 anaemia, measured as the proportion of each age-sex group with a haemoglobin concentration defined by WHO as mild, moderate, or severe anaemia; 43 cognitive impairment, measured as the proportion of the population with moderate, severe, or profound develop mental delay; 44 vision loss, defined as the proportion of the population with moderate or severe vision impairment or blindness; 45 hearing loss, defined by WHO as the proportion of the population with hearing loss greater than 40 dB in the betterhearing ear (30 dB in children); 46 and infectious disease prevalence, with the use of three infectious disease aggregations from GBD 2016 classifi cation, which includes HIV/AIDS, tuberculosis, malaria, neglected tropical diseases, diarrhoea, and several other common infectious diseases. 47 We combined these seven functional health status outcomes into a single measure using principal components analysis (PCA). In the first step, we used countryspecific prevalence rates of anaemia, vision loss, hearing loss, intellectual disability, and years lived with disability per capita from infectious disease for 5year age groups (20-64 years), from 1990 to 2016. Because stunting and wasting are measured only in children younger than 5 years, we used the timeperiod measure of prevalence in children for these two conditions. We rescaled each of the seven conditions such that 0 represented the first percentile and 1 represented the 99th percentile observed across all age, sex, and country groups. We then applied PCA on the agestandardised value of the rescaled health conditions for the ages 20-64 years. Following standard practice, we selected the first n components of the PCA such that the sum of the variance explained by the components was greater than 80%. 48 In this case, the first component explained more than 85% of the variance. We determined weights for each condition by taking the average loading across factors, weighted by the explained variance. We rescaled this vector of weights so that it was equal to one. 
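A compact sketch of the PCA-based condition weighting just described is given below, run on a hypothetical prevalence matrix. Taking absolute loadings (since the sign of a principal component is arbitrary) and the synthetic input data are assumptions of this sketch, not choices documented in the text.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical matrix: rows = country-age-sex-year observations,
# columns = the seven rescaled (0-1) health conditions.
X = rng.random((500, 7))

# Keep the leading components until more than 80% of the variance is explained.
pca = PCA().fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cum_var, 0.80) + 1)

# Condition weights: loadings averaged across the retained components,
# weighted by each component's explained variance, then normalised to sum to one.
loadings = np.abs(pca.components_[:n_keep])          # shape (n_keep, 7)
var_w = pca.explained_variance_ratio_[:n_keep]
weights = (loadings * var_w[:, None]).sum(axis=0) / var_w.sum()
weights = weights / weights.sum()

# Composite functional-health score for each observation.
health_score = X @ weights
print(weights.round(3), health_score[:3].round(3))
```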
We then calculated the health component score for each observation specific to age, sex, country, and year by applying these PCAgenerated weights to the seven component prevalence values. Survival We estimated expected years lived between ages 20 and 64 years using sexspecific and agespecific mortality rates by country and year, produced for GBD 2016. This estimation procedure used a wide range of sources including, but not limited to, adjusted data from vital and sample registration systems and birth histories and sibling survival data collected in household surveys to populate abridged life tables and compute expected years lived by 5year age groups. These methods are described in detail in a previous publication. 17 Uncertainty analysis We estimated uncertainty in the measure of expected human capital by computing 1000 estimates of expected human capital using 1000 draws from the posterior distribution of each of the four components (educational attainment, learning, functional health status, and survival). The posterior distribution of each of the four com ponents reflects both the variance of the input data and predictors used in the estimation model of each component. Associations between expected human capital and gross domestic product We examined the association between GDP per capita and expected human capital in two ways, using GDP per capita data from a recently published health financing dataset. 49 First, we plotted the crosssectional association between GDP per capita and expected human capital, by country, in 1990 and 2016, using GDP per capita in both log and level space. Second, for countries in each quartile of expected human capital in 1990 and 2016, we computed the median and IQR of GDP per capita in 1990 and 2016. For quartiles formed by the absolute change in expected human capital between 1990 and 2016, we also computed the median and IQR of the annualised rate of change in GDP per capita from 1990 to 2016. The change in expected human capital between 1990 and 2016 ranged from less than 2 years of progress in 18 countries to more than 5 years of progress in 35 countries (figure 4). For example, the USA, which was ranked sixth in terms of expected human capital in 1990, dropped to rank 27 in 2016 because of minimal progress, particularly on educational attainment. In east and southeast Asia, which have generally seen rapid economic growth, many countries had notable improvements. South Korea increased from rank 18 in 1990 to rank 6 in 2016; Singapore increased from rank 43 to 13; China increased from rank 69 to 44; Thailand increased from rank 103 to 72; and Vietnam increased from rank 116 to 85. Differential progress, however, was seen in many other regions. Within Latin America, Brazil had much faster improvements in expected human capital than other countries in the region (improving from rank 91 to 71). The most rapid absolute improvements were seen in several countries in the Middle East (led by Turkey, Saudi Arabia, and Kuwait), although some countries in the region, such as Yemen and Iraq, experienced much slower progress. Countries had improved expected human capital from 1990 to 2016, and showed changes in each of the four components of expected human capital relative to 1990 levels ( figure 4). 
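The quartile comparison of human-capital change against GDP growth reduces to simple bookkeeping; the sketch below reproduces that bookkeeping on hypothetical country values, annualising growth over the 26 years from 1990 to 2016.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 195
# Hypothetical country-level inputs.
hc_change = rng.normal(3.5, 1.5, n)               # change in expected human capital, 1990-2016
gdp_1990 = rng.lognormal(8.5, 1.2, n)             # GDP per capita, 1990
gdp_2016 = gdp_1990 * rng.lognormal(0.55, 0.3, n)

# Annualised growth in GDP per capita over the 26 years from 1990 to 2016.
annual_growth = (gdp_2016 / gdp_1990) ** (1 / 26) - 1

# Median and IQR of annualised growth within each quartile of human-capital change.
quartile = np.digitize(hc_change, np.quantile(hc_change, [0.25, 0.5, 0.75]))
for q in range(4):
    g = 100 * annual_growth[quartile == q]
    print(f"quartile {q + 1}: median {np.median(g):.2f}%  "
          f"IQR {np.percentile(g, 25):.2f}-{np.percentile(g, 75):.2f}%")
```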
Although there is a clearer associa tion between improvements in educational attain ment and years lived between 20 and 64 years and their respective levels in 1990, these highly differential rates of progress suggest that changes are driven by a combination of policy factors and not just baseline levels. Several countries in north Africa and the Middle East with substantial improvements in expected human capital had a combina tion of notable increases in educational attainment, learning, and functional health status and to a lesser degree reductions in mortality. A similar picture can be seen for Latin America but at a lower overall magnitude, with improvements driven particularly by increases in educational attainment. In subSaharan Africa and to a lesser degree south Asia, improvements in expected human capital are due to improvements in educational attainment and expected years lived in the 20-64 year age range. Men and women had notable differences in expected human capital in 2016 (figure 5). Across the board, expected years lived between 20 and 64 years were greater in women than men. Similarly, functional health status was higher among women than men, with the exception of highincome countries. Conversely, learning was higher among men at lower and middle levels of learning-in regions such as subSaharan Africa, north Africa and the Middle East, south Asia, and Latin America-but this difference is minimal or nonexistent at higher levels and among highincome countries. A clear regional pattern is present for educational attainment, with higher levels for males throughout subSaharan Africa and below 9 years of education. In terms of the overall measure of expected human capital, this measure translates into a clear separa tion at a threshold of 10 years of expected human capital: below this threshold, expected human capital tends to be higher in men whereas above this threshold, expected human capital tends to be higher in women. Associations between expected human capital and gross domestic product We examined the correlation between both levels and change in expected human capital and corresponding levels and change in gross domestic product (GDP) per capita, at the country level ( figure 6, 7). Higher levels of expected human capital were associated with higher levels of GDP per capita in both 1990 (figure 6A, 6C, 7A) and 2016 ( figure 6B, 6D, 7B). Larger improvements in expected human capital from 1990 to 2016 were also associated with greater GDP growth over the same time period. The top quartile of countries in terms of change in expected human capital from 1990 to 2016 had a median annualised GDP growth of 2·60% (IQR 1·85-3·69) compared with a median annualised GDP growth of 1·45% (0·18-2·19) among the bottom quartile of countries (figure 7C) in terms of change in expected human capital. Although not a formal causal analysis, these differences suggest that both levels of human capital are associated with economic performance and improvements in the production of human capital are associated with faster economic growth. Discussion Our study quantifies levels of human capital in 195 countries from 1990 to 2016, generating a ranking of countries and highlighting huge variations in the pro duction of, and progress in building, human capital across countries. 
Human capital-educational attain ment, learning, functional health, and survival-in 2016 was highest in Finland, Iceland, Denmark, the Netherlands, and Taiwan (province of China), and lowest in Mali, Burkina Faso, Chad, South Sudan, and Niger. Over the past 25 years, progress has been slow in selected countries that started at a high baseline, such as the USA, but perhaps most importantly progress has also been slow in countries with historically low human capital, such as the bottom five countries in 2016. At the macro level, countries that have improved the production of human capital tend to have been more successful in fostering economic growth. In an article 5 by World Bank President Jim Y Kim, he states, "with the right measurements, an index ranking the human capital in countries will be hard to ignore, and it can help galvanize much more-and more effective-investments in people". As part of the Human Capital Project, the World Bank intends to support annual reporting on human capital to keep policy attention focused on investments in health and education that accelerate human capital formation and bring new emphasis to the importance of human capital for economic growth. This study fills this measurement gap by presenting the first ranking of countries by levels of human capital with the use of a comprehensive metric. Although health and education were prominent components of the Millennium Development Goals and remain a focus of the Sustainable Development Goals, the emphasis on human capital signals a shift toward greater consideration of the productive value of health and education, in addition to humanitarian objectives. By providing an annual measurement of human capital, these rankings can also be used by credit rating agencies in making loan decisions. Agencies that provide independent assessments of risks for national bonds already take into account some measures related to human capital, such as life expectancy. 50 The markets might incorporate better measures of human capital into borrowing schemes, recognising the challenge of economic growth in settings with low human capital. A virtuous cycle might ensue, in which financial markets reflect future human capital trajectories and create more timely incentives for ministries of finance and other development actors to invest in people today. Despite wide variations in past rates of human capital accumulation, countries have many available strategies to accelerate progress. To improve average education levels, policy options include reducing or eliminating school fees, shown to increase enrolment and attendance rates in many countries, [51][52][53] and carefully targeting infrastructure investments in alignment with needs-eg, building schools in areas with limited access or building latrines, especially for girls. Policies that can improve learning and educational quality include ongoing teacher trainings that incorporate regular followup visits and support, 53,54 improving diagnostics to inform teaching tailored to students' levels, 55,56 and grouping students by ability. 
57 To improve survival and the aspects of functional health status studied here, many effective interventions exist for the major infectious diseases: insecticidetreated mosquito nets and artemisinin combination therapy for protection from malaria, 58 antiretroviral therapy for HIV/AIDS, 59 directly observed treatment of tuber culosis, 60 rotavirus vaccine to prevent diarrhoea, 61 and pneumococcal vaccine and antibiotics for lower respiratory disease, 62 among many other costeffective interventions. 63 For vision and hearing, effective inter ventions include vitamin A supplementation to combat childhood blindness, 64 corrective lenses, hearing aids, and more advanced tech nologies, such as cochlear implants. To address chronic malnutrition, available interventions include zinc supple mentation and public provision of comple mentary food for children, and for irondeficiency anaemia, ante natal micronutrient supplementation and staple food fortification. 65 Given the wide range of evidencebased policy options with proven effectiveness, the rate of human capital accumulation could accelerate dramatically; however, getting the priority for health and education in national budget discussions correct, might be the main challenge. 27 Examination of countries with the most rapid im provements in human capital revealed specific policy reforms that probably contributed to observable growth in human capital. Starting in 1995, Brazil implemented a series of education reforms, which included ensuring equal funding across all localities, expanding student testing, and ensuring educational opportunities for poor families, leading to impressive increases in educational attainment. Improving learning is the current national priority, with the aim of achieving Organisation for Economic Cooperation and Developmentlevel test scores by 2021. 66 The education system in Singapore, which has the highest student test scores in the world, has emphasised quality over the past 20 years, starting with the Thinking Schools, Learning Nation framework in 1997. This encompassed many initiatives to improve learning, including tailoring teaching to students' level of ability. 67 Poland's student performance on international tests dramatically improved after the country imple mented educational reforms in 1999, incorporating additional hours of language training and delayed tracking into vocational training. 68 Thailand was one of 69 Turkey initiated the Health Transformation Program in 2003, which entailed separating the purchasing and provision of health services, mandating insurance coverage, and reforming provider payment, leading to expanded access and improved patient satisfaction and health outcomes. 70 Changes in the nature of the world economy, including the increased importance of digital technology, sometimes referred to as the fourth industrial revolution, 71 might make human capital even more important for future economic growth. The potentially rising role of highly skilled and healthy workers in the future puts a premium on investing in people now to accelerate human capital formation. This investment requires longterm planning to make major changes in the proportion of children who are healthy and spend up to 18 years attending high quality educational institutions. Health investments can have a double effect on human capital: improved functional health can directly increase the productivity of workers at each age and can also facilitate children attending school and effectively learning. 
A review 72 of studies examining the association between early childhood stunting and cognitive development found that an increase of one SD in heightforage of children younger than 2 years old is associated with an increase of 0·22 SD in cognition at ages 5-11 years. A metaanalysis 73 of 14 randomised clinical trials of iron supplementation suggested an association between haemoglobin concen tration and intelligence, with anaemic premenopausal women experiencing a 2·5point (95% CI 1·24-3·76) improvement in IQ following iron supplementation. The estimates of educational attainment presented in this study are highly correlated with other widely used sources, particularly the well known Barro and Lee estimates; 74 however, the present dataset offers several important advantages over this source and other available sources. First, the data underlying these estimates are based on a systematic synthesis of censuses and household surveys, whereas most other estimates rely heavily on enrolment data, which are subject to a range of inaccuracies; for example, enrol ment rates can be well over 100% due to students repeating levels and older learners returning to school. Second, the number of unique data sources informing these estimates is greater than that used in past studies. 75 Barro and Lee, for example, used 621 unique sources versus the 2522 under lying our estimates. 74 Third, in the estimation methods, no other study to our knowledge has attempted to parse binned education data-ie, by completed levelinto individual years of completed schooling, thereby generating more precise estimates of average years of schooling. Finally, because of the data sources and modelling approach used, we were able to generate annual estimates for 5year age groups-a level of detail that is not available from most other sources, which more often report estimates in 5year increments or for more aggregated age groups. One limitation of this analysis to note, however, is that we have not produced estimates of the distribution of educational attainment in countries, only mean levels. The human capital value of educational attainment might not be a linear function of years of education. The learning estimates presented here would be substantially strengthened through the expansion of inter national student assessments. Although these results represent the best estimates given all available data, 56 countries have not participated in any internationally similar tests, necessitating substantial reliance on adjusted national tests and covariates to generate a comparable estimate of learning for all countries. Given the importance of educational quality for economic growth, expanding participation in inter national student tests is a priority for new data collection. Estimates of learning could also be improved with an expansion into quality measures for both tertiary education and onthejob skills training. For health, we have used a simple PCA to reduce the complexity of information on several learningrelated and productivityrelated outcomes, but these PCA weights might not capture the potential for different health outcomes to have differential effects on economic pro ductivity. We have, for example, included some outcomes that are related to cognitive performance, such as stunting, wasting, anaemia, and infectious dis eases, but other health outcomes could also be important for productivity, such as mental health and substance abuse. 
For stunting and wasting, however, we were only able to incorporate period measures of these indicators and not cohort measures because of a paucity of historical data on the prevalence of these conditions. Future work should explore more health outcomes in greater detail and explore their intercon nections to economic output. Formal examination of the association between eco nomic growth and expanded measures of human capital that incorporate broader dimensions, such as the measure presented in this study, is another area of future work. In this study, we examined associations between levels and trends in our measure of human capital and levels and trends in GDP. Although these simple analyses suggest a correlation, we do not make claims of causality because they are not causal analyses. Future work in this area will need to address the potential problem of reverse causality. In other words, do improvements in human capital lead to faster economic growth or does faster economic growth allow countries to better invest in human capital? Our study focused on national levels of human capital, but geospatial analyses have shown disparities in average years of schooling. 76 In future work, we believe it will be a useful planning tool to measure human capital at this high spatial resolution. Within a geographical location, measurement of mean levels of human capital might not be enough; measurement of the full distribution will allow testing of hypotheses about the comparative importance of secondary and tertiary education access and quality. Such granular information could be used to target communities that are the worst off and to assess efforts to reduce inequalities as part of the Sustainable Development Goals framework. The World Bank argues that countries are not investing enough in health and education to benefit from the potential of their own human capital. We provide the first comprehensive assessment of expected human capital for 195 countries from 1990 to 2016. Countries have varied substantially in the pace of improving human capital, holding out the promise that wider implemen tation of targeted policies and funding focused on improving health and education can accelerate human and economic development.
8,584.8
2018-09-24T00:00:00.000
[ "Economics", "Education" ]
Dopamine Transporter Loss in 6-OHDA Parkinson’s Model Is Unmet by Parallel Reduction in Dopamine Uptake The dopamine transporter (DAT) regulates synaptic dopamine (DA) in striatum and modulation of DAT can affect locomotor activity. Thus, in Parkinson’s disease (PD), DAT loss could affect DA clearance and locomotor activity. The locomotor benefits of L-DOPA may be mediated by transport through monoamine transporters and conversion to DA. However, its impact upon DA reuptake is unknown and may modulate synaptic DA. Using the unilateral 6-OHDA rat PD model, we examined [3H]DA uptake dynamics in relation to striatal DAT and tyrosine hydroxylase (TH) protein loss compared with contralateral intact striatum. Despite >70% striatal DAT loss, DA uptake decreased only ∼25% and increased as DAT loss approached 99%. As other monoamine transporters can transport DA, we determined if norepinephrine (NE) and serotonin (5-HT) differentially modulated DA uptake in lesioned striatum. Unlabeled DA, NE, and 5-HT were used, at a concentration that differentially inhibited DA uptake in intact striatum, to compete against [3H]DA uptake. In 6-OHDA lesioned striatum, DA was less effective, whereas NE was more effective, at inhibiting [3H]DA uptake. Furthermore, norepinephrine transporter (NET) protein levels increased and desipramine was ∼two-fold more effective at inhibiting NE uptake. Serotonin inhibited [3H]DA uptake, but without significant difference between lesioned and contralateral striatum. L-DOPA inhibited [3H]DA uptake two-fold more in lesioned striatum and inhibited NE uptake ∼five-fold more than DA uptake in naïve striatum. Consequently, DA uptake may be mediated by NET when DAT loss is at PD levels. Increased inhibition of DA uptake by L-DOPA and its preferential inhibition of NE over DA uptake, indicates that NET-mediated DA uptake may be modulated by L-DOPA when DAT loss exceeds 70%. These results indicate a novel mechanism for DA uptake during PD progression and provide new insight into how L-DOPA affects DA uptake, revealing possible mechanisms of its therapeutic and side effect potential. Introduction In striatum, the dopamine transporter (DAT) is a vital component for maintaining sufficient dopamine (DA) levels for release [1,2]. Thus the degree of striatal DAT loss in Parkinson's disease (PD) when locomotor symptoms appear (,70-80%) [3,4] would be expected to be a major factor in the deficit in DA that produces locomotor impairment. During the loss of DA-regulating proteins in PD progression, there is evidence that compensatory changes in DA regulation [5][6][7] may delay symptom presentation. For example, loss of DAT is concomitant with diminished DA release, which would be expected to sustain extracellular DA concentrations [8]. Increased TH activity may also maintain sufficient DA for some time during TH loss [5], [9,10]. However, it is possible that DAT activity, like TH activity, could increase as a compensation mechanism to maintain cytosolic DA during DAT loss. Thus, the resulting increase in DA reuptake could diminish extracellular DA availability, thereby reducing synaptic concentrations necessary to bind post-synaptic DA receptors and drive locomotor activity. From the therapeutic perspective, it has been proposed that despite DAT loss, the efficacy of L-DOPA is first via its transport through other monoamine transporters. 
However, an overactive DA clearance mechanism, through remaining DAT, could conceivably also facilitate the transport of therapeuticallyderived L-DOPA to produce DA via aromatic acid decarboxylase (AADC). Therefore, determining DA uptake dynamics when DAT loss is at and beyond the loss associated with locomotor symptoms is critical to understand the longevity of synaptic DA and the impact of L-DOPA in this context. DAT function can regulate locomotor activity. DAT knockout mice exhibit hyperkinetic locomotor activity [11] and DAT blockade increases locomotor activity [12]. DAT levels are associated with DA turnover in the PD patient, implying that DAT plays an important role in maintaining DA bioavailability [13]. In advanced Parkinsonian monkeys and in PD patients, DAT function may be altered by the disease, but other monoamine transporters could also participate in DA uptake. For example, DAT inhibitors, particularly those with high norepinephrine transporter (NET)-, but low serotonin transporter (SERT)-affinity, provide increased locomotor benefits to monkeys with severe DAT loss (80%) compared with those with moderate DAT loss (46% loss) [14]. Serotonergic projections from the midbrain raphe nuclei to the striatum may regulate DA through the conversion of L-DOPA to DA in animals with 6-OHDA lesions [15,16]. Still, despite DAT protein loss, the activity of remaining DAT may increase during PD progression to maintain intracellular DA levels in the face of decreased DA synthesis and storage capacity, due to loss of TH and VMAT2. Indeed, elimination of DAT by gene knockout drastically reduces DA tissue content in striatum [1]. Thus, DAT blockade may prove beneficial in PD patients. For instance, methylphenidate may provide modest improvement in locomotor deficiency in combination with L-DOPA [17]. Striatal DAT loss correlates with the less motorically-affected side of PD patients [18] suggesting that the more degenerated hemisphere has compensatory functions occurring that may affect accurate determination of DAT loss. Together, these data suggest the possibility of an overactive DA clearance mechanism in the nigrostriatal pathway when DAT protein loss reaches 80%that could diminish the synaptic DA levels that are required to drive locomotion. Other monoamine transporters can transport DA in the CNS, particularly when DAT abundance is relatively low, as would be the case when locomotor symptoms present in PD. Although NET does not play a primary role in the clearance of DA in normal striatum, DA uptake occurs through NET in sparsely dopaminergic innervated regions such as the frontal cortex [19]. Moreover, selective NE uptake inhibitors can increase extracellular DA levels within the prefrontal cortex [20][21][22][23][24][25]. Conceptually, it is feasible that, NET or a NE-sensitive transport mechanism could potentially contribute to the clearance of DA in the DAT-impoverished Parkinsonian striatum, given that there is noradrenergic innervation of the striatum. In a therapeutic context, there is the possibility that an overactive DA clearance mechanism could be a conduit for L-DOPA delivery into the aromatic acid decarboxylase (AADC)expressing cells in the CNS, particularly since L-DOPA does not possess locomotor-enhancing properties until the threshold of loss occurs at ,70-80% [26,27]. Cell cultures expressing functional NET and DAT transport L-DOPA when it is present in high concentrations [28]. 
Systemic administration of the selective NET inhibitor desipramine increases extracellular DA derived from L-DOPA in 6-hydroxydopamine (6-OHDA)-lesioned rats, indicating that NET could play a significant role in DA clearance in the PDlike striatum and, consequently, may be involved in L-DOPAderived DA synthesis in PD pathogenesis [29]. We determined differences in DA uptake in crude synaptosomes prepared from the 6-OHDA lesioned striatum versus inherentlymatched contralateral intact striatum to determine the relationship of DAT loss to DA transport differences and potential involvement of monoamine transporters in lesioned terminals. We examined the extent by which 5-HT, NE, DA, (representing the endogenous monoamines) and L-DOPA (representing the gold-standard for PD treatment), affected [ 3 H]DA uptake and also determined NET expression and impact of its inhibition on NE uptake to elucidate potential mechanisms by which DA is removed from the synapse with DAT loss at PD symptom levels, with and without L-DOPA present. Animals Male Sprague Dawley rats purchased from Harlan were used in all experiments. All rats were 4-8 months old in the study, and were housed under controlled lighting conditions (12:12 light:dark cycle) with food and water available ad libitum. All animals were used in compliance with federal and the institutional Animal Care and Use Committee guidelines at LSU Health Sciences Center-Shreveport. 6-OHDA Lesions Each animal underwent survival surgery to deliver the neurotoxin 6-OHDA to the medial forebrain bundle. Rats were anesthetized with 40 mg/kg Nembutal intraperitoneal (i.p.) (pentobarbital Lundbeck Inc, Deerfield, IL) with supplement of 9.0, 0.6, and 0.3 mg/kg ketamine, xylazine, and acepromazine, respectively. Animals were immobilized in a stereotaxic frame to target the medial forebrain bundle at coordinates ML +1.5, AP 23.8, DV 28.0 relative to Bregma according to Paxinos and Watson rat brain atlas, 4 th ed. [30]. A total of 9 or 16 mg of 6-OHDA in a total of 4 ml in 0.02% ascorbic acid (concentrations of 2.25 or 4 mg/ml) was infused unilaterally at a rate of 1 ml/minute. Notwithstanding possible bilateral effects of the 6-OHDA infusion, the contralateral striatum was left intact as a naïve tissue control. The syringe was left in place for 10 min before removal to allow for maximal diffusion of drug and to avoid further mechanical damage to the tissue. Body temperature was maintained at 37u during surgery using a temperature monitor with probe and heating pad (FHC, Bowdoingham, ME). Amphetamine Testing for Lesion Verification Lesions were confirmed with amphetamine-induced rotation ipsilateral to the lesioned side. Rotational behavior was monitored for 60 minutes after a single i.p. injection of amphetamine (2 mg/ kg) 7 days post 6-OHDA infusion. While the amphetamineinduced rotation is not as precise as apomorphine to detect lesion at 90% [31], we employed the amphetamine-induced rotation to be able to detect at least 50% lesion. Rats were sacrificed for DA uptake analysis and subsequent DAT or TH analyses 2 days after the amphetamine test to allow for near-complete clearance of amphetamine. 
Preparation of Synaptosomes

Synaptosomes were prepared according to the protocol previously described [32] with the following modifications: Tissue dissected from dorsal striatum and substantia nigra was homogenized in 5 mL of 0.32 M sucrose solution using a Teflon/glass homogenizing wand (Glas-Col, Terre Haute, IN) and then spun at 1,000 × g for 10 minutes in a chilled (4 °C) centrifuge. The resulting pellet was stored as the P1 fraction, while the supernatant was spun further at 16,500 × g for 30 minutes at 4 °C, yielding the P2 fraction. An aliquot of the P1 fraction was saved for determination of TH protein from the 6-OHDA-lesioned and contralateral (control) striatum against a standard curve of TH protein standard [33]. The supernatant was aspirated and the P2 pellet was resuspended in 1 mL of Krebs buffer (118 mM NaCl, 4.7 mM KCl, 1.2 mM KH2PO4, 25 mM NaHCO3, 1.0 mM Na2EDTA, 1.7 mM CaCl2, 10 mM glucose, 100 μM pargyline, 100 μM ascorbic acid). Protein concentration was determined using a BCA colorimetric assay (Thermo Scientific, Rockford, IL). All tissue was kept on ice or at 4 °C from the moment of brain excision until the uptake assay took place.

[3H]DA and [3H]NE Uptake into Synaptosomes

Synaptosomes were distributed in ice-cold test tubes to prepare for dopamine uptake. Given sufficient protein recovery from the tissue, an aliquot of synaptosomes was also saved for later determination of DAT protein. [3H]DA uptake in crude synaptosomes from dorsal striatum harvested from the contralateral and 6-OHDA-infused hemispheres was determined simultaneously and included assessments of uptake capacity in the presence of unlabeled 1 mM NE versus 1 mM DA, and 1 mM 5-HT or 1 mM L-DOPA. Each determination was done in triplicate for each assay condition, and uptake was compared between the lesioned striatum and the contralateral control striatum. Nonspecific uptake was determined from counts obtained in synaptosomes incubated with 500 nM DA (all as labeled DA) on ice for the duration of the uptake period. [3H]NE uptake in crude synaptosomes from dorsal striatum harvested from the contralateral and 6-OHDA-infused hemispheres was determined simultaneously at a final [NE] of 250 nM (all as labeled NE). Background was determined and subtracted in the same manner as in the DA uptake studies. Synaptosomes (30 μg protein per replicate) were added to 4 °C oxygenated Krebs buffer and test ligand (if indicated) to reach a total volume of 100 μL. The synaptosomes were then warmed to 35 °C for 5 min, after which 100 μL of pre-warmed 1 μM [3H]dopamine, prepared from one of two sources of labeled DA (1) ViTrax, [7,8-3H]DA, specific activity 25 Ci/mmol, or (2) Amersham, [7,8-3H]DA, specific activity 47 Ci/mmol), was added to the synaptosome preparations (giving a 500 nM final [3H]DA concentration), allowed to incubate for uptake, and terminated after 120 seconds with an excess volume of ice-cold Krebs buffer and re-immersion of the tubes in the ice bath. The uptake time for DA was chosen to be as close as technically and practically possible to the approximately 2-minute uptake time of striatal dopamine observed in vivo [34]. Labeled NE was purchased from Perkin-Elmer (levo-[7-3H]-norepinephrine; specific activity 14 Ci/mmol). NE uptake was likewise conducted for 2 minutes.
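To make the arithmetic of this assay concrete, the short sketch below converts raw filter counts into specific uptake (pmol DA per mg synaptosomal protein per minute) using the assay parameters just described (25 Ci/mmol tracer, 30 μg protein per replicate, 120 s incubation). The counting efficiency and the example CPM values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: converting filter counts to specific [3H]DA uptake.
# Example CPM values and counting efficiency are hypothetical placeholders.

CI_TO_DPM = 2.22e12          # disintegrations per minute in 1 Ci
SPECIFIC_ACTIVITY = 25.0     # Ci/mmol ([7,8-3H]DA, ViTrax lot; the Amersham lot is 47)
COUNT_EFFICIENCY = 0.45      # assumed scintillation counting efficiency for 3H
PROTEIN_MG = 0.030           # 30 ug synaptosomal protein per replicate
UPTAKE_MIN = 2.0             # 120 s incubation

def cpm_to_pmol(cpm: float) -> float:
    """Convert counts per minute to pmol of [3H]DA."""
    dpm = cpm / COUNT_EFFICIENCY
    curies = dpm / CI_TO_DPM
    mmol = curies / SPECIFIC_ACTIVITY
    return mmol * 1e9          # mmol -> pmol

def specific_uptake(total_cpm: float, nonspecific_cpm: float) -> float:
    """Specific uptake in pmol DA per mg protein per minute,
    after subtracting the ice-cold (nonspecific) counts."""
    pmol = cpm_to_pmol(total_cpm - nonspecific_cpm)
    return pmol / PROTEIN_MG / UPTAKE_MIN

# Hypothetical triplicate from one assay condition (CPM):
lesioned = [820.0, 790.0, 845.0]
blank = 110.0                  # synaptosomes kept on ice with label
uptake = [specific_uptake(c, blank) for c in lesioned]
print(f"mean specific uptake: {sum(uptake) / len(uptake):.3f} pmol/mg/min")
```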
Synaptosomes were washed extensively to remove excess labeled dopamine with equal-osmolarity PBS buffer through a Brandel M24-TI (Gaithersburg, MD) cell harvester using Brandel GF/C filter paper pretreated with a 2% polyethylenimine solution to reduce non-specific binding of label. The filter paper containing the rinsed synaptosomes was transferred into scintillation vials containing 5 mL of biodegradable scintillation cocktail (Research Products International, Mount Prospect, IL) and counted with a Beckman Coulter LS6500 scintillation counter (Brea, CA).

Calculating DA and NE Uptake

To determine the quantity of DA and NE uptake, the counts recovered on the filters were corrected by subtracting the nonspecific (ice-cold) counts and expressed relative to synaptosomal protein, allowing uptake to be compared per equal protein between the lesioned and contralateral striatum.

Tissue Preparation and Western Immunoblotting

Synaptosome pellets (to determine DAT protein, when available, ~70% of experiments) and the processed preparatory sample (for TH protein assessment) were sonicated in a 1% sodium dodecyl sulfate solution (pH ~8) using a Branson Sonifier 150 (Danbury, CT). Protein concentration was determined using the bicinchoninic acid colorimetric assay. Following gel electrophoresis, proteins were transferred for 500 volt-hours in a Tris/glycine/methanol buffer onto nitrocellulose membranes (Bio-Rad Laboratories, Hercules, CA). The nitrocellulose membrane was stained with Ponceau S to reveal relative protein staining in each sample lane. These lanes were scanned and quantified with ImageJ to normalize protein in each sample. This relative total level then served as an additional normalizing value to determine the quantity of each protein assayed [33]. To continue processing, the membranes were blocked in PVP buffer (1% polyvinylpyrrolidone and 0.05% Tween 20) for a minimum of two hours to reduce nonspecific antibody binding. The membrane was incubated in primary antibody for 1-3 hours. The primary antibodies were as follows: DAT (Santa Cruz, cat. # sc-1433, 2 μg/mL), TH (Millipore, cat. # AB152), and NET (Alpha Diagnostics Intl., cat. # NET11-A). Protein loads for linear detection were 30 μg total protein for DAT and TH on the lesioned side, and 10 μg on the contralateral control side. Protein loads for NET were 60 μg on both the lesioned and contralateral control sides. After primary treatment, blots were exposed to secondary antibody (swine anti-rabbit IgG for TH and NET, swine anti-goat IgG for DAT) for signal enhancement, followed by 1 h incubation with [125I] protein A (PerkinElmer, Waltham, MA).

Statistics

All dopamine and norepinephrine uptake studies were done in conjunction with assessment of TH loss and, when possible, DAT loss, as assessed in aliquots of the synaptosomes used to determine DA and NE uptake. Tissue harvested from the striatum contralateral to the 6-OHDA lesion served as the inherent control to the lesioned striatum for each rat/test subject. Therefore, a Student's paired t-test was used to compare DA and NE uptake between the two sides, as well as to ascertain the degree of TH and DAT loss caused by the 6-OHDA lesion. With the exception of comparing DA uptake (per equal synaptosomal protein) between the two striata, the paired t-test was two-tailed. Given the expectation that the lesion would decrease DA uptake, a one-tailed paired t-test was used when comparing uptake per equal protein.
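The paired design described above maps directly onto a repeated-measures comparison in standard statistics libraries. The sketch below illustrates the two analyses mentioned: a two-tailed paired t-test for the degree of DAT loss and a one-tailed paired t-test for uptake per equal protein. The arrays are invented example values, not data from the study.

```python
# Paired t-tests as described in the Statistics section (illustrative values only).
# Requires scipy >= 1.6 for the 'alternative' keyword.
from scipy import stats

# Hypothetical per-rat values; each index is one animal (lesioned vs contralateral).
dat_lesioned = [12.0, 8.5, 15.0, 9.0, 11.0, 7.5]      # arbitrary DAT units
dat_control  = [95.0, 88.0, 102.0, 90.0, 98.0, 85.0]

uptake_lesioned = [6.1, 5.4, 7.0, 5.9, 6.5, 5.2]       # pmol/mg/min, per equal protein
uptake_control  = [8.0, 7.3, 8.8, 7.6, 8.2, 7.0]

# Two-tailed paired t-test for the degree of DAT loss caused by the lesion.
t_dat, p_dat = stats.ttest_rel(dat_lesioned, dat_control)

# One-tailed paired t-test (expected decrease) for uptake per equal protein:
# alternative='less' tests lesioned < contralateral.
t_up, p_up = stats.ttest_rel(uptake_lesioned, uptake_control, alternative="less")

print(f"DAT loss:  t = {t_dat:.2f}, two-tailed p = {p_dat:.4f}")
print(f"DA uptake: t = {t_up:.2f}, one-tailed p = {p_up:.4f}")
```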
Dopamine Uptake in Non-lesioned Tissue and Consistency with Endogenous DAT We first established that dopamine uptake in control tissue reflected the endogenous quantities of transporter, wherein there is greater DA uptake [35][36][37] and DAT protein [38] in dorsal striatum versus substantia nigra. In the control non-lesioned tissue, DA uptake was significantly greater in synaptosomes from the striatum than from the substantia nigra (SN) (Figure 1). This difference is in agreement with previous findings [35][36][37]. Dopamine Uptake in Relation to Loss of Tyrosine Hydroxylase To verify the degree of lesion in association with DA uptake in the lesioned versus contralateral control tissue, we determined TH loss using tissue not utilized for the synaptosome fraction for reuptake studies in all test subjects. When tissue recovery in the synaptosome fraction was adequate to do so, we also determined DAT loss in aliquots to normalize DA uptake to the loss of DAT. There was a significant correlation of TH to DAT loss, ranging from 61 to 99% loss (9 observations, Pearson r = 0.921, p = 0.0004, two-tailed; data not shown), so the degree of TH loss, when not possible to determine DAT loss, reflected DAT loss. As our assay revealed differences in DA uptake based upon inherent DAT levels ( Fig. 1), we found an unexpected result in DA uptake in the verified lesioned neuropil. We expected to observe a significant decrease in DA uptake in the lesioned neuropil. However, there was only a trend toward a decrease in DA uptake in rats with confirmed lesion varying between 30 to 60% loss ( Fig. 2A). Even more striking was that while there was a significant decrease in DA uptake in rats with at least 70% loss, the magnitude of TH or DAT loss was much greater than the reduction in DA uptake of ,26% (Fig. 2B, 2C). These findings reveal the possibility that remaining DAT protein could have greatly increased DA uptake capabilities or that another monoamine transporter is active in the lesioned striatum for DA uptake. Role of DAT in 6-OHDA Lesioned Striatal Uptake When we normalized DA uptake to the respective DAT protein at $70% DAT loss, DA uptake per DAT protein remaining was increased ,6-fold (Fig. 3A). There was also a significant relationship in DA uptake with lesion progression, in that as the lesion severity increased, so did the DA uptake as per remaining DAT protein ( Figure 3B,C). These results partially explain why DA uptake in the lesioned synaptosomes does not decrease in concert with DAT protein loss, as shown in figure 2B and suggests that another monoamine transporter may be more active in DA uptake under these conditions. Monoamine Inhibition of DA Uptake in Lesioned Striatum The 6-fold increase in DA uptake per remaining DAT protein indicates the possibility that in lesioned striatum, DAT affinity for DA increases or another monoamine transporter could have involvement in DA uptake. To investigate these possibilities, we determined the relative Ki of the monoamines endogenous to striatum, between DA, NE, and 5-HT, and determined the efficacy of the unlabeled monoamines to inhibit [ 3 H]-DA uptake in our striatal synaptosome preparation. As expected, DA was most effective at inhibiting DA uptake in naïve striatum, followed by NE, and serotonin (5-HT) (Fig. 4). 
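Before turning to the competition experiments, the two quantitative steps used in this passage, correlating TH loss with DAT loss and normalizing uptake to the remaining DAT protein, can be sketched as follows. The values are hypothetical stand-ins chosen only to mimic the reported pattern (lesions spanning roughly 61-99% and an approximately 6-fold rise in uptake per remaining DAT), not the study's data.

```python
# Sketch of the correlation and normalization steps (hypothetical values).
import numpy as np
from scipy import stats

# Percent loss per animal, lesioned relative to contralateral striatum.
th_loss  = np.array([61, 70, 75, 82, 88, 90, 94, 97, 99], dtype=float)
dat_loss = np.array([58, 72, 74, 85, 86, 92, 95, 98, 99], dtype=float)

r, p = stats.pearsonr(th_loss, dat_loss)
print(f"TH vs DAT loss: Pearson r = {r:.3f}, p = {p:.4f}")

# Uptake normalized to the fraction of DAT protein remaining.
uptake_lesioned = 6.0          # pmol/mg/min in lesioned striatum (hypothetical)
uptake_control  = 8.0          # pmol/mg/min in contralateral striatum
dat_remaining_lesioned = 0.13  # fraction of DAT remaining (87% loss, hypothetical)
dat_remaining_control  = 1.0

per_dat_lesioned = uptake_lesioned / dat_remaining_lesioned
per_dat_control  = uptake_control / dat_remaining_control
print(f"uptake per remaining DAT, lesioned/control = "
      f"{per_dat_lesioned / per_dat_control:.1f}-fold")
```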
With regard to the relative affinities of DA versus NE, our finding was supported by previous work [39], wherein K m for DA and NE in striatum were ,400 nM and 2 mM, respectively, and suggests that DAT affinity for DA is greater than NE in intact striatum. Full kinetics were not performed in lesioned rats due to limited availability of dissected intact striatal tissue, thus hindering the execution of complete pharmacokinetic profiles. However by performing this experiment in intact striatal tissue, we were able to determine that a concentration of unlabeled monoamine of 1 mM inhibited DA uptake to different and discernable degrees and thus help to discern the potential involvement of other monoamine transporters in DA reuptake in the lesioned striatum. To determine if remaining DAT had increased affinity for DA in the .70% lesioned striatum, 1 mM unlabeled DA was used to compete with uptake of [ 3 H]DA (500 nM). Compared to 53% inhibition in the contralateral control striatal synaptosome, the inhibition of [ 3 H]DA uptake by unlabeled DA (1 mM) was significantly reduced in lesioned striatum to 34% (Fig. 5A). Therefore, increased affinity for DA may not play a role in enhanced DA uptake after $70% loss of DAT protein. However, in simultaneously-run uptake experiments derived from synaptosomes prepared from the same 6-OHDA-lesioned rat, NE, at an equal concentration to DA, inhibited DA uptake to a 40% greater extent in synaptosomes from the lesioned striatum (Fig. 5B). Given that others have found that NET is involved in DA uptake in DAT-impoverished regions of brain [19,24], these results indicate that NET or another NE-sensitive transporter could mediate DA uptake in dopaminergic neuropil when loss of DAT exceeds 70%. It has been shown that SERT binding is decreased in PD patients [40] which would argue that one possible route of L-DOPA uptake into an AADC source is diminished. In order to investigate the contribution of SERT in DA uptake, we also examined the ability of 1 mM of 5-HT to inhibit [ 3 H]-DA uptake. There was no significant difference in the ability of 1 mM 5-HT to inhibit [ 3 H]DA uptake in lesioned compared to control striatum (Fig. 5C). Impact of 6-OHDA Lesion on NET Expression and NE Uptake In separate studies, we examined the impact of our 6-OHDA protocol on striatal NET expression and function, as well as monoamine tissue content in the same tissue sources to determine if the lesion impacted NE or 5-HT terminals. We observed that NET protein expression significantly increased in the 6-OHDA lesioned neuropil with .70% loss of TH (Fig. 6A, B). Desipramine, a NET-specific inhibitor, inhibited NE uptake to a significantly greater extent in lesioned striatum (Fig. 6C). In the tissues wherein we determined NET protein expression, we also determined relative monoamine tissue content by HPLC using a protocol that can analyze monoamine content and recovered proteins from the same sample [33]. In the intact, Figure 1. Dopamine uptake between striatum and substantia nigra. The inherent differences in DA uptake between striatum and substantia nigra in non-lesioned tissue, as per our synaptosome preparation and uptake protocol are illustrated. Our results reflect the previous observations that DAT expression is significantly less in SN than in striatum and in vivo assessments of DA clearance also show less DA uptake in the SN compared to striatum. 
Statistics, **p = 0.001, twotailed paired t-test of 16 matched observations in synaptosomes prepared from striatum and substantia nigra dissected contralateral to medial forebrain bundle 6-OHDA lesion. doi:10.1371/journal.pone.0052322.g001 non-lesioned striatum, relative monoamine tissue content (per mg protein) was predictably dominated by DA (215617 ng), then NE (13.0 ng), and 5-HT (4.1 ng). Given at least 70% loss of DA caused by our lesion, our lesion produced no significant effect on NE tissue content (Fig. 7). Serotonin tissue content, which was significantly less than NE tissue content in striatum, was not significantly affected, although there was a notable trend toward a decrease (p = 0.055). Given that our lesion protocol did not reduce NE tissue content, we speculate NE-terminal proteins, such as NET, were not likely affected by the lesion. However, the increase in NET expression, despite no loss of NE tissue content, suggests that increased NET expression may be from a non-neuronal source. Impact of L-DOPA on DA Uptake The primary pharmacological treatment for patients with PD is L-DOPA, the biosynthetic product of TH. Aromatic acid decarboxylase (AADC) immunoreactive cells have been identified in conjunction with presence of DA in denervated striatum following the administration of L-DOPA [41]. L-DOPA crosses Figure 2. Dopamine uptake profiles per equal synaptosomal protein related to percent loss of tyrosine hydroxylase. A. DA uptake with TH loss at 30-60%. Statistics, p = 0.055, one-tailed paired t-test of 6 matched observations in synaptosomes prepared from striatum ,9 days following mfb 6-OHDA lesion. TH loss was confirmed in a tissue fraction during synaptosome preparation. B. DA uptake with TH loss at 30-60% at 70-99% loss. Statistics, p,0.05, one-tailed paired t-test of 26 matched observations in synaptosomes prepared from striatum ,9 days following mfb 6-OHDA lesion. TH loss was confirmed in a tissue fraction during synaptosome preparation. C. Representative western blot depicting TH loss. TH loss by the 6-OHDA lesion (L) is shown versus quantity in contralateral striatum (C) and interpolation by accompanying standard curve of TH protein (0.5 to 2.0 ng TH). Associated Ponceau stain (below TH bands) on same blot before TH antibody blotting demonstrates similar striatal protein loading. doi:10.1371/journal.pone.0052322.g002 the blood-brain barrier to reach the denervated nigrostriatal pathway in PD, but how L-DOPA is transported into AADCexpressing cells is not completely understood. We examined whether L-DOPA affected [ 3 H]DA uptake differently in the 6-OHDA lesioned striatum versus intact striatum. L-DOPA (1 mM) was nearly twice as effective at blocking [ 3 H]DA uptake in lesioned striatum (19% inhibition), compared to 11% in contralateral control striatum (Fig. 8A). This increased ability of L-DOPA to block [ 3 H]DA uptake when DAT loss $70% support the idea that L-DOPA itself may extend the life of DA in the Parkinson's synapse and its reuptake may be mediated by a transport mechanism distinct from the DAT, as suggested by results presented in Figures 5 and 6. In naïve striatal tissue, L-DOPA was significantly more effective at inhibiting the uptake of NE compared to DA (Fig. 8B), which suggests that L-DOPA has a greater affinity for NET than DAT. This finding may also support the possibility that if NET is active in DA uptake in lesioned striatum, then it would be expected that L-DOPA would be more effective at inhibiting DA uptake, as indicated in Fig. 8A. 
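The percent-inhibition figures quoted for the competition experiments (for example, 53% versus 34% inhibition by unlabeled DA, and the roughly two-fold difference for L-DOPA) reduce to a simple ratio of specific uptake measured with and without the competitor, compared pairwise between hemispheres. A minimal sketch of that calculation follows; the uptake values are invented for illustration.

```python
# Percent inhibition of [3H]DA uptake by an unlabeled competitor,
# computed per hemisphere and compared pairwise (illustrative values only).
from scipy import stats

def percent_inhibition(uptake_no_competitor, uptake_with_competitor):
    return 100.0 * (1.0 - uptake_with_competitor / uptake_no_competitor)

# Hypothetical paired data (pmol/mg/min), one entry per rat.
control_baseline = [8.0, 7.5, 8.3, 7.9, 8.1, 7.7]
control_competed = [3.8, 3.5, 3.9, 3.7, 3.8, 3.6]   # ~53% inhibition
lesion_baseline  = [6.0, 5.6, 6.2, 5.9, 6.1, 5.8]
lesion_competed  = [4.0, 3.7, 4.1, 3.9, 4.0, 3.8]   # ~34% inhibition

control_inh = [percent_inhibition(b, c) for b, c in zip(control_baseline, control_competed)]
lesion_inh  = [percent_inhibition(b, c) for b, c in zip(lesion_baseline, lesion_competed)]

t, p = stats.ttest_rel(lesion_inh, control_inh)       # paired, two-tailed
print(f"control: {sum(control_inh)/6:.1f}%  lesioned: {sum(lesion_inh)/6:.1f}%  p = {p:.4f}")
```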
Discussion The results presented here contribute to the increasing body of literature that supports that synaptic DA levels are still regulated in DAT-impoverished regions of the CNS. This has been observed in the PD patient and in PD models [42][43][44]. Our study extends the observations that the function of remaining DAT changes with lesion severity by the notion that other monoamine transporters could contribute to the regulation of DA in the synapse [43,45,46]. In our hands, when .70% loss of DAT protein was produced by 6-OHDA, there was a paradoxical 6-fold increase in [ 3 H]DA uptake per remaining DAT protein in the lesioned striatum. Similarly, Khakimova and colleagues [47] also observed increased [ 3 H]DA uptake by the remaining nigrostriatal neurons (both in striatum and substantia nigra) in a mouse PD model at presymptomatic and early symptomatic stages. In conjunction with our finding that DA was less effective at blocking its own uptake in the lesioned versus intact striatum, it is possible that other monoamine transporters account for the enhanced [ 3 H]DA uptake observed in this study. This possibility has implications for modifying therapies that target other monoamine transporters to improve the longevity of DA in the synapse. The ability of L-DOPA to inhibit DA uptake in lesioned striatum also has therapeutic implications. First, from the standpoint of its locomotor benefits, our results suggest the partial blockade of DA uptake by L-DOPA that occurs paired only with DAT loss associated with PD symptoms, may help to extend the longevity of DA in the synapse. However from the standpoint of L-DOPAinduced dyskinesia, a common side effect of chronic L-DOPA use, the results suggest that any L-DOPA in excess of what is needed for DA synthesis could impair DA reuptake and increase synaptic levels of DA, exacerbating DA receptor hypersensitivity seen in dyskinesia pathophysiology. We acknowledge that our conclusion is incomplete from the perspective of clearly identifying the monoamine transporters involved with DA reuptake at the severe lesion stage. However, the combination of several independent results suggest a role for NET in DA uptake in PD progression. Blockade of [ 3 H]DA uptake by . Dopamine uptake profiles with monoamine competition in 6-OHDA-lesioned striatum versus intact striatum. A. Dopamine. 1 mM DA was added in striatal synaptosomes prepared from at least 70% lesioned striatum and from the operationally-matched contralateral control. After 5 min preincubation period, 500 nM [7-, 8-3 H-DA] was added and uptake was determined for 2 min. In the lesioned striatal synaptosomes, DA was significantly less effective (30% less inhibition than in control) effective to inhibit DA uptake, as compared to the control. Statistics: *p,0.05, t = 3.47, two-tailed Student's paired t-test, n = 6 paired observations. B. Norepinephrine 1 mM NE was added in striatal synaptosomes prepared from at least 70% lesioned striatum and from the operationally-matched contralateral control. After 5 min preincubation period, 500 nM [7-, 8-3 H-DA] was added and uptake was determined for 2 min. In the lesioned striatal synaptosomes, NE was significantly more effective (38% greater inhibition than in control) to inhibit DA uptake, as compared to the control. Statistics: *p,0.05, t = 2.59, two-tailed Student's paired t-test, n = 6 paired observations. C. 
Serotonin 1 mM 5-HT was added in striatal synaptosomes prepared from at least 70% lesioned striatum and from the operationally-matched contralateral control. After 5 min preincubation period, 500 nM [7-, 8-3 H-DA] was added and uptake was determined for 2 min. There was no significant difference in the ability of 5-HT to inhibit DA uptake in lesioned striatal synaptosomes, as compared to the control, n = 6 paired observations. doi:10.1371/journal.pone.0052322.g005 NE was more effective in the lesioned striatum compared to contralateral control striatum, suggesting a NE-sensitive compensatory mechanism for DA uptake. Given the drastic loss of DAT observed with the 6-OHDA lesion and the decreased effectiveness of DA to block [ 3 H]DA uptake in lesioned striatum, the most straightforward explanation is increased DA clearance by NET, which may transport DA with higher affinity than the DAT itself in some cases [48][49][50]. Support for a NET-mediated mechanism is further evidenced by NE uptake being inhibited by desipramine to a greater extent in 6-OHDA lesioned striatum (Fig. 6C). Additionally, the increase in NET protein levels seen with .70% TH loss (Fig. 6 A,B) also indicates that this mechanism may be compensatory thereby augmenting DA reuptake through NET when DAT is sparse. Previous work gives some support to our results that NETmediated DA uptake can occur when DAT levels are inherently low. Initial studies performed with cloned hNET expressed in transfected cells indicate that the NET has a greater affinity for DA than for NE [51]. Spatial differences in desipramine-sensitive DA clearance in the substantia nigra positively correlate with dopamine-b-hydroxylase in naïve brain slices, suggesting that the NET-mediated DA reuptake in some [35], but not other [52], regions are likely due to a much larger quantity of DAT. However, when the DAT protein is diminished, as with 6-OHDA, the primary route of DA uptake may be shifted to NET, or at least NE-sensitive transporters. For instance, NET-mediated DA uptake occurs when the DAT is genetically or pharmacologically inactivated [19] or in brain regions of low dopaminergic innervation like the prefrontal cortex or hippocampus [53][54][55]. Thus, our results could reflect how DA is regulated in the synapse with low DAT expression. Our results were not supportive of, but did not eliminate the possibility, that 5-HT-sensitive mechanism may be at work for the paradoxical increase in DA uptake, given remaining DAT protein. The serotonin transporter (SERT) may transport both NE and DA, particularly at high DA concentrations [56][57]. However, SERT-mediated DA uptake was apparently not altered in the 6-OHDA lesioned striatum, at least at the DA concentration chosen (500 nM), because if SERT was more active in lesioned striatum, 5-HT would inhibit the accelerated DA uptake (Fig. 5C). This observation may at first seem at odds with previous studies, that suggest serotonin terminals convert L-DOPA to produce DA [58][59]. However, if L-DOPA is predominantly converted to DA in serotonin terminals, it is still conceivable that remaining L-DOPA could block uptake of extracellular DA, as indicated by our results. It is important to note, however, that SERT levels do decrease during PD progression [60]. Thus, our data indicates that NET also plays a role in DA clearance dynamics and the fate of L-DOPA, in addition to that previously demonstrated by DAT or SERT in PD progression. 
Another alternative explanation for the observed NE-sensitive uptake of [ 3 H]DA in lesioned striatum is transport activity from high-capacity but low-affinity transporters, such as the plasma membrane monoamine transporter (PMAT) or the organic cation transporters (OCTs). Uptake activity by the PMAT is sensitive to NE and DA, but is most sensitive to 5-HT [61][62]. The PMAT is likely insensitive to blockade of [ 3 H]MPP + uptake by L-DOPA [63]. Thus, at least the literature support the idea that it is unlikely that the PMAT is the NE-sensitive transport mechanism revealed in our study, because 5-HT was least effective at blocking [ 3 H]-DA uptake compared to NE and DA. However, the OCT subtypes 2 and 3 have affinity for NE, DA and 5-HT [64]. The OCT3 binds these monoamines more effectively than the OCT2, with IC 50 values being lowest for NE and over threefold greater for 5-HT [65][66]. Therefore, we cannot definitively rule out involvement of OCT3 in the observed NE-sensitive [ 3 H]DA clearance in lesioned striatum and this possibility merits further examination. Given that we did not observe any change in NE or 5-HT tissue content (Fig. 7), we presume that this would signify little change in proteins, like NET and SERT, expressed by these terminals. Therefore, it is logical to ask what cellular entity could contribute to increased DA uptake and NET expression in the lesioned striatum. One possibility is the glial cell. Astrocyte and microglia cell numbers may increase in PD as a part of the inflammatory response associated with the progressive loss of dopaminergic neurons (for review see [67]). Astrocytes may also regulate extracellular DA, as they functionally express DAT, NET, and OCT3 [68][69][70]. Astrocytes also express AADC, and may convert L-DOPA to DA [71][72] thus serving as a source of DA, via uptake of L-DOPA. Indeed, NET, but not SERT, blockers may inhibit both [ 3 H]DA and [ 3 H]NE uptake in astrocytes [68]. The increase in NET expression in the 6-OHDA-lesioned striatum, in conjunction with no increase in NE tissue content, may suggest that the cellular source of increased NET is from the astrocytes, rather than NE terminals that sparsely innervate the dorsal striatum. Therefore, it is possible that as PD progresses, increased numbers of astrocytes or microglia in striatum may provide an additional route of DA uptake or L-DOPA transport, which would panel). Statistics: DA, ***p,0.0001, t = 10.87. n = 7 paired observations for all monoamines. doi:10.1371/journal.pone.0052322.g007 Figure 8. Impact of L-DOPA on monoamine uptake A. Dopamine uptake in lesioned striatum in presence of L-DOPA. 1 mM L-DOPA was added in striatal synaptosomes prepared from at least 70% lesioned striatum and from the operationally-matched contralateral control. After 5 min preincubation period, 500 nM [7-, 8-3 H-DA] was added and uptake was determined for 2 min. In the lesioned striatal synaptosomes, L-DOPA was significantly more (77% above inhibition in control) effective to inhibit DA uptake. Statistics: *p,0.05, t = 3.31, two-tailed Student's paired t-test, n = 5 paired observations. B. Impact of L-DOPA on DA versus NE uptake. 1 mM L-DOPA was added to naïve (unlesioned) striatal synaptosomes and after 5 min preincubation, either 500 nM 3 H-DA or 250 nM [ 3 H] NE was added and uptake was determined for 2 min. Statistics (*p,0.001, t = 12.75, unpaired two-tailed Student's t-test, n = 4 for NE, 5 for DA). 
doi:10.1371/journal.pone.0052322.g008 either reduce synaptic DA available for neurotransmission or be a cellular entity that produces DA from exogenous L-DOPA. L-DOPA The use of L-DOPA in the treatment of the PD patient remains the primary pharmacological tool to ameliorate locomotor dysfunction [73], and its efficacy lies, in part, in its ability to increase DA in the PD patient [74]. However, the question remains as to why L-DOPA is effective when the proteins involved with its handling, (1) DAT, which would transport exogenous L-DOPA into remaining DA neuropil, and (2) AADC, which would catalyze the conversion of L-DOPA to DA, are diminished to the same degree as TH [75]. Our data may provide some additional insight into how L-DOPA could benefit the PD patient, as we observed that it produced a nearly two-fold greater ability to inhibit [ 3 H]DA uptake in the lesioned striatum over the intact striatum. It might even be possible that L-DOPA itself is subject to greater uptake in the striatum of PD patient. Either possibility lends itself to a therapeutic benefit from a first glance, notwithstanding complications of L-DOPA therapy, notably L-DOPA-induced dyskinesia [76][77][78] over long-term use. Given the propensity for chronic L-DOPA therapy in PD treatment to induce L-DOPA induced dyskinesia, the mechanism by which L-DOPA works, and ultimately fails, remains a clinically relevant issue. There is evidence of noradrenergic involvement in the pathogenesis of L-DOPA-induced dyskinesia [79]. In the striatum of the 6-OHDA lesioned rat, L-DOPA-derived DA is cleared from the extracellular space primarily by the NET [29]. This result is complemented by other work demonstrating that DAT blockade has no effect on DA that originates from L-DOPA in 6-OHDA lesioned striatum [15]. Very recent evidence also shows that alphasynuclein, a protein that is implicated in PD pathogenesis, may interfere with DAT transport capabilities [80]. In line with these studies, and our data, a NE-sensitive transporter like NET could therefore be a clinically relevant therapeutic target in alleviating L-DOPA induced dyskinesia. Conclusions Our results show that in spite of considerable loss of DAT, there remains a measurable quantity of DA uptake that is not diminished to the degree of DAT protein loss, and is preferentially inhibited by NE and L-DOPA. An increase in desipraminemediated inhibition of NE uptake in conjunction with increased NET expression supports the possibility that DA uptake in lesioned striatum may be mediated, to a large degree, by NET. The preferential inhibition of DA uptake by L-DOPA in lesioned striatum suggests L-DOPA could enable extracellular DA to remain in the synapse for a longer period of time. However, that NE tissue content was not affected by our lesion, suggests that NET-mediated DA uptake may not be mediated by NE terminals, but another cellular source such as glia. This leads us to speculate that, given glia express NET and thus could represent an abundant source of NET in striatum, that DA could be regulated by more than the monoamine transporters expressed on monoamine terminals. Further investigation of the cellular and molecular mechanisms of NET-mediated DA reuptake when DAT loss is at and beyond the degree associated with PDassociated motor symptoms could prove beneficial for locomotor capabilities in addition to providing a potential therapeutic target in the treatment of L-DOPA induced dyskinesia.
Exploring Kinetics of Phenol Biodegradation by Cupriavidus taiwanesis 187 Phenol biodegradation in batch systems using Cupriavidus taiwanesis 187 has been experimentally studied. To determine the various parameters of a kinetic model, combinations of rearranged equations have been evaluated using inverse polynomial techniques for parameter estimation. The correlations between lag phase and phase concentration suggest that considering phenol inhibition in kinetic analysis is helpful for characterizing phenol degradation. This study proposes a novel method to determine multiplicity of steady states in continuous stirred tank reactors (CSTRs) in order to identify the most appropriate kinetics to characterize the dynamics of phenol biodegradation. Introduction Phenol and associated phenolic compounds are common constituents of aqueous effluents derived from various industrial processes such as polymeric resin production, petroleum refining, coal gasification, coking, and the manufacture of pharmaceuticals, explosives, plastic and varnish [1,2]. Phenol (or carbolic acid; C 6 H 5 OH), which is both water soluble and highly flammable, is produced on a large scale as a precursor to many compounds. Phenol has a vapor pressure of 0.41 mm Hg and a log octanol/water partition coefficient (Log K ow ) of 1.46. Phenol and its vapors are corrosive and are rapidly absorbed from the lungs. Orally ingested phenol is highly toxic to humans. Ingestion of 1 g is reportedly lethal, and smaller quantities can still cause symptoms such as muscle weakness and tremors, loss of coordination, paralysis, convulsions, coma, and respiratory arrest [3]. However, due to its low volatility, the inhalation hazard should be limited. Wastewater contains phenol in the range of 5-500 mg/L [4]. Wastewater containing phenol can be treated by adsorption, stripping, chemical oxidation, solvent extraction and biotreatment [5,6]. Although physico-chemical processes are highly efficient, one disadvantage is dilution by some physical processes, and another is formation of toxic intermediates by chemical oxidation [5]. Biodegradation was initially considered the most cost-effective solution to these problems. However, in phenol concentrations of ca. 5-500 mg/L, many wild-type microbes use phenol as a carbon and energy source for cell propagation in soil and water [7]. Phenol degradation begins when microorganisms express monooxygenase activity to hydroxylate phenol to form catechol [8]; catechol is then cleaved via dioxygenase oxidation to form cis-cis-muconic acid or 2-hydroxymuconate semialdehyde (2-HMS) [9,10]. Evidently, the kinetics of phenol biodegradation is inevitably crucial for efficient removal of phenol. That is, understanding the kinetics of cell growth and biodegradation of phenolic compounds is essential for system optimization. Specifically, a good experimental design and careful mathematical interpretation of data are helpful for understanding the dynamic characteristics of phenol degradation when using Cupriavidus taiwanensis [10]. Additionally, the rhizobial bacterium C. taiwanensis is reportedly efficient in degrading phenol and trichloroethylene (TCE) [10]; it is particularly appropriate for in situ or on-site bioremediation of contaminated soil as its nodulation characteristics and nitrogen fixation with its host plants may enhance biodegradation of soil pollutants [10]. Here, the influence of phenol on biodegradation was quantitatively described by several kinetic models (e.g., the Halden model and the Yano model). 
This study is the first to perform stability analysis of a chemical reactor [11] in order to identify the appropriate kinetics for phenol biodegradation. Microbial Growth The specific growth rate of cells in a batch system, μ (h -1 ), is defined as [2] dt X d dt where X is the cell concentration (g/L). The value of μ is determined at the exponential phase of the growth curve. The change in substrate concentration is defined by where S is substrate concentration (mg/L), and Y x/s is cell yield (g cell/g substrate). The relationship between cell mass formation and substrate consumption can be determined by Although Y x/s is constant, Pirt proposed the following model to determine substrate utilization for cell maintenance [9,12]: where Y G is the theoretical cell yield (g cell/g substrate), indicating the maximal conversion of unit substrate to cell mass, and m is the specific maintenance coefficient (g substrate/g cell-h). If cell maintenance is not considered (i.e., m = 0), cell yield (Y x/s ) is equal to the theoretical cell yield (Y G ). Substrate Inhibition Model The model most commonly used to describe the dependence of specific growth rate (μ) on the concentration of an inhibitory substrate (S) is the Haldane model [13,14]: where μ max is the maximum growth rate (h -1 ), Ks is the substrate-affinity constant (mg/L), and K I is the substrate-inhibition constant (mg/L). Several modified models with two or more parameters have also been developed. Other models of enzyme kinetics or alcohol fermentation with inhibitory substrate(s) or product(s) may also be applicable for phenol degradation. Three alternatives (Table 1) were considered in this study. Table 1. Examples of kinetic models for substrate-inhibition. Source Model Reference Yano et al. Cell Growth and Phenol Degradation in Batch Culture As indicated in Figure 1, in time course experiments investigating cell growth and phenol degradation, C. taiwanesis 187 degraded phenol to very low concentrations. When the phenol substrate was depleted, the bacterial cells gradually grew to stationary phase. It was observed that the lag phase was extended when the initial phenol concentration was higher due to the slower cell adaptation. This indicates that the acute toxicity of phenol inhibited C. taiwanesis 187 at high concentrations [15]. Evaluation of Kinetic Parameters Kinetic parameters were determined using a series of batch cultures at various initial phenol concentrations. The specific growth rates were calculated for different initial phenol concentrations according to the slopes of time-series plots of lnX at the exponential growth phase. However, this approach proved unsatisfactory for estimating μ max , Ks, K I or K, as asymptotes could not be extrapolated accurately at high phenol concentrations. Thus, the validity of the kinetic model could not be confirmed. Inverse polynomials for linear regression technique were therefore applied to the kinetic model ( Table 2) so that each asymptote between the experimental data and model prediction could be evaluated via a curve fitting method in MATLAB 6.5 ( Figure 2; Table 3). As indicated in Figure 3 and Table 3, specific growth rates could be expressed in terms of the effects of different phenol concentration and the best fit to kinetic parameters. These findings indicated that all kinetic models seemed to be suitable to depict the characteristics of phenol biodegradation and accurately characterized phenol biodegradation. 
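As a compact illustration of the fitting step described above, the sketch below fits specific-growth-rate data to the Haldane expression μ = μmax·S/(Ks + S + S²/KI), as defined by the parameters named in the text, using nonlinear least squares (the paper uses MATLAB curve fitting; SciPy is used here for the same purpose). The data points and initial guesses are hypothetical, for illustration only.

```python
# Sketch: fitting the Haldane substrate-inhibition model to specific growth rates.
# Data points and initial guesses are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def haldane(S, mu_max, Ks, KI):
    """Haldane model: mu = mu_max * S / (Ks + S + S**2 / KI)."""
    return mu_max * S / (Ks + S + S**2 / KI)

# Initial phenol concentrations (mg/L) and measured specific growth rates (1/h),
# the latter taken as slopes of ln(X) versus time in the exponential phase.
S  = np.array([50, 100, 150, 200, 300, 400, 500], dtype=float)
mu = np.array([0.21, 0.28, 0.30, 0.29, 0.25, 0.21, 0.17])

p0 = (0.4, 50.0, 300.0)                        # mu_max, Ks, KI starting guesses
(mu_max, Ks, KI), cov = curve_fit(haldane, S, mu, p0=p0)
print(f"mu_max = {mu_max:.3f} 1/h, Ks = {Ks:.1f} mg/L, KI = {KI:.1f} mg/L")

# The Haldane curve peaks at S* = sqrt(Ks * KI).
print(f"growth-maximising phenol concentration ~ {np.sqrt(Ks * KI):.1f} mg/L")
```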
The next question was which kinetic model(s) were mathematically feasible and dynamically viable for characterizing phenol degradation in industrial reactor operations. The Haldane model is frequently cited in the literature due to its mathematical simplicity and wide applicability [2]. The underlying mechanism of the Haldane model can be expressed as follows: where X, S, and P denote C. taiwanensis cell, phenol substrate and degraded product (e.g., 2-HMS), respectively. Verifying the validity of this mechanism requires further biochemistry and cell biology studies. However, stability analysis [11] is still applicable for validating which kinetic model is mathematically viable (Figures 2 and 3) and dynamically appropriate for predicting characteristics during practical operation. Rearranged equation Haldane model max max Table 3. Verifying Feasible Kinetic Models According to Appendix A and Appendix B, the uniqueness condition of steady states (i.e., inequality (A-3d)) in CSTR operation [11] for phenol biodegradation under different kinetic conditions can be formulated as follows: If the inlet phenol concentration is S 0 = 1000 mg/L, the uniqueness conditions of the steady state (SS) can be obtained through root searching for all equations (unit for S: mg L -1 ) as follows: To verify the existence of multiple steady states, our earlier study proposed using the experimental technique of dilution shift-up and shift-down [16] in continuous cultures for phenol degradation. If multiplicity of SSs was observed in only one kinetic model and not in others, this kinetic model would clearly be suitable for characterizing phenol biodegradation. As phenol biodegradation is growth-associated [17], the CSTR mode of operation with dilution shift-up and shift-down was used to determine which kinetic model is more appropriate for analyzing phenol biodegradation. These experimental results will be discussed in follow-up studies. For example, if the multiplicity of steady states in CSTR using the technique of dilution shift-up/down would not take place near the perturbation of steady state phenol concentration of ca. 120 mg/L, Aiba et al.'s model would be the best kinetic model for phenol degradation by C. taiwanensis 187. Culture Conditions For cell activation, an appropriate amount of frozen C. taiwanesis 187 culture was transferred to LB medium agar and incubated overnight at 30 °C. A single colony of C. taiwanesis 187 was then transferred to 3mL LB medium for overnight incubation at 30 °C, 200 rpm. For preculture, 1.0 mL of C. taiwanesis 187 was transferred to 50 mL LB flask cultures (ca. pH 7.0) and then incubated at 30 °C, 200 rpm for ca. 12 h. After preculture, 2 mL of the precultured broth was inoculated into 100 mL mineral salt medium for batch flask cultures under similar culture conditions. During fermentation, samples were taken at designated time intervals to measure cell and phenol concentrations. Analytical Methods Cell concentration was measured within the linear range of absorbance (ca. 0.1-0.7) using a VIS-UV spectrophotometer (Milton Roy Spectronic 601, Ivyland, PA, U.S.) to determine the optical density at 600 nm (OD 600 ). The relationship between dry cell weight (X) and OD 600 of the culture was 1.0 OD 600  0.2853 g DCW/L. The phenol concentration was determined via reverse-phase high performance liquid chromatography (HPLC) (LC-10AT, Shimadzu, Tokyo, Japan) equipped with a Merck C18 column (5 m, Merck, Germany). 
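Returning to the steady-state analysis formulated above: for a CSTR at dilution rate D, non-washout steady states satisfy D = μ(S) with S in (0, S0), so the multiplicity question reduces to counting admissible roots of D − μ(S) = 0. The sketch below performs that root search for Haldane-type kinetics with S0 = 1000 mg/L, the feed concentration used in the text; the kinetic parameters and dilution rate are hypothetical placeholders, not fitted values from this study.

```python
# Sketch: locating CSTR steady states by root searching D - mu(S) = 0.
# Kinetic parameters and dilution rate are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

mu_max, Ks, KI = 0.35, 40.0, 250.0       # 1/h, mg/L, mg/L (hypothetical)
S0 = 1000.0                               # feed phenol concentration, mg/L
D = 0.15                                  # dilution rate, 1/h

def mu(S):
    return mu_max * S / (Ks + S + S**2 / KI)

def residual(S):
    return mu(S) - D

# Scan (0, S0) for sign changes, then refine each bracket with Brent's method.
grid = np.linspace(1e-6, S0, 2001)
vals = residual(grid)
roots = [brentq(residual, grid[i], grid[i + 1])
         for i in range(len(grid) - 1)
         if vals[i] * vals[i + 1] < 0]

print(f"non-washout steady states at S = {[round(r, 1) for r in roots]} mg/L")
print("multiple steady states" if len(roots) > 1 else "unique (or no) non-washout steady state")
```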
The samples were acidified by equivalent volume 2N H 2 SO 4 and then passed through a 0.2 μm filter. Phenol concentration was analyzed under a linear elution gradient using a solution of methanol, acetic acid and water in a ratio of 50:50:1 (%, v/v). An aliquot of 10 μL of the sample was injected and analyzed by HPLC, and the wavelength for phenol was set to 280 nm as described in [2]. Data Analysis The specific growth rate (μ) was determined during the exponential growth phase by plotting a graph between lnX versus time where the slope μ was approximately constant. The initial substrate concentration (S 0 ) was determined at the beginning of exponential growth phase [4]. The kinetic parameters (e.g., μ max , Ks, K I and K) were determined using curve fitting method in MATLAB 6.5. Cell growth and phenol degradation were simulated using software "ode23" of MATLAB function. The state equations of X and S was as follows: In the Haldane model, the equations were reformulated as For the other kinetic models, the above equations were rewritten as Conclusion The growth kinetics of C. taiwanesis 187 during the biodegradation of phenol and the resulting inhibitory effect on cell growth were studied. The duration of the lag phase correlated with phenol concentration. Kinetic parameters of various models were determined using inverse polynomial technique (i.e., Lineaweaver-Burk plot). Time course data were used to compare kinetic models of phenol biodegradation at varying initial phenol concentrations (100-500 mg/L) in batch cultures. We proposed a novel method of identifying kinetics that is most appropriate for characterizing phenol biodegradation in CSTRs. from (a, b). If F(a) = F(b), then there exists at least one number c (a, b) such that F'(c) = 0. That is, if there exists no number c (a, b) such that F'(c) = 0, then the relationship F(a) = F(b) is simply nonexistent (i.e., steady state multiplicity is not present). Consider an isothermal CSTR described by the following equation (A-1) for phenol biodegradation. Since the concept of a balance only requires that the time derivative be zero, any solution to
Interior Point Algorithm for Multi-UAVs Formation Autonomous Reconfiguration Here the problem of designing multi-UAVs formation autonomous reconfiguration is considered. Combined with three kinds of cost functions, nonlinear dynamic equations, and four inequality constraints, one nonlinear multiobjective optimization problem is constructed. After applying weighted sum method and separating all equality or inequality constraints, the former nonlinear multiobjective optimization problem can be converted into a standard nonlinear single objective optimization problem. Then the interior point algorithm is applied to solve it. Further some improvements are proposed to avoid rank deficiency of some matrices. The equivalence property between multiobjective optimization and single objective optimization through weighted summethod is proved. Finally the efficiency of the proposed strategy can be confirmed by the simulation example results. Introduction Multi-UAVs formation is the basis and prerequisite for the UAV mission.When formation task or battlefield environment changes with time, the entire multi-UAVs formation needs to be adjusted.This adjustment is called formation autonomous reconfiguration.In reconfiguration process, each UAV adjusts its own position as a new geometry and plans the flight trajectory from the original location to the new terminal location.Thus the planned flight path ensures each UAV is secure while considering the nonlinear dynamics of UAV, various formations cost function, and a variety of constraints.So multi-UAVs formation autonomous reconfiguration is designed as a mathematical optimization problem in which decision variables are the control input sequences.The cost functions include three kinds of functions such as investigation UAV cost function, interference drone missile costs function, and radar jamming UAV cost function.Moreover the constraints include radar threat constraint, missile threat constraint, artillery positions constraint, and formations anticollision constraint. An overview about the auxiliary role of the multi-UAVs formation trajectory design in a cooperative investigative process is given in [1].In [2] an intelligent multiagents system is introduced and this multiagent system is represented as the communication structure between various UAVs.In [3] descriptions of many various methods of UAV path planning are summarized.In [4] the weights which exist in many intelligent algorithms are adjusted based on the gray system theory.In [5] the methods about how to fuse the multiple sensor information are analyzed to obtain state estimates from the perspective of information fusion.In [6] the particle filter algorithm is applied to track path under the non-Gaussian condition.In [7] one interacting multiple model algorithm based on particle filter is proposed.In [8] the ant colony algorithm from the multiobjective optimization is studied.In [9] one consensus genetic algorithm is used to solve the multiobjective optimization problem.In [10] the use of game theory can transform the original multiobjective optimization problem into a Nash bargaining process.In [11] the communication delay is considered and one information compensated method based on the information filter algorithm is presented. 
Here the interior point algorithm is applied in multi-UAVs formation autonomous reconfiguration to design the autonomous trajectory.In the optimization model, three different cost functions are established and the dynamic equations of each UAV are regarded as equality constraints.After combining radar threats, missile threat, artillery positions threat, and formations anticollision, four kinds of inequality constraints are obtained.Based on these cost functions and equality and inequality constraints, an multiobjective optimization problem with equality and inequality constraints is constructed.The multiobjective optimization problem is solved to obtain the control input sequences which are used to plan trajectory.The main contribution of this paper is to analyze this multiobjective optimization problem by using operations algorithm.For this multiobjective optimization problem, the weighted sum strategy is applied to transform that multiobjective optimization into a single objective optimization and the feasibility of the weighted sum strategy is proved theoretically.In order to rewrite the original optimization problem into a standard form of nonlinear optimization, the kinetic equation for each equation UAV is further expanded and some forms are used to describe the set of state variables and control inputs.For the standard nonlinear optimization form, the major steps of the proposed interior point algorithm are given here.After calculating the optimal input sequences of nonlinear optimization problem by using the interior point algorithm, the idea of nonlinear receding horizon predictive control can be introduced.This idea is to choose the first term in the obtained optimal control input sequence and discard the remaining terms. Multi-UAVs Formation Autonomous Reconfiguration Multi-UAVs formation autonomous reconfiguration means that at initial time instant, multiple UAVs fly in a formation pattern.When the battlefield environment changes with time, the UAV swarm adjusts each UAV's former formation pattern and chooses one new flying pattern independently. In formation reconstruction processes, each UAV's position in the new formation mode requires to be redesigned in order to produce a new flight path.During the process of designing new flight path, the dynamics of each UAV, flight cost function, and constraints are considered.Multi-UAVs formation autonomous reconfiguration process is shown in Figure 1, where four different formation modes are described.It means that the groups of multiple UAVs sequentially select their own new formation modes according to the different surrounding environment.Assume there are UAVs flying in formation, and the length of time is .Since each UAV is decoupled with each other, this assumption means dynamic coupling phenomenon does not exist.Set state vector of the UAV as follows: The control input sequence is that The time invariant discrete time state space equation is given as The main contribution of this paper is to study how to transform the multiobjective optimization problem into a standard nonlinear optimization problem, and the classical interior point algorithm is applied to analyze the equivalence between the multiobjective optimization and single objective optimization by using weighted sum strategy. 
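The state vector, control-input sequence, and discrete-time state-space equation referred to above do not survive in this extraction, so the sketch below uses a generic stand-in: a planar double-integrator x_{k+1} = A x_k + B u_k with state (position, velocity) and a stacked input sequence over the horizon. It illustrates the kind of time-invariant discrete-time model the formulation assumes, not the specific UAV dynamics used in the paper.

```python
# Generic stand-in for a time-invariant discrete-time UAV model x_{k+1} = A x_k + B u_k.
# The state is (px, py, vx, vy); the input is a planar acceleration command.
import numpy as np

dt = 1.0                                   # sampling period (illustrative)
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.block([[0.5 * dt**2 * np.eye(2)],
              [dt * np.eye(2)]])

def rollout(x0, U):
    """Propagate the state over the horizon for a stacked input sequence U (N x 2)."""
    traj = [np.asarray(x0, dtype=float)]
    for u in U:
        traj.append(A @ traj[-1] + B @ u)
    return np.array(traj)

x0 = np.array([0.0, 0.0, 0.0, 0.0])        # start at the origin, at rest
U = np.tile([1.0, 1.0], (10, 1))           # constant acceleration command, N = 10 steps
print(rollout(x0, U)[-1])                  # terminal state after the horizon
```

In a receding-horizon scheme of the kind described above, only the first element of the optimized sequence U would be applied before re-solving at the next step.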
When UAV formation leaps across the battlefield environment, there are many methods to define the cost function.For example, here define the position of the virtual lead aircraft at time as The position of UAV V whose role is to carry out the reconnaissance mission is denoted as The cost function corresponding to UAV V during the whole reconnaissance mission is defined as In (6), represents a positive definite weighting matrix; the second term is used to normalize the original optimization problem and ensure that the optimal solution will not depart the true solution.Similarly two other cost functions coming from the missile and radar interference are given: In (7), the cost function 2 ( , ) is constructed by the distance between the current position 1 and ideal position 1 of the missile interference that interferes the UAV. 1 is the ideal position of the missile interference.In (8), the cost function 3 ( , ) is constructed by the distance between the current position and ideal position 2 of the radar interference that interferes the UAV.This cost function 3 ( , ) can achieve the maximum protection under formation flight path. In total there are UAVs lying in the formation geometry.Combining ( 6), (7), and (8), we consider the following cost function: In ( 9), the total number of elements in the minimization operation is 3.The minimum solution which guarantees all 3 elements can achieve their own minimization simultaneously does not exist.Fortunately one compromise solution can be found and this compromise solution is called the efficient solution in multiobjective optimization theory. Under not any constraints, the optimization problem ( 9) can be solved by the weighted sum strategy.But in multi-UAVs formation autonomous reconfiguration problem, four kinds of inequality constraints are considered; the interior point algorithm can only be used to get an efficient solution. All kinds of constraints include the radar threat constraint, missile threat constraint, artillery positions threat, and formations anticollision constraint.Their respective inequalities are defined as follows in turn. Setting the position and detection radius of the radar and (), respectively, then the radar threat constraint corresponding to the UAV V is defined as follows: Setting the position of missile as , the safety distance and safety angle cosine after disturbing missile are and (), respectively.Then missile threat constraint corresponding to the UAV V is defined as follows: Assuming that the radiation radius of artillery positions is not influenced by external disturbance, we set this radiation radius as a constant.The position and radiation radius of artillery are defined as and (), respectively; then artillery positions threat corresponding to the UAV V is defined as follows: The minimum safe distance among UAV formation is denoted by min , and then formations anticollision constraint corresponding to the UAV V is defined as follows: Unifying the above multiobjective function (9), nonlinear dynamical equations (3), and four inequality constraints ( 10)-( 13), multi-UAVs formation autonomous reconfiguration problem can be formulated as a nonlinear multiobjective optimization problem: Combining all the inequality constraints in (14) and vectoring them, we get where the state variable and control input vector are given, respectively, as follows: Similarly combining all the nonlinear dynamics equations of each UAV, we get . . . 
Using the vector form (15) Before solving (18) by the interior point algorithm, we transform the multiobjective optimization problem (18) into a nonlinear optimization problem with equality and inequality constraints. Standardized Model In (18), the cost function is a multiobjective vector: Applying the weighting sum strategy and transforming to one single objective optimization problem, In (20), vector { 1 , 2 , 3 } =1 indicates positive weighted scalar values and the following conditions are satisfied: The nonlinear dynamic equations of UAV (18) can be rewritten as follows: In ( 22), the first equation represents the initial state.For the set of feasible state Ξ and control input constraint set Θ , define the following constraints: The two above functions ( , ) and ( , ) do not depend on two variables and simultaneously. Combining all the equations and inequalities together and using (20), we obtain a standard nonlinear optimization problem: In order to solve the standard nonlinear optimization problem (20), the interior point algorithm is proposed. Interior Point Algorithm The purpose of the interior point algorithm is to generate an iterative sequence ; here the superscript symbol is different with control input () at time instant .This generated sequence will be included in the control input set. In the iteration process of each generating sequence, each element of the inequality constraint () is considered.After introducing the slack variable , those inequality constraints can be converted into equality constraints [12].The standard nonlinear optimization problem (24) can be rewritten as min () In (25), the slack variable is chosen as a vector with appropriate dimension, and each of its elements is nonnegative.Then we construct one Lagrangian function corresponding to (25) as follows: (, , V, ) = () − V () − ( () + ) . According to the necessary condition from the optimality theory, the equation ()+ = 0 holds at the minimum value. Applying the optimality Karush-Kuhn-Tucker sufficient and necessary condition [13], we obtain From ( 27), we have that . 
In (27), the following two matrices are defined. In (27), the perturbation parameter is introduced to guarantee that, when it is chosen positive, the iteration sequence stays away from the boundary of the control input set; the choice of the perturbation parameter satisfies the stated condition. In order to improve the behaviour of the optimality conditions (27), we use the interior point algorithm to solve (25) and introduce a barrier function to eliminate the nonnegativity condition on the slack variable, giving (31). The barrier function, the negative sum of the logarithms of the slack components, is added to prevent each element of the slack variable from approaching zero too closely. Applying the generalized KKT necessary and sufficient optimality condition [14] to the barrier problem (31), we obtain (32). Using Newton increment steps on (32), we obtain the system equation (33). After calculating the increments, the new iterates are computed by the recursion (35), where the two maximum step lengths in (36) involve a parameter in (0, 1), taken here as 0.995.

To ensure that the second block matrix in (33) has full row rank and that the matrix is not singular, we rewrite (33) in an equivalent form. Since the normed residual function decreases along the iterative process, the incremental vector of the primal, slack, and dual variables is a descent direction, so it is desirable to choose a matrix that is positive definite on the null space of the associated constraint matrix. From its construction, the matrix Σ is positive definite, but the Hessian of the Lagrangian may fail to be positive definite. To compensate for this defect, we replace the Hessian of the Lagrangian by the Hessian plus a correction term whose positive scalar is chosen sufficiently large to ensure positive definiteness. Additionally, the interior point algorithm must cope with possible rank deficiency of the constraint gradient matrix, so we make the following modifications to the primal-dual matrices; a normalization parameter greater than zero is added in the first matrix (40). Since the iterative expressions (35) do not terminate on their own within a finite time, an error criterion function is applied to decide when to stop the entire iterative algorithm.

With this error criterion function as the stopping rule, the basic steps of the interior point algorithm are summarized as follows.

Step 1. Assume a pair of initial values is given, and set the iteration counter to zero.

Step 2. Calculate the initial Lagrange multipliers and define the algorithm parameters.

Step 3. Check the error criterion function against a very small positive tolerance; once the error no longer exceeds the tolerance, stop the iterative algorithm; the optimization variables at that point can be regarded as the optimal solution of the nonlinear optimization problem.

Step 5. Use (36) to determine the two maximum step lengths.

Step 6. Use the iterative expression (35) to compute the new iterate.

In the above steps we make some modifications to compensate for the drawbacks of the usual interior point algorithm. These modifications guarantee that the optimal solution of the original nonlinear optimization problem is obtained [15].
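The algorithm above is a primal-dual interior point method with Newton steps on the perturbed KKT system, fraction-to-boundary step lengths, and regularized matrices. As a much simpler companion, the sketch below shows only a minimal log-barrier variant on a toy keep-out problem, to illustrate the roles of the barrier term and of the decreasing perturbation parameter; the problem data, function names, and parameter values are assumptions for illustration, not the paper's.

```python
import numpy as np

def barrier_minimize(f, grad_f, g, jac_g, x0, mu0=1.0, shrink=0.2,
                     mu_min=1e-8, tol=1e-6, max_inner=200):
    """Minimal log-barrier sketch for  min f(x)  s.t.  g_i(x) >= 0.
    The barrier weight mu plays the role of the perturbation parameter:
    it keeps iterates away from the constraint boundary and is driven to zero."""
    x = np.asarray(x0, dtype=float)
    mu = mu0
    while mu > mu_min:
        for _ in range(max_inner):
            gx = g(x)
            # gradient of the barrier objective f(x) - mu * sum(log g_i(x))
            grad = grad_f(x) - mu * jac_g(x).T @ (1.0 / gx)
            if np.linalg.norm(grad) < tol:
                break
            # backtracking step that keeps the iterate strictly feasible
            step, phi = 1.0, f(x) - mu * np.sum(np.log(gx))
            while step > 1e-12:
                x_try = x - step * grad
                g_try = g(x_try)
                if np.all(g_try > 0) and f(x_try) - mu * np.sum(np.log(g_try)) < phi:
                    x = x_try
                    break
                step *= 0.5
            else:
                break
        mu *= shrink   # homotopy: shrink the perturbation toward zero
    return x

# Toy problem: reach the goal (3.0, 3.5) while staying outside a threat circle
# of radius 1 centred at (3, 3); the feasible point closest to the goal is (3, 4).
goal, centre, r = np.array([3.0, 3.5]), np.array([3.0, 3.0]), 1.0
f = lambda x: float(np.sum((x - goal) ** 2))
grad_f = lambda x: 2.0 * (x - goal)
g = lambda x: np.array([np.sum((x - centre) ** 2) - r ** 2])
jac_g = lambda x: 2.0 * (x - centre)[None, :]
print(barrier_minimize(f, grad_f, g, jac_g, x0=np.array([0.0, 0.0])))  # should land near (3, 4)
```

Being a plain first-order scheme, this is far cruder than the modified primal-dual method described above, but the decreasing mu mirrors the role of the perturbation parameter.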
Equivalence between the Two Optimization Problems

When solving the multiobjective optimization problem, we first apply the weighted sum strategy to convert it into a single objective optimization problem. Under the conditions on the positive weighting scalars in (21), the equivalence between these two optimization problems holds. As there are 3 elements in (19), we rewrite (19) as (44), and similarly rewrite (20) as (45). Using (44) and (45), the conditions for the equivalence between (44) and (45) can be summarized in the following proposition.

Proof. Let û be an optimal solution of (45), so that it satisfies the optimality inequality with respect to every admissible solution. Applying the positive weights componentwise, we obtain the corresponding weighted inequalities for each of the 3 components. Choose a sufficiently large positive number so that the negation concerning the optimal solution û can be tested; this positive number is chosen as in (47). We then argue by contradiction. Suppose û is not an efficient solution of (44); then there exist an index among the 3 components and an admissible solution such that the strict inequality (48) holds, while for all the other components the weak inequalities (49) hold. Continuing the derivation yields (50). Multiplying both sides of (50) by the chosen positive factor divided by (3 − 1) and summing, we obtain (51). The last inequality in (51) contradicts the assumption that û is an optimal solution of (45). Hence û is an efficient solution of (44).

Numerical Example

To verify the interior point algorithm for multi-UAV formation autonomous reconfiguration, the formation includes three UAVs: one radar-jamming UAV, one missile-jamming UAV, and one investigation UAV. The initial positions of the three UAVs are all located at the starting point (0, 0), and the terminal positions are concentrated at the coordinates (700, 700). The vector of the maximum flight speed, the minimum flight speed, and the speed deviation is prescribed. The surrounding battlefield environment contains a radar threat, a missile threat, and an antiaircraft (artillery) positions threat. The deployment coordinates of the radar threat are (300, 300), and those of the missile threat are (250, 200). The region of the artillery positions threat is a rectangle with a height of 300 and a width of 300; this rectangular region is a no-fly zone.

Applying the interior point algorithm to the multi-UAV formation autonomous reconfiguration, the weighting factor for each UAV and the weighting matrix in the cost function are prescribed, and the discrete-time sampling period is fixed. The length of time is 500 s, the number of UAVs is 3, and the initial value for the interior point algorithm is chosen as (0.01, 0.01, 0.01, 0.01). The perturbation parameter is 0.05, the small positive number is 0.01, the normalization parameter is 0.5, and the scalar is 1.5.

The simulation trajectory is shown in Figure 2. The coordinates of the threats in the surrounding battlefield environment are set to (296, 346) and (229, 173), and the effect radius of each threat is 100 m. The threat level at the centre of a threat is assumed to be infinite. To improve the flight performance, the density of sampling points in the vicinity of a threat can be made higher than in areas with no threat. In Figure 2, one marker denotes the global optimal navigation point and another the position of the optimal navigation point, and the two annular regions represent the ranges of the threats.
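Since the proposition and proof above are stated only in outline, the following LaTeX fragment restates the standard sufficiency argument for weighted-sum scalarization in generic notation, with J_i as stand-ins for the cost components of (44) and w_i for the positive weights of (45); it is a sketch of the usual argument, not a reconstruction of the paper's exact wording.

```latex
% Weighted-sum sufficiency: a minimizer of the weighted sum with strictly
% positive weights is an efficient (Pareto) solution of the vector problem.
\begin{aligned}
&\text{Let } \hat{u} \in \arg\min_{u \in U} \sum_{i} w_i J_i(u), \qquad w_i > 0 \ \ \forall i. \\
&\text{Suppose } \hat{u} \text{ were not efficient: then there exists } u \in U \text{ with } \\
&\qquad J_i(u) \le J_i(\hat{u}) \ \ \forall i, \qquad J_k(u) < J_k(\hat{u}) \ \text{for some } k. \\
&\text{Multiplying by the positive weights and summing,} \\
&\qquad \sum_i w_i J_i(u) < \sum_i w_i J_i(\hat{u}), \\
&\text{which contradicts the optimality of } \hat{u}. \text{ Hence } \hat{u} \text{ is efficient.}
\end{aligned}
```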
From Figure 2, the three-UAV formation experiences a total of three formation autonomous reconfiguration processes. The first autonomous reconfiguration occurs at the coordinates (200, 100) and the second at (280, 280). After these formation autonomous reconfigurations, the three UAVs can bypass the three threats and fly from the initial position to the terminal position.

Figure 3 shows the iterative convergence curves of the cost functions of the three UAVs. Here each cost function is written in the form of (20) with the weighting scalar value equal to 1/9. Figure 3 shows that, as the interior point algorithm runs and the optimal control input solution is substituted into the corresponding weighted sum cost function, the cost function approaches zero as the number of iteration steps increases.

Conclusion

After establishing a nonlinear multiobjective optimization problem for multi-UAV formation autonomous reconfiguration, we use the weighted sum strategy and combine all the equations and inequalities to derive a standard single objective nonlinear optimization problem. Furthermore, the interior point algorithm is proposed to solve this optimization problem, and some improvements are made to guarantee the optimal solution of the original nonlinear optimization problem. The asymptotic properties and a sensitivity analysis are not studied in this paper; these two topics are left as future research subjects.

Figure 3: Convergence curves of each UAV cost function.

In (3), the nonlinear mapping combines the state vector at a given time instant with the control input vector to predict the new state variable at the next time instant; Ξ and Θ are the feasible state set and the control input constraint set of the UAV.
4,335.8
2016-08-01T00:00:00.000
[ "Mathematics" ]
EFFECT OF GALLERIES ON THE WIND FLOW STRUCTURE AND POLLUTANT TRANSPORT WITHIN STREET CANYONS WITH OR WITHOUT FACADE ROUGHNESS ELEMENTS (BALCONIES)

© 2020 The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

INTRODUCTION

Air pollution in an urban canopy presents an important environmental problem, and the study of pollutant dispersion in cities is not an easy matter. The interaction between the overhead atmospheric flow and urban obstacles, such as buildings, generates a complex flow inside the street canyons that affects the pollutant dispersion. Air pollution measured as particulate matter (PM) may lead to increased mortality rates (Dockery et al., 1993; Hales et al., 2010). Fine particulate matter, i.e. particles with a diameter smaller than 2.5 μm (PM2.5), has been estimated to cause about 3.3 million premature deaths per year worldwide (Lelieveld et al., 2015).

Façade roughness elements such as balconies can significantly affect the near-façade airflow patterns (Chand et al., 1998). A better understanding of the impact of façade elements on the near-façade airflow patterns and the pressure distributions on the façade is essential for the accurate evaluation of wind-induced natural ventilation (Gullbrekken et al., 2018; Ramponi et al., 2014), pollutant dispersion (Karkoulias et al., 2019) and convective heat transfer coefficient expressions for isolated buildings (Montazeri and Blocken, 2018). Wind tunnel experiments and computational fluid dynamics (CFD) simulations have been employed to investigate the impact of building façade geometrical details on the near-façade airflow. The mean velocity and the air change rate per hour (ACH) in a 5-story building configuration (15 m building height) with balconies, using steady RANS as the turbulence modelling approach, were studied by Ai. The older studies were limited to a single aspect ratio, for which they studied the mean velocity, the mean surface pressure and the distribution of carbon monoxide as a gaseous pollutant.

Our previous study (Karkoulias et al., 2019) presented results on the vertical distribution profile of aerosol traffic-generated particulate matter with respect to the different floor heights, in typical multistorey buildings forming deep street canyons (AR > 2). It demonstrated the following: a) Inside the canyon three vortices were formed. One of them dominated the cross section and rotated in a counterclockwise direction. The smaller vortices rotated in the opposite direction and were forced near the extremities of the canyon. The smaller second vortex appeared inside the canyon top, while the slightly larger third appeared just above the street, both their centers lying closer to the leeward side of the canyon. Additional (much smaller) ones were observed inside the balcony cavities. b) The vortices (e.g. the main vortex) appeared to possess an inner core rotating as a "forced vortex" (i.e. a solid body rotating air mass). On the leeward side they appeared to form "Rankine vortices", while on the windward side a shear layer zone was formed between the vortex and the façade.
c) The central vortex rotated in the counter-clockwise direction with an angular velocity of ω1=+0,324 rad/s, the vortex near the leeward building and street corner rotated with an angular velocity of ω2=-0,094 rad/s. The smaller vortex near the leeward top corner rotated at a rate of ω3=-0,072 rad/s. d) The actual pollutant transport within a canyon cavity started from a dead zone near the street level, followed by an exponential reduction due to the flow diffusion imposed by the vortex structures. This was followed by a smaller dead zone near the roof while the final "wash out" was driven by the shear layer formed between the cavity flow and the outer wind. In general, the balconies on the leeward building façade were the worst cases in the pollutant concentration levels, especially in the lower parts of the canyon. It was concluded that the balconies create vortices which trapped the air pollutants at lower heights. The multi-balcony configurations modified significantly the flow field and the relevant pollutant transport mechanisms. The present study focused on a computational investigation of the effect of the galleries on the flow field structure and the vertical distribution of the concentration both near the building facades and on the centerline of the canyon. Numerical results are presented for various gallery and balcony combinations for an aspect ratio of h/w=2.33. They illustrate the formation of the vortices (i.e. number, nature, rotation rate) inside the canyon and the vertical concentration profiles of the aerosol particles which attributed to the complex wind flow structure and the physical layout of the street canyon. Finally, it is demonstrated that the presence of the galleries did not help the pollutant to escape from the canyon due to the lower dispersion of the pollutant and its accumulation inside the cavity. DOMAIN MODELING, GRID CHARACTERISTICS AND BOUNDARY CONDITIONS In the present study the street level galleries were added into the reference geometry (a narrow city canyon without balconies and an aspect ratio 2.33, as discussed in our recent paper (Karkoulias, et. al., 2019) to investigate how they affect the flow field inside the cavity and the distribution of the concentration on the building facades and the centreline of the canyon. The investigation was extended to other similar configurations by studying the combination between galleries and balconies. The results are presented in section 3 of this paper. The canyon geometries employed in the present study are described in Table 1. The street canyon configuration named "Geometry A" with balconies upon the leeward façade and galleries on both facades is illustrated in Figure 1 The undisturbed flow inside the computational domain, the distribution of the vehicular exhaust, the local source strength and the corresponding average PM10 emission rate, were defined in our previous paper (Karkoulias et al., 2019). The geometric characteristics of the canyon and the facades elements (e.g. the geometry of the balconies, the geometry of the source and the characteristics of the traffic) were described in the same paper. The galleries penetrated 2.5m into the interior of the building and they were 4.5m in height ( fig.1b). The prevailing wind direction was perpendicular to the long street canyon axis. As a result, the 3-D spatial domain was simplified into a two dimensional (2-D) one, while the wind speed above the roof-top level was set to be equal to 1.5 m/s (Fig. 1b). 
The building walls, the roofs and the street pavements were defined as "wall boundaries" (zero velocity and impermeability conditions), while the top boundary of the computational domain opposite the street was assigned as a symmetrical one. The closure of the dynamic flow equations employed the steady state κ-ε RNG (Re-Normalized Group theory) method (Kim and Baik, 2004). In order to achieve greater accuracy in a shorter computational time, the flow domain was divided into 2 regions, each with its own grid. The street canyon (height = 28 m, width = 12 m) incorporated a finer structured Quad (Cartesian) grid of uniform spacing, whereas the remaining domain (above the canyon) incorporated a coarser structured mesh of tetrahedral elements (Fig. 1c). The mesh of the computational domain had 182098 cells, 365114 faces and 183025 nodes.

MODEL VALIDATION

The validity of the numerical simulation employed in the present study is discussed in our previous publication (Karkoulias et al., 2019). It was assumed that the validation holds for the modified geometry as well. The Fluent RANS code for complex architectural geometries was evaluated against wind tunnel experiments and field measurements in our previous paper. Comparisons demonstrate that the RANS results were overall in good agreement with the wind tunnel experiments and with the field measurements and could be used as a basis for understanding the detailed flow dynamics.

THE EFFECT OF THE GALLERIES IN THE WIND FLOW STRUCTURE

The four cavity geometries (Table 1) studied in our previous publication (Karkoulias et al., 2019), which highlighted the influence of the balconies on the flow field, are shown in Figure 2.

Table 1. Street canyon geometries:
Reference: the narrow cavity without balconies, with aspect ratio h/w = 2.33.
Geometry A: the Reference case with balconies on the leeward building façade.
Geometry B: the Reference case with balconies on the windward building façade.
Geometry C: the Reference case with balconies on both building facades.

These same cavity geometries were employed in the present study to investigate the effect of galleries, in combination with the balconies, on the flow field and on the distribution of the concentration on both building facades and along the canyon cavity centerline.

Figure 3: Flow field streamlines inside the canyon cavity with galleries and aspect ratio 2.33, (a) without balconies, (b) balconies on the leeward side, (c) balconies on the windward side, (d) balconies on both sides.

The climate of street canyons is primarily controlled by the micro-meteorological effects of the urban geometry rather than by the mesoscale forces controlling the climate of the boundary layer (Hunter et al., 1992). The outer wind and the flow structures inside the canyon cavity do not correlate in a simple manner. The analysis of the pollutant transfer requires detailed knowledge of both the outer wind and the inner cavity flows. The number of vortices, the position of their centres and their angular velocity ω (to be discussed below) for the vortices inside the cavities without and with galleries are listed in Table 2. Table 2 (partial): ωupper = −0.20 rad/s; ωmiddle = +0.34 rad/s; ωlower = −0.06 rad/s. The x coordinate was measured from the leeward side, while the y coordinate was measured from the street level. As it will become apparent, the presence of the galleries modifies the vortex structures and the associated particulate mass convection mechanism quite drastically.
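The angular velocities listed in Table 2 follow from velocity profiles of the kind discussed below (Figs. 4 and 5): inside a forced-vortex core the vertical velocity grows linearly with distance from the vortex centre, so ω is the slope of that linear portion, and the rotation periods and degrees-per-second rates quoted in the text follow directly from ω. A minimal sketch, with made-up sample points standing in for values read off such a profile:

```python
import numpy as np

# Inside a forced-vortex core uy ~ omega * r, so the angular velocity is the
# slope of a linear fit. The sample points below are purely illustrative, not
# values taken from Figs. 4-5.
r  = np.array([-0.8, -0.4, 0.0, 0.4, 0.8])        # m, distance from the vortex centre
uy = np.array([-0.27, -0.13, 0.0, 0.13, 0.26])    # m/s, vertical velocity component

omega = np.polyfit(r, uy, 1)[0]                   # rad/s, slope of the core profile
period = 2.0 * np.pi / abs(omega)                 # s, rotation period
rate_deg = np.degrees(abs(omega))                 # deg/s, as quoted in the text

print(f"omega = {omega:+.2f} rad/s, period = {period:.1f} s, rate = {rate_deg:.1f} deg/s")
```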
The nature of the vortices may be studied as follows.

THE REFERENCE GEOMETRY WITH GALLERIES ON BOTH SIDES (FIG. 3A)

The addition of galleries to the reference geometry formed three vortices (fig. 3a). Fig. 4 illustrates the variation of the magnitude of the vertical component of the flow velocity along the horizontal line that passes through the centres of the vortices. This figure implies the following. The main vortices appeared to have an inner core rotating as a "forced vortex", i.e. the vertical velocity component (uy) increased linearly with the distance (r) from the centre; in other words, the vortex contained a nearly solid mass that rotated with a nearly constant angular velocity ω. The main middle clockwise rotating vortex extended from the level of the road up to the roof of the canyon and occupied most of its cross section. This vortex rotated with an angular velocity of ωmain = −0.75 rad/s, which implies a period of rotation of the order of 8.37 s. Fig. 4 demonstrates that near the leeward side "Rankine" vortices were formed, while near the windward side a shear layer zone was formed between the vortex and the façade. The small counterclockwise rotating vortices inside the galleries rotated with angular velocities of ωleeward gallery = +0.13 rad/s and ωwindward gallery = +0.31 rad/s. The leeward gallery vortex had a period of rotation of the order of 48.3 s, while the windward gallery vortex had a period of the order of 20.2 s, i.e. both rotated at a much lower pace than the main one.

Figure 4: The y-velocity along the horizontal lines passing through the centres of the main middle vortex and of the leeward and windward gallery vortices, located at 17.72 m, 1.45 m, and 2.08 m above the street level, respectively.

THE GEOMETRY A WITH GALLERIES ON BOTH SIDES (FIG. 3B)

In Geometry A with galleries, five vortices were formed (fig. 3b): three main vortices (the upper, the middle and the lower vortex) and two secondary vortices (the leeward and the windward gallery vortex) were formed inside the cavity. Fig. 5 illustrates the profile of uy along the horizontal line that passes through the centres of the vortices. The results indicated that the lower main vortex rotated in the clockwise direction with an angular velocity of ωlower = −0.23 rad/s, which implies a period of rotation of the order of 27.30 s; in other words, the centre of the vortex rotated at a constant rate of about 13.2 degrees per second. Fig. 5 demonstrated that on the leeward side the vortex structures formed "Rankine vortices" (i.e. each appeared to be composed of an inner solid-body rotating air mass that diffuses at its outer limits). On the other hand, near the windward side a shear layer zone was formed between the vortex and the façade. The angular momentum of the rotating air masses dissipates within the shear zones, leading to a weakening of the linear vortices inside the canyon. Eventually the canyon vortices lose coherence as the core radius increases and the vortices expand to fill the entire region (Kingdon, 2008). The inner part of the flow corresponds to a rigidly rotating core, while the outer region becomes a free vortex. This prevents the velocity from becoming infinite at the centre of rotation. At the same time, for a radial distance r > R (where R is the core radius), the model reverts to a free vortex, which allows the velocity to decay at large distances. A Rankine vortex constitutes an amalgamation of the forced and free vortex profiles (Katopodes, 2019).
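The Rankine vortex picture described above amounts to a simple piecewise velocity profile: solid-body rotation inside the core radius and a free vortex outside it. A minimal sketch follows; the core radius in the example is an assumption, and only the 0.75 rad/s magnitude comes from the text.

```python
import numpy as np

def rankine_tangential_velocity(r, omega, R):
    """Tangential speed of a Rankine vortex: solid-body rotation (v = omega*r)
    inside the core radius R, free vortex (v = omega*R**2/r) outside, so the
    speed stays finite at the centre and decays at large distances."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, omega * r, omega * R**2 / np.where(r > 0, r, np.inf))

# Example: an assumed core radius of 2 m rotating at 0.75 rad/s.
print(rankine_tangential_velocity([0.0, 1.0, 2.0, 4.0], omega=0.75, R=2.0))
# -> [0.   0.75 1.5  0.75]
```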
The results indicated that something similar occurs for the middle main vortex, which rotated in the counter-clockwise direction with an angular velocity of ωmiddle = +0.14 rad/s (i.e. a period of rotation equal to 44.85 s). The centre of this vortex rotated at a constant rate of about 8.04 degrees per second. The upper main vortex rotated in the clockwise direction with an angular velocity of ωupper = −0.76 rad/s, its centre rotating at a constant rate of about 43.6 degrees per second. Apparently, the uppermost vortex rotated very fast (compared with the other two), and this implies the presence of very strong shear phenomena all around its periphery. This most intensive upper main vortex transported the pollutants from the middle part of the canyon to the higher levels. The counter-clockwise vortices inside the galleries rotated with angular velocities of ωleeward gallery = +0.13 rad/s and ωwindward gallery = +0.38 rad/s. The centre of the leeward gallery vortex rotated at a constant rate of 7.46 degrees per second, so it rotated at a rate similar to that of the lower main vortex. The centre of the windward gallery vortex rotated at a rate of 21.8 degrees per second, which is much faster than the corresponding value for the lower vortex.

From Table 2 it is obvious that the galleries affect the flow structure (i.e. the number, position, and size of the vortices) inside the cavity. These changes are discussed below.

Inside the reference geometry with galleries, the number of vortices and their positions changed (three vortices versus two). Inside the cavity there was only one main vortex, which occupied the cross section of the cavity. Its angular velocity was about six times higher than that of the leeward gallery vortex and twice that of the windward gallery vortex. In the corresponding geometry without galleries, the lower vortex had a higher angular velocity than the upper one, which means that it had the potential to transfer mass to higher levels. Hence, it may be deduced that the lower vortices have a reduced potential to convect mass towards the main vortex.

In Geometry A with galleries, five vortices appeared (i.e. three inside the main section of the cavity and two in the galleries), against the three inside the respective geometry without galleries. The presence of the galleries did not influence the flow structure of the three main vortices significantly, but only changed their size and angular velocities. The angular velocities of the upper and lower vortices were higher than that of the middle one; in the configuration without galleries the opposite had been observed.

The galleries in Geometry B modified the number of vortices (three versus two), creating a vortex inside the leeward building gallery and shifting the centre of the lower vortex downwards and closer to the leeward façade. The angular velocity of the lower vortex was somewhat greater than that of the upper one, whereas in the case without galleries the lower vortex had half the angular velocity of the upper one.

The galleries in Geometry C also modified the flow field structure (i.e. three vortices versus five). The lower vortex extended from the leeward gallery to the entrance of the windward gallery, while the two middle vortices were replaced by a larger one. The angular velocity of the middle vortex was higher than those of the upper and the lower vortex; in particular, the angular velocity of the lower vortex was about five times smaller than that of the middle one.
In the corresponding geometry without galleries the upper and the lower vortex had the same angular velocity which was much higher than the angular velocity of the middle ones. INVESTIGATION OF THE AIR POLLUTANT TRANSPORT MECHANISM AND COMPARISON BETWEEN THE GEOMETRY A WITH AND WITHOUT GALLERIES A further discussion of Geometry A with galleries is given below because the corresponding geometry without galleries had been studied numerically and experimentally in our previous work. Also, the addition of the galleries did not significantly change the structure of the cavity flow. That is, the number of the main vortices in the cavity (except the vortices created in the galleries) was the same. Also, the centers of the three vortices as shown in Table 2 are approximately in the same position. Therefore, the main effect comes from the relative sizes of the vortices. In the Geometry A with galleries ( fig.3b) the results indicated that the middle vortex became smaller in size while the size of the other two was increased. The lower vortex transported the pollutant to the leeward side from the street level up to the 8m level. The middle vortex transported the pollutant to the windward side from 8m level up to the 25m level and the upper one transported the pollutant out of the canyon. In general, in the geometry A with galleries the mechanism of the vortices makes the ventilation of the canyon difficult. The effect of the galleries was immediately apparent by comparing the angular velocities of the vortices presented in the above paragraph. Therefore, the vortex in the gallery of the leeward building had a lower angular velocity than the lower main vortex. Thus, the vortex had a little potential and the pollutant could not be transported out of the gallery. The result was the accumulation of the pollutants in this area. Also, the vortex in the gallery of the windward building had a higher angular velocity than the lower main vortex. The pollutant was therefore transported from the windward gallery to the bottom of the cavity. The lower main vortex was more intensive than the middle one and transported the pollutant to the leeward side and to the upper levels. Finally, the pollutant was concentrated on the windward side because the middle vortex had lower angular velocity than the upper one. So, the middle vortex did not have the potential to transport the pollutant to the upper vortex. Due to the high intensity of the upper vortex the concentration near the top of the cavity decreased because the pollutant escaped from this area. The leeward gallery vortex generated a maximum in the pollutant distribution of the order of 15.74μgr/m 3 near the entrance of the gallery. The minimum mass concentration near the leeward wall was 15.66μgr/m 3 . Inside the opposite gallery (i.e. the windward gallery), the vortex generated a maximum mass concentration near the windward wall of the order of 12.53μgr/m 3 and a minimum near the entrance of the gallery of the order of 12.5μgr/m 3 . These data imply that pollutant accumulated inside both galleries. In other words, the galleries do not help the pollutants to escape. On the contrary, the mass concentration is increased within the bottom of the canyon. Table 3 records the percentage reduction of the vortex extreme concentrations from vortex to vortex, resulted from a comparison of the data presented in Figure 8 for Geometry A with galleries and Geometry A studied in our previous paper (Karkoulias et. al., 2019). (fig. 9a). 
The concentration exhibited to remain constant up to the height of 7m above the street level and then decreased up to the height of 18m. The uy profile justifies this reduction. From the height of 18 m up to the 23 m, the concentration remained almost constant while the uy was decrease. Then the concentration was followed by a reduction while in the last meter, a sharp decrease was observed due to the upward flow and the external flow. The PM10 Mass Concentration vertical profile near the leeward building façade ( fig. 9b) was interpreted as follows: The increase of the uy ( fig. 9a) along the positive direction led to the reduction of the concentration up to 3m. The distribution of the concentration up to 6 m was almost constant while then the increase in velocity along the negative direction led to a further decrease in concentration up to 12 m. From there the concentration did not change due to the uy which was constant. The concentration continued to remain almost constant with a slight decrease up to 27 m. A sharp decrease in the concentration was observed over the uppermost meter due to the upward flow -external flow interaction. In review, it is concluded that the air pollutant transport-dispersion mechanism consisted of three steps: 1) A dead zone was generated at the bottom of the canyon and the concentration of the pollutant in this region was increased by the presence of the galleries. This presence affected the flow field in such a way that made more difficult the upward movement of the pollutants to higher levels. 2) The enhanced mixing in the middle of the canyon due to the rotation of the vortices constituted the second step. The solid mass rotation of the vortices generated very strong shearing among their outer boundaries that not only diffused pollutant mass from vortex to vortex but in addition convected the pollutants forcing them to attach on the buildings facades. 3) A shear layer zone was created at the top of the canyon between the roof top level of the canyon and the upper wind external flow. THE VERTICAL DISTRIBUTION OF THE CONCENTRATION ON BOTH BUILDING FACADES AND THE CENTERLINE OF THE CANYON In the Reference geometry with galleries the middle-height vortex initially transported the pollutant towards the leeward facade, to be followed by a similar convection on the windward facade. The concentration of the PM10 near the latter remained almost constant from the ground level up to the height of about 24 meters. Above this height, the concentration decreased sharply up to the 28 meters, because the pollutants diffused to the outer atmosphere, as the vortex interacted with the outer free wind flow through the free shear layer. In the leeward building façade, the concentration decreased from the street level up to the 3 m one, remaining constant above it up to the height of 15 m. Finally, it increased up to 23m to be followed by a sharp drop up to the top of the canyon due to the vortex contact with the upper boundary layer (fig.10a). The Geometry A with galleries data were discussed in detail in section 3.2 and the distribution of the PM10 mass concentration near the leeward and windward façades as well as along the canyon centerline are illustrated in figure 9b. The Geometry B with galleries flow field was also modified so that the vertical distribution of concentration on the building facades was as follows: near the windward façade the concentration decreased between the street level and the 16 m height, followed by a sharp decrease up to the 22 m height. 
Finally, it remained constant from the 22 m height up to the height of 26 m. Along the last two meters appeared a sharp decrease in concentration up to the roof of the canyon. At the leeward facade the concentration evolved as follows: initially it increased from a low magnitude on the street level up to approximately the 8 m height, followed by a sharp decrease up to the height of 16 m, above which it remained was almost constant up to a height of 24 m. Finally, along the last four meters the concentration increased steadily up to the top of the canyon (fig. 10b). The leeward gallery vortex contributed to the increased particle concentration within the low heights of the building. On the opposite side, the upper main vortex contributed to an increased particle concentration along the upper floors of the same building. In the most complex scenario (i.e. Geometry C with galleries), the mechanism of the vortices does not help the pollutant to escape from the canyon. From the results above, the deviation of the mass concentration between the street level and the roof of the canyon in the leeward and windward side and in the centerline was calculated. Table 4 below presents the results for the geometries studied above with galleries and those without galleries studied in a previous work (Karkoulias et al., 2019). The major findings may be reviewed as follows: 1) In general, the galleries proved to be another important factor affecting the wind flow structures. In contrast to the simple geometry without roughness, the multi-balcony configurations with galleries significantly modified the flow field and the relevant pollutant transport mechanisms. The presence of the galleries did not help the pollutant to escape from the canyon. 2) In many of the above figures ( fig. 9b, 10) illustrated the same tendency as the Geometry A in which the field measurements were made. More specifically, the vertical distribution of the concentration did not decrease exponentially with the height but there were intervals in which it increased and burdened the adjacent area. 3) The addition of the galleries changed the number, the position and the rotation rate of the vortices. In Geometry A with galleries the middle vortex acquired less intensity than the corresponding one in the Geometry A. This means less dispersion of the pollutant and its accumulation in the middle of the cavity. 4) In Geometry A with galleries the concentration decreased by 8.4% at the height of the center of the middle vortex on the leeward side, compared to the 80% observed in the Geometry A with no galleries. On the windward side, at the same height, the concentration decreased by 12.6% compared to the 25% observed in the corresponding geometry without galleries. 5) The deviation of the mass concentration between the street level and the roof of the canyon in the leeward and windward side as well as the centerline of the canyon, in geometries with galleries, was smaller than that of the corresponding geometries without galleries (Table 4). 6) The pollutant transport within a canyon cavity started from a dead zone near the street level, followed by an exponential reduction due to the flow diffusion imposed by the vortex structures. This was followed by a smaller dead zone near the roof while the final "wash out" was driven by the shear layer between the cavity flow and the outer wind. CONCLUSIONS Galleries incorporated on both building facades of a street canyon influence the air quality in it. 
The presence of galleries modified the flow field inside the canyon and indicated a significant influence on the mass concentration distribution of the polluting particles. The most relevant effect on the mass transfer rate from the street level to the roof level was the presence, induced by galleries and balconies, of several vortices in the street canyon that reduced the overall mass transfer. The surface roughness of the building facades affects the flow features over and within urban street canyons and, as a consequence, influences the mean and turbulent exchanges at the pedestrian level. Façade elements such as galleries produced lower mixing and thus made the escape of pollutants difficult. Therefore, it appears reasonable to suggest that, through a formal exploration of building geometries, the ventilation potential of urban canyons could be increased, leading to an improvement of the air quality within the street canyon.

SOURCES OF FUNDING

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
6,609.4
2020-12-27T00:00:00.000
[ "Engineering" ]
Type II Supernovae as Distance Indicators at Near-IR Wavelengths

Motivated by the advantages of observing at near-IR wavelengths, we investigate Type II supernovae (SNe II) as distance indicators at those wavelengths through the Photospheric Magnitude Method (PMM). For the analysis, we use $BVIJH$ photometry and optical spectroscopy of 24 SNe II during the photospheric phase. To correct photometry for extinction and redshift effects, we compute total-to-selective broadband extinction ratios and $K$-corrections up to $z=0.032$. To estimate host galaxy colour excesses, we use the colour-colour curve method with the $V\!-\!I$ versus $B\!-\!V$ as colour combination. We calibrate the PMM using four SNe II in galaxies having Tip of the Red Giant Branch distances. Among our 24 SNe II, nine are at $cz>2000$ km s$^{-1}$, which we use to construct Hubble diagrams (HDs). To further explore the PMM distance precision, we include into HDs the four SNe used for calibration and other two in galaxies with Cepheid and SN Ia distances. With a set of 15 SNe II we obtain a HD rms of 0.13 mag for the $J$-band, which compares to the rms of 0.15-0.26 mag for optical bands. This reflects the benefits of measuring PMM distances with near-IR instead of optical photometry. With the evidence we have, we can set the PMM distance precision with $J$-band below 10 per cent with a confidence level of 99 per cent.

INTRODUCTION

Type II supernovae (SNe II) are the explosive end of massive stars (MZAMS > 8 M⊙) that retain an important amount of hydrogen in their envelopes at the moment of the explosion. These events, a consequence of the gravitational collapse of their iron cores, are characterized by a luminosity comparable to the total luminosity of their host galaxies, which makes them interesting objects for distance measurements.

⋆ This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.

The pioneering work of Kirshner & Kwan (1974) marks the beginning of the use of SNe II as distance indicators. In their work they applied the Expanding Photosphere Method (EPM, a variant of the Baade-Wesselink method) to two SNe II, using optical photometry and spectroscopy during the photospheric phase (the phase between the maximum light and the transition to the radioactive tail) to estimate angular and physical sizes, respectively. For the first implementation of the EPM, Kirshner & Kwan (1974) assumed that SNe II emit like blackbodies. Years later, Wagoner (1981) demonstrated that the flux of SNe II is diluted as a consequence of their scattering-dominated atmospheres, making SN II atmosphere models necessary to quantify that effect and thus to correct derived distances. Since then, the EPM has been applied using different theoretical atmosphere models (e.g., Eastman et al. 1996; Dessart & Hillier 2005) to an ample number of SNe II (e.g., Schmidt et al. 1992, 1994; Dessart et al. 2008; Jones et al. 2009; Bose & Kumar 2014; Gall et al. 2016, 2018), where the typical EPM distance precision is found to be about 15 per cent.

Empirically, Hamuy & Pinto (2002) found a correlation between the bolometric luminosity at 50 d since explosion and the expansion velocity of the photosphere at the same epoch. This is due to the fact that a more energetic explosion corresponds to a more luminous SN with higher envelope expansion velocities.
The latter correlation is the basis of the Standardized Candle Method (SCM), which allows to estimate distances using photometry and expansion velocities inter-or extrapolated at 50 d since explosion. The SCM has been applied to several SN II sets (e.g., Nugent et al. 2006;Poznanski et al. 2009;Olivares E. et al. 2010;D'Andrea et al. 2010;de Jaeger et al. 2015de Jaeger et al. , 2017bGall et al. 2018), yielding a distance precision about 12-14 per cent. Despite apparent differences between the EPM and the SCM, Kasen & Woosley (2009) showed that the SCM is a recasting of the EPM at 50 d since explosion. Additionally, by means of SN II models, they proposed a generalization of the SCM, which can be applied in any epoch during the photospheric phase. The same idea was investigated empirically by Rodríguez et al. (2014, hereafter R14), who called it the Photospheric Magnitude Method (PMM) to the SCM generalization. Measuring distances with all expansion velocities available during the photospheric phase, decreases observational errors and reduces uncertainties introduced by the interpolation/extrapolation at a certain fiducial epoch. The PMM distance precision is around 6-11 per cent (R14). For the EPM, SCM, and PMM, optical spectroscopy is necessary in order to estimate expansion velocities. Since the spectroscopy is more time consuming than photometry, expansion velocities are not always available. For this reason, de Jaeger et al. (2015) proposed a method based solely on photometry to standardize SNe II, known as the Photometric Colour Method (PCM). de Jaeger et al. (2017b) applied the PCM to a SN II sample with redshift up to 0.5, finding that the PCM distance precision is around 17 per cent. Most of distances measurements with the latter methods have been performed with optical photometry. However, observing at near-IR wavelengths has two clear benefits that in principle can improve their use as distance indicators: 1. Near-IR light is less affected by dust. Methods to measure colour excess due to SNe II host galaxies (e.g., Schmidt et al. 1992;Krisciunas et al. 2009;Olivares E. et al. 2010;Poznanski et al. 2012;R14;Pejcha & Prieto 2015) are still not well established. Therefore, it is propitious to observe at near-IR wavelengths in order to reduce the effect of miscalculation of the colour excess. Moreover, the estimation of a representative extinction curve along the SN II line of sight is still controversial. Assuming the family of extinction curves of Cardelli et al. (1989), some studies are in favour of a Galactic RV = 3.1 (e.g., Pejcha & Prieto 2015), while other authors found results in favour of lower values (Poznanski et al. 2009;Olivares E. et al. 2010;de Jaeger et al. 2015). Since the choice of a certain extinction curve has more impact at optical than at near-IR wavelengths (e.g., Schlafly et al. 2016), it is preferable to perform photometric observations at those wavelengths in order to diminish systematics induced by the assumption of an incorrect extinction curve. 2. Contamination by metal lines is less severe at near-IR wavelengths. Among the few metal lines identified in the near-IR, we remind: in the J-band range there is a feature at λ = 1.2 µm possibly due to a Si i multiplet (Valenti et al. 2015), Mg i λ1.53 µm is detected in the H-band range (Maguire et al. 2010b;Valenti et al. 2015;Yuan et al. 2016), while in the K-band range the Brackett γ is possibly blended with Na i (Dall'Ora et al. 2014). 
The low number and weakness of metal lines reduce the risk of systematics effects produced by differences in progenitor metallicity (e,g., Dessart et al. 2014;Anderson et al. 2016). Schmidt et al. (1992) had already pointed out the benefits of measuring distances to SNe II using near-IR photometry. However, at present, there have been very few systematic studies (e.g., Schmidt et al. 1992 for the EPM; Maguire et al. 2010a, de Jaeger et al. 2015 for the SCM). In particular, Maguire et al. (2010a) suggested that it may be possible to reduce the scatter in the Hubble Diagram (HD) to 0.1-0.15 mag (distance precision of 5-7 per cent) using near-IR instead of optical photometry. However, this result is based on the analysis of a set of 12 SNe II, 11 of them at z < 0.01, so being highly affected by peculiar velocities. To test this promising result, de Jaeger et al. (2015) applied the SCM to a set of 24 SNe II at 0.01 < z < 0.04, obtaining a HD rms of 0.28 mag (distance precision of 13 per cent) for the J-band and therefore questioning the improvements of the SCM distance precision using near-IR photometry. The goal of this study is to investigate the PMM distance precision using near-IR photometry. We organize our work as follows. In Section 2 we describe the photometric and spectroscopic data. In Section 3 we present the PMM developed in R14. In Section 4 we develop an algorithm to achieve nonparametric light curve fitting. In section 5 and 6 we compute Galactic total-toselective broadband extinction ratios and K-corrections for BVIJHK bands, respectively. In Section 7 we compute host galaxy total-to-selective broadband extinction ratios and host galaxy colour excesses through the analysis of colourcolour curves. In Section 8 we estimate expansion velocities and explosion epochs. In Section 9 we apply the PMM to our SN II sample, constructing HDs for BVIJH bands. Discussion about the PMM distance precision and systematics are in Section 10. In Section 11 we present our conclusions. OBSERVATIONAL MATERIAL We base our work on data obtained over the course of the Carnegie Type II Supernova Survey (CATS; PI: Hamuy, 2002Hamuy, -2003, a program whose main objective was to study nearby (z < 0.05) SNe II. Optical photometry and spectroscopy, along with some near-IR photometry, were obtained with the 1-m Swope, 2.5-m du Pont, and 6.5-m Magellan Baade and Clay telescopes at Las Campanas Observatory. A few additional optical images were obtained with the 0.9-m and 1.5-m telescopes at Cerro Tololo Inter-American Observatory. During the CATS survey, 34 SNe II were ob-served. Optical photometry and spectroscopy of these SNe II, along with the description of the data reduction, is presented in Galbany et al. (2016) and Gutiérrez et al. (2017), respectively. Next, we briefly summarize the general techniques used to obtain the near-IR photometric data, which will be released in a forthcoming publication. Near-IR Photometric Data The near-IR photometric observations were obtained with the JHK bands mounted in the Swope Telescope IR camera and the Wide Field IR Camera on the du Pont Telescope. Images where processed with a collection of IRAF 1 tasks. These include dark subtraction, flat field correction, sky subtraction, image registration and stacking. Instrumental magnitudes were obtained using the point spread function (PSF) technique, implemented in the SNOoPY 2 package. 
The near-IR magnitudes of the reference stars were calibrated using standard star fields observed soon before or after the target field at an airmass similar to that of the target field.

Sample of Supernovae

Among the 34 SNe II observed over the course of the CATS survey, we select a subset of 10 SNe II which comply with the following requirements: (1) having at least two photometric measurements in the BVIJH bands at 35-75 d since explosion (see Section 9), and (2) having at least one measurement of the expansion velocity at an epoch covered by the photometry mentioned in point 1. To this sample, we add 14 SNe II from the literature. Table 1 lists our final sample of 24 SNe II, which includes the SN name (Column 1), the name of the host galaxy and its type (Columns 2 and 3), the heliocentric SN redshift and its source (Columns 4 and 5), the host galaxy distance measured with Cepheids, the Tip of the Red Giant Branch (TRGB), or SNe Ia (Column 6), the Galactic colour excess (Column 7), and references for the data (Column 8). We also use optical and near-IR spectra of SNe II for the purpose of computing total-to-selective broadband extinction ratios (Sections 5 and 7) and K-corrections (Section 6), and to estimate explosion epochs (Section 8.2).

PHOTOSPHERIC MAGNITUDE METHOD

The absolute magnitude of a SN II during the photospheric phase depends strongly on the temperature and the size of the photosphere (e.g., Kasen & Woosley 2009; R14; Pejcha & Prieto 2015). The latter can be estimated from the velocity of the material instantaneously at the photosphere (hereafter, the photospheric velocity, v_ph) and the time since the SN explosion epoch t0, under the assumption of homologous expansion (e.g., Kirshner & Kwan 1974). R14 found that the time since explosion works better than the V−I colour (used as a proxy for temperature) to standardize the brightness of SNe II (see Fig. 9 in R14), showing that for a given band x the absolute magnitude at any moment ti during the photospheric phase, M_{x,∆ti,v_ph,i}, can be parametrized as in equation (1). Here, ∆ti ≡ (ti − t0)/(1 + z) is the elapsed time since the explosion in the SN rest frame at redshift z, and a_{x,∆ti} is a function that can be calibrated empirically. Previously, Kasen & Woosley (2009) found similar results for the SN II brightness standardization, but using SN II models. With the knowledge of t0 and a measurement of v_ph at any stage of the photospheric phase, we can compute the absolute magnitude at ti (equation 1) and, therefore, compute the SN distance modulus

μ_i = m^corr_{x,i} − M_{x,∆ti,v_ph,i},   (2)

with the corrected apparent magnitude

m^corr_{x,i} = m_{x,i} − A^G_{x,i} − A^h_{x,i} − K_{x,i}.   (3)

Here, m_{x,i} is the apparent magnitude, A^G_{x,i} and A^h_{x,i} are the Galactic and host galaxy broadband extinctions, respectively, and K_{x,i} is the K-correction. If more than one measurement of v_ph is available, then we compute the distance modulus through a likelihood maximization (see Section 9.1).

Footnote 1: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.

Footnote 2: SNOoPy is a package for SN photometry using PSF fitting and/or template subtraction developed by E. Cappellaro. A package description can be found at http://sngroup.oapd.inaf.it/snoopy.html

LIGHT CURVE FITS

In equation (2) we need all quantities at the same epoch. Being more time consuming, spectroscopy is in general less abundant than photometry, so performing photometric interpolations is a reasonable choice.
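As a minimal illustration of such an interpolation, the magnitudes can be interpolated onto the spectroscopic epochs with numpy; the epochs and magnitudes below are made-up values, and the interpolation actually used in the paper is the loess fitting described next.

```python
import numpy as np

# Hypothetical V-band photometry epochs (days since explosion) and magnitudes:
t_phot = np.array([38.0, 45.0, 52.0, 60.0, 71.0])
mag_v  = np.array([16.82, 16.90, 16.97, 17.05, 17.21])

# Hypothetical epochs at which expansion velocities were measured:
t_spec = np.array([41.5, 57.0])

# Simple linear stand-in for the loess interpolation of the next section:
print(np.interp(t_spec, t_phot, mag_v))
```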
Previous efforts to fit SN II light curves use both parametric and nonparametric methods. Parametric methods assume parametric functions that capture the behaviour of the light curve from early to late stages, where the parameters are obtained through least-squares minimization (e.g., Olivares E. et al. 2010) or through Bayesian methodologies (e.g., Sanders et al. 2015). Nonparametric methods are based on nonparametric regressions (NPR) such as local regressions (e.g., Olivares 2008) and Gaussian processes (e.g., de Jaeger et al. 2017a). Since the light curves of the SNe in our sample are in general well sampled, we prefer to use NPR methods for the light curve fitting, thus avoiding the use of heuristic models.

In this work we make use of loess, an NPR method that performs polynomial fits over local intervals along the domain (Cleveland et al. 1992). To perform a loess fit, we have to specify: (1) the class of the local polynomial, which can be linear or quadratic, (2) the smoothing parameter, which defines the neighbourhood size around each element of the independent variable where the data can be well approximated by the aforementioned local polynomial, and (3) the distribution of the random errors, which can be normal or symmetric (for more details, see Cleveland et al. 1992). We assume the null hypothesis that the residuals are normally distributed, which can be checked with a normality test. An optimal value for the smoothing parameter can be obtained from the data using the "an" information criterion (AIC; Akaike 1974; see Appendix A). Therefore, to perform a loess fit, we only have to decide the local polynomial. We choose a quadratic polynomial in order to give more freedom to the loess fitting procedure. When the loess routine cannot perform a fit (e.g., for light curves with fewer than six points), we perform a low-order (linear or quadratic) polynomial fit.

(Notes to Table 1: ... Leonard et al. 2002; S05: Sollerman et al. 2005; here: this work. ⋆ Host galaxy distance moduli measured with Cepheids (⊘), TRGB (⊗) with the Jang & Lee (2017b) calibration, or with SNe Ia (△). ⋄ Galactic colour excesses from Schlafly & Finkbeiner (2011), with an error of 16 per cent (Schlegel et al. 1998). ⊳ Saha et al. 2006.)

To test whether the photometry errors can account for the observed dispersion around the light curve fit, f^fit_{x,t} (in flux units), we compute its log-likelihood, where f_{x,i} and σ_{f,x,i} are the apparent magnitude and its error in flux units, respectively, and σ_{x,int} is the intrinsic error. If an intrinsic error is necessary to maximize the log-likelihood, then we add it in quadrature to the photometry errors and perform the light curve fitting again. We repeat this process until an intrinsic error is no longer necessary. To test the normality of the residuals, we use the Rescaled Moment (RM; Imon 2003) test (see Appendix A). Among all light curve fits, 80 per cent have residuals with RM p-values ≥ 0.05, for which the hypothesis that the residuals are normally distributed cannot be rejected within a confidence level (CL) of 95 per cent. For the remaining 20 per cent, the light curve fits are still unbiased and consistent, but the confidence intervals (CI) of the parameters may be untrustworthy (Doane & Seward 2016). In any case, in this work, to prevent any shortcoming related to the non-normality of the residuals, we perform simulations to compute CIs. To compute the CI around a light curve fit, we perform 10^4 simulations varying randomly the photometry according to its error.
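The intrinsic-error step described above can be sketched as follows, assuming the standard Gaussian form for the log-likelihood of the flux residuals with the intrinsic error added in quadrature, and using SciPy's bounded scalar minimizer as a generic stand-in for the maximization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def intrinsic_error(flux, flux_fit, sigma_flux, s_max=1.0):
    """Intrinsic error sigma_int that maximizes an assumed Gaussian
    log-likelihood of the residuals around the light-curve fit."""
    resid = np.asarray(flux) - np.asarray(flux_fit)
    sig2 = np.asarray(sigma_flux) ** 2

    def neg_loglike(s_int):
        var = sig2 + s_int**2
        return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

    res = minimize_scalar(neg_loglike, bounds=(0.0, s_max), method="bounded")
    return res.x

# If the returned sigma_int is significantly above zero, it is added in
# quadrature to the flux errors and the loess fit is repeated, as in the text;
# confidence intervals then follow from the Monte-Carlo procedure described next.
```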
For each realization we perform a loess (or a low-order polynomial) fit, thus obtaining 10^4 simulated light curves per band. These simulations allow us to compute the probability density function (pdf) of the fit at different epochs. Fig. 1 shows the results of the aforementioned fitting procedure applied to the SN II light curves used in this work, where the solid lines are the loess (or low-order polynomial) fits, while the shaded regions indicate values between the 10th and the 90th percentile, i.e., the 80 per cent CI.

GALACTIC BROADBAND EXTINCTION

In equation (3), the Galactic broadband extinction in a photometric band x is given by the expression of Olivares (2008). Here, λo is the wavelength in the observer's frame, λr = λo/(1 + z) is the wavelength in the SN rest frame at redshift z, S_{x,λo} is the x-band transmission function, and F_{λr,i} is the spectral energy distribution (SED) of the SN at epoch ti. A^G_{λo} and A^h_{λr} are the Galactic and host galaxy monochromatic extinctions, respectively, given by A^G_{λo} = R^G_{λo} E_G(B−V) and A^h_{λr} = R^h_{λr} E_h(B−V), where R_λ ≡ A_λ/E(B−V) is the extinction curve for our Galaxy (R^G_λ) and for the hosts (R^h_λ), and E_G(B−V) and E_h(B−V) are the Galactic and host galaxy colour excesses, respectively. Since the SED of SNe II evolves with time, we expect the broadband extinction A^G_{x,i} to evolve with time as well (see Footnote 3). As the SED of SNe II has a blackbody nature, hereafter we use the intrinsic B−V colour (a proxy for temperature) to represent its evolution; this point was previously addressed by Olivares (2008).

Footnote 3: We remark the difference between a monochromatic extinction A_λ, which is constant for a fixed wavelength λ, and a broadband extinction A_{x,i}, which depends on the SED and the x-band transmission function.

In this work, in order to convert Galactic colour excesses directly into Galactic broadband extinctions suitable for SNe II, we compute the Galactic total-to-selective broadband extinction ratios R^G_{x,i}, such that A^G_{x,i} = R^G_{x,i} E_G(B−V) (equation 8). With the purpose of obtaining representative R^G_{x,i} values for a local SN II sample through equations (8) and (5), we use: (1) a library of dereddened and deredshifted SN II spectra (see Appendix B), (2) colour excesses and redshifts from the following representative ranges: E_G(B−V) = 0.0-0.36 mag and E_h(B−V) = 0.0-0.83 mag, which were taken from the SN sample reported in R14, and z = 0.0-0.032, and (3) an extinction curve to redden the spectra for both our Galaxy and the hosts. For the latter, since a representative extinction curve along the line of sight of SNe II is still controversial, we adopt the Fitzpatrick (1999) extinction curve with RV = 3.1. For each spectrum, we perform 10^4 simulations picking random values of E_G(B−V), E_h(B−V), and z from the aforementioned ranges, adopting the median as the representative R^G_{x,i} value and the 80 per cent CI as its error.

Figure 2. Galactic total-to-selective broadband extinction ratios for BVI (left) and JHK (right) for SNe II as a function of the intrinsic B−V colour, along with residuals. Solid black lines correspond to the polynomial fits, red short-dashed lines correspond to loess regressions to the residuals, while blue long-dashed lines correspond to residuals between the m15mlt3 model (Dessart et al. 2013) and the polynomial fits from observations. Gray regions indicate values within one rms, while black dash-dotted lines are the inner fences.

The left side of Fig. 2 shows the R^G_{x,i} values as a function of B−V for the BVI bands. There is a clear dependence of R^G_{x,i} on B−V. The y-axis scale at the left of Fig.
2 is the same in the three panels, so we can see that the redder the band the less the dependence on B −V , with R G I,i being nearly constant. This behaviour is due to the blackbody nature of the SN II SED, where for the longer wavelengths the less the dependence of the SED slope on temperature. To express the dependence of R G x,i on B −V we perform polynomial fits. The latter, unlike NPR methods like loess, allow to perform corrections in an easy and less timeconsuming way (see Appendix C). To determine the optimal degree for the polynomial fit, we consider two criteria: the AIC and the Bayesian information criterion (BIC, Schwarz 1978). For more details, see Appendix A. Based on evidence ratios (Table F1), for the B-, V -, and I-band, the AIC favours degrees ≥ 3, ≥ 1, and ≥ 1, respectively, while the BIC favours degrees between 2 and 6, 1 and 3, and 1 and 4, respectively. Results for both criteria are consistent. By the principle of parsimony (a.k.a. Occam's razor), we adopt the lowest degrees, i.e., 2, 1, and 1 for the B-, V -, and I-band, respectively. For JHK bands (right of Fig. 2) we adopt constant values. Although the small number of near-IR spectra means that the results are not fully statistically robust, we are confident about the negligible dependence of R G x,i on B −V for JHK bands based on the small rms values we obtained ( 0.001). Once the optimal polynomial degrees for R G x versus B −V are determined, we perform 10 4 bootstrap resampling of the data in order to compute the polynomial fit parameters and their errors, adopting the median as the representative value. Results are summarized in Table F2. The bottom of each panel in Fig. 2 shows the residuals of the polynomial fit. To identify possible outliers we use the Tukey (1977) rule, where values below Q1−1.5(Q3−Q1) or above Q3 + 1.5(Q3 − Q1) (known as inner fences, where Q1 and Q3 are the first and third quartile, respectively) are considered outliers. The few points detected as outliers are consistent with being within inner fences considering their errors, so we do not discard them from the analysis. To analyse possible trends not captured by the polynomial fit, we perform a loess regression (red short-dashed line) to the residuals. Variations in the loess fits are mostly within one rms, meaning that the evolution of R G x on B −V can be well represented by a polynomial fit of degree determined with the AIC/BIC. For all bands we obtain RM p-values > 0.05, which means that we cannot reject the null hypothesis that residual are normally distributed (95 per cent CL). Based on this, we can treat the R G x rms error as a normal one. For comparison, we compute R G x for BVIJHK bands using synthetic spectra of the m15mlt3 model of Dessart et al. (2013). Residuals between the m15mlt3 model and polyno-mial fits from observations (blue long-dashed lines in Fig. 2) are mostly contained within one rms. K-CORRECTION The K-correction in a photometric band x is given by (Olivares 2008), being K s x,i the selective term. We proceed in the same way than in Section 5, but now the evolving SED is modified by SN host galaxy colour excess and redshift. As in Section 5 we aim for an analytical expression for K s x , for which we perform polynomial surface fits as a function of B −V and z. Since Kx = 0 for z = 0, any zindependent term on the K s x polynomial fit is zero. Dividing by z, the polynomial surface to adjust will be of the form being OB−V and Oz,j 1 the orders in B −V and z, respectively, and aj 1 ,j 2 the fit parameters. 
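The degree selection just described can be reproduced with a few lines of Python; the sketch below scores least-squares polynomial fits with the small-sample-corrected AIC and the BIC as written in Appendix A (the maximum degree tried is an arbitrary choice for the example, and it is not the code used in this work).

```python
import numpy as np

def information_criteria(x, y, max_degree=6):
    """AICc and BIC of least-squares polynomial fits of increasing degree,
    following Appendix A (the residual variance counts as a model parameter)."""
    n = len(x)
    scores = []
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        sigma2 = np.mean((y - np.polyval(coeffs, x)) ** 2)   # ML variance estimate
        k = deg + 2                                          # coefficients + sigma^2
        neg2lnL = n * np.log(2.0 * np.pi * sigma2) + n
        aicc = neg2lnL + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)
        bic = neg2lnL + k * np.log(n)
        scores.append({"degree": deg, "AICc": aicc, "BIC": bic})
    return scores
```

Comparing the scores across degrees, and keeping the lowest degree among the well-supported models, mimics the parsimony rule applied throughout this section.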
To determine the orders O_{B-V} and O_{z,j1}, we generate 10^5 spectral samples, where for each sample we assign to each spectrum a random redshift up to 0.032. For each realization, we obtain the optimal order values using the AIC/BIC and the principle of parsimony. In all cases we find that K^s_x/z depends only on B-V, i.e., it is z-independent for z ≤ 0.032. Fig. 3 shows the K^s_x/z values as a function of B-V for BVI (left) and JHK (right). We perform the same analysis as in Section 5. For the BVI bands we adopt straight lines, while for the JHK bands we fit constant values (see Table F1). Results are summarized in Table F2. Variations of the loess fits to the residuals are within one rms, meaning that the dependence of K^s_x/z on B-V can be well represented by the polynomial fit of the degree determined with the AIC/BIC. For all bands we obtain RM p-values > 0.05, which means that we cannot reject the null hypothesis that residuals are normally distributed (95 per cent CL). Based on the latter, we can treat the K^s_x/z rms error as a normal one. HOST GALAXY BROADBAND EXTINCTION In equation (3), the host galaxy broadband extinction in a photometric band x is given by

A^h_{x,i} = -2.5\,\log_{10}\left[ \frac{\int S_{x,\lambda_o}\, F_{\lambda_r,i}\, 10^{-0.4\,A^h_{\lambda_r}}\, d\lambda_o}{\int S_{x,\lambda_o}\, F_{\lambda_r,i}\, d\lambda_o} \right]

(Olivares 2008). We proceed in the same way as in Sections 5 and 6, but now the evolving SED is modified only by the SN host galaxy colour excess. Similar to Section 5, we define the host galaxy total-to-selective broadband extinction ratios R^h_{x,i}, such that A^h_{x,i} = R^h_{x,i}\, E^h(B-V). The optimum R^h_x versus B-V polynomials and their parameters are summarized in Table F2. R14 showed that for SNe II the B-V versus V-I colour-colour curve (C3) can be used to estimate E^h(B-V) through the method proposed by Natali et al. (1994), which was originally developed to estimate the interstellar colour excess of open clusters. The C3 method states that, under the assumptions that (1) the C3 can be well represented by a straight line, and (2) all SNe II have the same C3 (i.e., the same slope and intercept), the host galaxy colour excess can be estimated with the formula

E^h(B-V) = \frac{n_{S,i} - n_{S,\mathrm{zp}}}{\gamma_{S,i}}  (14)

(e.g., Munari & Carraro 1996; R14). Here, S ≡ {c_x, c_y} indicates the colours used as x- and y-axis in the colour-colour diagram, corrected for Galactic colour excess and K-correction; M{m_S} is the median of a set of SN II C3 slopes {m_S}; n_{S,i} = c_{y,i} - M{m_S} × c_{x,i} and n_{S,zp} are the y-intercepts of the C3 linear fit using a fixed slope M{m_S} for the SN under study and for the SN II least affected by colour excess, respectively; and γ_{S,i} is the factor that converts the vertical offset of the C3 into a colour excess. Equation (14) must be evaluated separately at each point of the C3. In principle, one colour-colour observation is enough to estimate the colour excess; however, more observations allow us to check internal consistency and reduce observational errors. The C3 method relies strongly on the aforementioned two assumptions. In a previous work, R14 assumed the linearity of C3s based on the blackbody nature of the SED of SNe II during the photospheric phase, while assumption 2 was adopted based on the dependence of the emergent flux mainly on temperature displayed by SN II atmosphere models (e.g., Eastman et al. 1996; Jones et al. 2009). In this work we show that C3s can indeed be expressed as straight lines for several colour combinations (see Appendix C). Therefore, the major source of systematics comes from assumption 2. There are indeed some effects, like line blanketing, that modify the SED continuum shape. In addition, differences in photometric systems (S-correction; Stritzinger et al. 2002) are expected to produce further changes in the C3 parameters.
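The broadband ratios of Sections 5 and 7 are both obtained from synthetic photometry of the spectral library; a minimal sketch for a single spectrum, band, and draw of the colour excesses is given below (shown for the Galactic ratio; the host ratio follows by reddening only in the rest frame). The third-party `extinction` package for the Fitzpatrick (1999) curve, the placeholder filter arrays, and the photon-weighted integrand are assumptions of this sketch, not a description of the actual implementation.

```python
import numpy as np
import extinction  # third-party package: fitzpatrick99(wave_AA, A_V, R_V)

def synthetic_flux(wave, flux, band_wave, band_trans):
    """Flux of a spectrum through a filter transmission curve (photon counting)."""
    trans = np.interp(wave, band_wave, band_trans, left=0.0, right=0.0)
    return np.trapz(wave * trans * flux, wave)

def galactic_ratio(wave_rest, flux_rest, z, ebv_gal, ebv_host,
                   band_wave, band_trans, r_v=3.1):
    """R^G_x = A^G_x / E^G(B-V) for one spectrum and one band (cf. equations 5 and 8)."""
    wave_obs = wave_rest * (1.0 + z)
    a_host = extinction.fitzpatrick99(wave_rest, r_v * ebv_host, r_v)
    a_gal = extinction.fitzpatrick99(wave_obs, r_v * ebv_gal, r_v)
    flux_host = flux_rest * 10.0 ** (-0.4 * a_host)      # host-reddened SED
    flux_both = flux_host * 10.0 ** (-0.4 * a_gal)       # plus Galactic dust
    a_broad = -2.5 * np.log10(
        synthetic_flux(wave_obs, flux_both, band_wave, band_trans)
        / synthetic_flux(wave_obs, flux_host, band_wave, band_trans))
    return a_broad / ebv_gal
```

The Monte Carlo described in Section 5 would simply call this function 10^4 times per spectrum with colour excesses and redshifts drawn from the adopted ranges.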
Therefore it is advantageous to search for a colour combination where the effect of a colour excess on a C3 is greater than the effect of systematics. An analysis of the effect of systematics on the C3 y-intercept is beyond the scope of this work because it requires an ample set of unreddened SNe II. However, the effect of systematics on C3 slopes, and its consequent effect on the E^h(B-V) estimation through the C3 method, can be quantified in a simple way. The presence of dust along the line of sight produces a vertical displacement of the C3 (for a graphical representation, see R14), where, following equation (14), the magnitude of the displacement and its rms error are given by equations (15) and (16), respectively. In equation (16), we do not include the error induced by errors in γ_{S,i}, which is lower than 17 per cent of the uncertainty induced by the error in M{m_S}.

Figure 3. Selective term of the K-correction over redshift for BVI (left) and JHK (right) for SNe II as a function of the intrinsic B-V colour, along with residuals. Solid black lines correspond to the polynomial fits. Red short-dashed lines correspond to loess regressions to the residuals, while blue long-dashed lines correspond to residuals between the m15mlt3 model (Dessart et al. 2013) and polynomial fits from observations. Gray regions indicate values within one rms, while black dash-dotted lines are the inner fences.

In order to find the colour combination that maximizes the dust effect (equation 15) and minimizes its error (equation 16), we define the quantity ξ_{S,i} (a signal-to-noise ratio), given by the ratio of the two. Therefore, the most appropriate colour set S to compute E^h(B-V) with the C3 method is the one that maximizes ξ_{S,i}. Fig. 4 shows the ξ_{S,t} values for all possible independent colour combinations with the BVIJH bands, using the M{m_S} and rms{m_S} values computed with our SN II sample (see Appendix C), and using B-V = 0.0 and 1.4, which are typical colours at the start and end of the photospheric phase, respectively. We do not include the K-band in this analysis because of the scarcity of SNe II with photometry in that band. The best colour combinations, independent of the intrinsic B-V, are those involving B-V, with V-I versus B-V the best. For this combination we obtain M{m_S} = 0.45 ± 0.07. We remark that colour combinations that do not include the B-band have ξ_{S,t} ≲ 1.0, which indicates that the noise induced by intrinsic differences of the C3 slopes is greater than the effect of host galaxy dust, and therefore those combinations are not suitable for E^h(B-V) measurement through the C3 method. We point out that colour combinations under the diagonal correspond to those above the diagonal but with the axes exchanged. In principle they give the same information; however, by construction, they maximize the displacement along the x-axis instead of the y-axis. To compute the pdf of n_{S,zp} for S = {B-V, V-I}, we use the data of SN 2003bn and SN 2013ej, which are affected by a negligible host galaxy colour excess (R14), maximizing the likelihood of a straight line with slope 0.45 ± 0.07. With this process, we obtain a pdf with a median of 0.107 mag and rms = 0.053 mag. Since the RM p-value for the latter distribution is > 0.05, we treat it as a normal distribution. We compare our E^h(B-V) estimates with those of Olivares E. et al. (2010) (listed in Table F3), which are based on the fit between observed spectra and SN II models. For the 13 SNe in common, we measure a median offset of −0.01 mag, meaning that our estimations of E^h(B-V) are slightly lower than those estimated by Olivares E. et al. (2010). Both methods are consistent within ±0.05 mag.
EXPLOSION EPOCH AND PHOTOSPHERIC VELOCITY The explosion epoch and the photospheric velocity are, under the assumption of homologous expansion, the unique parameters determining the actual size of the photosphere (Kirshner & Kwan 1974). Photospheric Velocities The most widely used method to estimate SN photospheric velocities consists of measuring the blueshift of P Cygni absorption minima in SN spectra (Kirshner & Kwan 1974;Eastman & Kirshner 1989). Weak lines, like those from Fe ii species, are typically used under the assumption that they are formed near the photosphere (e.g., Leonard et al. 2002). A more confident method to estimate photospheric velocities is through the cross-correlation technique Takáts & Vinkó 2012), where observed spectra are compared to those from SN models which have known photospheric velocities. The application of the latter method is beyond the scope of this work, therefore we will use velocities derived from the Fe ii λ5169 line absorption minima as a proxy for the photospheric velocity. To estimate Fe ii λ5169 absorption minima with appropriate errors, we have to consider the uncertainties induced by the noise and spectral resolution (∆λ) of each spectrum, and also by the endpoints we choose for the line profile. We estimate the noise on the Fe ii λ5169 line profile of each spectrum performing a loess fit and then removing it to the observed line profile. Then we generate 10 4 simulated line profiles, varying randomly the noise over the loess fit, wavelengths within ∆λ, and endpoints. For each realization we apply a loess fit, registering the minimum value. The output of this process is a distribution of absorption minima, which we convert to velocities using the relativistic Doppler equation. With this process we obtain typical v ph rms errors between 30 and 230 km s −1 , with a median of 76 km s −1 . Photospheric velocities are estimated from spectroscopic data, corrected for the SN heliocentric redshift. In some cases, SN II spectra shows narrow emission lines as result of a superposed H ii region at the SN position. These narrow lines allow a good estimation of the SN heliocentric redshift, under the assumption that the SN is spatially close to the H ii region (e.g., Anderson et al. 2014a). When those lines are not present in the SN spectra, the heliocentric redshift of the host galaxy is used as a proxy for the SN heliocentric redshift. However, since most of the SNe II in our set explode in spiral galaxies, the SN heliocentric redshift has a component due to the galaxy rotation. Anderson et al. (2014a) computed heliocentric redshifts of 72 SNe II using H ii region narrow emission lines, and comparing with heliocentric redshifts of the host galaxy nucleus, they obtained a zero-centred distribution with a rms of 162 km s −1 , which is attributed to the galaxy rotation effect. In our sample, 11 SNe II (SN 2002gd, SN 2002gw, SN 2002hj, SN 2003B, SN 2003E, SN 2003bl, SN 2003bn, SN 2003ci, SN 2005ay, SN 2009N, and SN 2014G) show H ii region narrow emission lines in the spectra, which we use to estimate the heliocentric redshift. Another six SNe (SN 2003T, SN 2004et, SN 2005cs, SN 2008in, SN 2012aw, and SN 2013ej) exploded within nearly face-on galaxies, in which case we adopt the redshift of the host galaxy nucleus. For SN 1999em we adopt the value from Leonard et al. (2002), and for SN 2003hn we use the average of the Na i D velocities measured by Sollerman et al. (2005). 
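The Monte Carlo estimate of the Fe ii λ5169 velocity described above can be sketched as follows. The smoothing fraction, the random seed, and the omission of the endpoint perturbation are simplifications of this illustration; the wavelength array is assumed to be already corrected for the SN heliocentric redshift.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

C_KMS = 299_792.458
LAMBDA_FE = 5169.0  # Fe II rest wavelength (Angstroms)

def doppler_velocity(lambda_min):
    """Relativistic Doppler velocity (km/s) from the absorption-minimum wavelength."""
    r = (LAMBDA_FE / lambda_min) ** 2
    return C_KMS * (r - 1.0) / (r + 1.0)

def vph_distribution(wave, flux, dlam, nsim=10_000, frac=0.3, seed=0):
    """Velocity distribution from one line profile: the noise is the scatter around
    a local-regression fit, and each realisation perturbs the noise and the
    wavelengths within the spectral resolution dlam."""
    rng = np.random.default_rng(seed)
    smooth = lowess(flux, wave, frac=frac, return_sorted=False)
    noise = np.std(flux - smooth)
    vels = np.empty(nsim)
    for k in range(nsim):
        w = wave + rng.uniform(-0.5 * dlam, 0.5 * dlam, wave.size)
        f = smooth + rng.normal(0.0, noise, wave.size)
        refit = lowess(f, w, frac=frac, return_sorted=False)
        vels[k] = doppler_velocity(w[np.argmin(refit)])
    return vels  # adopt e.g. the median as v_ph and the rms as its error
```

The resulting distribution plays the role of the v_ph pdf used later when propagating errors into the pseudo-distance moduli.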
The remaining five SNe (SN 2003cn, SN 2009ib, SN 2009md, SN 2012A, and SN 2012ec) did not occur within nearly face-on galaxies, and do not show H ii region narrow emission lines in the spectra. For those cases we adopt the redshift of the host galaxy nucleus, with an error of 162 km s −1 (that we assume normal) to take into account the host galaxy rotational velocity. Adopted SN heliocentric redshifts are listed in Table 1. Explosion Epoch The SN explosion epoch can be estimated by means of photometric information; it can be constrained between the last non-detection t ln and the first detection t fd (e.g., Nugent Valenti et al. 2016), or estimated through a polynomial fit to the rise-time photometry when it is available (e.g., González-Gaitán et al. 2015;. The spectroscopy of a SN can also provide information about its explosion epoch by means of the comparison with other spectra of SNe with explosion epoch estimated through photometric information (e.g., Anderson et al. 2014b;Gutiérrez et al. 2017). Column 4 and 5 of Table F3 lists the t ln and t fd values of the SNe in our set, respectively. The explosion epochs for our set are typically constrained within 14 d using photometric information, which is twice the range suggested by R14 (namely, 7 d) to reduce errors induced by t0 errors over PMM distances. We need, therefore, to include spectroscopic information in order to better constrain the explosion epochs. As was done by Anderson et al. (2014b) and Gutiérrez et al. (2017), to estimate t0 we use optical spectroscopy along with the Supernova Identificator code (SNID; Blondin & Tonry 2007), which finds by crosscorrelation the spectra from its SN library that are more similar to the input spectrum. For a good estimation of t0 with SNID, we need a library with spectra of an ample amount of SNe II that sample the high spectral diversity displayed by SNe II (e.g., Gutiérrez et al. 2017) and with t0 constrained by photometric information. In this work, we compile optical spectroscopy of 59 SNe II with t0 constrained within 10 d (for more details, see Appendix D). To estimate the explosion epoch of a given SN (SNinput) with N spectra ({spec}) through SNID and using our SN II templates library, we perform the following procedure: 1. We run SNID using as input the N spectra of SNinput earlier than 40 d since the first detection. The SNID output for each spectrum is a list with the best-matching templates, their phase since explosion, and their rlap parameter (which indicates the strength of the correlation). 2. We convert phases since explosion to explosion epochs (since we know the phase of each SNinput spectrum). The associated errors are derived from the rlap values through a procedure described in Appendix D. 3. From each of the N lists, we select the first ten bestmatching templates with rlap > 5.0, compiling them in a unique list. From this list, we extract a sublist for each of the M best-matching SNe (SN bm ). With each of the M sublist, we compute the SNin explosion epoch as the average, taking the standard deviation as the associated error, and including the respective explosion epoch error of the SN bm through a Monte Carlo error propagation. If a spectrum gives a median t0 greater than 40 d, then we remove it from the analysis. 4. After that, we compute the likelihood L(t0|{spec}) with the M results, including an error of 4.1 d which is the rms obtained from the comparison between explosion epochs constrained with photometric information and those derived with SNID (see Appendix D). 
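Steps 3 and 4 above amount to combining per-template explosion-epoch estimates into one value. The sketch below is one possible reading of that combination: the hypothetical input maps each best-matching template SN to (t0, error) pairs derived from its rlap values, and the inverse-variance weighting used to merge the M per-SN results is an assumption of this sketch rather than the procedure actually implemented.

```python
import numpy as np

SIGMA_SNID = 4.1  # intrinsic error (days) adopted for SNID-based epochs

def combine_t0(matches):
    """Combine SNID matches into one explosion epoch. `matches` maps each
    best-matching template SN to a list of (t0, t0_err) pairs, one per input
    spectrum of the SN whose epoch is being estimated."""
    per_sn = []
    for pairs in matches.values():
        t0s, errs = map(np.asarray, zip(*pairs))
        # average per template SN; scatter plus the template's own epoch error
        per_sn.append((t0s.mean(), np.hypot(t0s.std(), errs.mean())))
    t0s, errs = np.array(per_sn).T
    var = errs ** 2 + SIGMA_SNID ** 2          # add the intrinsic SNID error
    t0 = np.sum(t0s / var) / np.sum(1.0 / var)
    t0_err = np.sqrt(1.0 / np.sum(1.0 / var))
    return t0, t0_err
```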
APPLYING THE PMM Once all observables required for the PMM are available, the next step is to prepare the data before applying the method. As was mentioned in Section 4, we interpolate photometry to the epochs of the photospheric velocities. Since we want to study the PMM distance precision at different photometric bands, i.e., changing only the photometry, we use epochs where spectroscopy is covered simultaneously by optical and near-IR photometry. In the case of SN 2002hj, it does not have spectroscopy covered by J-band photometry, so we interpolate photospheric velocities (using loess) and BVIH photometry to the epochs of J-band photometry. Calibration For the PMM calibration, we express ax,∆t as where ZPx is the zero-point of the PMM in the x-band, and a * x,∆t is a function that represents the dependence of ax,∆t on ∆t (without the constant term). To estimate the evolution of a * x,∆t with ∆t, we use the a * x,∆t i values of the SNe in our set with two or more v ph measurements during the photospheric phase. For each SN, the a * x,∆t i values are given by where δSN is an additive term to normalize the a * x,∆t i values of each SN to the same scale. Based on the definition given in R14, the dependence of a * x,∆t on ∆t has the form a * x,∆t = fx,∆t − 5 log ∆t 100 d . We express the dependence of fx,∆t on ∆t through polynomials. We use the AIC/BIC to determine the optimum polynomial order for fx,∆t and the values of δSN, while to estimate the time range of applicability of the PMM, we group the fx,∆t i values in bins of width 10 d and then we compute the rms of the points in each bin. We found that rms values are lower in a range 35-75 d since the explosion. Among all optimum orders for BVIJH bands (see Table F1), we select the order that the different bands have in common, i.e., order one. With this, we prevent that differences in the rms(fx,∆t) value for different bands are due to differences in the order of the polynomial fit. To estimate error in the parameters, we perform 10 4 bootstrap resampling. Table 2 lists fx,∆t fits parameters for BVIJH bands. The left half of Fig. 6 shows the values of a * x,∆t i as a function of ∆ti for BVIJH bands. The variation of the loess fits (red dashed lines) are within one rms (black dotted lines), which means that polynomial fits we adopted capture almost all the dependence on ∆t. The PMM zero-points can be obtained using a sample of SNe II at known distances where, for each SN, we have Here, µ SN host is the SN host galaxy distance modulus and µ * x is the SN pseudo-distance modulus. The latter, for each measurement of v ph at time ti, is defined similar to equation 2 but with a * x,∆t i instead of ax,∆t i , i.e., The pdfs of µ * x,i are obtained through equation 23 using the pdfs of the observables for each photospheric velocity epoch. Finally, we combine the pdfs of µ * x,i in a unique µ * x pdf maximizing the likelihood (equation A1) for a constantonly model. To compute accurate ZPx values, we need SNe II in galaxies with distances measured with the best possible precision. Among the SNe that we compiled from the literature, there are only three (SN 1999em, SN 2003hn, and SN 2012aw) in galaxies with distances measured through Cepheid, and five (SN 2003hn, SN 2004et, SN 2005cs, SN 2012aw, and SN 2013ej) in galaxies with distances measured with TRGB. Cepheid distances for the hosts of SN 1999em and SN 2012aw were reported by Saha et al. (2006), while Riess et al. (2016) reported the Cepheid distance of the host of SN 2003hn. 
Comparing Cepheid distances of six galaxies in common between the two publications (NGC 1365, NGC 3370, NGC 3982, NGC 4536, NGC 4639, and NGC 5457) we found that Cepheid distances reported by Saha et al. (2006) are, on average, 0.19 mag greater than those reported by Riess et al. (2016), showing a rms of 0.13 mag. The latter could indicate a systematic difference between the two calibrations, which can introduce an undesirable noise on the ZPx estimation if we rescale Saha et al. (2006) distances to the Riess et al. (2016) calibration. For this reason, we decide to use only SNe in galaxies with TRGB distances, that can be homogenized to the Jang & Lee (2017a) calibration, which is based on the distance to the Large Magellanic Cloud and NGC 4258. Recalibrated TRGB distances are listed in Column 6 of Table 1. From these five SNe II, we discard SN 2004et since the TRGB distance of its host is at least 0.59 mag higher compared to the distances we compute for SN 2004et and two other SNe II that exploded in the same galaxy (see Appendix E). The right half of Fig. 6 shows the ZP SN x values for BVIJH. As in the case of µ * x , the pdf of ZPx is obtained combining the pdfs of ZP SN x . Median values, 99 per cent CI, and rms values for ZPx are summarized in Table 2. Once the PMM zero-points are computed, we can estimate the distance modulus for each band as µx = µ * x − ZPx. Median values, 80 per cent CI, and rms values for µx are summarized in Table F4, where we include the TRGB zeropoint systematic error of 0.058 mag (Jang & Lee 2017a). Hubble Diagrams To investigate the PMM distance precision, we construct HDs. We convert heliocentric host galaxy redshifts to cosmo- logical ones using as reference the cosmic microwave background (CMB) dipole (Fixsen et al. 1996). Redshift errors are dominated by peculiar velocities, with a rms of 382 km s −1 for local SN Ia host galaxies (z < 0.08, Wang et al. 2006), followed by the error in the determination of the Local Group velocity (rms of 187 km s −1 , Tonry et al. 2000). CMB redshifts and their rms errors are listed in Table F4. Taking into account the pdfs of the pseudo-distance moduli (µ * x ) and the pdfs of the CMB redshifts (czCMB), we compute the Hubble diagram intercept (aHD,x) maximizing the likelihood (equation A1), where the model for the pseudo-distance modulus is given by the Hubble law µ * x = aHD,x + 5 log(czCMB). The left half of Fig. 7 shows HDs for BVIJH bands, using PMM distances for all SNe in our set. The rms, greater than 0.5 mag for all bands, is mostly produce by peculiar velocities of host galaxies at low redshift. In fact, the median redshift of the host galaxies in the HD is 1528 km s −1 , where a redshift error of 382 km s −1 translates into a magnitude error of 0.54 mag. Indeed, if we use redshifts corrected for the infall of the Local Group toward the Virgo cluster and the Great Attractor (czcorr) instead of CMB redshifts, we obtain a HD rms of 0.34-0.38 mag for VIJH bands (see the right half of Fig. 7). We note that even after infall corrections the scatter in the HDs is still mostly due to SNe in galaxies with cz < 2000 km s −1 . Therefore, to estimate the PMM distance precision and the Hubble constant (H0), given by log H0 = (25 − aHD,x + ZPx)/5, we use only SNe II with cz > 2000 km s −1 and, as visible in the left half of Fig. 8, the HD rms decreases significantly. The corresponding values of H0 and rms are listed in Table 3. The values of H0 range between 67.1 and 74.9 km s −1 Mpc −1 . 
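The intercept fit and the conversion to H0 quoted above can be sketched in a few lines. Treating the distance-modulus and redshift errors as Gaussian (a simplification of the pdf-based likelihood used in the text), the maximum-likelihood intercept of the Hubble-law model reduces to an inverse-variance weighted mean.

```python
import numpy as np

def hd_intercept(mu_star, mu_err, cz, cz_err):
    """Hubble-diagram intercept a_HD for mu* = a_HD + 5 log10(cz), with the
    redshift error propagated into magnitudes; returns the intercept and rms."""
    sig_cz = (5.0 / np.log(10.0)) * cz_err / cz
    var = mu_err ** 2 + sig_cz ** 2
    resid = mu_star - 5.0 * np.log10(cz)
    a_hd = np.sum(resid / var) / np.sum(1.0 / var)
    return a_hd, np.sqrt(np.mean((resid - a_hd) ** 2))

def hubble_constant(a_hd, zp):
    """H0 (km/s/Mpc) from log H0 = (25 - a_HD + ZP)/5, with cz in km/s."""
    return 10.0 ** ((25.0 - a_hd + zp) / 5.0)
```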
Taking into account that the ZPx values were calibrated using TRGB distances in the scale of Jang & Lee (2017a), our H0 values, as expected, are consistent within the errors with those reported in Jang & Lee (2017b), i.e., 71.17 ± 1.66 ± 1.87 km s −1 Mpc −1 , which also use the Jang & Lee (2017a) calibration. As visible in Column 4 of Table 3, all the H0 values are compatible within their errors. However, we note that H0 decreases moving from shorter to longer wavelengths, which could suggest a systematic introduced by: (1) our assumption of the RV value for the SN host galaxies, or (2) an underestimation and/or an overestimation of the host galaxy colour excesses for the four SNe in the PMM calibration set and the the nine SNe at czCMB > 2000 km s −1 , respectively. To test the first possibility, we change the RV adopted for SN II host galaxies to the lowest RV value for which the Fitzpatrick (1999) extinction curve is defined (RV = 2.3). As is visible in Column 6 of Table 3, there are no significant changes in the H0 values. For the second possibility, we found that an underestimation of E h (B −V ) for the SNe in the calibration set, or an overestimation of E h (B −V ) for the SNe at czCMB > 2000 km s −1 , of 0.05-0.07 mag can erase the tension between the H0 values for all bands. Given the typical statistical E h (B −V ) error of 0.097 mag (see Section 7), the probability of obtaining an E h (B −V ) underestimation of 0.05-0.07 mag with four objects is of 8 per cent, while to obtain an overestimation in a same amount for nine objects is of 5 per cent. It is worth mentioning that also the scatter in ZPx decreases going from shorter to longer wavelengths, suggesting again a trend introduced by the combination of a large uncertainty in the colour excess estimation and the low number statistics. Regarding the HD scatter, we note that the rms of 0.28 mag obtained with the B-band decreases to 0.15-0.18 mag for the VIJH bands. Despite the good results, the low number of SNe II within galaxies at czCMB > 2000 km s −1 makes the result not statistically robust. Therefore, to check the PMM distance precision, it is necessary to include more SNe II into the analysis. Thus, we include the four SNe II used for the PMM calibration, plus other two in galaxies having Cepheid and SN Ia distances. The latter distances are converted to redshifts through the Hubble law (equation 24), where for each band we use the H0 value listed in Column 4 of Table 3 and the ZP value given in Table 2. The right half of Fig. 7 shows the HDs computed with the selected SNe II based on the aforementioned criterion, which correspond to our final sample. For VI bands we obtain a HD rms of 0.15-0.16 mag. The lowest HD rms is obtained with the J-band, whose rms of 0.13 mag translates into a distance precision of 6 per cent. This value, compared to the rms of 0.15-0.26 mag obtained for optical bands, suggests that using the J-band photometry instead of optical one to measure PMM distances can improve the precision by at least 0.07 mag. For the H-band we expected a similar HD rms than for J-band since, among BVIJH bands, the H-band is the least affected by dust extinction. We, however, obtained a HD rms of 0.15 mag. The latter can be partially due to the higher photometry error of the H-band (of 0.07 mag) with respect to the error of the J-band (of 0.05 mag). 
Statistical significance of the result. Given the small size of our SN sample, the HD rms of 0.13 mag we measured for the J-band is not statistically robust enough to be considered a measure of the PMM distance precision in that band. In particular, we want to know the probability of measuring an rms ≤ 0.13 mag with N = 15 values drawn from a parent distribution with standard deviation (σ) ≥ 0.13 mag. Assuming a normal parent distribution, the quantity (N−1)(rms/σ)^2 has a chi-square distribution with N−1 degrees of freedom. Using this, we find that if the parent distribution had σ = 0.23 mag, the chance of measuring rms ≤ 0.13 mag would be only 1 per cent. Therefore, with the evidence we have, we can set an upper limit on the PMM distance precision with the J-band of 10 per cent with a CL of 99 per cent.

Comparison with other methods. For a consistent comparison of our results with those from other SN II distance measurement methods, we select results from works that use a sample of SNe II at z ≈ 0.01-0.03. Table 4 lists the best distance precisions reached by other SN II distance measurement methods along with the results obtained in this work. We note that the dispersion we report in this work is significantly lower than the best dispersions obtained by other works with the SCM and PCM. We also compare the PMM and the SCM applied to the same sample for the J- and H-band. For this comparison we discard SN 2002hj because there is no J-band photometry at 50 d since explosion. As visible in Table 4, the dispersion is lower by ∼30-40 per cent in both bands.

Error budget. Taking into account that the lowest HD rms is obtained with the J-band, in Table 5 we show the statistical error budget for the distances measured in that band. We see that 88.6 per cent of the statistical error is induced by the errors of the first four terms: the host galaxy colour excess, the explosion epoch, the PMM zero-point, and the J-band photometry. Therefore, it is possible to improve the performance of the PMM in the future by developing a better method to estimate E^h(B-V), selecting SNe II with explosion epochs constrained within a small range of time, including more SNe II in the PMM calibration set, and increasing the quality of the J-band photometry.

Diminishing systematics. Our results show that we are reaching a precision in distance modulus of ±0.1 mag with the PMM at near-IR wavelengths. Therefore it is important to control systematics, and push them below 0.1 mag. To this end, in the following we analyse sources of systematics affecting our results: 1. Explosion epoch: The dependence of the PMM calibration on the explosion epoch (left half of Fig. 6) is stronger at early times, so t0 errors have a strong effect at those epochs. In order to obtain errors lower than 0.1 mag for observations at ∆t ≲ 35 d, we need SNe II with explosion epochs constrained within 5 d. 2. SN heliocentric redshift: When the host galaxy heliocentric redshift is used as a proxy for the SN heliocentric redshift, a systematic error is introduced into the absolute magnitude (equation 1). This effect increases when the photospheric velocity decreases, translating into errors ≳ 0.1 mag for photospheric velocities ≲ 3500 km s^-1. Therefore, if the optical spectra of a SN II do not show narrow H ii emission lines due to a nearby H ii region, or if the SN is not within a nearly face-on galaxy, then epochs for which photospheric velocities are greater than 3500 km s^-1 are preferable. 3. Host galaxy redshift: A galaxy is believed to be within the Hubble flow when its redshift is greater than 0.01.
At that redshift, peculiar velocities are thought to be negligible compared with the velocities due to the expansion of the Universe. However, the typical error of 382 km s^-1 translates into a distance modulus error of 0.28 mag at z = 0.01. Including the error in the determination of the Local Group velocity of 187 km s^-1, the redshift error increases to 425 km s^-1. Therefore, in order to reduce the error induced by redshift errors to a level lower than 0.1 mag, it is necessary to observe SNe II within galaxies at z > 0.03.

CONCLUSIONS Our main results are the following: 1. Using nine SNe II at cz > 2000 km s^-1, we obtained H0 ranging between 67.1 and 74.9 km s^-1 Mpc^-1, and a HD rms of 0.15-0.28 mag. 2. Adding six SNe II with host galaxy distances measured with TRGB, Cepheids, or SNe Ia (15 in total), whose distances were converted to redshifts through the Hubble law, we obtain a HD rms of 0.15-0.26 mag in the optical bands, which reduces to 0.13 mag in the J-band. In order to further test the promising results we are obtaining in this work, it is necessary to carry out an optical and near-IR photometric follow-up of SNe II at z > 0.03 and with explosion epochs constrained within 5 d. For these SNe, it is necessary to take at least one optical spectrum at any epoch between 35 and 75 d since explosion. It is evident from Fig. 1 that the quality of the near-IR photometry used in this work is in general lower than that of the optical photometry. Therefore, we expect that increasing the quality of the near-IR photometry will further improve our results.

APPENDIX A: MODEL SELECTION For the model selection we consider two criteria: the "an information criterion" (AIC, Akaike 1974), which is based on information theory, and the Bayesian information criterion (BIC, Schwarz 1978), which is based on Bayesian inference. From a set of R models, the AIC selects the model that has the least information loss with respect to the unknown true model, while the BIC selects the model with the highest likelihood L, given by

L = \prod_{i=1}^{N} p(X_i\,|\,\mathrm{model}).  (A1)

Here, X_i is the i-th observed datum, p(X_i|model) is the probability density function (pdf) of X_i given the model, and N is the number of observed data points. Quantitatively, the AIC and BIC search for a balance between overfitting and underfitting by penalizing the likelihood. For the AIC and the BIC, the best model is the one which minimizes the quantity

\mathrm{AIC} \equiv -2\ln L_{\max} + 2k + \frac{2k(k+1)}{N-k-1}  (A2)

(corrected for small sample sizes, Sugiura 1978), and

\mathrm{BIC} \equiv -2\ln L_{\max} + k\ln N  (A3)

(Schwarz 1978), respectively, where L_max is the maximum likelihood achievable by the model, and k is the number of parameters of the model. It is known that a model selection based only on the minimum AIC value reached for a certain model does not provide enough evidence to prefer that model over the other ones (e.g., Akaike 1978; Burnham & Anderson 2002). Instead, it is necessary to include in the analysis the strength of evidence in favour of each model. To quantify the latter, it has been proposed to use the likelihood of the model given the data (e.g., Akaike 1978), which, normalized by the sum of the likelihoods of all R models, defines the Akaike weights

w_i = \frac{\exp\left[-\left(\mathrm{AIC}_i - \mathrm{AIC}_{\min}\right)/2\right]}{\sum_{j=1}^{R}\exp\left[-\left(\mathrm{AIC}_j - \mathrm{AIC}_{\min}\right)/2\right]}  (A4)

(e.g., Burnham & Anderson 2002). Here, AIC_min is the minimum AIC value reached among the R models used in the analysis. The same idea is applicable to the BIC (Burnham & Anderson 2002), which defines the Bayesian weights

p_i = \frac{\exp\left[-\left(\mathrm{BIC}_i - \mathrm{BIC}_{\min}\right)/2\right]}{\sum_{j=1}^{R}\exp\left[-\left(\mathrm{BIC}_j - \mathrm{BIC}_{\min}\right)/2\right]}.  (A5)

For the AIC and BIC, the evidence ratios, defined as w_i/w_j and p_i/p_j, respectively, allow comparison of the evidence in favour of the i-th model as the best model versus the j-th model.
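The weights defined above feed the selection rule used throughout Sections 5-8: compute the weights, keep all models whose evidence ratio relative to the best is below the strong-evidence threshold (quantified in the next paragraph), and pick the one with fewest parameters. A minimal sketch, not the implementation used in this work, is:

```python
import numpy as np

def preferred_model(ic_values, n_params, strong_evidence=13.0):
    """Select a model from AIC (or BIC) values using weights, evidence ratios,
    and the principle of parsimony."""
    ic = np.asarray(ic_values, dtype=float)
    w = np.exp(-0.5 * (ic - ic.min()))
    w /= w.sum()
    supported = np.flatnonzero(w.max() / w < strong_evidence)
    return supported[np.argmin(np.asarray(n_params)[supported])]
```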
As reference, if evidence ratios are greater than 13.0, then there is a strong evidence in favour of the i-th model over the j-th model (e.g., Liddle 2007). If several models have substantial support as the best (e.g., evidence ratios < 13.0), then, by the principle of parsimony, we select the one with less parameters. In the case of least-square regressions, with random errors normally distributed and with constant variance, −2 ln Lmax = N ln 2πσ 2 + N (A6) (e.g., Burnham & Anderson 2002, p. 17), whereσ 2 is the average of squared residuals around the model. The AIC and BIC in this case can be expressed as Sinceσ 2 is computed from data, it must be considered as a model parameter. In the case of nonparametric regressions (NPR), like loess, k is not defined. Instead, it has been proposed to use the trace of the smoother matrix, tr(H), which is a quantity analogous to the number of parameters in a parametric regression (Cleveland et al. 1992;Hurvich et al. 1998). Replacing k = tr(H) + 1 in equation (A7) allows us to obtain the AIC version for NPR presented by Hurvich et al. (1998). To check the normality of random errors, it is necessary to carry out a normality test. As we do not measure random errors directly, we use residuals instead. However, widely used normality tests like the Shapiro & Wilk (1965) and the Jarque & Bera (1987) test, when applied over residuals, have little power to reject the null hypothesis even when the random errors are not normal (Das & Imon 2016). Imon (2003) proposed a statistic more suitable to verify normality for regressions, which is based on the Jarque & Bera (1987) test and on the use of unbiased moments. The statistic of the test, called Rescaled Moment (RM), is defined as RM ≡ N c 3 μ 2 3 /μ 3 2 /6 + c · (μ4/μ 2 2 − 3) 2 /24 (A9) (Rana et al. 2009), whereμi is the i-th sample moment, and c ≡ N/(N − k). Under the null hypothesis of a normal distribution, the RM statistic is asymptotically distributed as a chi-square distribution with two degrees of freedom. APPENDIX B: SN II SPECTRA LIBRARY In order to compute total-to-selective broadband extinction ratios (Section 5 and 7) and K-corrections (Section 6) for SNe II, it is necessary to know their SED. The latter can be estimated through theoretical models (e.g., Sanders et al. 2015;de Jaeger et al. 2015de Jaeger et al. , 2017b or, as in Olivares (2008) and in this work, through spectroscopic observations. In practice, spectra are not always taken with the slit oriented along the parallactic angle (PA), so their shape can be modified due to differential refraction (Filippenko 1982). Even when the slit is oriented along the PA, contamination due to the light from the host galaxy can produce spurious SEDs. Therefore we have to check the flux calibration of spectra before using them as proxies for the SED. To do the latter, we compute colour indices from the spectra and then we compare them with those obtained using the observed photometry. If the spectrum is well flux-calibrated, then colour differences should be small. 
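This flux-calibration check can be sketched as follows; the photon-weighted integrand and the Vega-based colour zero-point handling are assumptions of the sketch, and the quantitative acceptance criterion is given below.

```python
import numpy as np

def spectral_colour(wave, flux, band1, band2, zp_colour):
    """Synthetic x1 - x2 colour of a spectrum; band1/band2 are (wave, trans)
    filter curves and zp_colour is the colour zero-point derived from Vega."""
    def band_flux(band):
        bw, bt = band
        t = np.interp(wave, bw, bt, left=0.0, right=0.0)
        return np.trapz(wave * t * flux, wave)
    return -2.5 * np.log10(band_flux(band1) / band_flux(band2)) + zp_colour

def calibration_offset(colour_phot, wave, flux, band1, band2, zp_colour):
    """Delta_{x1-x2}: photometric minus spectroscopic colour at the spectrum epoch."""
    return colour_phot - spectral_colour(wave, flux, band1, band2, zp_colour)
```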
Photometric colour indices at the epoch of the spectra can be computed through the light curve fitting procedure presented in Section 4, while to compute a x1 − x2 colour from a spectrum we use Here, λ is the wavelength in the observer's frame, F λ is the observed SED of the source, S x 1 ,λ and S x 2 ,λ are the transmission functions of the photometric band x1 and x2, respectively, and ZPx 1 −x 2 is the zero point of the colour scale, which can be computed using a star with good spectrophotometric observations. We use the Vega SED published by Bohlin & Gilliland (2004) 4 and the magnitudes published by Fukugita et al. (1996): BVega = 0.03, VVega = 0.03, and IVega = 0.024 mag, and by Cohen et al. (1999): JVega = −0.001, HVega = 0.000, and KVega = −0.001 mag. We adopt the transmission functions given in . Among the SN II spectra available from different sources, we select those: (1) observed in the photospheric phase, and (2) covered by B-and V -band photometry. To check the flux-calibration in the blue and red part of the optical spectra, we compute ∆x 1 −x 2 ≡ (x1−x2) phot −(x1−x2)spec using the B −V and V −I colours, respectively, while for the near-IR spectra we use the J −H and H −K colours, respectively. We also compute the intrinsic B −V colour to represent the shape of the SED. For optical spectra we compute this quantity from dereddened and deredshifted spectra, while for near-IR we compute the intrinsic B −V colour from the photometry (see Section C). values for the collected spectra. For the SN II spectra library, we select spectra with |∆x 1 −x 2 | < 0.1 mag. Finally, we correct spectra for redshift and for Galactic and host galaxy extinction, assuming a Fitzpatrick (1999) extinction curve with RV = 3.1 for both our Galaxy and hosts. Table F5 summarizes the details of the spectra in the library: SN name (Column 1), Galactic colour excess (Column 2), heliocentric redshift (Columns 3), host galaxy colour excess (Column 4), and references for the data (Column 5). APPENDIX C: C3 LINEARITY Assuming that a C3 can be well represented by a polynomial fit, the linearity of a C3 can be demonstrated if there is a high fraction of SNe II showing C3s with straight line as optimal polynomial. Due to the scarcity of K-band photometry, we use only BVIJH photometry for this analysis. With BVIJH photometry set it is possible to define a total of ten colour indices and, therefore, 90 colour-colour plots (i.e., discarding one-to-one correlations). Among them, only 36 combinations give us non-superfluous information, which we analyse for host galaxy colour excess estimation. Before computing intrinsic C3 slopes, photometry must be corrected for Galactic and host galaxy extinction, and for K-correction. Since we need the prior knowledge of the intrinsic B −V , we need to know in advance the value of host galaxy colour excess. For the latter, we apply zero order correction as prior values. The intrinsic B −V can be computed easily from the relation between the observed and the intrinsic B −V colour, i.e., where (B −V ) obs is the observed B −V . In Sections 5, 6, and 7 we found that R G For each SN and for each colour combination, we adjust a polynomial fit. The optimum degree is determined using the AIC/BIC and the principle of parsimony. Fig. C1 shows the fraction of SNe that are well represented by a straight line. In 20 of the 36 colour combinations, the number of SNe displaying a linear C3 is over 70 per cent. 
Assuming the C3 linearity for different combinations, we compute slopes of all SNe in our set. For each colour combination we compute the median and rms of the C3 slopes. Fig. C2 shows this process for the V −I versus B −V C3s. Among the spectra available from different sources, we select spectra: (1) of SNe II with explosion epoch constrained within 10 d through photometric information, where for these SNe we assume the midpoint between the last nondetection and the first detection as the explosion epoch (t 0,ln ), (2) spanning a rest-frame wavelength range from 4100 A to 7000Å with a S/N 10Å −1 , and (3) within 40 d since the explosion epoch. We do not include spectra at > 40 d since explosion because in the literature it is less abundant than spectra at earlier epochs (see, for example, Fig. 5 in Gutiérrez et al. 2017) which could bias the age determination toward earlier epochs, and also because at late time the spectral evolution is slower than at early phases, which makes more difficult to accurately determine ages with SNID (Blondin & Tonry 2007). If for a given epoch a SN has several spectra within one day, then we keep that with higher S/N. With the aforementioned constraints, we generate a SNID template library with 242 spectra of 56 SNe II, where each spectrum is corrected by heliocentric redshift. Details of this SN II templates library are summarized in Table F6. The library has, on average, 6 spectra per day, while almost all the variation is within ±2 rms around the mean. To create the template library, we run the logwave routine (which is part of the SNID program) with the options w0=3000 w1=8400 nw=1024. This generates SNID spectral templates with a bin in the logarithmic wavelength space of ln(8400/3000)/1024≈0.001, equivalent to 300 km s −1 . Once the template library is created, the next step is to test how good are the phases since explosion computed with SNID and our new library. To do this, we run SNID with each library spectrum as input, using the avoid option to avoid templates of the same SN. We record all phases and rlap values (which indicate the correlation strength) of the templates found to be similar to the input spectrum and with a redshift within ±0.01. The top panel of Fig. D2 shows the 2D histogram of differences between phases since explosion estimated from last non-detection and from SNID, versus rlap. To correlate rlap with a rms error in phase, we compute the rms of phase differences in bins of width 2 rlap, which is shown at bottom of In general, only one spectrum (e.g., the earliest) is used to estimate explosion epochs with SNID (e.g., Anderson et al. 2014b;Gutiérrez et al. 2017). We expect, however, that including all available spectra of a SN in the analysis will result in a best estimation of the explosion epoch. To explore this possibility, we select all SNe in our library with five or more spectra and perform the following procedure: 1. For each SN we randomly choose one spectrum, computing the explosion epoch (t0,SNID) and t0,SNID − t 0,ln . 2. We compute the median (i.e., the offset) and the rms of the t0,SNID − t 0,ln distribution. 3. We repeat steps 1 and 2 10 2 times, recording the median of the offsets and the rms values. 4. We repeat steps 1-3, but now randomly choosing two and then three spectra per SN as input. (1989) extinction curve and R V = 1.9 (Pozzo et al. 2006). For that extinction curve, we obtained R h J = 0.402. 
Table D1 shows the results of the aforementioned process, i.e., the offset and the rms as a function of the number of input spectra. Using only one spectrum we obtain a typical rms of 5.0 d, which is similar to the rms of 5.2 d reported by Gutiérrez et al. (2017). We see that using more than one spectrum the rms is reduced down to 4.1 d. The median of the offsets is around −0.6 d, independent of the number of input spectra in the analysis. This offset means that explosion epochs computed with SNID are 0.6 d earlier than those estimated with the non-detection. Hereafter, for the explosion epochs derived with SNID we assume an intrinsic error of 5.0 d when only one spectrum is used, or 4.1 d when more spectra are available.

APPENDIX E: THE DISTANCE TO NGC 6946 The distance to NGC 6946, host of SN 2004et, was measured with the TRGB method by Tikhonov (2014, hereafter T14), Murphy et al. (2018, hereafter M18), and Anand et al. (2018, hereafter A18), and corresponds to µ = 29.39 ± 0.14 mag in the Jang & Lee (2017a) calibration. The PMM J-band distance for SN 2004et obtained in this work (µJ = 28.83 ± 0.12 mag) is in conflict with the TRGB estimation. To investigate the reason for this discrepancy, we compute distances to two other SNe II that exploded in NGC 6946: SN 1980K and SN 2002hh. These SNe have near-IR photometry, but we did not include them in the analysis because they do not have photometry in the five bands we use (i.e., BVIJH). Using the data given in Table D2, we obtain µJ = 28.73 ± 0.18 and 28.77 ± 0.11 mag for SN 1980K and SN 2002hh, respectively, which are consistent with the distance computed with SN 2004et. There is then a tension of at least ∼4 rms between the PMM and the TRGB distance. This discrepancy could be due to: (1) all three SNe II being intrinsically brighter by at least 0.56 mag than the SNe II we use for the calibration, or (2) issues with the TRGB distances reported by T14, M18, and A18. We note that the latter two independent studies used almost the same data but obtained significantly different values of the TRGB F814W magnitude (F814W_TRGB): 26.00 ± 0.04 mag in M18 and 25.84 ± 0.11 mag in A18. T14 used another set of image data, closer to the centre of the galaxy than those used in M18 and A18, and obtained a lower F814W_TRGB value (25.79 ± 0.05 mag). At this moment, the origin of this large discrepancy is unclear. Taking this into account, we safely remove SN 2004et from the calibration and the final sample.

APPENDIX F: TABLES This paper has been typeset from a TeX/LaTeX file prepared by the author.

Table F1. Akaike and Bayesian weights, and evidence ratios.

Notes to Table F6 (SN II spectra used as SNID templates). Column 1: SN names. Column 2: discovery epochs. Columns 3 and 4: last non-detection and first detection epochs, respectively, with respect to the discovery epoch. Column 5: spectra phase values with respect to the explosion epoch, which we assume as the midpoint between the last non-detection and the first detection. Adjacent ages are listed in brackets. Column 6: references for the data. a Explosion time constraint obtained through a polynomial fit to pre-maximum VRI photometry. b C. Feliciano report on the Bright Supernova website (http://www.rochesterastronomy.org/snimages/). c (1) Gutiérrez et al. (2017).
Pou5f1/Oct4 Promotes Cell Survival via Direct Activation of mych Expression during Zebrafish Gastrulation Myc proteins control cell proliferation, cell cycle progression, and apoptosis, and play important roles in cancer as well in establishment of pluripotency. Here we investigated the control of myc gene expression by the Pou5f1/Oct4 pluripotency factor in the early zebrafish embryo. We analyzed the expression of all known zebrafish Myc family members, myca, mycb, mych, mycl1a, mycl1b, and mycn, by whole mount in situ hybridization during blastula and gastrula stages in wildtype and maternal plus zygotic pou5f1 mutant (MZspg) embryos, as well as by quantitative PCR and in time series microarray data. We found that the broad blastula and gastrula stage mych expression, as well as late gastrula stage mycl1b expression, both depend on Pou5f1 activity. We analyzed ChIP-Seq data and found that both Pou5f1 and Sox2 bind to mych and mycl1b control regions. The regulation of mych by Pou5f1 appears to be direct transcriptional activation, as overexpression of a Pou5f1 activator fusion protein in MZspg embryos induced strong mych expression even when translation of zygotically expressed mRNAs was suppressed. We further showed that MZspg embryos develop enhanced apoptosis already during early gastrula stages, when apoptosis was not be detected in wildtype embryos. However, Mych knockdown alone did not induce early apoptosis, suggesting potentially redundant action of several early expressed myc genes, or combination of several pathways affected in MZspg. Experimental mych overexpression in MZspg embryos did significantly, but not completely suppress the apoptosis phenotype. Similarly, p53 knockdown only partially suppressed apoptosis in MZspg gastrula embryos. However, combined knockdown of p53 and overexpression of Mych completely rescued the MZspg apoptosis phenotype. These results reveal that Mych has anti-apoptotic activity in the early zebrafish embryo, and that p53-dependent and Myc pathways are likely to act in parallel to control apoptosis at these stages. Introduction Apoptosis plays a crucial role during development and maintenance of homeostasis in multicellular organisms by eliminating damaged or unneeded cells [1,2]. Programmed cell death is intensely studied in model organisms, because its deregulation is involved in many diseases including cancer, Alzheimer's disease, or immune deficiencies [3,4]. Today many components of the apoptosis pathway and their regulatory interactions are known [5][6][7]. In recent years, zebrafish have been increasingly used as model for studying apoptosis, because of the ease of experimental manipulation and the close homology of the apoptosis pathway core factors between fish and mammals [8][9][10][11][12][13]. Myc proteins are transcription factors involved in regulation of various cellular functions including cell-cycle progression [14], differentiation [15,16], cell growth [17], and apoptosis [18]. Myc forms heterodimers with its transcriptional co-regulator Max to bind specific DNA sites in the promoter regions of its target genes [19,20]. It was shown that Myc might be associated both with gene activation [21,22] and repression [23,24]. The myc proto-oncogene family consists of the three members c-myc [25], N-myc [26] and Lmyc [27]. Myc deregulation is often linked to tumor formation in animals and human [28][29][30]. During normal mouse embryonic development c-myc is expressed in many different cell types [31][32][33][34]. 
C-Myc was shown to correlate with cell proliferation in several tissues, while it induces or sensitizes cells to apoptosis in others [35]. L-myc and N-myc expression are restricted to specific tissues in the course of cell differentiation [31][32][33][34]. The importance of c-Myc in promoting early development was shown by the fact that c-myc homozygous knock-out mice are embryonic lethal [36], and its role in stem cell pluripotency when mouse fibroblasts were reprogrammed by overexpressing c-Myc together with Oct4, Sox2 and Klf4 [37]. In embryonic stem cells, Myc makes complex contributions to pluripotency [38][39][40]. In zebrafish, homologous genes for all myc family members have been identified [41,42]. As a result of the genome duplication that occurred in the evolution of teleosts [43] two fish paralogues each exist for c-myc (myca and mycb) and L-myc (mycl1a and mycl1b; Figure S1) [41,42]. In contrast, there is only one homologous gene for Nmyc known in zebrafish, mycn [42,44]. Recently, a new member of the myc gene family, mych, was identified in zebrafish. Mych shows a high similarity to N-Myc and c-Myc in its C-terminal amino acid sequence, but has no direct orthologous gene in higher vertebrates [41,45]. Zebrafish myc homologous genes are differentially expressed during embryonic development, and in specific adult tissues [41,42,44,45]. The spatial distributions of the early gene expression patterns have been previously described for mych, mycn and mycl1a. All three genes show a broad expression at blastula and gastrula stages [44][45][46]. Meijer et al. [41] described the postgastrula expression patterns of all known zebrafish myc homologues in great detail (see also Figure S2). However, the functions of myc genes in early zebrafish development are poorly understood. The recently identified mych gene has been linked to cell survival and neural crest development [45]. In mammalian ES cells, myc genes are regulated by the master stem cell factors Pou5f1 (Oct4) and Nanog [38,40]. Previous work in our lab revealed that the transcript levels of two zebrafish myc genes, mycl1b and mych, are reduced in pou5f1 (also named Pou2 or Pou5f3; www.zfin.org) maternal and zygotic (MZspg) mutant embryos as judged from microarray analysis [47]. Here we analyzed the early expression patterns of the zebrafish myc genes and their expression in Pou5f1 deficient embryos. We showed that the zebrafish mych and mycl1b genes are bound by Pou5f1 and are likely direct targets of Pou5f1. We further showed that MZspg gastrulae display enhanced apoptosis compared to wildtype (WT) controls. The apoptosis phenotype was partially rescued by mych overexpression, indicating a role of Mych in regulation of cell survival during gastrulation. Results Pou5f1 activity is required to induce zygotic expression of mych and mycl1b Temporal expression profiles from time series microarray analyses (zygote to 8 hours post fertilization -hpf) of wildtype (WT) and Pou5f1 maternal and zygotic deficient embryos (MZspg) [47] have revealed myc genes potentially regulated by Pou5f1 in the early zebrafish embryo ( Figure S3). For the zebrafish c-myc homologues myca and mycb, in MZspg embryos compared to WT, the mRNA levels derived from maternal expression have been found to be reduced before onset of zygotic transcription (midblastula transition -MBT), but elevated at post-MBT stages ( Figure S3A and C). mycl1a expression was not altered in Pou5f1 mutants compared to WT ( Figure S3E). 
mycl1b expression in MZspg has been found to be normal during blastula and early gastrula stages, but downregulated during late gastrulation ( Figure S3G). The expression of mych and mycn has been shown to be strongly activated after MBT in WT embryos, but not in Pou5f1 mutants ( Figure S3I and K). Whereas the induction of mycn expression only appeared to be delayed in MZspg embryos ( Figure S3I), mych continued to be expressed at strongly reduced levels in MZspg throughout gastrulation ( Figure S3K). These findings suggested that Pou5f1 function is crucial for a proper regulation of mycl1b and mych gene expression in early zebrafish development. To confirm the microarray data and to obtain more information about the spatial distribution of gastrula stage myc gene expression patterns and their changes in MZspg embryos, we performed whole-mount in situ hybridization (WISH) expression analysis for myca, mycb, mych, mycl1a and mycl1b in WT and MZspg mutants at 60% epiboly ( Figure 1). Early expression patterns of mych and mycn have been previously reported [44,45]. The mesodermal marker no tail (ntl; [48] was used as a control for this experiment, because ntl is not regulated by Pou5f1 [47]. Only one of the two c-myc homologs, mycb, was detected with spatially restricted expression at midgastrulation ( Figure 1A-D). At 60% epiboly mycb was specifically expressed in the region of the embryonic shield ( Figure 1C-D). The expression patterns of both myca and mycb displayed no alterations in Pou5f1 deficient embryos compared to WT. The slightly higher microarray expression levels of myca and mycb at this stage in MZspg (Fig S3A,C) were not clearly detected by WISH except for a minor increase in stain intensity in MZspg. mych, mycl1a and mycl1b were found to be broadly expressed throughout the blastoderm at 60%-epiboly, while mycl1b, in contrast to mych and mycl1a, was excluded from the embryonic shield ( Figure 1E-J). We found that in MZspg mutants mych and mycl1b expression were strongly downregulated ( Figure 1E-F and I-J). Therefore, mych and mycl1b expression appeared to depend on the Pou5f1 transcription factor. Interestingly, expression of mych was found to depend on Pou5f1 activity in most early gastrula cells, except for the cells of the embryonic shield ( Figure 1E-F). We next quantified the expression of mych and mycl1b in WT and MZspg embryos for five different developmental stages ranging from 1000-cell (MBT) to 75% epiboly by relative quantitative realtime PCR (qPCR) (Figure 2). We used the housekeeping gene translation elongation factor 1a (ef1a also named eef1a1l1) as reference gene to normalize the expression levels of mycl1b and mych in MZspg and WT. The developmental expression profiles, determined by qPCR, for both genes were in agreement with microarray results ( Figure S3G,K) [47]. The developmental profile for mycl1b showed a strong signal at mid-blastula transition in WT and Pou5f1 mutants and expression decreased during gastrulation ( Figure 2A). This suggested that most of the early mycl1b mRNA is maternally expressed. In comparison to WT, the expression level of mycl1b in MZspg was about 1.5 times higher at MBT, but found to decline rapidly in MZspg mutant embryos during gastrulation, resulting in a 12.5 fold downregulation compared to WT at 60%-epiboly ( Figure 2A). This difference may be caused by two mechanisms: Pou5f1 may protect maternal mycl1b by direct or indirect mechanisms, or Pou5f1 may induce zygotic expression of mycl1b. 
In contrast to mycl1b, mych was not detected to be maternally expressed. mych expression was found to be activated immediately after MBT in WT embryos and to peak at 30% epiboly before declining again (Figure 2B). In MZspg embryos, mych expression was strongly reduced at all analyzed developmental stages (Figure 2B). Our results demonstrate that the Pou5f1 transcription factor is important for the early zygotic activation of mych expression, and for maintaining mycl1b RNA levels. In summary, our data indicate that Pou5f1 function is important for the activation of the early zygotic expression of mych and mycl1b during the first 8 hours of development. mych and mycl1b are directly activated by Pou5f1 To distinguish between direct and indirect Pou5f1 targets, Onichtchouk et al. [47] performed pou5f1 mRNA overexpression experiments in MZspg embryos and inhibited translation of zygotic mRNAs after MBT by adding the translation elongation inhibitor cycloheximide (CHX) at the 64-cell stage. They compared, by microarray analysis, expression profiles at 30% epiboly of embryos injected with pou5f1 mRNA versus non-injected controls, both treated with CHX, such that only the expression of direct Pou5f1 targets should be differentially affected. We analyzed these published data to answer the question whether the myc genes may be regulated directly or indirectly by Pou5f1 (Figure S3). We found that the expression of mych, mycl1b, and the c-myc homologous genes was more than 2-fold increased after the injection of pou5f1 mRNA and subsequent addition of CHX, which suggests direct regulation by Pou5f1 (Figure S3B, D, H and L). To determine the spatial extent of myc gene regulation by Pou5f1 in the embryo, and to confirm the microarray data, we addressed experimentally whether and to what extent Pou5f1 may be able to directly activate mych and mycl1b expression. We microinjected mRNA encoding a fusion protein of Pou5f1 with the strong transcriptional activator domain VP16 [49] into MZspg embryos, and inhibited translation of zygotically expressed mRNAs with CHX. Therefore, the effect of Pou5f1-VP16 on mych and mycl1b should be direct, and not mediated by indirect effects of Pou5f1 targets on mych and mycl1b. The amount of pou5f1-VP16 mRNA injected was chosen such that the embryos showed a partial rescue of the morphological MZspg phenotype as described by Lunde et al. [49] (Figure S4B), which demonstrates the functionality of the fusion mRNA used. To reveal the specificity of the experiment, we used no tail as a control (Figure 3E), which has been shown not to be regulated by Pou5f1 [49] and is not affected by pou5f1-VP16 mRNA in our experiment. We analyzed the expression of mych and mycl1b by WISH at 60% epiboly. Injection of pou5f1-VP16 fusion mRNA into MZspg embryos induced strong mych and mycl1b expression (Figure 3A and C, Supplemental Table S1), also in the presence of CHX (Figure 3B and D). However, mycl1b RNA levels were not only increased by pou5f1-VP16 overexpression in CHX-treated MZspg embryos, but also by the CHX treatment alone in control MZspg embryos (Figure 3D). Given the strong maternal expression (Figures 2 and S3), CHX treatment may indirectly prevent degradation of maternal mycl1b RNA by mechanisms depending on translation of zygotic protein products. To investigate whether Pou5f1 may bind to the mych and mycl1b regulatory regions, we analyzed published genome-scale chromatin immunoprecipitation data (ChIP-seq) for Pou5f1 [50].
Figure 4 shows that at the 50% epiboly stage Pou5f1 was detected bound in the proximity of both the mych and mycl1b genes (+6 kb and −10.5 kb from the transcription start sites, respectively). In addition, smaller Pou5f1 binding peaks marked the basal promoter region of mych. Three Pou5f1 ChIP-Seq peaks also correlated with Sox2 ChIP-Seq peaks, suggesting that Pou5f1 and Sox2 act together in this regulation. In summary, our data reveal that zygotic transcription of mych and mycl1b is directly regulated by Pou5f1. Pou5f1 mutants show enhanced apoptosis The most prominent early phenotype of MZspg embryos is delayed and ultimately arrested progression of epiboly (compare Figure 5B and J) [49,51,52]. Pou5f1-dependent regulation of cell adhesion and cell motility has been shown to contribute to the epiboly delay phenotype in MZspg [53], but does not appear to explain the full extent of phenotypic abnormalities in MZspg. Additional causes for the epiboly delay phenotype may include a slowdown of cell proliferation or an increase in apoptosis. Lachnit et al. [51] demonstrated that up to the 5 hpf stage there is no detectable reduction in the number of deep cells in MZspg mutants compared to wildtype. However, cell death and proliferation have not been analyzed during mid to late gastrula stages in MZspg so far. We analyzed cell proliferation at 90% epiboly by calculating the mitotic index in MZspg mutants and WT embryos (Figure S5). We fixed embryos and stained chromatin using the Sytox Green fluorescent dye. We recorded animal-view confocal stacks of the stained embryos and determined the total number of nuclei as well as the number of metaphase and anaphase nuclei, which characterize cells in mitosis. We calculated the ratio of mitotic nuclei to total nuclei (Figure S5). We found that even at the 90% epiboly stage, when the MZspg epiboly phenotype is very pronounced, the mitotic index of the MZspg mutants did not differ significantly from WT embryos (Figure S5A, Supplemental Table S3). Thus, a reduction in cell number caused by decreased proliferation during late gastrulation can be excluded as a cause of the MZspg epiboly phenotype. Figure 1. Spatial expression pattern of zebrafish myc genes in WT and MZspg embryos at 60% epiboly. Whole mount in situ hybridization (WISH) analysis of myca, mycb, mych, mycl1a and mycl1b expression in WT (right embryo in each panel) and MZspg (left embryo in each panel). All embryos are shown in lateral (left column) and animal (right column) views with dorsal oriented to the right. All analyzed myc genes are broadly expressed in mid-gastrula embryos, except for the c-myc homologues myca and mycb. mycb is specifically expressed in the shield, whereas myca was not detectable at this stage (A-D). mych and mycl1a in addition have a strong expression domain in the involuting axial mesoderm (E-H). Only mych (E-F) and mycl1b (I-J) depend on the function of Pou5f1, and their expression is strongly decreased in MZspg mutants. However, the mych expression domain in the involuting axial mesoderm is less affected in Pou5f1-deficient embryos (E-F; arrows). We used no tail as control, because its expression is not altered in MZspg mutants compared to WT (K-L). doi:10.1371/journal.pone.0092356.g001 Controlled cell death, and specifically apoptosis, contributes to embryonic development across the animal kingdom [54]. Cole and Ross [55] described the temporal and spatial distribution of apoptotic cells during normal zebrafish development.
The earliest apoptotic cells in wildtype development were reported around 12 hpf in the dorsal midline and the segmental plate. To investigate whether MZspg embryos have enhanced apoptosis before this stage, we analyzed apoptosis in MZspg mutant and WT embryos using TUNEL assay staining from the 32-cell stage up to bud stage (Figures 6 and 7). We could not detect any apoptotic cells in either MZspg or WT embryos until early gastrula stage (Figure 6A-F). Later in development, starting at 60% epiboly, we found enhanced apoptosis in MZspg mutants, whereas WT embryos showed no TUNEL staining (Figure 6G-J and Figure 7A-D). Mych controls apoptosis in zebrafish To investigate a potential contribution of loss of Mych to the MZspg phenotype, we analyzed the morphology after mych knockdown (Supplemental Table S2). Figure 2. The WT expression at the 60% epiboly stage was normalized to 1 for both genes. mycl1b is maternally expressed and declines as embryonic development progresses (A, white bars). In MZspg mutants, mycl1b expression is upregulated at MBT, likely reflecting higher maternal mRNA levels, but in comparison to WT it declines faster during gastrulation (A, black bars). In WT, mych is activated at MBT, then its expression increases until 30% epiboly before it starts to slowly fade (B, white bars). mych expression levels are up to 26-fold reduced in MZspg mutants compared to WT during early embryogenesis (B, black bars). A more detailed analysis of the late phenotypes of Mych morphants was reported by Hong et al. [45]. Next, we analyzed whether Mych activity may contribute to the control of apoptosis during gastrula stages. Following mych knockdown and analysis by TUNEL assay, we found enhanced apoptosis in Mych morphants at 24 hpf (data not shown), comparable to the results reported by Hong et al. [45]. To analyze whether mych overexpression can rescue the apoptosis phenotype, we injected mych mRNA into one-cell stage MZspg embryos and compared the numbers of apoptotic cells at late gastrula stage between injected and non-injected embryos. We found that mych mRNA injection led to a reduced number of apoptotic cells in the MZspg mutants at bud stage (Figure 7E-F). In parallel, we performed p53 knockdown in MZspg embryos using a p53-specific morpholino, which has been reported to block apoptosis [56]. MZspg embryos injected with p53 MO showed a strong reduction in apoptosis compared to non-injected MZspg controls (Figure 7G-H). The co-injection of mych mRNA and p53-MO completely rescued the MZspg apoptosis phenotype (Figure 7I-J). We quantified the numbers of apoptotic cells in these experiments (Figure 7K, Figure S7, Supplemental Table S4, Methods). When MZspg embryos injected with mych mRNA were compared to control MZspg embryos, a statistically significant six-fold reduction of apoptotic cells in mych-overexpressing MZspg embryos was observed. p53 knockdown caused a less pronounced reduction of the apoptotic phenotype in MZspg, and the combined overexpression of mych and knockdown of p53 rescued the apoptosis essentially back to low wildtype levels (Figure 7K). While we microinjected well-established amounts of p53 MO for knockdown, and a relatively large amount of mych mRNA for rescue, we cannot completely exclude that incomplete rescue or knockdown contributes to the apparently only partial rescue of apoptosis. We conclude that Mych- and p53-dependent pathways are likely to contribute in parallel to the MZspg apoptotic phenotype.
Discussion Myc proteins contribute to the control of cell proliferation by regulating cell cycle progression and apoptosis, and play important roles in cancer [28] as well as in the establishment of pluripotency [37]. Here we investigated the control of myc gene expression by the Pou5f1/Oct4 pluripotency factor in the early zebrafish embryo. We found that early zygotic mych expression as well as late gastrula stage mycl1b expression both directly depend on Pou5f1 activity. We further showed that Pou5f1-deficient MZspg embryos developed enhanced apoptosis already during early gastrula stages, and that mych overexpression in MZspg embryos was able to significantly suppress the apoptosis phenotype. Combined knockdown of p53 and overexpression of Mych completely rescued the MZspg apoptosis phenotype. Both results together reveal that Mych has anti-apoptotic activity in the early zebrafish embryo, and that p53-dependent and Myc pathways act in parallel to control apoptosis at these stages. Figure 3. mych expression can be rescued by Pou5f1-VP16 in both CHX-treated and untreated embryos, and therefore the regulatory influence of Pou5f1 should be direct (A-B). The expression of mycl1b can also be rescued by Pou5f1-VP16 overexpression in CHX-untreated MZspg embryos, but is also strongly upregulated in CHX-treated MZspg embryos, even without the injection of pou5f1-VP16 mRNA (C-D). Thus the experiment cannot prove whether activation of mycl1b by Pou5f1 is direct or not. We used no tail as negative control, because its expression is independent of Pou5f1 function (E) and depends on zygotic gene products. doi:10.1371/journal.pone.0092356.g003 Zebrafish myc genes are broadly expressed during the first hours of development Our and others' previous studies have shown that the zebrafish L-myc and c-myc homologous genes are maternally expressed, with mRNAs deposited in the egg that remain stable until blastula stages [42,44,45], whereas mycn and mych expression is first detectable after the mid-blastula transition [44]. Zebrafish mycl1a, mycl1b, and mych genes are broadly expressed in blastoderm cells after MBT. Broad myc gene expression during early development was also shown in other vertebrate species. The Xenopus c-myc, L-myc and N-myc homologous genes are all maternally expressed, but their expression levels decrease during early embryogenesis [57,58]. xc-myc expression is maintained at a low level during blastula and gastrula stages before it increases again during neurulation. During mouse gastrulation, c-myc and N-myc are widely expressed in embryonic and extraembryonic tissues [32]. Thus, broad myc gene expression during blastula and gastrula stages is evolutionarily conserved throughout the vertebrate subphylum. However, not much is known about potentially conserved myc gene functions during blastula and gastrula stages. Pou5f1 activity is required for proper mych, mycl1b and mycn expression We found that the transcription factor Pou5f1 is required for proper transcription of mych after MBT, from the 1000-cell stage on. While Pou5f1 appears to directly activate the broad early zygotic expression of mych after MBT, Pou5f1 activity is not required for the initiation of the later mych expression domain in the involuted axial mesoderm on the dorsal side of the embryo. Pou5f1 protein is also involved in proper maternal regulation of mycl1b, as enhanced levels of mycl1b were detected at pre-MBT and blastula stages in MZspg.
Further, Pou5f1 is required for proper maintenance of mycl1b expression levels during mid- to late gastrula stages, when mycl1b drops significantly below wildtype levels in MZspg embryos. In addition, Pou5f1 is required for proper early post-MBT activation of mycn expression, which is delayed in MZspg mutants. The regulation of c-myc expression has been intensively studied in several systems. A number of regulatory cis-elements have been identified in mammals, and it was shown that many transcription factors can bind to the c-myc regulatory elements in vivo [59]. Chen et al. [60] analyzed the core transcriptional network of embryonic stem cells using chromatin immunoprecipitation coupled with ultra-high-throughput DNA sequencing (ChIP-seq) to map the locations of key pluripotency transcription factors such as Oct4, Sox2, Nanog, c-Myc and N-Myc. They could show that Oct4 and Sox2 directly bind to the N-myc promoter region, but do not interact with c-myc. This reported regulation of myc family members is consistent with our results in early zebrafish embryogenesis, where the mych and mycl1b genes are directly induced by Pou5f1 and potentially also bound by Sox2. Thus, regulation of myc gene expression by Pou5f1 and Sox2 during early embryogenesis and in ES cells, respectively, may be an evolutionarily conserved feature of vertebrate development. Maternal and zygotic pou5f1 mutants show normal proliferation, but enhanced apoptosis at gastrulation stages When analyzing cell division rates and cell survival at blastula to gastrula stages, we found no differences in the mitotic rates between MZspg and WT embryos, while apoptosis rates were markedly increased in MZspg embryos starting from 60% epiboly. The relationship of Oct4 to apoptosis in mammalian ES cells is not well understood. Oct4 is a central factor in the ES cell transcriptional network, controlling many parameters of ES cell biology [38,40]. This role is compatible with both promoting proliferation and inhibiting apoptosis, as ES cells are an immortal cell population with fast self-replication rates. Oct4 deprivation results in global expression changes, cessation of rapid cell proliferation, and finally differentiation of ES cells to trophectoderm. However, Oct4 has also been linked to survival and anti-apoptotic pathways in ES cells by several different potential mechanisms: by a STAT3/Survivin route [61], by Trp53 regulation [62], or by a miR-125b pathway [63]. Beyond its role in ES cells, Oct4 has also been suggested to promote survival of cancer stem cells by inhibiting apoptosis [63,64]. In zebrafish, as in mammals, Pou5f1 controls expression of a large network of developmentally important signaling molecules and transcription factors [47,50]. The majority of these factors are expressed in embryonic tissue-specific patterns, starting already from mid-gastrulation stages [47,49,50,65]. Therefore, it is possible that the absence of some of these factors may not be compatible with cell survival within a specific tissue, which may lead to the activation of safeguarding apoptotic mechanisms. However, we did not detect any obvious tissue-specific pattern of apoptosis in MZspg mutants, which suggests that Pou5f1 may be required throughout the whole embryo to prevent the activation of apoptotic cascades. Mechanisms of anti-apoptotic action of Pou5f1: separate Myc and p53 branches The expression of multiple myc genes, mych, mycl1b, and to a lesser degree mycn, is reduced in MZspg embryos.
Since myc genes are known anti-apoptotic factors acting through different routes, including p19ARF, BIM and BCL2 (reviewed in [66]) and MDM2 [67], they may convey the anti-apoptotic action of Pou5f1. Enhanced early apoptosis in MZspg may be caused by reduced Myc activity, where individual myc genes may act partially redundantly. Indeed, we found that mych mRNA overexpression is able to rescue most of the ectopic apoptosis in MZspg embryos. However, the knockdown of mych in wildtype embryos did not induce apoptosis during gastrulation stages (10 hpf stage; data not shown), strengthening the notion that mych, mycl1b and mycn gene activities, which are all broadly expressed in the blastula and gastrula embryo [44,45], may be required redundantly downstream of Pou5f1 to prevent early activation of the apoptotic programs. p53, together with co-factors and depending on the type of stress a cell is subjected to, is a universal activator of apoptosis [68], acting in response to various stimuli. In many cell systems, myc genes contribute to the control of apoptosis by regulating p53, e.g. in a p19ARF-dependent manner [66]. We determined whether Pou5f1 may regulate levels of tp53 mRNA by reanalyzing published time-series microarray data for MZspg and WT embryos [47], and found no significant differences in tp53 mRNA levels from zygote to the end of gastrulation. Thus, Pou5f1 does not regulate p53 expression, and an involvement of p53 may be through indirect pathways controlling p53 activity. If all apoptosis in MZspg were p53-dependent, apoptosis should be completely abolished by knockdown of p53 through injection of p53 morpholino. However, in p53-MO-injected MZspg embryos, as in mych mRNA-injected embryos, apoptosis was found to be only partially suppressed. Complete suppression of apoptosis in MZspg embryos was achieved only by simultaneous increase of Mych activity and suppression of p53. This suggests that myc genes, specifically mych, in the early zebrafish embryo are able to suppress apoptosis through a p53-independent pathway. A similar p53-independent mechanism may potentially be involved in the suppression of apoptosis in the neural plate by Mych at later developmental stages [45]. Our study suggests that the zebrafish embryo may be a suitable model system to dissect p53-dependent and -independent anti-apoptotic activities of Myc proteins during embryonic development. However, the coexpression of several myc genes throughout the early embryonic stages will likely require combined inactivation of each of these genes, which has so far hindered progress towards analysis of the molecular mechanisms. Ethics statement This study was performed with the approval of the State of Baden-Württemberg Regierungspraesidium Freiburg animal welfare authorities. Fish and embryo care We used WT embryos of AB x TÜ B strain crosses (http://www.ZFIN.org) and MZspg embryos carrying the m793 allele of the spg mutation [69] (ZFIN ID: ZDB-GENE-980526-485, ZDB-GENO-081023-1). Fish were raised, maintained and crossed under standard conditions as described [70]. Embryos were incubated or raised in egg water or in 0.3× Danieau's solution at 28.5°C. Developmental age is reported as hours post fertilization (hpf) when incubated at 28.5°C. Developmental stages of MZspg embryos were indirectly determined by observation of WT embryos born at the same time and incubated under identical conditions.
Morpholinos Morpholino oligos mych-Sp-MO: 5′-GTAGCAAAAGACTCACCAGAATCGC-3′, mych-ATG-MO: 5′-GCAGCATCTTGACGGAACCTTTTTC-3′, and standard control morpholino SCMO: 5′-CCTCTTACCTCAGTTACAATTTATA-3′ were ordered from Gene Tools (Philomath, USA). The mych-ATG-MO blocks the translation of the mych mRNA into protein by binding to the translation start site. The mych-Sp-MO prevents splicing of the second intron, resulting in a non-functional truncated protein missing the DNA binding domain. To test the specificity of the mych-ATG-MO, the sequence -4 to +21 bp from the mych translation start site ATG was cloned upstream of the GFP ORF in the CS2+ vector to obtain the mych-GFP construct. 50 pg/embryo of in vitro transcribed mych-GFP mRNA was injected into one-cell stage embryos together with 1.4 ng, 4.1 ng or 8.6 ng per embryo of the mych-ATG-MO, or without morpholino (Figure S6). The efficiency of the mych-Sp-MO was tested by injecting 1.4 ng, 4.1 ng or 8.6 ng into one-cell stage WT embryos and subsequent RT-PCR analysis at the 60% epiboly stage. The ratio of correctly spliced mych mRNA (162 bp) to mRNA containing the second intron (396 bp) shows that the injection of 4.1 ng morpholino is sufficient to nearly completely inhibit splicing (Figure S6). Cycloheximide experiment For the cycloheximide experiment, MZspg embryos were injected with 10 pg pou5f1-VP16 mRNA at the 1-cell stage or left non-injected as controls. Embryos were treated with 15 µg/ml of cycloheximide (CHX, Calbiochem) dissolved in egg water. CHX was added at 1.5 hpf to allow for translation of injected mRNAs, but to block translation of the earliest zygotic transcripts. In the presence of CHX, direct Pou5f1 targets are transcribed after MBT, but these mRNAs are not translated, avoiding indirect downstream regulatory effects. Loss of ntl expression in CHX-treated embryos was used as a control for efficient inhibition of translation [71]. Whole-mount in situ hybridization and in situ detection of apoptosis Whole-mount in situ hybridization was performed as described [72]. The ApopTag Peroxidase In Situ Apoptosis Detection Kit (Chemicon/Merck Millipore) was used to detect apoptotic cells at early embryonic stages. To investigate whether MZspg embryos develop enhanced apoptosis, MZspg and WT embryos were fixed in 4% PFA at several developmental stages between the 32-cell and bud stage. The fixed embryos were incubated in 100% methanol to make them permeable for the TUNEL staining. To analyze the influence of Mych and p53 on this phenotype, 105 pg mych mRNA and/or 4.2 ng p53-MO were injected at the one-cell stage into MZspg embryos. Non-injected WT and MZspg embryos were used as controls. Embryos were fixed at 10 hpf. To quantify apoptosis, image z-stacks of 5-14 stained embryos were taken for each experiment using transmitted light microscopy. Thereafter, maximum intensity projections of the z-stacks were calculated for each embryo using ImageJ (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, Maryland, USA, http://imagej.nih.gov/ij/, 1997-2012) and apoptotic nuclei were automatically detected using the following set of parameters in Volocity Image Analysis Software (PerkinElmer): (1) Find Objects by Intensity; (2) exclude objects smaller than 10 µm²; (3) separate touching objects greater than 50 µm² (Figure S7).
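The object-detection parameters listed above (intensity-based detection, exclusion of objects below 10 µm², splitting of objects above 50 µm²) were applied in Volocity; the following Python sketch is only a rough, hedged re-implementation of that counting logic with scikit-image. The file name, pixel size, and the use of Otsu thresholding are assumptions for illustration, not settings taken from the original analysis.

from skimage import io, filters, measure

PIXEL_AREA_UM2 = 0.5    # assumed area of one pixel in µm² (depends on the microscope setup)
MIN_AREA_UM2 = 10.0     # objects smaller than this are discarded
SPLIT_AREA_UM2 = 50.0   # objects larger than this are treated as several touching cells

stack = io.imread("embryo_tunel_zstack.tif")   # hypothetical z-stack, shape (z, y, x)
projection = stack.max(axis=0)                 # maximum intensity projection, as in the text

# Assumption: TUNEL-positive nuclei fall into the brighter class after Otsu thresholding.
mask = projection > filters.threshold_otsu(projection)
labels = measure.label(mask)

count = 0
for region in measure.regionprops(labels):
    area_um2 = region.area * PIXEL_AREA_UM2
    if area_um2 < MIN_AREA_UM2:
        continue                               # too small to be an apoptotic nucleus
    if area_um2 > SPLIT_AREA_UM2:
        # Approximate the "separate touching objects" step by splitting on area.
        count += max(1, round(area_um2 / SPLIT_AREA_UM2))
    else:
        count += 1

print(f"Estimated apoptotic cells in this embryo: {count}")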
Cell proliferation analysis To determine a potential role of Mych in proliferation, 105 pg mych mRNA and/or 4.2 ng p53-MO were injected into one-cell stage MZspg embryos. Non-injected WT and MZspg embryos were used as controls. Embryos were fixed in 4% PFA at 10 hpf. The fixed embryos were incubated in 100% methanol to make them permeable for the Sytox-Green DNA stain. Embryos were stained in 2 µM Sytox-Green (Invitrogen) at room temperature (20 to 25°C) for 2 hours. Mitotic nuclei (prometa-, meta- and anaphase nuclei) were manually detected by their characteristic chromosome arrangement and dense DNA structure, resulting in higher Sytox-Green intensity. The mitotic index is the proportion of mitotic nuclei to the total number of nuclei. For comparisons, the WT mitotic index was normalized to 1. Figure 7. The co-injection of mych mRNA and p53-morpholino could completely suppress cell death in MZspg mutants, but did not rescue the delay in epiboly movement (I-J; arrowheads). The quantification of cell death (K) revealed that the number of apoptotic cells is decreased by a factor of six in MZspg embryos after mych mRNA injection. By combined knockdown of p53 and Mych overexpression, apoptosis in MZspg embryos was reduced to WT levels. doi:10.1371/journal.pone.0092356.g007 Microinjections mRNAs were synthesized using the mMessage mMachine kit (Invitrogen) according to the manufacturer's instructions. mRNA or morpholino was injected into the yolk of freshly fertilized zygotes (younger than 15 minutes) mounted on 1% (w/v) agarose ramps, using microinjection pipettes connected to an air pressure driven microinjector. A volume of 0.5-1 nl, containing mRNA or morpholino and 0.5% (v/v) phenol red in water, was injected into each zygote. Quantitative RT-PCR 60-100 embryos per sample were snap-frozen in liquid nitrogen, and total RNA was isolated using the RNeasy kit (Qiagen). cDNA was synthesized using the Superscript III kit (Invitrogen). cDNA was amplified using gene-specific primers and ABsolute SYBR Green Fluorescein (ABgene, Thermo Scientific) according to the manufacturer's instructions on a Bio-Rad iCycler. Results were calculated using the ddCT method and zebrafish ef1a as a normalization control. Primers used: mych. Supporting Information Figure S1 Phylogenetic analysis of the six zebrafish myc genes. Phylogenetic tree of myc family genes. In zebrafish, two paralogous genes each exist for L-myc (mycl1a and mycl1b) and c-myc (myca and mycb). In addition, there is a single copy each of the mycn and mych genes. The latter is closely related to the N-myc and c-myc genes, but has no known homologues in other vertebrate species. Trees were built using phylip proml, and 100 datasets for bootstrapping. Data are from [47]. Non-injected MZspg control was normalized to 1. (TIF) Figure S4 Morphological analysis of CHX-treated and pou5f1-VP16 mRNA-injected control embryos. (A) Morphological phenotype of MZspg embryos treated with CHX from the 64-cell stage on and developed until WT control embryos reached 60% epiboly. Treated embryos are arrested before sphere stage, but do not degenerate until 60% epiboly equivalent age. (B) The injection of 10 pg pou5f1-VP16 mRNA into 1-cell MZspg embryos is sufficient to rescue the MZspg phenotype, but it also may ventralize the embryo as Pou5f1 overexpression in WT would do [74]. The experiment demonstrates that pou5f1-VP16 was injected in our experiments at concentrations that could be considered physiological for embryonic development. (TIF) Figure S5 Analysis of the mitotic index at 90% epiboly.
Quantification of the proportion of cells undergoing cell division in WT, MZspg, and MZspg injected with mych mRNA and/or p53 morpholinos, by calculating the mitotic index (ratio of nuclei undergoing cell division to the total number of nuclei) (A). The calculated mitotic indices are not significantly different between the different genotypes and experimental conditions. The mitotic index of WT embryos was set to 1. Confocal microscopy z-stacks were taken from the animal region of 90% epiboly stage embryos, whose nuclei are stained by the Sytox fluorescent DNA dye (B). Chromatin is highly condensed during meta- and anaphase of the cell division, which leads to an increase in Sytox stain intensity (B; arrows). (TIF) Figure S6 Testing of mych morpholino functionality. (A-H) The functionality of the mych translation-blocking morpholino (ATG-MO) was tested by injecting a fusion mRNA, in which the MO target sequence was fused to the gfp ORF at the start ATG, together with different concentrations of the ATG-MO into one-cell stage embryos. The GFP signal was analyzed using fluorescence microscopy (left panel), and the normal morphology of the embryos after morpholino injection was documented using transmitted light microscopy (right panel). The translation of gfp was completely blocked by injecting as little as 1.4 ng of the ATG-MO (C). For the splice-blocking morpholino (Sp-MO), the functionality was tested by RT-PCR using a pair of primers overlapping the second intron (I), whose splicing sites are targeted by the mych-Sp-MO. In WT, the 162 bp fragment reflects the proper splicing of the pre-mRNA, whereas after the injection of 4.1 ng or more of mych-Sp-MO the detected fragment contains the intron and its size increases to 396 bp (I). (TIF) Figure S7 Quantification of apoptosis in WT and MZspg embryos at bud stage. Detection of apoptotic cells by TUNEL staining (A) and subsequent computational image analysis (B). The images show a lateral maximum intensity projection of a z-stack taken from a single embryo. (B) The same z-stack after automatic object recognition using Volocity software (PerkinElmer), where most of the apoptotic cells are marked in red. (TIF) Table S1 mych and potentially also mycl1b are directly regulated by Pou5f1. (Referring to: Figure 3) (PDF) Table S2 (Referring to: Figure 5) (PDF) Table S3 (Referring to: Figure S5) (PDF) Table S4 (Referring to: Figure 7) (PDF)
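For completeness, the mitotic-index calculation described in the Methods and Figure S5 (mitotic nuclei divided by total nuclei, then normalized to the WT value) amounts to the following small Python sketch; the nucleus counts are invented placeholders, not the counts underlying Figure S5.

def mitotic_index(n_mitotic, n_total):
    return n_mitotic / n_total

counts = {                       # hypothetical per-condition nucleus counts
    "WT": (42, 1500),
    "MZspg": (40, 1480),
    "MZspg + mych mRNA": (39, 1460),
}

wt_index = mitotic_index(*counts["WT"])
for condition, (n_mitotic, n_total) in counts.items():
    normalized = mitotic_index(n_mitotic, n_total) / wt_index   # WT set to 1
    print(f"{condition}: normalized mitotic index = {normalized:.2f}")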
8,699.6
2014-03-18T00:00:00.000
[ "Biology" ]
Relationship between Enterprise Financing Structure and Business Performance Assisted by Blockchain for Internet of Things Financing Mode Financing structure is an important and very complex issue in financial theory, and the rights and obligations of the relevant stakeholders of an enterprise are also concentrated in the financing structure. Therefore, the financing structure has a significant impact on the value of enterprises. A reasonable financing structure is conducive to standardizing the behavior of enterprises and improving their value. A change in corporate financing structure is often used as a signal to external investors about the company's future income expectations, especially because the financing structure has a certain impact on the company's performance, which makes the problem of financing structure more valued by theoretical and financial circles. Using empirical information about company financing, this paper explores the factors influencing operating performance under the blockchain-assisted Internet of Things supply chain financing mode and constructs operating performance indicators according to a comprehensive score. We select the commercial credit financing rate, short-term loan financing rate, long-term loan financing rate, debt financing rate, equity financing rate, and endogenous financing rate as explanatory variables. The control variables are total capital, ownership concentration, and average age of the company. The conclusions are drawn by regression analysis. The commercial credit financing rate, equity financing rate, and endogenous financing rate are positively correlated with operating performance; short-term loans and the average age of the company are negatively correlated with operating performance; and the long-term loan financing rate, bond financing rate, and equity concentration are not significantly correlated with operating performance. Introduction The problem of difficult and expensive financing of enterprises has always been a concern of the industry. Research and practice show that financing difficulty may be caused by the enterprise's own characteristics, such as small scale, insufficient collateral, high business risk, short life cycle, and weak ability to resist risk, or by remaining deficiencies in the financial services provided by the current financial system, where financial institutions are unwilling to bear risk losses and lend funds to enterprises with small returns; all of these are practical problems existing in the process of enterprise financing [1]. The common solution modes include supply chain financing, financial institution financing, intellectual property pledge financing, equity crowdfunding financing, venture capital, and so on. Scholars have analyzed these models and identified the limitations of the different models from different aspects [2]. For example, intellectual property pledge financing can effectively address enterprise financing, but it requires a smooth operation mechanism. The projects initiated by enterprises and the growth of these projects have an important impact on equity crowdfunding financing, which can be used for reference to help enterprises obtain financing [3]. However, there are problems such as the long review time before a loan and the asymmetric information on project implementation after a loan.
Venture capital can attract a large amount of private capital to Chinese enterprises, but it faces a lack of docking platforms, poor management, and low exit efficiency. The development of blockchain and IoT technology has attracted great attention from all walks of life. Blockchain is a new technology that has been rising in recent years. It has the characteristics of decentralization, traceability, programmability, high security, and high credibility [4]. It is expected to reveal the real information of enterprises; improve the trust of both investors and financiers; solve the problems of difficult, slow, and expensive financing; and improve the efficiency of financing [5]. The Internet of things technology is the third information wave of the development of the computer and the Internet. It is the product of the fourth industrial revolution and represents the development trend of the next generation of information technology. The most prominent feature of the Internet of things is intelligence; it uses the advantages of the Internet of things to connect things, to uniquely identify and manage the goods to be pledged by enterprises, and to realize the intelligent management of enterprises, so as to improve the efficiency of enterprises and liberate labor [6]. It promotes the development of enterprises by giving full play to its unique communication principle and by combining with the traditional service industry [7]. The Internet of things technology is the expansion and extension of the Internet. With the help of the Internet of things technology, new changes can be made to the financing mode and financing environment of enterprises, so that enterprises can "supply what they need" in reality [8]. Therefore, the financing problem of enterprises can be addressed with the help of blockchain and IoT finance. However, these studies did not make an in-depth and systematic analysis of how blockchain and IoT solve the core problem of enterprise financing. Based on the analysis of the operation mechanism of blockchain and IoT financing, this paper studies the enterprise's blockchain and IoT supply chain financing mode from the perspective of a comparison of blockchain and IoT systems, trying to provide solutions to the enterprise's financing problems. The chapters of this paper are arranged as follows: Section 1 is the introduction, which expounds the research background, purpose, and significance of this paper. Related work is discussed in Section 2. Section 3 expounds the relevant knowledge of the supply chain financing mode of blockchain and IoT. Section 4 analyzes the current situation of the financing structure and business performance of enterprises assisted by the blockchain and IoT supply chain financing mode. Section 5 makes an empirical analysis of the relationship between enterprise financing structure and business performance assisted by the blockchain and IoT supply chain financing mode. Section 6 summarizes the full text and further points out the shortcomings and limitations of this paper in the empirical research. Related Work The relationship between financing structure and business performance has been widely discussed and analyzed. Foreign scholars believe that studying the rationality of the financing structure is of great significance to enterprise management.
At present, scholars have not reached a conclusion on how to reasonably adjust the financing structure, but through continuous hypothesis and verification, a reasonable financing structure will be helpful in improving the business performance of enterprises. The proportion of the financing structure relative to the operating performance of an enterprise refers to the proportion of its liabilities relative to its operating performance. Taking rail transit enterprises as the starting point, this paper analyzes the relationship between their financing structure and business performance. There are many theories about financing structure at home and abroad, which provide a reference for relevant theoretical research and practical treatment. The above literature can be summarized as follows: most scholars at home and abroad pay attention to the degree of ownership concentration, and on that basis Chinese studies have added the nature of ownership to analyze the impact of ownership on business performance [9]. In short, the purpose of academic research is to reasonably optimize the equity financing structure of enterprises, significantly promote the improvement of business performance, and ensure the sustainable, orderly, and healthy development of enterprises [10]. The depth and breadth of foreign theoretical research on the financing structure far exceed those of domestic scholars, and there is still a great lack of domestic research: first, domestic scholars are limited to discussing the impact of the financing structure on corporate performance; they do not further explore the internal causes of the empirical results and stop at the general theoretical reasons behind the research problems [11]. If we only study the impact of financing structure on corporate performance from one aspect, the conclusion will be insufficient [12]. Second, domestic scholars have not conducted in-depth research on the optimization of the financing structure of an industry or put forward effective suggestions. They only analyzed whether there was a positive or negative relationship between the financing structure and business performance, and the significance of the analysis results to stakeholders is limited [13]. The early research of domestic and foreign scholars mainly focused on the relationship between financing structure and business performance, and the conclusions are not unified [14]. When enterprises make financing decisions, most shareholders with a low shareholding ratio only pay attention to whether the company operates steadily and meets the expected profits, but pay little attention to the development of the enterprise and do not give full play to their role as decision makers [15]. However, when corporations finance externally, they are pushed to make prudent decisions on capital use, business development, and profit management, which has a positive effect on business performance [16]. According to the different ways of enterprise capital integration, a stable capital system will drive the optimization of the company's capital value and help to achieve the business objectives of maximizing enterprise value and profit [17]. Other enterprises can also learn from this and promote their own development through debt financing.
Relevant Knowledge of the IoT Supply Chain Financing Mode Using blockchain and IoT technology can, on the one hand, record the data flow between the various types of institutions in the financing process, effectively improve the transparency of financial information and the traceability of data, and improve enterprise credit, so as to solve problems such as the financing trust crisis in the financing process [18]. On the other hand, the Internet of things technology realizes the information connection between people and things, so as to realize the real-time connection of goods information in the real world, which can greatly alleviate the problem of information asymmetry. Meanwhile, using RFID, the pledged goods of enterprises can be uniquely identified, managed, and supervised. Internet of Things Supply Chain Financing Model. Limited by factors such as small scale and low credit rating, enterprises cannot guarantee the safety and return of funds, so most financial institutions are reluctant to lend to them, so that enterprises cannot obtain a normal supply of funds when they urgently need them. The Internet of things forms a database for the information transmission and processing of the various goods of enterprises, and finally forms an Internet of things financing platform [16]. Through the interface of the Internet of things financing platform, enterprises, financial institutions, and investors can clearly understand the current temporal and spatial state of goods and alleviate the information asymmetry among all parties [17]. Under Internet of things financing, when enterprises borrow from lenders, the information sources that lenders can collect are no longer a single connection, but cover resource allocation and information sharing in all aspects, from raw material production to products and goods in transit [18]. For example, with the help of the Internet of things platform, there is more intersection of information links between enterprises, lenders, and sellers. Supply chain financing is no longer a single simple financing line centered on core enterprises but extends to the interactive links between multiple core enterprises, upstream and downstream enterprises, and multiple supply chains. The basic framework of the blockchain and IoT supply chain financing model is shown in Figure 1. As shown in Figure 1, upstream and downstream enterprises such as suppliers and distributors, core enterprises, finance, and other relevant institutions manage authority, accounts, credit, financing, asset traceability, and processes through the blockchain and IoT system; leave traces of the data generated by various enterprises and business links in the blockchain network; and uniquely identify and manage the pledged goods. The Internet of things is composed of four layers: the application layer, the data processing layer, the network transmission layer, and the perception layer [19]. Among them, the perception layer performs intelligent perception and recognition of the electronic tags carrying cargo information, scanning the information and uploading it. The network transmission layer transmits the collected data by wireless or wired means, and the data processing layer then processes and analyzes it. Finally, a variety of intelligent applications of the Internet of things system can be realized at the application layer [20].
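To make the four-layer structure just described more concrete, the following Python sketch models one pass of goods information from the perception layer to the application layer. It is purely schematic: the class names, fields, and the simple risk flag are illustrative assumptions, not part of any real IoT or blockchain platform.

from dataclasses import dataclass

@dataclass
class GoodsReading:
    tag_id: str      # RFID tag that uniquely identifies the pledged goods
    location: str
    status: str      # e.g. "in transit", "in warehouse", "in processing"

def perception_layer() -> GoodsReading:
    # Stands in for an RFID/sensor scan of the pledged goods.
    return GoodsReading(tag_id="TAG-0001", location="warehouse-A", status="in warehouse")

def transmission_layer(reading: GoodsReading) -> dict:
    # Stands in for the wired/wireless upload of the scanned record to the platform.
    return {"tag_id": reading.tag_id, "location": reading.location, "status": reading.status}

def processing_layer(record: dict) -> dict:
    # Stands in for aggregation, risk scoring, and recording on the blockchain ledger.
    record["risk_flag"] = record["status"] != "in warehouse"
    return record

def application_layer(record: dict) -> None:
    # Stands in for dashboards used by enterprises, lenders, and regulators.
    print(f"Goods {record['tag_id']} at {record['location']}: risk_flag={record['risk_flag']}")

application_layer(processing_layer(transmission_layer(perception_layer())))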
Internet of things mode financing uploads the goods' information to the data supervision platform through the perception layer, so as to realize the intelligent positioning, tracking, monitoring, and management of a variety of "in transit, in warehouse, and in processing" real goods; form an Internet of things database; form a financing platform through the centralized processing of goods, enterprise information, risk, and other data; and connect several system interfaces to realize overall management. Through the interface of this platform, enterprises, investors, supervisors, and other participants can accurately locate and supervise the information and location of goods from the perspective of time and space and fully perceive the changes in their transaction information and other data. Its operation framework is shown in Figure 2. Current Situation of the Supply Chain Financing Mode of Blockchain for the Internet of Things. The Internet of things supply chain financing mode is a new financing mode in which lending institutions take core enterprises as the center and provide financing to upstream, middle, and downstream enterprises. It not only solves the dilemma of the shortage of traditional enterprise financing channels but also links banks and enterprises more closely. The bank judges whether to provide loans to the upstream, middle, and downstream enterprises according to the credit capacity of the core enterprises in the chain. Its development stems from the outsourcing of production and operation, with the production of products shifting from supply chains inside the enterprise to those outside it, which promoted the birth of IoT supply chain financing to ease the capital pressure of production [21]. The application of IoT supply chain financing differs from bank credit. It mainly focuses on the high-quality projects of enterprises, takes the core enterprises as the center, and extends the credit system to the upstream and downstream enterprises associated with them. The relationship between the elements of the Internet of things supply chain financing model is shown in Figure 3. Enterprise Application of Blockchain for Internet of Things Financing. The use of blockchain with Internet of things technology enables complete business information recording and backtracking; when enterprises apply for loans, the intelligent positioning of goods and data such as transaction information, enterprise operation status, and capital flow can be uploaded to the blockchain-based Internet of things financing platform in real time. Through the blockchain-based Internet of things financing platform, enterprises release financing needs on the platform, while investors obtain the verification level, disclosed information, risk predictions, and other information about enterprises through the platform, so as to connect demand and supply and avoid the risks of fund retention and misappropriation seen on earlier Internet financing platforms. The state of the pledge can be known at all times, avoiding the disadvantages of pooling the pledge and the untimely transmission of pledge information, and providing a convenient financing mode for the development of enterprises.
Analysis on the Current Situation of Enterprise Financing Structure and Performance According to the data, the current situation of business performance and financing structure is first arranged and analyzed, in order to avoid, as much as possible, unnecessary errors that would affect the authenticity of the results. The analysis of the effect of financing structure on business performance has always been favored by researchers. Internal financing, equity financing, and debt financing constitute the financing structure. In order to understand the financing structure more finely, we need to dissect it carefully. As can be seen from Figure 4(a), enterprises assisted by the blockchain for Internet of things supply chain financing mode still prefer external financing. The amount of endogenous financing increased from an average of 497.86 million yuan in 2016 to an average of 1150.6 million yuan in 2020. The amount is increasing, but the proportion is shrinking. This shows that the share of endogenous financing in the financing structure is decreasing, and more enterprises are beginning to choose external financing. External financing increased from an average of 1.30663 billion yuan in 2016 to an average of 3.94321 billion yuan in 2020, growing rapidly. As can be seen from Figure 4(b), the average values of undistributed profits within five years are 0.0798, 0.0802, 0.0812, 0.0840, and 0.0885, respectively. The average value of undistributed profits of enterprises assisted by the blockchain for Internet of things supply chain financing mode is relatively stable. However, its variance changes more than those of accumulated depreciation and surplus reserve. Variance and standard deviation reflect the dispersion of the data. It can be seen from the table that surplus reserve is the most stable component of the endogenous financing structure, followed by accumulated depreciation, and last undistributed profit. As can be seen from Figure 4(c), the average equity proportions of enterprises assisted by the blockchain for Internet of things supply chain financing mode in the recent five years are 0.0828, 0.0924, 0.0896, 0.0931, and 0.0953, respectively. It can be seen that the proportion of equity is steadily increasing. The proportion of capital reserve has been relatively stable in the recent five years, but from the variances of the two over this period it can be seen that the dispersion of the capital reserve proportion is greater than that of the equity proportion. It can be seen from Figure 4(d) that from 2016 to 2020, short-term loan financing has become the most important component; according to the gap between the maximum and minimum in the recent five years, the largest gap occurred in 2018, with a maximum of 0.5467 and a minimum of 0. Its variance and standard deviation are also the largest in the composition of the debt financing structure: the average variance is 0.0087 and the average standard deviation is 0.0933, the largest compared with the other components of the debt financing structure. Analysis on the Current Situation of Business Performance. We select the financial indicators of profitability, operation ability, solvency, and development ability to analyze the current situation of enterprise performance. Analysis of the Enterprise Profitability.
From the perspective of its actual profitability and income quality indicators, the profitability and income quality data of enterprises assisted by the blockchain for Internet of things supply chain financing model are shown in Figure 5. As can be seen from Figure 5, the rate of return on total assets of the enterprises decreases first and then increases, indicating that the level of input and output of the enterprises increases again. However, the fly in the ointment is that, according to the variance of the rate of return on total assets, the degree of dispersion among enterprises is increasing; that is, the profitability of enterprises is becoming increasingly polarized. According to the profit margin of net assets, it can be seen that the after-tax profitability of enterprises assisted by the supply chain financing mode of blockchain for Internet of things increased steadily from 2016 to 2020, especially in the last two years, but at the same time the differentiation among enterprises is similar to that of the return on total assets. The net interest rate is a measure of the cost paid by enterprises to obtain benefits, and the two are inversely proportional. The number of enterprises assisted by the blockchain for Internet of things supply chain financing mode decreased first and then increased, and the proportion increased rapidly. It is possible that the enterprises' ability to control costs will weaken again. Analysis on the Solvency of Enterprises. In order to study the solvency of enterprises assisted by the blockchain for Internet of things supply chain financing mode, four important indicators are selected. The data are shown in Figure 6. As can be seen from Figure 6, the broken line chart of enterprise solvency assisted by the blockchain for Internet of things supply chain financing mode shows that the asset-liability ratio changes by about 5% every year. It shows that the whole industry is still relatively stable. The current ratio and quick ratio are selected to assess short-term solvency. The ideal value of the current ratio is 2, and its lower limit is 1. If the current ratio is too low, it means that the enterprise will have difficulty repaying on schedule. If it is too high, it means that the occupation of current assets is too high, which will affect the use of funds and the profitability of the enterprise. Analysis of the Enterprise Operation Capacity. The operating capacity of enterprises assisted by the blockchain for Internet of things supply chain financing mode is shown in Figure 7. An unreasonable inventory turnover rate can lead to the interruption of production or tight sales, while too much storage easily forms a product backlog. Normal production and operation can only be ensured if the inventory turnover rate is within a reasonable range. Through the annual data from 2016 to 2020, it can be concluded that the enterprises are moving from a too-high inventory turnover rate to a reasonable range, indicating that the enterprises' sales capacity, all aspects of operation capacity, and inventory management level continue to improve. The business cycle is an important factor in measuring the current assets of enterprises; the shorter the business cycle, the faster the capital turnover. The data have been stable with small fluctuations, the operation ability of the enterprises has been improving continuously, and overall management has been effective.
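The solvency and operating-capacity indicators discussed in this section (current ratio, quick ratio, inventory turnover, receivables turnover, business cycle) follow standard definitions; as a small worked example, the Python sketch below computes them from made-up figures. All values are hypothetical and do not come from the sample analyzed in this paper.

current_assets = 820.0        # million yuan, hypothetical
inventory = 310.0
current_liabilities = 450.0
cost_of_goods_sold = 1240.0
revenue = 1580.0
accounts_receivable = 260.0

current_ratio = current_assets / current_liabilities            # ideal value about 2, lower limit 1
quick_ratio = (current_assets - inventory) / current_liabilities
inventory_turnover = cost_of_goods_sold / inventory              # times per year
receivables_turnover = revenue / accounts_receivable             # times per year
business_cycle_days = 365 / inventory_turnover + 365 / receivables_turnover

print(f"current ratio        : {current_ratio:.2f}")
print(f"quick ratio          : {quick_ratio:.2f}")
print(f"inventory turnover   : {inventory_turnover:.2f}")
print(f"receivables turnover : {receivables_turnover:.2f}")
print(f"business cycle (days): {business_cycle_days:.0f}")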
The turnover rate of accounts receivable is an index to measure the collection speed of an enterprise. The higher the turnover rate of accounts receivable, the shorter the collection period of the enterprise and the faster the return of funds. Analysis on the Growth Ability of Enterprises. The analysis of enterprise growth ability is the prediction of the future development trend and speed of the enterprise and the analysis of the expansion of its business capacity. The growth capability indicators of enterprises assisted by the blockchain for Internet of things supply chain financing mode are shown in Figure 8. Overall, except for the highest year-on-year growth of total profit in 2017, growth slowed in the other years. Net profit is not very high and moves in proportion to total profit. This shows that the companies continue to expand. On the whole, the profit growth of enterprises is not very stable. It can be seen from the figure that the total assets of the enterprises continue to expand. Empirical Analysis In this paper, the multiple linear regression model is based on multiple financial indicators, so the degree of multicollinearity of these variables is statistically tested before the analysis using the multicollinearity test function of SPSS software; the tolerance and the variance inflation factor (VIF) are calculated. If the VIF value of each variable is less than 10 (or the tolerance is greater than 0.1), it is considered that there is no multicollinearity between the variables, which makes them suitable for the next step of statistical analysis. Relationship between the Overall Financing Structure and Enterprise Performance. We establish an individual fixed effects model of the overall financing structure on enterprise performance. The specific multiple regression results are shown in Figure 9. As can be seen from Figure 9, the regression coefficient of the endogenous financing rate is 1.673396, and the significance test value of the coefficient is 0.0026. There is a significant positive correlation between the endogenous financing level and enterprise performance at the 1% level. This shows that endogenous financing has a positive impact on the improvement of enterprise performance of China's petroleum and petrochemical enterprises, which is consistent with the priority financing (pecking order) theory. When enterprises need to raise funds, they will give priority to financing through endogenous financing. The regression coefficient of the debt financing ratio is −2.179861, and the significance test value of the coefficient is 0. The scale of debt financing is significantly negatively correlated with enterprise performance at the 1% level. The debt financing scale of petroleum and petrochemical enterprises is relatively large, with an average debt financing rate of 52.08%. The debt level of these enterprises is relatively high, and their financial risk is high. Debt financing has a negative impact on enterprise performance. Relationship between Debt Financing Structure and Enterprise Performance. We establish an individual fixed effects model of the debt financing term on enterprise performance. The specific multiple regression results are shown in Figure 10. It can be seen from Figure 10 that the Hausman test result is 0.2145, which shows that there is not enough evidence to reject the null hypothesis at the 5% confidence level; that is, a random effects model should be chosen for the impact of the debt financing term on enterprise performance.
The model explains 51.29% of the variation attributable to the impact of the debt financing term on enterprise performance. From the significance test, the probability associated with the F-statistic of the regression equation is 0; at the 5% significance level, the model is significant as a whole and the variables are linearly related. The three selected control variables are positively correlated with the explained variable; the asset-scale variable fails the significance test, while the other two control variables pass it at the 5% level. The regression coefficient of the long-term debt ratio is −0.089986, with a significance test value of 0.7362; the long-term debt ratio is negatively correlated with enterprise performance, but not significantly, so the corresponding hypothesis is rejected. The regression coefficient of the short-term debt ratio is 0.188074, with a significance test value of 0.4415; the short-term debt ratio is positively correlated with enterprise performance, but not significantly. The average proportion of long-term debt is only 17.70%, while the average proportion of short-term debt is 82.30%. The unreasonable debt financing term structure makes the impact of long-term and short-term debt on enterprise performance insignificant: an excess of short-term debt does not match the capital operation of the enterprises and cannot effectively improve company performance. Conclusion. A reasonable financing structure can reduce the financing cost and risk of enterprises and thus help to improve their performance. Therefore, the study of the financing structure is of great significance to the development of enterprises. Taking the enterprise financing structure as the starting point, this paper broadly assesses the current situation of the financing structure and performance of enterprises assisted by the blockchain for Internet of Things supply chain financing mode. Based on the theoretical and empirical evaluation of financing forms and corporate performance, and set in the context of modern enterprises, this paper analyzes the financing structure in terms of the equity financing structure and the debt financing structure. Aiming at the Internet of Things supply chain, this paper puts forward some suggestions for optimizing the enterprise financing structure with the help of the blockchain financing mode. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare that they have no conflicts of interest.
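The Hausman test used above to choose between the fixed- and random-effects specifications can be computed directly from the two sets of estimates. The following is a minimal sketch with made-up numbers (not the paper's estimates); a p-value above 0.05, such as the 0.2145 reported above, means the random-effects model cannot be rejected.

import numpy as np
from scipy import stats

def hausman(b_fe, V_fe, b_re, V_re):
    # H = (b_FE - b_RE)' [Var(b_FE) - Var(b_RE)]^{-1} (b_FE - b_RE), chi-square distributed
    diff = b_fe - b_re
    V_diff = V_fe - V_re
    H = float(diff.T @ np.linalg.pinv(V_diff) @ diff)
    dof = len(diff)
    p_value = 1.0 - stats.chi2.cdf(H, dof)
    return H, p_value

# Hypothetical coefficient vectors and covariance matrices from fitted FE and RE models
b_fe = np.array([0.19, -0.09]); b_re = np.array([0.17, -0.07])
V_fe = np.diag([0.004, 0.006]); V_re = np.diag([0.003, 0.005])
print(hausman(b_fe, V_fe, b_re, V_re))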
6,302.8
2022-05-31T00:00:00.000
[ "Business", "Economics" ]
Interdomain Interactions in the Tumor Suppressor Discs Large Regulate Binding to the Synaptic Protein GukHolder* Multidomain scaffolding proteins are central components of many signaling pathways and are commonly found at membrane specializations. Here we have shown that multiple interdomain interactions in the scaffold Discs Large (Dlg) regulate binding to the synaptic protein GukHolder (GukH). GukH binds the Src homology 3 (SH3) and guanylate kinase-like (GK) protein interaction domains of Dlg, whereas an intramolecular interaction between the two domains inhibits association with GukH. Regulation occurs through a PDZ domain adjacent to the SH3 that allows GukH to interact with the composite SH3-GK binding site, but PDZ ligands inhibit GukH binding such that Dlg forms mutually exclusive PDZ ligand and GukH cellular complexes. The PDZ-SH3-GK module is a common feature of membrane-associated guanylate kinase scaffolds such as Dlg, and these results indicate that its supramodular architecture leads to regulation of Dlg complexes. Communication and adhesion between cells is mediated by specialized regions of the plasma membrane. For example, in excitatory synapses in the brain, the postsynaptic membrane contains an actin-rich cytoskeletal region known as the postsynaptic density (1). Analogous structures are present at sites of cell-cell contact, including the junctions between epithelial cells, which are important for signaling and the formation of physical barriers (2,3). The establishment and function of these important structures is regulated by a large number of proteins that serve to organize receptors and downstream signaling proteins and to anchor signaling complexes at specific membrane locations. Membrane-associated guanylate kinases (MAGUKs)2 are scaffolding proteins that regulate the formation and function of membrane specializations, such as synapses and tight junctions (3,4). MAGUKs have a unique domain architecture that is typified by one or three PDZ domains, an SH3 domain, a variable HOOK sequence, and a region with homology to the enzyme guanylate kinase (GK) that lacks enzymatic activity but instead acts as a protein interaction domain. The SH3 and GK domains form an intramolecular interaction in the MAGUK PSD-95, which is thought to be a common feature of MAGUK proteins (5,6). One of the best studied MAGUK proteins is the Drosophila tumor suppressor Discs Large (Dlg). Dlg plays a role in the formation and function of diverse polarized cellular structures, including epithelial junctions (7), stem cell cortical domains (8,9), and neuronal synapses (10). In the neuromuscular synapse, Dlg is present at high levels at both pre- and postsynaptic sites (11). Dlg is thought to function at these sites by clustering ion channels and organizing signal transduction pathways. The intramolecular interaction between the SH3 and GK domains is important for MAGUK function. All genetically identified mutations in the SH3 and GK regions of dlg and the related Caenorhabditis elegans lin-2 gene disrupt the intramolecular interaction (5). However, the exact role of the intramolecular interaction in MAGUK function has remained obscure. The crystal structure of the PSD-95 SH3-GK revealed that the two domains interact through a unique mechanism in which a two-stranded β-sheet is composed of strands that emerge from the SH3-HOOK and GK domains (12,13).
The nature of this interaction is such that movements of the domains relative to one another could create functionally distinct conformations that result from hinge movements about the linking strands. However, how the distribution among these conformations might be modulated and what their functions are has remained unclear. One function of the SH3-GK intramolecular interaction may be to regulate the assembly of MAGUK complexes. For example, GK-associated protein (GKAP) binds to a fragment of the MAGUK SAP-97 containing only the GK domain but fails to bind to the SH3-GK, indicating competition between the intra- and intermolecular interactions (14). A secondary intramolecular event with an NH2-terminal L27 domain rescues the interaction with GKAP in the full-length protein. However, the mechanism by which this interaction may be regulated in the context of the full-length protein is unknown. Not all SH3-GK ligands compete against the intramolecular interaction, indicating that multiple binding surfaces are utilized by ligands of this unique domain. Here we have analyzed how binding of Dlg to the synaptic protein GukHolder (GukH) is regulated by complex interdomain interactions within Dlg that involve transitions in the SH3-GK intramolecular interaction. GukH was first identified in a yeast two-hybrid screen as a binding partner for the Dlg GK domain (15), and mammalian homologues have been identified (16,17). GukH colocalizes with Dlg at synaptic borders, and the interaction of the two proteins appears to be required for proper localization of the tumor suppressor Scribble. GukH also colocalizes with Dlg in neuroblasts, precursors of the Drosophila central nervous system (18). The interaction of GukH with Dlg is mediated by an ~300-residue region at the GukH COOH terminus that contains no known domains. We find that GukH binding to Dlg is actively regulated by the SH3-GK intramolecular interaction. GukH binds to a composite site formed by not only the GK domain but the SH3 domain as well. However, the intramolecular interaction between the two domains competes against GukH binding. Binding is rescued by a PDZ domain directly NH2-terminal to the SH3 domain, a common feature of MAGUK proteins. The complex interdomain interactions in Dlg cause GukH-Dlg- and PDZ-bound complexes to be mutually exclusive in a cellular context. These results have implications for the types of complexes that are formed by Dlg and therefore its role in regulating the formation and function of membrane specializations, such as epithelial junctions and synapses. For expression of glutathione S-transferase (GST) fusions, cDNAs were ligated into pGEX 4T-1, whereas the pET-19b derivative pBH was used for hexahistidine tags. The pBH vector encodes a tobacco etch virus protease site following the hexahistidine tag to allow for removal of the tag. All proteins were expressed in the Escherichia coli strain BL21(DE3). Hexahistidine fusion proteins were purified using nickel-nitrilotriacetic acid resin and standard protocols. Unless otherwise noted, nickel-nitrilotriacetic acid purification was followed by incubation with tobacco etch virus protease to remove the histidine tag (after cleavage, the protein contains an extra glycine and serine residue on the NH2 terminus). Ion exchange chromatography was used to further purify proteins if necessary. Purity was established using SDS-PAGE and/or MALDI-TOF mass spectrometry. For qualitative "pulldown" assays, E.
coli cell lysates containing the GST fusion protein of interest were incubated with glutathione-agarose beads and washed with binding buffer (10 mM HEPES, pH 7.5, 100 mM NaCl, 1 mM dithiothreitol, 0.5% Triton X-100). Potential interacting proteins were added to a concentration of 10 µM and incubated with the beads at room temperature for 15 min. The reactions were washed three times with binding buffer to remove unbound proteins. Bound proteins were eluted with SDS loading buffer and analyzed by staining with Coomassie Blue and/or Western blotting using an anti-hexahistidine antibody (Qiagen). For quantitative binding measurements, a peptide with the sequence from the last ten residues of CRIPT and an NH2-terminal rhodamine was synthesized by Fmoc solid phase peptide synthesis. A sulfonyl chloride rhodamine derivative (Molecular Probes L-20) was used after the addition of the final Fmoc amino acid. Following cleavage from the solid support, the peptide was purified by reverse-phase high pressure liquid chromatography and its sequence verified by MALDI-TOF mass spectrometry. A series of solutions were prepared with increasing concentrations of the appropriate Dlg fragment and a concentration of 100 nM rhodamine-labeled peptide. The anisotropy of each solution was measured using an ISS PC1 fluorometer. The Kd of interactions was determined by nonlinear fitting of the data to a bimolecular binding equation (a sketch of such a fit follows this passage). Cell Culture and Immunoprecipitation-We transiently transfected Drosophila S2 cells grown in Schneider's insect medium supplemented with 10% fetal bovine serum with expression vectors for full-length Dlg and/or the GFP-CRIPT and hemagglutinin (HA)-GukH Dlg binding domains using 1 µg of total DNA, which resulted in an efficiency of ~30%. We induced protein expression after 24 h using 0.5 mM copper sulfate and collected the cells after an additional 24 h of growth. To collect the cells, we centrifuged them for 5 min at 1,000 × g and washed the resulting pellet with ice-cold phosphate-buffered saline twice. For immunoprecipitation experiments, extracts were prepared by incubation with lysis buffer (150 mM NaCl, 1% Nonidet P-40, 50 mM Tris, pH 8.0, 1 mM phenylmethylsulfonyl fluoride) for 30 min on ice. Cell lysate was precleared by gently mixing with protein A-Sepharose beads (Amersham Biosciences) at 4°C for 1 h. The beads were then removed by centrifugation at 12,000 × g for 20 s. The proteins were immunoprecipitated by incubating anti-HA, anti-GFP, or anti-His with precleared lysate at 4°C for 1 h. Protein A-Sepharose beads were added to the mixture and incubated at 4°C for 1 h with rotation. The pellets were collected at 12,000 × g for 20 s and washed three times with lysis buffer and once with phosphate-buffered saline. The final pellets were suspended in protein loading buffer and analyzed by SDS-PAGE followed by Western blotting. For immunostaining, Dlg localization was detected with an anti-Dlg antibody and Cy3-labeled secondary antibody (endogenous Dlg was below the level of detection for immunostaining). After labeling, the cells were imaged by confocal microscopy on a Nikon Eclipse TE2000-U microscope with a Photometrics CoolSNAP fx CCD camera. Images were analyzed with the ImageJ software (NIH), and cells were binned into cortical or cytoplasmic localization based upon the pixel intensity distribution across the cell. Cells having cortical signal intensity 2× or greater than the cytoplasmic pool were scored as cortically localized. Results are reported from two independent experiments.
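The bimolecular binding fit mentioned above can be illustrated with a minimal sketch; the data points, starting values, and exact functional form here are assumptions for illustration, not the study's measurements. The quadratic form accounts for depletion of the 100 nM labeled peptide at low Kd.

import numpy as np
from scipy.optimize import curve_fit

L_tot = 0.1  # labeled peptide concentration in µM (100 nM), as stated in the text

def anisotropy(P_tot, Kd, r_free, r_bound):
    # Fraction of labeled peptide bound at each total protein concentration (quadratic solution)
    a = Kd + P_tot + L_tot
    f_bound = (a - np.sqrt(a**2 - 4.0 * P_tot * L_tot)) / (2.0 * L_tot)
    return r_free + (r_bound - r_free) * f_bound

# Hypothetical titration: protein concentration (µM) vs measured anisotropy
P = np.array([0.0, 0.5, 1, 2, 5, 10, 20, 50, 100])
r = np.array([0.05, 0.07, 0.09, 0.12, 0.17, 0.21, 0.24, 0.26, 0.27])

popt, _ = curve_fit(anisotropy, P, r, p0=[10.0, 0.05, 0.28])
print(f"Kd ≈ {popt[0]:.1f} µM")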
RESULTS The Third Dlg PDZ Domain Modulates the Interaction of Dlg with GukH-The interaction of Dlg with GukH has been shown to occur through the Dlg GK domain (15). While analyzing fragments of Dlg for their ability to bind to GukH, we found that, although the GK is able to bind GukH, a fragment that also includes the SH3 domain does not bind (Fig. 1A). This result indicates that the intramolecular interaction between the SH3 and GK domains competes with GukH binding. However, elements NH2-terminal to the SH3-GK control the ability of Dlg to interact with GukH, as larger fragments of Dlg, including the full-length protein, are able to bind GukH. We analyzed several Dlg fragments to identify the minimal components necessary for regulation of the Dlg-GukH complex assembly using a GST fusion of the GukH Dlg binding domain (Fig. 1B). A fragment containing the third PDZ domain along with the short linker that connects it to the SH3-GK module is necessary and sufficient to allow GukH binding (Fig. 1, A and C). As shown in Fig. 1D, the PDZ-SH3-GK architecture is highly conserved among MAGUK proteins, with the number of residues linking the PDZ and SH3 domains ranging from 5 to 40. The functional linkage between the Dlg PDZ and SH3-GK domains and the conserved architecture of these domains in MAGUK proteins suggest that the PDZ domain is an integral component of a larger PDZ-SH3-GK module. GukH Binds to a Composite Site on Dlg Formed by Both the SH3 and GK Domains-In experiments using purified components, we noticed that a proteolytic fragment of the PDZ-SH3-GK also interacts with GukH (Fig. 1C, starred band). This proteolytic fragment corresponds to a COOH-terminal truncation that lacks the GK domain (based on the molecular weight of the fragment and the presence of the NH2-terminal His tag), indicating that one or more additional GukH binding sites exist outside of the GK domain. We tested both the Dlg PDZ and SH3 domains for the ability to bind GukH. Consistent with a GukH binding site in Dlg outside of the GK domain, purified fragments of Dlg that lack the GK but contain the SH3 domain are able to bind GukH (Fig. 2A). However, the third PDZ domain is not required for GukH binding in this context. This interaction is qualitatively weaker than with the PDZ-SH3-GK (Fig. 2B), consistent with the SH3 domain being only one part of a larger interaction surface that includes the GK domain. As the region of GukH that binds to Dlg contains several proline-rich sequences (Fig. 2C) and SH3 domains bind to a consensus sequence of PXXP (19), we tested these sequences for their ability to bind the Dlg SH3 domain (using an SH3 domain that lacked the "HOOK" segment that links the SH3 and GK domains). Although MAGUK SH3s deviate from canonical SH3 domains (12), we found that each of the GukH proline-rich segments is able to bind to the Dlg SH3 domain (Fig. 2D). Similar to the interaction of GukH with the Dlg GK domain, the binding site on the SH3 domain is obscured when the GK domain is present (Fig. 2E), presumably because of the intramolecular interaction between the two domains. Another similarity between the two binding sites is that GukH binding is rescued by the presence of the third PDZ domain (Fig. 2E). Binding of proline-rich sequences requires the presence of specific proline residues, as mutation of the PXXP to AXXA completely disrupts binding (Fig. 2D).
We therefore conclude that GukH binds to a composite binding site formed by the SH3 and GK domains and that this binding site is obscured by the intramolecular interaction between the two domains. The presence of three SH3 ligand sequences in GukH indicates that each GukH may bind multiple Dlg proteins. Modulation of GukH Binding by COOH-terminal PDZ Ligands-The interplay between the Dlg PDZ and SH3-GK modules appears to be a mechanism for communication between the PDZ and SH3-GK binding sites. PDZ domains are common protein interaction domains that bind short, COOH-terminal sequences present in target proteins (20), although binding to internal motifs can also occur (21). The third PDZ domain from Dlg or its mammalian homologues has been shown to bind two COOH-terminal sequences, one from CRIPT (cysteine-rich interactor of PDZ; sequence DTKNYKQTSV-COOH) and Drosophila neuroligin (sequence KRVHIQEISV-COOH) (22,23). CRIPT binds to microtubules, providing a link between MAGUK scaffolding proteins and the cytoskeleton (24), whereas neuroligin is a membrane protein involved in synapse formation (23). We synthesized peptides containing the last ten residues of CRIPT and neuroligin and used these peptides to probe the coupling of PDZ and SH3-GK ligand binding activity. Although the PDZ-SH3-GK fragment is able to bind GukH, the addition of the CRIPT or neuroligin peptide lowers the affinity for GukH in a dose-dependent fashion, whereas an unrelated PDZ ligand peptide has no effect (Fig. 3A). These results indicate that PDZ ligand binding influences the GukH (SH3-GK) binding site. The communication between the Dlg PDZ and SH3-GK domains must have energetic consequences. To explore the coupling between the two, we measured the affinity of the CRIPT and neuroligin peptides for the isolated PDZ domain and the entire PDZ-SH3-GK fragment using the fluorescence anisotropy of NH2-terminal tetramethylrhodamine (Fig. 3B). Although the affinity of CRIPT for the isolated PDZ domain is 14 µM, which is typical for interactions of isolated PDZ domains with COOH-terminal ligands, the affinity of CRIPT for the PDZ-SH3-GK fragment is 0.8 µM, representing an ~15-fold difference in affinity. No binding was observed to the SH3-GK (data not shown). The difference between PDZ-SH3-GK and PDZ binding is similar for neuroligin (Kd = 71.5 and ~900 µM, respectively), although the affinities are significantly weaker. The difference between the affinity for the two fragments is consistent with the coupling of PDZ ligands and GukH binding sites (a short worked estimate of the corresponding coupling energy follows this passage). However, as the affinity for PDZ-SH3-GK is higher than for PDZ alone, the data exclude a simple model in which the PDZ ligands compete with an intramolecular interaction formed with the PDZ domain. Interdomain PDZ-SH3-GK Interactions Lead to Mutually Exclusive Cellular Complexes-How might the complex interdomain interactions in the Dlg PDZ-SH3-GK module affect the complexes that Dlg forms in a cellular context? The in vitro analysis of these proteins indicates that GukH-Dlg and PDZ ligand complexes are likely to exist as distinct complexes (Fig. 4A). To examine whether the CRIPT and GukH interactions affected the complexes that Dlg forms in a cellular context, we transfected Drosophila S2 cells with the Dlg binding domains from GukH and CRIPT. Immunoprecipitation of the Dlg ligands from extracts of these cells leads to coimmunoprecipitation of full-length Dlg (Fig. 4B).
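One way to express the coupling implied by the ~15-fold CRIPT affinity difference reported above (this estimate is an editorial illustration, not stated in the original) is as a coupling free energy, assuming 298 K (RT ≈ 0.593 kcal/mol):

ΔΔG = −RT ln(Kd,PDZ-SH3-GK / Kd,PDZ) = −(0.593 kcal/mol) × ln(0.8 µM / 14 µM) ≈ 1.7 kcal/mol

that is, roughly 1.7 kcal/mol of favorable coupling between PDZ ligand binding and the SH3-GK module.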
However, neither of the Dlg ligands is able to immunoprecipitate the other, consistent with a model in which GukH and PDZ ligands form mutually exclusive Dlg complexes. We also examined the effect of the GukH and CRIPT Dlg binding domains on Dlg localization in these cells. Immunofluorescence from an HA fusion of the GukH Dlg binding domain shows a large percentage of cells with cortical localization (Fig. 4C), whereas full-length Dlg is consistently found in the cytoplasm, excluded from the nucleus (Fig. 4D). Expression of GukH induces localization of Dlg at the cell cortex in a significant fraction of the transfected cells (Fig. 4E), indicating that GukH recruits Dlg to the cell cortex. Expression of a GFP fusion of the Dlg binding portion of CRIPT (which localizes to the cytoplasm) leads to a significant decrease in the fraction of cells with cortical Dlg localization (Fig. 4F), which we interpret as arising from competition between the CRIPT and GukH complexes of Dlg. Additionally, a Dlg mutant in which the PDZ domain no longer rescues the ability to bind GukH (DlgΔΔ; see below) is not significantly localized to the cortex by GukH. These results are also consistent with mutually exclusive PDZ ligand and GukH-Dlg complexes. Mechanism of PDZ-based Regulation of the SH3-GK Module-How does binding of COOH-terminal ligands to the PDZ domain alter the ability of SH3-GK to bind GukH? As the CRIPT and GukH binding sites are likely to be fairly distant from one another, competition through a direct steric mechanism is unlikely. One possible mechanism for PDZ regulation of the SH3-GK module is that the PDZ domain disrupts the intramolecular interaction between the SH3 and GK domains to expose the composite GukH binding site. In this model, the PDZ-SH3 would fail to interact with the GK. However, in an intermolecular assay, in which the PDZ-SH3 and HOOK-GK are separately expressed, we find that these two domains are able to bind one another and that this binding is not qualitatively altered by the presence of CRIPT peptide (Fig. 5A). This indicates that the SH3-GK intramolecular interaction is not qualitatively affected by the PDZ domain. The Dlg PDZ-SH3-GK module contains a conserved linker between the PDZ and SH3 domains (Fig. 5B). In the structure of the Dlg PDZ domain (25), a portion of the sequence following the PDZ domain forms a short helix that packs against the PDZ domain. The conservation and structure of this ~40-residue sequence prompted us to examine the role of the linker in functionally coupling the PDZ and SH3-GK modules. To test the contribution of this sequence to the coupling between CRIPT and GukH binding, we constructed a series of PDZ-SH3-GK fragments with short deletions in the sequence. When either half of the linker is removed (Δ1 and Δ2), the PDZ domain no longer efficiently rescues binding of GukH to the composite SH3-GK binding site (Fig. 5C). Combining both deletions (ΔΔ) results in an even more severe effect. In addition, we find that the PDZ domain is unable to relieve GukH inhibition in trans.3 To determine whether the sequence of the linker is important or, alternatively, whether only the spacing provided by the linker is required, we replaced the deleted residues with glycine-serine repeats (Fig. 5C, GS). As this flexible linker is unable to restore GukH binding activity, the linker does not function solely to provide proper spacing between the domains.
These results indicate that the covalent attachment of the PDZ and SH3 domains through a conserved linker is necessary for the regulation of the composite GukH binding site and that this regulation does not occur by disruption of the SH3-GK intramolecular interaction. DISCUSSION We have demonstrated a set of interdomain interactions in the Drosophila tumor suppressor Dlg that leads to regulation of the ligand binding activity of these domains. The Dlg SH3-GK module, a defining feature of MAGUK proteins, forms a composite binding site for the synaptic protein GukH, but the intramolecular interaction between the SH3 and GK domains obscures this binding site. The third PDZ domain from Dlg relieves this inhibition, making use of the short linker that attaches it to the SH3 domain. Binding of CRIPT to the PDZ domain induces a change in the SH3-GK module that again obscures the GukH binding site, effectively leading to competition between CRIPT and GukH binding that influences the organization of Dlg-mediated protein complexes in a cellular context. The active scaffolding of Dlg complexes does not utilize disruption of the SH3-GK intramolecular interaction but requires a conserved, structured linker that attaches the PDZ and SH3 domains. Active Scaffolding of MAGUK Complexes-Cellular signaling relies on the formation of specific protein complexes, and scaffolding proteins play a central role in this process (26). Although scaffolds are critical components of many signaling pathways, their exact function has remained unclear (27). In particular, are scaffolds simple tethers that passively bind to their many ligands, or is ligand binding actively regulated? The answers to these questions have significant implications for the types of scaffold-mediated complexes that are formed in cells. In the yeast mitogen-activated protein (MAP) kinase scaffold Ste5, heterologous protein interaction domains can functionally replace the native kinase recruitment modules, although at reduced levels (28). As the heterologous domains are unlikely to participate in interactions that would lead to regulated binding of scaffold ligands, this suggests that certain scaffolds may function in a passive manner. However, the fact that domain-swapped Ste5 scaffolds do not function at wild-type levels leaves open the possibility that this scaffold also has characteristics of an active scaffold. Clearly, in the case of Dlg, however, the complexes that it forms are regulated by dynamic interdomain interactions. We are currently assessing how these interactions affect the diversity of Dlg complexes that might be formed in different cellular contexts. PDZ-based Regulation of MAGUK Complex Formation-How might the third PDZ domain from Dlg expose the composite GukH binding site within the SH3-GK module? Our data exclude a simple model in which the PDZ domain directly competes against the interaction between the SH3 and GK domains, as the PDZ-SH3 and GK domains bind to one another in an intermolecular assay (Fig. 5A). We propose that the PDZ domain stabilizes the linker that connects it to the SH3 domain and that this linker interacts with the SH3 domain to induce an SH3-GK conformation that allows for GukH binding. The structure of the PSD-95 SH3-GK indicates that the SH3 ligand binding site is obscured in the closed conformation. The PDZ must therefore alter the position of residues that occupy the PXXP binding site to allow for GukH binding.
Although there is no direct structural information on the position of the PDZ domain, based on the NH2 terminus of the SH3 domain, we can infer that the approximate position of the PDZ domain is likely to be in close proximity to the PXXP binding site. Binding of CRIPT would then return the SH3-GK to its basal conformation, which binds GukH with low affinity. This would explain why the affinity of CRIPT for the PDZ-SH3-GK fragment is higher than that for the PDZ alone. Such a supramolecular interaction induced by a PDZ ligand has been observed in the PDZ-regulated protease DegS (29, 30) (see below). The PDZ domain of the MAGUK PSD-93 has also been shown to be involved in the regulation of its SH3-GK module, although in this case, the PDZ domain negatively regulates ligand binding to SH3-GK. A fragment of PSD-93 containing the SH3 and GK domains binds to microtubule-associated protein 1A (MAP1A), but MAP1A fails to bind to full-length PSD-93 (31). Binding of MAP1A is restored by the presence of a COOH-terminal ligand for the third PSD-93 PDZ domain, although the other two PDZ domains appear to play a small role. In this system, the PDZ domain appears to repress binding to the SH3-GK and the PDZ ligand somehow restores binding. The distinct behavior of these two systems indicates that ligands can utilize the interdomain interactions in MAGUK proteins to achieve very different regulatory effects. Regulation of interactions that are modulated by the SH3-GK intramolecular interaction has also been shown to occur by PDZ-independent mechanisms. The GK domain from the Dlg homologue SAP97 binds to GKAP (32). GKAP binding is inhibited by the SH3-GK intramolecular interaction (14). In this case, binding is rescued by an L27 domain present at the very NH2 terminus of the protein. However, how GKAP binding might be modulated in the context of the full-length protein is unknown. PDZ domains have been utilized in the regulation of diverse functions including the enzymatic activity of proteases. In the DegS protease, which is responsible for initiation of the misfolded protein proteolytic cascade in the periplasm of bacteria, an NH2-terminal PDZ domain regulates the protease domain activity using a mechanism that may be similar to PDZ regulation of the Dlg SH3-GK module. In DegS, the PDZ domain does not directly repress the protease domain. Instead, the protease is normally found in an inactive conformation (30). Activation occurs when a COOH-terminal ligand binds to the PDZ domain, which induces an interaction between a loop within the protease domain and the PDZ ligand. This interaction causes a large change in the protease to an active conformation. Such an interaction with the ligand of the Dlg PDZ domain would be consistent with the higher affinity of the PDZ-SH3-GK module for this ligand.
5,944.4
2006-11-24T00:00:00.000
[ "Biology" ]
Exopolysaccharide production by lactic acid bacteria: the manipulation of environmental stresses for industrial applications Exopolysaccharides (EPSs) are biological polymers secreted by microorganisms, including lactic acid bacteria (LAB), to cope with harsh environmental conditions. EPSs are one of the main components involved in the formation of the extracellular biofilm matrix that protects microorganisms from adverse factors such as temperature, pH, antibiotics, and host immune defenses. In this review, we discuss EPS biosynthesis; the role of EPSs in LAB stress tolerance; and the impact of environmental stresses on EPS production and on the expression of genes involved in EPS synthesis. The evaluation indicates that environmental stresses can alter EPS biosynthesis in LAB. In further studies, environmental stresses may be used to generate new EPS types with high biological activity for industrial applications. Introduction In recent years, the trend of using natural polymers in many fields has led to the development of research on producing exopolysaccharides (EPSs) from bacteria. Their unique structural features have made bacterial EPSs of particular interest in chemistry, medicine and the food industry [1]. Because of their ability to hold water, EPSs are widely used as thickening, stabilizing and emulsifying agents in the food industry [2] to improve the rheological properties, texture and sensory quality of bread and fermented milk products such as yogurt and cheese [3]. In addition to their technological properties, EPSs also have potential health benefits, including antioxidant, anticancer, anti-inflammatory and antiviral activities [4,5] and cholesterol-lowering effects [6]. Among EPS-producing bacteria, lactic acid bacteria (LAB) have attracted the attention of researchers thanks to their strong ability to produce EPSs. LAB genera such as Streptococcus, Lactococcus, Pediococcus, Lactobacillus, Leuconostoc and Weissella are often used to produce EPSs [7]. LAB are recognized as safe microorganisms (GRAS, Generally Recognized As Safe) and are capable of creating EPSs with many different structures without any health risks [8]. In LAB, EPSs play an important role in controlling cell surface physicochemical characteristics [9], protecting bacterial cells from dehydration, negative environmental impacts, antibiotics, phagocytosis, and phage attacks [10][11][12]. EPSs are structural components of the extracellular matrix in which cells are encapsulated during biofilm development [13]. Previously, several reviews have described the stress response in LAB [14][15][16], focusing almost exclusively on the function of stress proteins (HS proteins, Csp, etc.) and their regulators (HrcA, CtsR) or on proteins physically linked to the cell membrane (transport systems, sensors, housekeeping proteases, etc.). However, in order to understand clearly the role of stresses in EPS biosynthesis, a series of key questions must be addressed: -Why are EPSs related to stress resistance? -What type of stress should be applied? -Is it possible to control EPS biosynthesis using environmental stress? Therefore, in this review, we will discuss EPS synthesis; the physiological functions of EPSs; and the impact of environmental stresses on EPS production and on the expression of genes involved in EPS biosynthesis in LAB. This assessment will clarify the relationship between environmental stresses and changes in LAB EPS synthesis.
It also suggests that environmental stress can improve the productivity of EPSs from LAB and be used to produce customized EPSs with desired functionality. EPS biosynthesis in LAB. LAB synthesize two types of EPSs, homopolysaccharides and heteropolysaccharides [17]. Homopolysaccharide synthesis is a relatively simple biochemical process involving a specific GT (glucansucrase or fructansucrase) and an extracellular sugar donor (sucrose for the synthesis of glucans, but also other fructose-containing oligosaccharides, e.g. raffinose, for the synthesis of fructans) [18,19]. Heteropolysaccharide synthesis is a complex process which involves the specific role of several gene products (enzymes) encoded by the eps gene cluster and housekeeping genes. These gene products can be categorized into four groups (or modules) based on their functions: the polysaccharide assembly machinery (the priming glycosyltransferases, Wzx or flippase, Wzy or polymerase, and EpsA), the phosphoregulatory system managing polysaccharide assembly (EpsB, EpsC and EpsD), glycosyltransferases, and sugar nucleotide biosynthetic pathways. Genes encoding acetyl- and pyruvyltransferases involved in the chemical decoration of EPSs are also present in the cluster (Figure 1) [20]. Figure 1. Schematic genetic organization of the eps gene clusters [20]. In general, EPS biosynthesis can be summarized in 3 main steps (Figure 2). Firstly, activated sugar precursors (sugar nucleotides such as uridine diphosphate glucose and thymidine diphosphate glucose) are generated for the repeating units. The sugar nucleotides are synthesized in multistep pathways from glycolytic intermediates, generally glucose-6-phosphate or fructose-6-phosphate. This complex process requires the function of several housekeeping gene products such as phosphoglucomutase (which converts glucose-6-phosphate to glucose-1-phosphate) [21] and UDP-glucose pyrophosphorylase and dTDP-glucose pyrophosphorylase (which convert glucose-1-phosphate to the sugar nucleotides UDP-glucose and dTDP-glucose, respectively) [22]. The capacity to produce different sugar nucleotides is intrinsically determined by the gene content of each LAB, which ultimately dictates the type of monomers found in EPSs. EPSs produced by LAB consist of repeating units which are usually composed of two or more (usually 3-8) types of monosaccharides [22][23][24]. Secondly, the synthesis of the repeating units begins by attaching the first sugar nucleotide to the isoprenoid lipid carrier, undecaprenyl phosphate, which is anchored in the cytoplasmic membrane of the cell; this step is catalyzed by the priming glycosyltransferase. It is followed by the sequential addition of sugar nucleotides to form the repeating units, a process catalyzed by glycosyltransferases encoded by genes in the eps gene clusters [25]. Finally, the repeating units are polymerized and exported from the inner to the outer face of the cell membrane. Basically, three different proteins, which are also encoded in the eps gene cluster, carry out the polymerization and export process: a flippase (encoded by wzx or cpsJ) or a translocase moves the lipid carrier-repeating unit complex from the inner surface of the cytoplasmic membrane to the periplasmic side. Then, a polymerase (encoded by wzy or cpsH) catalyzes the coupling of the repeating units [21]. Lastly, a chain-length determination protein releases the repeating units from the lipid carrier to stop polymerization and export, and simultaneously determines the chain length of the final EPSs [21].
EPSs are synthesized to serve various functions in the bacteria. One of these is to ensure that bacteria survive under stress conditions. The function of EPSs in LAB's stress resistance is discussed in the next section. The role of EPSs for LAB stress resistance. EPSs are an important structural component of the LAB cell wall [26]. EPSs form a layer surrounding cells to protect them against adverse environmental conditions such as dehydration, extreme temperature, acid, osmotic stress, phagocytosis, macrophages, and antibiotics [25,27,28]. Other roles of EPSs include biofilm formation, cell adhesion mechanisms [29] and the determination of strain-specific characteristics in host interaction [30]. To adapt to environmental stresses, LAB can alter their cell surface by producing more EPSs [31]. The increased production of EPSs results in thicker and firmer cell walls (Figure 3), and as a result it increases the LAB's resistance to stresses. This feature may be useful to exploit for improving the stamina of probiotic starters as well as the ability of LAB to produce EPSs. Numerous studies have also demonstrated that, after being pre-stressed, LAB viability improves significantly [31,32]. As discussed, LAB enhance EPS synthesis to create a physical barrier on the cell surface that separates the cell from the stress. In particular, under low pH conditions, this EPS layer restricts the access of exogenous acids to bacterial cells due to anionic groups, such as phosphate, bound to the EPSs [33]. Phosphate residues confer a net negative charge to EPSs [27], and the presence of phosphate in EPSs has been observed in many studies [34][35][36][37]. According to these viewpoints, LAB may produce anionic EPSs carrying phosphate groups under acid stress conditions, conferring a negative charge on the cell surface that prevents proton diffusion into the cells (Figure 4). In the case of osmotic stress, a sudden increase in osmotic pressure caused by the stress results in water movement from the inside to the outside of the cell, causing a detrimental loss of cell turgor pressure and changing the intracellular solute concentration, which ultimately can seriously affect cell viability [38]. In response to osmotic stress, LAB synthesize EPSs to protect themselves by holding water around the cells to prevent dehydration (Figure 4) [39]. The water-holding capacity of EPSs is due to the presence of OH groups in their structure. Other substances known for high water-holding capacity, such as glycerol, can sometimes be included in the structure of EPSs; the presence of glycerol was recorded in EPSs produced by Latilactobacillus sakei [34]. Furthermore, external protective compounds such as water stress proteins, which aid the survival of cells during desiccation, can accumulate in the extracellular glycan and show homologies with carbohydrate-modifying enzymes [40]. Many bacteria respond to carbon dioxide stress by producing EPSs to create a barrier that slows the diffusion of the toxic substance, in this case carbon dioxide, into the cells [41,42]. Similarly, EPSs also provide support for LAB to resist metal stress: the negatively charged groups in EPSs bind cations and protect bacterial cells against toxic metals [43]. In addition, it has been shown that EPSs are related to oxidative stress resistance in LAB. The supplementation of EPSs into the culture medium could increase the growth of L. mesenteroides 10-fold under oxidative stress and promote the aerobic growth of oxygen-sensitive strains such as Lactobacillus and Bifidobacterium [44].
Under oxidative stress conditions, the production of harmful reactive oxygen species may increase. EPSs can scavenge these reactive oxygen species to prevent cell damage (Figure 4) [45]. Furthermore, EPSs also reduce oxidative stress by extruding dissolved oxygen from the aqueous culture medium [44]. The ability to protect cells from environmental stresses depends on the EPS phenotype. Terms such as 'ropy', 'mucoid', and 'slime' have been used to describe the different EPS-producing phenotypes of LAB [46]. LAB strains with ropy exopolysaccharide production show better resistance to stress; according to one report, the ropy phenotype of Lactiplantibacillus plantarum is related to better tolerance to low pH [29]. The physiological functions of EPSs. Together with the cell protection function, the positive advantages of EPSs are highlighted through their essential contribution to human health, including prebiotic, anticoagulant, antioxidant, anti-inflammatory, antiviral, cholesterol-lowering and even anticancer activities [47]. LAB EPSs have been shown to play an essential functional role in the prevention of blood coagulation. The strong anticoagulant activity of sulphated EPS derivatives has been demonstrated. Heparin Cofactor II is a potent inhibitor of thrombin in the coagulation pathway, and sulphated EPSs provide an acidic medium condition that facilitates the inhibitory effect of Heparin Cofactor II on thrombin [48,49]. The sulphated sites and stereochemistry of EPSs activate HC II according to an allosteric mechanism [50]. One study showed that EPS47FE and EPS68FE, which are secreted by L. plantarum 47FE and Lactiplantibacillus pentosus 68FE, respectively, exhibit strong anticoagulant and fibrinolytic activity [51]. Prebiotic effects were also observed for LAB EPSs [52,53]. EPSs from LAB can be used by probiotic strains [54] and have the capacity to stimulate the growth of probiotic bacteria and maintain the balance of the intestinal microflora [55,56]. The prebiotic potential of LAB EPSs has been demonstrated in many studies. The α-D-glucan synthesized by L. plantarum can stimulate the growth of probiotic bacteria; it is poorly digested by artificial gastric juice and has been shown to inhibit the growth of non-probiotic bacteria, with Enterobacteriaceae as a representative example [57]. In vitro, EPSs produced by Weissella cibaria, Weissella confusa, L. plantarum and Pediococcus pentosaceus could be used as a prebiotic ingredient in the food industry to modulate the gut microbiota towards health benefits [58]. Other health-promoting functions of EPSs produced by LAB are cholesterol-lowering effects [50]. In an in vitro assay, EPSs produced by L. plantarum BR2 showed cholesterol-lowering properties (45%) [59]. Based on animal and in vitro experiments, several hypotheses have been proposed to explain the cholesterol-lowering mechanism of EPSs, including bile removal, anabolism and cholesterol conversion, and co-precipitation effects [60,61]. Free radicals usually cause serious health problems; therefore, EPSs are an important natural antioxidant against free radicals. LAB EPSs also exhibit high antioxidant activity. One study showed that EPSs from Lactobacillus gasseri FR4 have good free radical scavenging activity, while their hydroxyl and superoxide radical capture activities depend on EPS concentration [62].
Additionally, under in vivo conditions, LAB EPSs have been shown to increase the activity of hepatic superoxide dismutase, serum catalase, and glutathione S-transferase while simultaneously reducing serum malondialdehyde levels and monoamine oxidase activity. This constitutes strong evidence of the antioxidant and anti-aging effects of EPSs [63,64]. In recent decades, the immunomodulatory potential of EPSs has received a lot of scientific attention. Many in vitro studies have demonstrated that EPSs produced by different LAB species have immunomodulatory ability [17]. The phosphate group (a good inducer of the immune response) plays a critical role and characterizes the immunomodulatory effects of EPSs: phosphate moieties can activate various immune cells (such as macrophages and lymphocytes) and initiate immune responses [65]. According to these results, it can be speculated that EPSs generated under acid stress (which appear to be acidic EPSs [65]) may exhibit stronger immunological properties. Cancer is one of the health problems receiving a lot of attention today, and it is usually treated via chemotherapy. However, chemotherapy can cause unexpected effects ranging from minor to severe and life-threatening [66]. Therefore, other pharmaceutical products are being researched to help treat cancer, and LAB EPSs are promising candidates due to their anti-tumor effects [65]. EPSs from L. plantarum 70810 can significantly inhibit the proliferation of tumor cells such as HepG-2 and BGC-823, and especially HT-29 [67]. In vitro evaluation of the anticancer properties of Lactobacillus acidophilus EPSs in colon cancer cell lines demonstrated that they were able to inhibit the expression of genes involved in angiogenesis and tumor survival [68]. In another study, EPSs from Levilactobacillus brevis MSR10 were used to synthesize silver nanoparticles (AgNPs); according to the results obtained, these AgNPs not only had high antimicrobial and antioxidant capabilities but also significantly reduced the percentage of live HT-29 cells [69]. Much recent research has been conducted to show the antiviral activity of EPSs; they are considered to act as immune stimulants in a number of ways within the immune system, contributing to the protection of human cells against certain viruses [65]. One study has shown that EPSs extracted from L. plantarum LRCC5310 were able to resist human rotavirus in vitro [70]. Impact of environmental stresses on EPS production in LAB. Under environmental stresses, LAB have different adaptation mechanisms which involve the accumulation of compatible solutes and energy storage compounds; the regulation of energy production pathways; and the modulation of the cell envelope, i.e., membrane, cell wall, surface layers, and EPSs [71]. In this review, we divide environmental stresses into common groups, including nutrient factors (carbon sources, nitrogen sources, carbon dioxide, oxygen, mineral salts, etc.), physiological factors (pH, osmotic stress, temperature, etc.) and co-cultivation, to discuss the impact of stresses on EPS production in LAB. Nutrient stress factors. The composition of nutrients is one of the factors that affect the growth and metabolism of cells [72]. Thus, EPS synthesis is also influenced by the composition of the culture medium [73]. The starvation or oversupply of nutrients such as nitrogen, sugars, or carbon dioxide may change EPS synthesis [74][75][76]. The effects of nutritional stress on EPS synthesis in LAB have been demonstrated in prior studies. Marshall et al.
demonstrated that EPS production in Lactococcus lactis subsp. cremoris LC 330 is stimulated by nitrogen limitation [77]. In contrast, Lactobacillus delbrueckii ssp. bulgaricus was reported to increase EPS production in nitrogen-enriched medium [78]. An excess of sugar in the culture medium also increases EPS production in LAB. Possible explanations for the increased EPS synthesis under the stress of a high sugar concentration are osmosis, an unlimited supply of sugar building blocks and high energy availability [75]. An increased sucrose concentration in MRS medium was suitable for EPS overproduction in Lactobacillus confusus TISTR 1498 [79]. It has also been shown that Lactobacillus strains (L. delbrueckii bulgaricus, Lactobacillus helveticus and Lacticaseibacillus casei) yield the highest EPS amounts when grown on a fermentation medium containing 20% sucrose as the carbon source [80]. Likewise, the synthesis of EPSs in Fructilactobacillus sanfranciscensis LTH2590 rose with increasing sucrose concentration in the medium and reached about 40 g/L at a sucrose concentration of 160 g/L [81]. In the case of Leuconostoc mesenteroides NRRL B-1299, culture medium with a sucrose concentration over 5 g/L caused more dextran production [82]. Similar to sucrose, a high concentration of glucose is also advantageous for the production of EPSs. As previously reported, the EPS production of Streptococcus thermophilus (W22) and L. delbrueckii subsp. bulgaricus (B3, G12) was stimulated by a high glucose concentration [83]. It was also shown that the presence of excess sugar in the medium has an improving effect on EPS production in L. casei and Lacticaseibacillus rhamnosus, although growth is apparently decreased [84,85]. In some LAB strains, carbon dioxide can be used as a carbon source for growth because it is a substrate in carbamoyl phosphate synthesis and other metabolic reactions in LAB [86]. Carbon dioxide regulates physiology and energy metabolism by regulating enzymes involved in glycolysis [87]. The impact of carbon dioxide stress on EPS production in LAB has also been recorded in several studies. EPS production depended entirely on carbon dioxide concentration, and the maximum EPS yield produced by Bifidobacterium longum JBL05 increased proportionally to the carbon dioxide concentration in the range of 0-20% [88]. L. casei growing in a carbon dioxide-rich environment was surrounded by a membrane-like EPS component [89]. In contrast to carbon dioxide stress, at dissolved oxygen concentrations above 0.05 ppm, B. longum showed reduced growth and EPS accumulation [90]. These results suggest that the EPS synthesis of B. longum varies under different stress conditions. Although oxidative stress reduces the accumulation of EPSs in B. longum, it increases EPS production in B. scardovii and B. adolescentis: one study has shown an increase in EPS production and cell surface hydrophobicity of B. scardovii and B. adolescentis under oxidative stress [91]. Physical stress factors. EPS production in LAB may be stimulated by various physical stresses as a cellular defense response, which could also enhance the formation of biofilms [92]. EPSs can account for about 50-90% of the total organic matter in biofilms [93,94] and, together with proteins, nucleic acids and lipids, form the structure of the biofilm matrix [95]. Low pH was found to significantly decrease biofilm formation in L. rhamnosus GG, while it enhanced biofilm formation in Limosilactobacillus reuteri strains [96,97].
Although studies have not focused on the effect of low pH stress on EPS production in LAB, an increase in EPS production under low pH has been observed in several reports. The EPS production of Lactobacillus helveticus ATCC 15807 at a controlled pH of 6.2 was lower than that observed at pH 4.5 [98]. Likewise, EPS production in Ligilactobacillus salivarius UCO_979C-2, an adapted variant strain, reached 690 mg/L after 24 h at pH 2.6, compared to only 450 mg/L for the native L. salivarius UCO_979C-1 strain at pH 6.4 [99]. The negative effects of osmotic stress on cells may be limited because of the presence of EPSs. Therefore, the presence of substances causing high osmotic pressure, such as NaCl, can stimulate EPS synthesis on the cell wall. As previously described by Seesuriyachan et al., the EPS synthesis of L. confusus TISTR 1498 did not depend on biomass, and stress from high NaCl concentrations could enhance EPS production in solid-state fermentation [79]. Similarly, Leuconostoc mesenteroides/pseudomesenteroides 406 achieved its maximum EPS yield in the presence of 5% NaCl [100]. In contrast, the inhibition of EPS production by NaCl was recorded in L. helveticus ATCC 15807 [98]. Excessive temperature causes protein denaturation and nucleic acid and membrane damage [101]. However, when bacteria are exposed to extreme temperatures, they reprogram their metabolism to deal with the temperature change [102]. One of these metabolic changes is an increase in EPS synthesis, and high-temperature stress has indeed been recorded to affect EPS production in LAB: Nguyen et al. demonstrated that sub-lethal thermal stress increases EPS production and improves the viability of B. bifidum [31]. Co-cultivation. In biotechnology, co-culture has been shown to make microorganisms more resistant to environmental changes and to enable more complex metabolic activities through the combination of various strains [103,104]. Consequently, co-culture can also affect EPS synthesis. The effect of co-cultivation on improving EPS production of LAB is often studied in combination with Saccharomyces cerevisiae. Lactobacillus kefiranofaciens JCM 6985 enhanced the production of kefiran, an exopolysaccharide, in co-culture with S. cerevisiae IFO 0216 [105]. The EPS production of L. rhamnosus strains was also increased by 39-42%, and a higher level of EPS operon expression was observed for L. rhamnosus RW-9595M in co-culture [106]. Similarly, L. paracasei co-cultured with Saccharomyces cerevisiae showed overexpression of a gene (coding for polyprenyl glycosylphosphotransferase) involved in EPS production [107]. In fact, the enhancement of EPS production by LAB in co-culture with Saccharomyces cerevisiae is induced by direct, physical contact with components on the surface of the yeast cell [105]. In a high-viscosity environment, LAB can be stressed by their own acids; LAB adhesion to yeast cells activates EPS production because this adhesion leads to efficient lactic acid consumption by the yeast cells [105]. In general, the biosynthesis of EPS can be altered either up or down under different stress conditions. These changes may be related to the expression level of genes involved in EPS synthesis. To clarify this hypothesis, we have discussed the expression of eps genes under environmental stresses. Details are presented in section 6.
Changes in the expression of genes involved in EPS production under environmental stresses. Bacteria respond to stresses by activating various regulatory mechanisms involving metabolism, the cell envelope and gene expression, giving them the potential to adapt to extreme environmental conditions (Figure 5) [108]. Changes in gene expression constitute the principal component of the bacterial response [109] and can alter the biosynthesis of EPSs under stress conditions [110]. The correlation between stress and the expression of genes involved in EPS biosynthesis has been documented in LAB. Increased expression of the gtf01207 gene, encoding a priming glycosyltransferase related to EPS synthesis, was observed in B. animalis subsp. lactis after exposure to acid, bile salt and osmotic stress [92,111]. According to another study, when the pH of the culture medium decreased from 6.5 to 5.5, the expression level of the epsNMLKJ genes in Streptococcus thermophilus ASCC 1275 increased, whereas the expression of genes involved in the synthesis of sugar nucleotides such as dTDP-rhamnose and UDP-GlcNAc decreased [112]. Also in this study, when the temperature increased from 37 °C to 40 °C, there were no changes in the expression level of the epsNMLKJ cluster, but the expression of the eps1C and eps1D genes increased while that of eps2C and eps2D decreased [112]. Expression of the gtf gene, encoding a beta-glucan-producing (membrane-linked) glycosyltransferase, in Lacticaseibacillus paracasei conferred 60 times higher heat tolerance and 20 times higher acid tolerance compared to the control strain [113]. The evaluation of gene expression is not only based on mRNA but also on gene products, i.e., the enzymes formed after translation. Glyceraldehyde-3-phosphate dehydrogenase, shown to be necessary for EPS production in Xanthomonas campestris pv. [81], showed increased heterologous expression in L. rhamnosus HN001 during heat stress. In contrast, the heterologous expression levels of glyceraldehyde-3-phosphate dehydrogenase and phosphoglycerate kinase, related to EPS biosynthesis in Xanthomonas axonopodis pv. glycines [114], decreased under osmotic stress [115]. In general, environmental stress can alter the expression of genes involved in EPS biosynthesis, and the result of this response may be increased EPS production in LAB. Conclusions and future trends. In order to survive under environmental stress conditions, LAB react by synthesizing EPSs to form a protective barrier around the cells. This EPS synthesis is catalyzed by enzymes encoded by genes in the eps cluster, and the impact of environmental stresses can alter the expression of these genes, resulting in increased EPS production. Accordingly, environmental stress may be considered a major factor for controlling LAB EPS biosynthesis. The impact of different stresses on EPS synthesis is summarized in Table 1. In general, the synthesis of EPSs in LAB under stress conditions depends on the type of stress and the bacterial species. Within the same species, EPS production may not be the same under different stress conditions; for instance, EPS production in L. helveticus ATCC 15807 is stimulated by low pH stress but inhibited by sodium chloride stress. Conversely, a specific stress may stimulate EPS production in one species but inhibit it in another (Table 1). The EPSs produced by LAB can be key ingredients with promising functional roles for various uses in food, medicine, etc.
However, low EPS productivity could be a problem limiting commercial applications of these EPSs. Currently, studies on improving EPS production often focus on optimizing culture media, genetic engineering, cheap fermentation substrates, and environmental stress [116]. As discussed, EPSs protect LAB from negative environmental effects, and environmental stresses can consequently promote EPS synthesis in LAB. This feature can be exploited to improve both the robustness of probiotic starters and the yield of EPSs. In addition, the biological activities of EPSs, such as prebiotic, antioxidant and anti-inflammatory effects, are related to their monosaccharide composition. It has been shown that EPSs with distinct monosaccharide compositions vary in their therapeutic effects [117]. For instance, the monosaccharide proportions (galactose > rhamnose > glucose) of the EPSs produced by L. reuteri Mh-001 were demonstrated to be related to their anti-inflammatory activity; in particular, galactose content enhances the anti-inflammatory effect of the EPS on macrophages [118]. Similarly, rhamnose-containing EPSs have been used in cosmetic applications owing to their emulsifying activity [119]. For further studies, we believe that environmental stress may be an effective means of positively altering EPS biosynthesis to generate new EPS types with higher biological activity for industrial applications.
5,994.8
2020-11-12T00:00:00.000
[ "Biology" ]
Food and Non-Food Biomass Production, Processing and Use in sub-Saharan Africa: Towards a Regional Bioeconomy : The bioeconomy concept has the aim of adding sustainability to the production, transformation and trade of biological goods. Though taken up throughout the world, the development of national bioeconomies is uneven, especially in the global South, and major implementation challenges exist in Sub-Saharan Africa. The BiomassWeb project aims to underpin the bioeconomy concept by applying the 'value web' approach, which seeks to uncover complex interlinked value webs instead of linear value chains. The project also aims to develop intervention options to strengthen and optimize the synergies and trade-offs among different value chains. The special issue "Advances in Food and Non-Food Biomass Production, Processing and Use in Sub-Saharan Africa: Towards a Basis for a Regional Bioeconomy" compiles 22 articles produced in this framework. The articles are grouped in four sections: the value web approach; the production side; processing, transformation and trade; and global views. The synthesis presented in this paper introduces the challenges of the African bioeconomy and the value web approach, and outlines the contributing articles. The Sub-Saharan African Biomass Sector The rising global demand for biomass as food, feed, industrial raw material, and a source of energy is putting increasing pressure on the agricultural sector. This is particularly true for Sub-Saharan Africa (SSA), where many countries are confronted with growing regional and global demands for biomass-derived products while not yet having met their national demands for food and non-food biomass [1][2][3][4][5]. Though food and nutrition security has improved globally in the last few decades, around 30% of the population in SSA is still faced with various forms of food insecurity. The number of undernourished in SSA has risen from 177 million in 2005 to 237 million in 2017 [5]. Associated indicators, such as the rates of anemia in women of reproductive age and stunting and wasting in children under the age of five, have increased [6,7]. With regard to energy supply, the major source of domestic fuel in SSA is fuelwood, which is primarily collected from forests, woodlands, and parklands. Due to rapid urbanization and a lack of alternatives, fuelwood is in demand not only in rural but also in urban areas, where up to 90% of households depend on it. With an average consumption of 1 kg of fuelwood per person per day and a population of one billion people with an annual growth rate between 3% and 4%, the 'fuelwood gap' remains an ecological and socioeconomic challenge [8][9][10][11]. In SSA, modern biomass processing is still in an early stage, and the production of food is not harmonized with the production of biomass-based raw materials. This was evidenced during the recent boom of the biofuel industry, which attracted governments to promote the large-scale cultivation of oil palm, jatropha, cassava, and sugarcane despite warnings about the risks of competition for land [12][13][14][15][16]. Many of these matters are rooted in the agricultural sector, which is the focus of this special issue. 
Farming in SSA falls roughly into two major categories: (i) subsistence or semi-subsistence farming by smallholders to cover their own demands and to market surpluses, and (ii) commercial farming managed by estates, large enterprises, emerging medium-size farms, or organized small farmers under government programs (contract farming), many of which are devoted to producing export-oriented crops, such as cotton, coffee, cocoa, flowers or vegetables [17,18]. Both categories face challenges that hamper and limit their development. While subsistence farming is severely limited by rural poverty, institutional and technical weaknesses, ecological fragility (aggravated by climate change), and political instability [19][20][21], commercial farms, by their economic focus, have been accused of undermining environmental and social standards, and they are challenged by the volatility of international prices due to their dependence on external markets [22]. Challenges for an African Bioeconomy Bioeconomy, or bio-based economy, is defined as the "... knowledge-based production and use of biological resources to provide products, processes and services in all economic sectors within the frame of a sustainable economic system." It is based on the expectation of expanding biomass production and processing sectors beyond the production of food, feed, fiber, fuel, and other basic products towards the production of value-added goods and services that are demanded by other economic sectors such as the industry, energy, pharmacy, and chemical sectors [16]. A biomass-based economy is increasingly envisaged by many countries as a path to follow. While most countries of the global North are investigating and developing new technologies, establishing large and sophisticated biorefineries, focusing on maximizing benefits, minimizing waste, and even reorganizing their institutions accordingly, progress in the global South is limited ([23][24][25] in this special issue). In most regions of SSA, biophysical features such as the wide availability of productive land and constant solar radiation are the major comparative advantages for biomass production, and they represent a great natural potential to increase the amount of biomass that is used for food and industrial raw material (non-food, including energy). Nonetheless, currently, only 15% of the net primary production (NPP) of the continent is used (human appropriation of net primary production, HANPP), and the growth rates of this use are much lower than population growth [26,27]. In Europe, for instance, 35% of the NPP is appropriated by humans. Accordingly, there are compelling opportunities for SSA's further development based on the more intensive production and use of biomass in the context of a comprehensive African bioeconomy. On the other hand, major challenges for a regional bioeconomy are the weak economic, technical, and institutional conditions that restrain the production, post-harvest and processing sectors. The extent and diversity of these challenges and the pressure on SSA countries to catch up with global trends require diversified strategies and coordinated actions to simultaneously handle these challenges [16,[28][29][30]. 
A broad consensus is that an African bioeconomy agenda should prioritize (i) the encouragement and enhancement of the productive sector under the premises of ecological sustainability, social equity, and fair economic return to farmers; (ii) the further development of the processing sector by generating, promoting, and adopting innovations, technologies, and techniques to convert biomass into goods of higher value; and (iii) linking producers with processors and with local, national, and international markets to guarantee reliable incomes [16]. Biomass-based Value Web Approach In the context of an emerging African bioeconomy, the project "Improving food security in Africa through increased system productivity of biomass-based value webs (BiomassWeb)" aimed at understanding and enhancing food and non-food biomass production, processing, and trade in Ghana, Nigeria, and Ethiopia. These countries were chosen because of their regional importance and potential as case studies with relevance to other SSA countries. The key crops considered were maize, cassava, plantain/banana/enset, and bamboo, which were selected according to their regional relevance as sources of food and non-food biomass. The overarching concept was that of the 'biomass-based value webs', i.e., complex systems of interlinked value chains in which biomass products and by-products are produced, processed, traded, and consumed (Figure 1). Though introduced two decades ago [31,32], the value web approach is still innovative, as it more realistically describes the value that is added in the biomass sector in comparison to the linear supply and value chain concepts. Value webs not only depict material and cash flows, they also connect supply and value chains with their actors, e.g., through information flows, the effects of policy decisions, or innovative developments in the production and processing of biomass, as well as via the effects of national and international market events. The resulting dynamic, hyper-connected, and collaborative relationships are country- and situation-specific and can only be disclosed in cooperation with local stakeholders and experts. We argue that the value web is a useful scientific approach for investigating SSA biomass-related activities in view of its current and forthcoming challenges. 
The BiomassWeb project objectives were: (i) to investigate the potential interventions to strengthen biomass value web production, processing and trade, and (ii) to identify the synergies and trade-offs among them. Along this process, BiomassWeb had a strong foresight character in identifying and facilitating the current and future synergies and trade-offs among biomass uses. BiomassWeb was co-led by the Center for Development Research (ZEF) of the University of Bonn and the Forum for Agricultural Research in Africa (FARA). The consortium included a network of universities, national research institutions, international agricultural research organizations, and partners from the private sector in Ghana, Nigeria, Ethiopia, and Germany. Summary of Articles Included in this Special Issue This special issue summarizes some of the results of the BiomassWeb project, together with other selected research results regarding exploring, developing, and testing innovative approaches to produce, process, and trade food and non-food biomass in SSA. The 22 articles in this volume cover stand-alone and aggregated studies, disciplinary, inter- and transdisciplinary approaches, and ex-post and foresight-oriented investigations. They are compiled into four major sections: (i) overarching studies contributing to the value web approach; (ii) the production side; (iii) processing, transformation and trade; and (iv) the global view. The Value Web Approach A few articles are featured that look at the value web approach itself from different perspectives: (i) complex systems analysis, (ii) as the basis for analyzing a supply chain, and (iii) describing a demand-driven research and development concept to identify potential interventions to strengthen the effectiveness and efficiency of biomass-based value webs. Concerning food security, Anderson et al. model and analyze biomass-based value webs of selected crops in Ghana, Nigeria, and Ethiopia by applying the systems analysis software iMODELER in participatory stakeholder workshops. In all three countries, the transdisciplinary mapping of the different crop-value chains clearly reveals widely ramified systems with a complex web character having food security as the overall target. In contrast to the initial hypothesis, the value chains of the different crops considered do not show relevant direct links between each other in their matter and capital flows. 
However, they are connected through nonspecific factors (corresponding to nodes or variables in other systems approaches) such as communication, governmental interventions, extension services, agricultural innovations, global food prices, and others. Results from a generic model allow for a critical reflection on the relation between value web dynamics and food security policy in SSA. Current policy-making trends targeting market integration, mechanization, and the reduction of post-harvest losses are supported by the model results. In a case study, Lin et al. focus on the current market challenges and opportunities for the future development of the northern Ethiopian bamboo producing and processing sectors. The results show that bamboo producers are constrained by the lack of local demand and markets for higher-value bamboo products. This also leads to less product diversification on the local markets and reduces the innovative capacity of the manufacturing sector. It is recommended that local and regional governments support specific training programs on bamboo production and processing. Additionally, the market access of small bamboo producers may be improved through the establishment of cooperatives and the development of contractual arrangements that protect local producers, processors, and traders. A demand-driven research and development (DDRD) program was one of the innovative aspects of the BiomassWeb project. Funds were provided by the donor agency for implementing research and development activities that emerged from alliances with stakeholders of the biomass producing, processing and trading sectors during the project lifetime. Jatta et al. discuss the concept and application of DDRD in Ghana, Nigeria, and Ethiopia based on six projects that were selected and supervised by the Forum for Agricultural Research in Africa (FARA): (i) using cassava peels for mushroom production, (ii) the development of plantain biomass into composite flour for traditional foods and bakery products, (iii) bamboo leaves as fodder for livestock production, (iv) the production of bio-plastics and bio-gels from agricultural waste, (v) the mass and energy balance analysis of pneumatic dryers for cassava, and (vi) exploring potentials of the bamboo sector for employment and food security. The Production Side The contributions in this section cover a broad array of biomass-based production systems and highlight the need to tackle food insecurity by using a variety of approaches. Several studies focus on smallholder crop production systems. Scheiterle and Birner show that in Ghana, maize production systems with above-average yields of 1.5 t/ha are profitable at the household level, while production systems below this threshold report negative social profits. The use of fertilizers that are sponsored through national subsidies, however, does not increase the likelihood of producing above-average yields, while the use of improved seeds and herbicides does. For Ethiopia, Srivastava et al. employ a modelling approach to test the effect of different intensification scenarios on maize yields. They report that a combination of higher mineral fertilizer rates and the incorporation of crop residues is the most successful, while the rotation of maize with groundnut could help to increase economic profits. Legesse et al. use a modelling framework to evaluate the effects of a higher efficiency of fertilizer application in a variety of cropping systems in Ethiopia. 
Higher fertilizer application increases annual yields on the average farm and is profitable despite a price reduction on the markets, which has a positive effect on the welfare gains of both rural farming and non-farming households. Finally, Poku et al. show that in the case of cassava outgrower schemes in Ghana, contract farming could benefit both farmers and agribusiness firms if contracts sustained long-term supportive business agreements. Next to field cropping systems, agroforestry systems can make a strong contribution to food security through food and non-food biomass production. For Ethiopia, Jemal et al. demonstrate that smallholder farmers can benefit from 120 plant species that grow in home gardens, in multistory coffee systems, and in farmland systems with multipurpose trees. Similarly, Kelboro and Stellmacher underline that family farms in Ethiopia rely on ad-hoc agricultural production systems to achieve food security at the household level, and this needs to be taken into consideration when developing agricultural intensification schemes. In Ghana, Akoto et al. assess the local acceptance of bamboo use in agroforestry systems in a dry forest zone. They show that farmers who have traditional knowledge of multipurpose trees and bamboo are more inclined to adopt these systems for combined charcoal, fodder leaf, and crop production. By using a transdisciplinary approach, Mbeche et al. analyze the application of the push-pull technology in Ethiopia to control stemborer pests and Striga weed in maize and demonstrate that transdisciplinary approaches can be efficient in tackling real-life problems. Moving from the rural to the urban setting, Nero et al. demonstrate the potential of food trees in Accra, Ghana, to contribute to food security in African cities. They report that the diversity of tree species with food uses is higher in poorer neighborhoods than in wealthier neighborhoods, but their abundance is lower in the former than in the latter. They conclude that policies to promote food trees can support several goals, such as achieving food security and raising the quality of living. Processing, Transformation and Trade Processing, transformation, and trade are important elements of the biomass-based value web. Several articles explore opportunities and also challenges of the conversion of by-products or waste into valuable products. Loos et al. apply participatory methods, expert interviews and group discussions to evaluate the potential of plantain residues as a resource for industrial raw materials (fiber) in Ghana. They report that key stakeholders and structures exist that could boost the establishment of a sustainable plantain-based value web. However, pilot activities and technology transfer of suitable innovations from other countries would be required. In their article, Chala et al. explore the potential of by-products from coffee processing (husk, pulp and mucilage) for biogas production in Ethiopia. The authors estimate that the anaerobic digestion of these products could generate as much as 68 × 10⁶ m³ of methane per year, which could be converted into 238,000 MWh of electricity and 273,000 MWh of thermal energy. Both electricity and thermal energy are used by coffee processing facilities and, accordingly, biogas production would lead to energy cost savings. 
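As a rough plausibility check of this conversion (the parameter values below are illustrative assumptions, not figures taken from Chala et al.), the reported numbers are consistent with a methane lower heating value of about 10 kWh per m³ fed into a combined heat and power unit with roughly 35% electrical and 40% thermal efficiency:

```python
# Rough plausibility check of the coffee-by-product biogas estimate.
# Assumed values (not from the cited study): methane lower heating value
# ~9.97 kWh/m^3, CHP electrical efficiency ~35%, thermal efficiency ~40%.

METHANE_LHV_KWH_PER_M3 = 9.97   # assumed lower heating value of methane
ETA_ELECTRICAL = 0.35           # assumed CHP electrical efficiency
ETA_THERMAL = 0.40              # assumed CHP thermal efficiency

methane_m3_per_year = 68e6      # estimate reported in the text

primary_energy_mwh = methane_m3_per_year * METHANE_LHV_KWH_PER_M3 / 1000.0
electricity_mwh = primary_energy_mwh * ETA_ELECTRICAL
heat_mwh = primary_energy_mwh * ETA_THERMAL

print(f"Primary energy:  {primary_energy_mwh:,.0f} MWh/year")
print(f"Electricity:     {electricity_mwh:,.0f} MWh/year")   # ~237,000 MWh
print(f"Thermal energy:  {heat_mwh:,.0f} MWh/year")          # ~271,000 MWh
```

Under these assumptions the sketch reproduces the order of magnitude of the published figures; the exact values depend on the digestion yield and CHP technology assumed in the original study.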
Intani et al. take a critical look into the use of corncob biochar, which is a sought-after resource for soil amendment. As phytotoxicity has been observed, the experiments carried out by the authors demonstrate that the phytotoxicity of fresh corncob biochar can be effectively mitigated by washing and heat treatments. Several articles address the production and use of cassava and its processing by-products. Ayetigbo et al. provide a review in which they compare properties of cassava root, flour, and starch from white-flesh and biofortified yellow-flesh variants. The companion papers by Adeyemo and Okoruwa on the effects of value addition on the productivity of cassava farming households and by Adeyemo et al. on determinants of the intensity of cassava utilization (both in Nigeria) demonstrate that the prospect of adding value through processing determines production decisions. Better extension services, training, and enterprise regulation, as well as asset acquisition, improving land quality, and the encouragement of social capital development among smallholders, are important drivers. Looking at waste management options amongst cassava processors in Nigeria, Omilani et al. report that public expenditure on training for processors in waste management options would empower them to use solid-waste conversion technologies to generate value-added products. Besides generating additional income, this would lead to social benefits including a lower exposure to environmental toxins from the air, streams, and groundwater. Global Views This section offers a global view on the trends, challenges, and opportunities that countries are confronted with when developing their own bioeconomies. Given the countries' differences in potential, priority setting and strategies to develop their own bioeconomies, Biber-Freudenberger et al. propose a classification of countries into primary, advanced, high-tech, and mixed bioeconomy categories, which they then use to gauge performance in terms of sustainability. The authors find that countries with more sophisticated bioeconomies are more diversified in terms of innovations and policies that promote them. In contrast, countries with incipient bioeconomies are based on less varied alternatives and concentrate on bioenergy, but they tend to explore and expand towards high-tech strategies. Interestingly, the efforts of the former are not necessarily accompanied by more sustainable performance. Going one step further, Beuchelt and Nassl, under the premise of the UN Sustainable Development Goals, which call for the satisfaction of multiple demands rather than the optimization of a single one or a few, model the trends of several bioeconomy operation plans and their weighting of economic sectors. They report a worrying imbalance that tends to prioritize the uses of biomass for the generation of energy at the expense of other material uses and even the satisfaction of basic needs like food production. Finally, Dietz et al. examine the governance strategies of the 41 countries that lead the progression into a biomass-based economy. Contextualizing their analysis within the UN Sustainable Development Goals, they identify four potential pathways and foresee sets of governance measures that may enable or constrain their development. Based on the unevenness of outcomes, the authors conclude that advocating for the establishment of political structures (where nonexistent) to put national bioeconomies into operation, and for the creation of global frameworks to coordinate and guarantee the sustainability of these structures, are key issues. 
Concluding Remarks and Outlook To date, only a few national or regional strategies for innovative uses of biomass in Africa exist. The papers in this special issue underpin the fact that there is great potential for food and non-food biomass production, use, and trade in Africa. These may encompass production systems, such as bamboo intercropping, underutilized plant species in agroforestry systems, and fruit trees in urban settings, as well as processing techniques, e.g., biochar production, starch uses, and agricultural residues as industrial raw materials or energy sources. Further research is clearly needed on implementing these findings in practice, although the results and ideas presented by the authors of this volume already contribute to this process. Additional efforts, however, are needed to disseminate the results to practitioners and thus to contribute to rural development. In general, targeted biomass-related policies and governance measures at the local and national levels, also considering the UN Sustainable Development Goals (SDGs), are recommended to support efforts to help achieve food security and improved quality of living. In particular, rural policies should focus on extension services as well as capacity building and training for biomass producers, processors, and traders. Furthermore, market opportunities and access to markets, in combination with the establishment of cooperatives, have to be developed. Other recommendations for promoting a biomass-based economy range from enterprise regulations, asset acquisition, and contractual issues to social capital development in rural environments. The examples show that the establishment of biomass-based economies faces a multitude of challenges. The dilemma is that the respective research-based recommendations, as can be seen above, are of a general nature. The implementation of research outputs in practice, however, requires more detailed instructions for action. These can only be developed through systematic implementation research that is transdisciplinary in nature and, accordingly, based on stakeholder involvement. Implementation research aims to identify and overcome the barriers to the implementation of research outputs. Its activities can of course be considerably reduced if the newly developed concepts and technologies to be implemented are based on demand-driven research. Implementation research is the basis for the piloting and up-scaling of innovations based on research and development. In the case of biomass-based innovations, their implementation should take place in the context of a circular bioeconomy. The integration of biomass production, processing, and trade into a circular system requires that value chains of food and non-food biomass and the value webs derived from them are analyzed in their system context. Particular attention must be paid to cause-and-effect relationships between the system components. Knowledge of these relationships helps to identify intervention options that contribute to optimizing the effectiveness and efficiency of value chains and webs in the biomass sector. Additionally, meaningful implementation research should consider not only economic, socio-cultural and technical aspects but also aspects such as political structures or the personal resources and capacities of the members of the target groups. 
Last but not least, when developing a regional bioeconomy, it must be borne in mind that the nation states concerned have their own policy priorities and strategies and face different development trends, challenges and potentials. Accordingly, different information needs must be met, for which an efficient science communication system has to be established. In this context, the interactive online expert network BiomassNet (https://www.biomassnet.org/) provides a forum for information exchange on biomass-related aspects in Africa.
5,629.4
2020-03-06T00:00:00.000
[ "Economics" ]
Autonomous Behavior Intelligence Control of Self-Evolution Mobile Robot for High-Voltage Transmission Line in Complex Smart Grid In a complex smart grid, the power maintenance robot is important equipment for ensuring the reliable operation of high-voltage lines and a useful avenue towards high-quality power transmission. In view of the increasingly prominent contradiction between the robot's single operation function and the diversification of power grid maintenance operations, together with the robot's weak autonomous operation and intelligent behavior ability, this paper proposes a new configuration of a terminal-reconfigurable power robot and its autonomous operation behavior control method for three typical tasks: maintenance of high-voltage transmission line insulators, drainage plates, and dampers. Through the analysis and planning of the robot operation behavior, the robot finite state machine (FSM) models for the three operation states have been established. Through the introduction of the state transfer function in the FSM, automatic switching control between the robot's key operation states can be realized and the robot motion planning can be optimized. The optimized movement and work flow of the robot improve its operational intelligence and efficiency. Based on this, the robot autonomous operation control system has been designed and the robot physical prototype has been developed for the three maintenance tasks of insulators, drainage plates, and dampers. Finally, simulation experiments and field operation tests verify the effectiveness and engineering practicability of the proposed method. Compared with traditional manual control, the autonomous behavior control method can significantly improve the robot's operational efficiency and operational intelligence. At the same time, the robot's multitask function and autonomous behavior control under different tasks can be realized, and the method has strong versatility for different task objects and different line environments. The research and its promotion have important theoretical significance and practical application value for integrated operation and maintenance management of the power system. Introduction Electricity is the lifeblood of the national economy, and high-voltage cables are an important channel for power transmission. Their special geographical setting and harsh natural environment cause various faults on the line. In order to ensure the safe, normal, and stable operation of high-voltage transmission lines, and thereby effectively reduce economic losses, it is necessary to perform regular and ad hoc maintenance and construction operations on line fittings and their operating environment [1][2][3][4]. At present, such special operations in dangerous and harsh environments are all performed manually, which is not only labor-intensive and inefficient but also poses great personal safety risks. For maintenance operations far from the poles and towers, such work can currently be performed only under power outage conditions. As the assessment indicators for power transmission quality and operation safety become ever stricter, and given the urgent need for automation in modern power system operation and management, the contradiction between this manual operation method and modern high-quality power transmission has become more and more prominent. 
Therefore, replacing manual live-line operation with a live-line maintenance robot is an effective measure [5][6][7][8], with important practical application value for improving operation efficiency, operation reliability, and worker safety. The robots studied so far are mostly oriented to at most two operation types [9,10], and most of them are oriented to only a single operation. Due to the wide variety and dispersion of line maintenance operations, power grid companies would inevitably need to configure robots for different types of operations, which results in high purchase and maintenance costs. That is, the contradiction between the robot's single operation function and the diversification of power grid maintenance operation objects in the existing research has become more and more prominent [11,12]. Therefore, studying multitask-oriented power maintenance robots and their autonomous operation behavior control, so as to realize a multipurpose robot mobile platform, has important theoretical significance and practical value for building a resource-saving and environment-friendly integrated operation of the power transmission grid. In order to improve the robot's autonomous behavior ability and operation efficiency, the robot is required to have a strong autonomous behavior control capability. In terms of autonomous behavior control, the main existing methods are based on expert systems [13,14], fuzzy logic [15,16], neural networks [17,18], and swarm intelligence algorithms [19,20]. In addition, a hierarchical planning method [21,22] has already been applied to the power line inspection robot: a generative reasoner creates behavior sequences online at the behavior planning layer and combines them with a behavior interpretation knowledge base generated offline at the action layer, so as to achieve automatic robot obstacle crossing on the power line. A finite state machine model [23,24] has been proposed and used for inspection robot obstacle crossing on high-voltage transmission lines (HVTL); however, this method can only realize local and semiautomatic obstacle crossing. In [25][26][27][28], a multisensor-based transmission line identification and robot spatial posture positioning method has been proposed; however, it did not perform multisensor information fusion, which limits the improvement of the overall robot operation efficiency. From the above analysis, it can be seen that there is currently no universal robot autonomous behavior control method. However, a complicated robot operation can be abstracted into a series of state combinations from the beginning of the operation to task completion; in this way, the complex operation process of the robot is simplified, and the key to improving the robot's autonomous behavior ability is the free switching between different joint states. Therefore, the goal of robot behavior planning for multiple tasks in the ultrahigh-voltage multi-split conductor environment is to formalize model descriptions of the different task types and the robot motion control behaviors at all layers. This facilitates automatic reasoning and decision control and achieves autonomous operation of the electric power line robot. Based on the above analysis, this article addresses the multi-split, multitask power transmission maintenance robot. 
Based on the motion planning analysis of the three typical operations completed by the robot, insulator replacement, drainage plate tightening, and damper replacement, a hierarchical control design method is adopted: by defining the basic behaviors of the robot arm joint motions, the robot motion behavior during an operation can be decomposed into a combination of multiple basic behaviors. The finite state machine (FSM) model is used to manage and control the combined behaviors, and the robot operation motion behavior is thereby decomposed. By selecting and setting reasonable operation state transfer functions, the motion control of the robot's three different operations is planned and optimized; compared with traditional manual control, this reduces the workload of the operator and improves operation efficiency. At the same time, for special states that the robot may enter during the operation process, a mechanism for handling abnormal robot behavior is designed. Finally, the feasibility and engineering practicability of the robot behavior control algorithm designed in this paper are verified through simulation and field operation experiments, which is of great significance for accelerating the robot's practical deployment. Mechanical Configuration Analysis and Synthesis of Multitask Robots. Through the analysis of the three operation tasks and operation principles of insulator (auxiliary) replacement [29], drainage plate bolt tightening [30], and damper replacement [31], the degrees of freedom required by the robot manipulator to complete each of the three tasks and their corresponding functions are obtained, as shown in Tables 1-3. It can be concluded from Tables 1-3 that, in order to complete the three tasks, the number of DOF and the functions required by the robot's dual operation arms are completely the same. The only difference is the DOF and functions of the manipulator. Therefore, the live maintenance robot can be designed using a configuration mode of a common robot mobile platform with manipulator reorganization. The mobile robot platform can be shared by the three operations, and the mobile robot is equipped with different manipulators to complete the different tasks. The configuration of the mobile robot and its entity model are shown in Figure 1. It is mainly composed of a control box, a double walking arm, a double operation arm, a double walking wheel and its clamping mechanism, and several other main parts. Operation arm 1 is fixed on the machine body and has three DOF: rotation, extension, and vertical movement. In addition to the three DOF of operation arm 1, operation arm 2 also has a horizontal joint, which can move laterally along the body to realize the pushing out and loading of the insulator string. The dual operation arms thus have a total of seven DOF. The end effectors of the dual operation arms can be reconfigured and fitted with different manipulators, respectively, to complete the three functions of insulator replacement, drainage plate bolt tightening, and damper replacement. The Key Postures of the Robot Operation Process. After the insulator maintenance robot is online, the dual operation arms are adjusted from the initial posture to the working posture, and they travel along the wire and detect and locate the suspension clamps to achieve rough positioning relative to the insulator string. 
By fine-tuning each joint of the operation arm, the bowl head hanging plate, W pin, and insulator steel cap can be positioned; the bowl head hanging plate is clamped, the W pin is pushed out, the insulator steel cap is clamped, and the insulator ball head is pushed out, so that the insulator string changes from a fixed state to a free state, which facilitates manual replacement. The motion simulation of this operation planning is shown in Figure 2. After the drainage plate maintenance robot is online, the dual operation arms are adjusted from the initial posture to the working posture and travel along the wire, detecting and positioning the press-connection-pipe to determine the initial position of the robot for tightening the bolts. By adjusting the joints of the operation arm, the positioning of the bolt head and nut and the connection with the operation manipulator can be realized. The entity motion simulation of this operation planning is shown in Figure 3. After the damper maintenance robot is online, the dual operation arms are adjusted from the initial posture to the working posture, travel along the wire, and are roughly positioned in the damper replacement workspace. By adjusting the joints of the operation arm, in particular the pitch mechanism, the bolt head and nut are positioned and docked with the operation manipulator, the nut is tightened, and the damper is clamped; the physical motion simulation of this operation planning is shown in Figure 4. The Design Principle of Robot Terminal Reconfiguration for Multiple Tasks. The reconfiguration of the robot operation terminal can be divided into two categories: static reconfiguration and dynamic reconfiguration. Static reconfiguration is the assembly and reorganization of the terminal by the operator; dynamic reconfiguration is achieved by the movement of the robot's own modules to replace the operation end. Considering the actual application requirements of special power operation robots, this article adopts static terminal reconfiguration. According to the operation tasks of the live maintenance robot, the corresponding operation-end reconstruction principle has been proposed. Different operation tasks require corresponding actuators, and the structure of the robot is modularized; according to the task at hand, the corresponding actuators are selected to reorganize the robot operation ends. According to the functions of each part of the robot, the robot can be divided into two categories. The first category is the operation manipulator, including the bowl head hanging plate clamping and W pin-pushing end, the insulator string pushing end, the bolt fixing and bolt tightening end, the damper clamping and supporting mechanism, and the damper bolt tightening and loosening mechanism. The operation end acts directly on the operation object, making small-scale movements and fine-tuning the posture as needed. The second category is the mobile robot platform for live maintenance, which includes dual manipulators, dual operation arms, and the mobile platform. Each manipulator is composed of multiple joints in series, and the joints move in coordination to perform terminal positioning or to coordinate with the corresponding terminal. The dual mobile arms and dual manipulators coordinate and cooperate to complete the different walking tasks. 
The mobile platform is the carrier of the entire robot. The mobile platform of the live maintenance robot completes the corresponding operation tasks by carrying different operation ends. The principle of reconfigurable end functions is shown in Figure 5. According to the principle of module division and terminal reconstruction, the insulator (auxiliary) replacement, bolt tightening, and damper replacement live working robots are all composed of dual operation arms and dual operation manipulators mounted on the mobile robot, which serves as the reconfiguration platform. Terminal reconstruction methods for other multitask robots can be designed and implemented based on this principle. Analysis of Robot Operation Behavior. In order to clearly describe the robot operation process, a state vector is used to describe the robot's key postures. The robot operation can then be regarded as the process of changing the robot's key posture through the robot's execution. Table 4 shows the definition of the robot state vector. Using the above representation, the robot has a total of 2^12 possible states. Because robots face different tasks, and each task has a specific process, many postures are not allowed to appear when the robot is working. Therefore, by analyzing the robot postures, the valid postures can be screened out, as shown in Table 5. Definition of Robot Behavior. The robot motion behavior in the operation process, from part to whole, mainly includes joint motion behavior, arm motion behavior, robot motion behavior, and robot abnormal behavior processing. Among them, the basic behavior, the joint motion behavior (JMB), is defined as a module that is directly connected to the drive mechanism and sensors and has a specific motion function. The arm motion behavior (AMB) is a robot arm behavior that can achieve a specific function; it is composed of several basic joint behaviors and a joint behavior inference device, as expressed in equation (1), where AMBi is the combined behavior and F is the joint state transition function. Robot motion behavior (RMB) is composed of several operation arm behaviors, as well as the robot walking mechanism and a behavior inference device. Its activation and execution are consistent with the operation arm motion behavior. The robot motion behavior is defined in equation (2), wherein AMB is defined as above and F is the state transition function; it can be formally expressed as C++ thread pseudocode (Algorithm 1). Due to the complex working environment of electric power robots, there are many steps in the operation process and the sequence of steps is strict. In addition, the robot will inevitably encounter some special situations, such as interruption of information transmission, sensor failure, or an industrial computer crash. Therefore, it is necessary to introduce an exception handling mechanism to ensure personnel safety, line safety, equipment safety, and completion of the normal operation tasks. 
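To make the behavior-composition idea concrete, the following is a minimal sketch, written in Python rather than the authors' C++ thread pseudocode, of how joint motion behaviors could be composed into an arm motion behavior driven by a transition function F, with simple exception handling. The class names, the example behaviors, and the 12-bit state encoding are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' C++ implementation) of the hierarchical
# behavior idea: joint motion behaviors (JMB) are composed into an arm motion
# behavior (AMB) by a state transition function F, and an exception handler
# wraps the execution. All names and the 12-bit state encoding are illustrative.

from typing import Callable, List

class JointMotionBehavior:
    """A basic behavior bound to one joint/actuator."""
    def __init__(self, name: str, action: Callable[[], None]):
        self.name = name
        self.action = action

    def execute(self) -> None:
        self.action()

class ArmMotionBehavior:
    """AMB_i: a combination of JMBs driven by a transition function F."""
    def __init__(self, name: str, jmbs: List[JointMotionBehavior],
                 transition: Callable[[int, int], int]):
        self.name = name
        self.jmbs = jmbs
        self.transition = transition  # F: (current_state, step) -> next_state

    def run(self, state: int) -> int:
        for step, jmb in enumerate(self.jmbs):
            try:
                jmb.execute()
            except RuntimeError as err:
                # Abnormal-behavior handling: stop and report, keep last state.
                print(f"[{self.name}] exception in {jmb.name}: {err}")
                return state
            state = self.transition(state, step)
        return state

# Example: a 12-bit state vector where each bit is one posture component.
def set_bit(state: int, step: int) -> int:
    return state | (1 << step)

extend = JointMotionBehavior("extend_arm_2", lambda: print("arm 2 extends"))
rotate = JointMotionBehavior("rotate_arm_1", lambda: print("arm 1 rotates"))
amb = ArmMotionBehavior("approach_insulator", [rotate, extend], set_bit)
print(f"final state: {amb.run(0):012b}")
```

A robot motion behavior (RMB) would then, in the same spirit, compose several such AMB objects together with the walking mechanism under a higher-level transition function.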
Robot Operation Behavior Planning Based on Task Decomposition. For the specific electric maintenance robot, the whole robot operation process can be divided into four basic behaviors, namely, the behavior of robot arm 1, the behavior of robot arm 2, the behavior of work end 1, and the behavior of work end 2. Each behavior can be completed through several robot posture adjustments and also includes multiple robot state transitions. For each behavior of the robot, a corresponding state transfer function can be set separately; by inputting the initial state, the target state, and the behavior parameters, the robot state transition is completed. Therefore, the whole process by which the robot completes the different tasks on the power transmission line, represented by the basic and combined behaviors of the robot, can be obtained as shown in Figure 6. The robot control system is divided into four units, as shown in Figure 7, namely, the environmental monitoring unit, the state recognition unit, the planning decision unit, and the motion control unit. Among them, the environment monitoring unit is mainly used by the robot to detect and perceive its operating environment with the various sensors it carries, which is also the basis for the robot's intelligent decision-making. Based on the monitoring of the robot's local and global environment, the robot can recognize and detect its own operation status. Then, the robot motion planning decision can be made through the behavior database and the FSM model, and finally the motion control unit drives the robot joint motors to realize intelligent autonomous operation control. FSM Design for Insulator Replacement Operation. A finite state machine is a mathematical model that represents a finite number of states together with the transitions and actions between these states. A finite state machine M is a five-tuple M = (K, E, T, S, Z), wherein K is a finite set whose elements are called states; E is a finite alphabet whose elements are called input characters; T is a transition function, a mapping K × E → K; S is an element of K, the unique initial state; and Z is a subset of K, the set of final states. 
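As a concrete illustration of this five-tuple, a table-driven FSM can be sketched as follows; the states and events used here are placeholders, not the full nineteen-state insulator-replacement machine of Figure 8.

```python
# Minimal table-driven sketch of the five-tuple M = (K, E, T, S, Z).
# The states and events below are illustrative placeholders, not the full
# nineteen-state insulator-replacement FSM described in the paper.

K = {"initial", "online", "positioned", "clamped", "done"}          # states
E = {"go_online", "locate_clamp", "clamp_plate", "push_pin"}        # events
T = {                                                               # K x E -> K
    ("initial", "go_online"): "online",
    ("online", "locate_clamp"): "positioned",
    ("positioned", "clamp_plate"): "clamped",
    ("clamped", "push_pin"): "done",
}
S = "initial"                                                       # initial state
Z = {"done"}                                                        # final states

def run(events):
    """Drive the FSM through a sequence of events; ignore undefined pairs."""
    state = S
    for event in events:
        state = T.get((state, event), state)
        print(f"event={event:12s} -> state={state}")
    return state in Z

print("reached final state:", run(["go_online", "locate_clamp",
                                   "clamp_plate", "push_pin"]))
```

The transition table T plays the role of the state transfer function: each valid (state, event) pair maps to exactly one successor state, and invalid pairs leave the state unchanged.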
When the online maintenance robot is performing the insulator replacement operation, the robot's dual operation arms and their ends need to take a series of actions, involving multiple states and state transition rules. These actions have been proven through many experiments to be the most feasible approach; they are set according to the operation task and the robot's own mechanism, and are also based on experience-driven planning. The robot FSM design for insulator replacement is shown in Figure 8. The operation can be divided into nineteen states, which are triggered by nineteen events. The events can be described as follows: (1) operation arm 1 turns forward; (2) operation arm 2 turns forward; (3) robot goes online; (4) operation arm 1 turns backward; (5) operation arm 2 turns backward; (6) operation arm 2 extends; (7) the robot moves forward at a low speed; FSM Design for Drainage Plate Bolt Tightening Operation. According to the above-mentioned bolt tightening operation motion planning and manual operation experience, the FSM design for the robot performing the drainage plate tightening operation is obtained as shown in Figure 9. The operation can be divided into eighteen states, which are triggered by eighteen events. The events are described as follows: (1) operation arm 1 turns forward; (2) operation arm 2 turns forward; (3) robot goes online; (4) operation arm 1 turns backward; (5) operation arm 2 turns backward; (6) operation arm 2 extends; (7) the robot moves forward at a low speed; (8) the robot walking wheels collide with the press-connection-pipe; (9) operation arm 2 moves forward; (10) operation arm 1 turns forward; (11) operation manipulator 2 moves forward; (12) both arms move inward; (13) operation manipulator 1 fixes the bolt head; (14) operation manipulator 2 tightens the nut; (15) the nut has been tightened and the operation is completed; (16) the operation end exits; and (17) the robot returns to the initial state. FSM Design for Damper Replacement Operation. According to the motion planning of the damper operation and manual operation experience, the FSM design for the robot during damper replacement is obtained as shown in Figure 10. The operation can be divided into nineteen states, which are triggered by nineteen events. Design of Object-Oriented Autonomous Operation Control System. Generally speaking, the robot operation process can be divided into two kinds of behavior: arm motion and end-effector motion. Each behavior includes detailed steps for executing each motor. The robot compiles the operation behavior plan according to the task type and stores it in the robot database. The robot autonomously selects the behavior plan to execute after detecting the operation object; that is, the robot behavior planning sequence is established as shown in Figure 11. The sequence includes the motor steps and parameters from the robot's detection of the operation object to the end effector. The robot behavior planning sequence is the optimal motion plan set by the designer based on experience. During autonomous operation, the behavior may be interrupted by uncontrollable factors such as a PC restart. Therefore, the robot is required to have the ability to recover its behavior after the system restarts. The premise is that, after the industrial computer restarts, the robot can "memorize" the motor running value and the corresponding key sensor information of the motion sequence step executed before the failure; the robot can then use this information to continue the motion plan from the breakpoint. Therefore, a state parameter sequence is added on the basis of the robot behavior motion planning sequence and stored in the robot database system to record the robot's real-time status information, including the number of the step in the running action sequence, the execution status of this step, the Hall sensor state information, the count value of the motor executed in this step, and the tilt sensor information. 
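A minimal sketch of such a per-step record and of the resume logic is given below; the field names and the file-based persistence are illustrative assumptions, not the authors' database implementation.

```python
# Illustrative sketch (not the authors' implementation) of the state parameter
# record used for breakpoint recovery: one record per step of the behavior
# planning sequence, persisted so the plan can resume after a restart.
# Field names and the JSON-file persistence are assumptions for illustration.

import json
from dataclasses import dataclass, asdict

@dataclass
class StepState:
    step_index: int          # position in the action planning sequence
    status: str              # "not started", "in progress", or "end"
    hall_sensor: bool        # actuator collision / in-position flag
    motor_count: int         # motor travel recorded for this step
    tilt_deg: float          # inclination sensor reading (robot posture)

def save_state(state: StepState, path: str = "robot_state.json") -> None:
    with open(path, "w") as f:
        json.dump(asdict(state), f)

def resume_step(path: str = "robot_state.json") -> int:
    """Return the step index to continue from after an industrial PC restart."""
    with open(path) as f:
        last = StepState(**json.load(f))
    # A finished step means the plan continues with the next one;
    # an interrupted step is re-entered using its recorded motor count.
    return last.step_index + 1 if last.status == "end" else last.step_index

save_state(StepState(step_index=3, status="in progress",
                     hall_sensor=False, motor_count=1250, tilt_deg=9.8))
print("resume at step:", resume_step())
```

The record is rewritten at every step, so after a crash the most recent record is enough to re-enter the planning sequence at the breakpoint.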
Each step in the state parameter sequence corresponds to a step in the robot action sequence. The status field contains three states: "not started," "end," and "in progress." The Hall sensors record whether each actuator has collided or reached its position. The motor count value records the motor's travel in the current step in real time. The inclination sensor information reflects the robot's posture at that moment, and autonomous operation control can be realized through the interaction of the sensor information carried by the robot itself. After the robot restarts from the breakpoint, it first identifies the step and status in the state parameter sequence and then combines the motor count value and the sensor readings to continue the action planning sequence. In the process of robot adaptive operation, the state information sequence saves, in real time, the movement and environment information under the robot's current behavior, and provides detection and decision-making functions for the robot's autonomous operation behavior. The flowchart of the robot autonomous operation principle is shown in Figure 12. The object-oriented design method is used when designing the autonomous operation control system. Compared with the original process-oriented fixed program, the object-oriented design method converts robot motor control into a natural-language-like description of the robot motion, so the implementation of the program is brief and clear. The robot operation steps can be described as the movements of each mechanism, so that attention to the underlying motor movements and sensor configuration is no longer required, avoiding the complicated and tedious process of process-oriented design. At the same time, the operation program design is clear and the human-machine interaction is simple, which is conducive to the promotion, use, and subsequent maintenance of the robots. Simulation Experiment. In order to verify the effectiveness of the robot FSM behavior control method, three different tasks were used as research objects. For each task, both traditional manual control and the FSM autonomous behavior control proposed in this paper were used to control the robot, and the sensor signals carried by the robot monitored its working status in real time to evaluate the robot's performance under the different control methods. The obtained simulation results are shown in Figure 13. The horizontal axis of the three sets of graphs is time and the vertical axis is the robot swing angle. The inclination angle of the robot operation line is 10 degrees, and each of the three different operation experiments was performed twice. From the three sets of simulation results for insulator replacement, bolt tightening, and damper replacement, it can be seen that, under the FSM autonomous behavior control method, the system swing angle is smaller and the time for the robot to reach the equilibrium state is shorter than with manual control, most notably during damper replacement. The main reason is that, under the FSM autonomous behavior control method, the robot motion planning has been further optimized. 
Some redundant motions in the operation process are eliminated, and multi-joint linkage control is integrated in some time periods. Therefore, the robustness of the robot's motion and its operation efficiency are significantly improved, and the simulation experiment verifies the effectiveness of the FSM-based robot autonomous behavior control method. Field Operation Experiment. In order to verify the effectiveness of the robot autonomous behavior control method under different tasks, a robot physical prototype system was developed for insulator, drainage plate, and damper maintenance operations, and three types of operation tests were carried out on a live line under the jurisdiction of the State Grid Hunan Electric Power Transmission Maintenance Co., Ltd. It can be seen from Figure 14 that, in the insulator replacement process using the autonomous behavior control method, the robot passes smoothly through a series of states and behavior transitions (clamping the bowl head hanging plate, pushing out the W pin, clamping the steel cap, and pushing out the ball head, leaving the insulator in a free state), and the insulator replacement operation is completed successfully. Due to the intelligent behavior control, the actual operation steps of insulator replacement were reduced from the nineteen steps of the FSM theoretical model to six; according to the average of the experimental data, the robot operation efficiency increased, and the robot's autonomous behavior ability and intelligence were significantly improved. It can be seen from Figure 15 that, during the drainage plate maintenance operation, the robot passes smoothly through a series of state and behavior transitions (nut alignment, bolt head alignment, fixing of the bolt head, and tightening of the nut), and the drainage plate fastening operation is completed successfully. Due to the intelligent behavior control, the actual operation steps of drainage plate tightening were reduced from the eighteen steps of the FSM theoretical model to four; taking the average of the statistical data, the robot operation efficiency increased, and the robot's autonomous behavior, especially the autonomous capture and positioning of the drainage plate bolts by the end sleeve of the manipulator, became much more accurate. It can be seen from Figure 16 that, in the damper replacement process, the robot passes smoothly through a series of state and behavior transitions (bolt alignment, bolt tightening, line clamp alignment, and line clamp clamping), and the damper replacement operation is completed successfully. Due to the intelligent behavior control, the actual operation steps of damper replacement were reduced from the nineteen steps of the FSM theoretical model to four. According to the average of the statistical data, the robot operation efficiency increased, and the robot's autonomous behavior, especially the holding of the old damper and the alignment and positioning control of the damper bolts, improved significantly. 
Conclusion (1) According to the maintenance requirements of transmission line insulators, drainage plates, and dampers, the basic configuration of a reconfigurable mobile robot with a wheel-arm compound end that travels along the transmission line has been designed, the corresponding operation motion planning has been proposed, and a physical prototype of the robot suitable for 220 kV live lines has been developed. (2) The robot operation motion behavior can be divided into three categories: joint behavior, operation arm behavior, and robot behavior. Finite state machine models for three different tasks have been designed, and a hierarchical-architecture, finite-state-machine-based autonomous behavior control method for the robot has been proposed. (3) Using the designed FSM model and the robot autonomous behavior control method, three operations have been performed on the 220 kV line: insulator replacement, drainage plate tightening, and damper maintenance. The experimental results show that the autonomous behavior control method can effectively improve the robot's intelligence and operation efficiency. Future Work. The live working robot and its high-voltage transmission line operation environment constitute a complex rigid-flexible coupled system; in order to meet the needs of a complex, changeable line environment and practical application, much research still needs to be carried out on the autonomous behavior control method. The robustness of the method to the line environment, structural parameters, internal and external disturbances, and uncertain factors still needs further research, as do the quantified processing of disturbances and uncertain factors in autonomous behavior motion control, the dynamic modeling of robots in a flexible operation environment, and the corresponding parameter identification methods. Breakthroughs in these key technologies are the key to further improving the robot's operational intelligence. Data Availability The experiment data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this manuscript.
7,014
2020-11-05T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Unsupervised Single-Image Super-Resolution with Multi-Gram Loss : Recently, supervised deep super-resolution (SR) networks have achieved great success in both accuracy and texture generation. However, most methods train on datasets with a fixed kernel (such as bicubic) between high-resolution images and their low-resolution counterparts. In real-life applications, pictures are always disturbed by additional artifacts, e.g., a non-ideal point-spread function in old film photos or compression loss in cellphone photos. How to generate a satisfactory SR image from a specific single low-resolution (LR) input is still a challenging issue. In this paper, we propose a novel unsupervised method named unsupervised single-image SR with multi-gram loss (UMGSR) to overcome this dilemma. There are two significant contributions. Introduction Super-resolution (SR) based on deep learning (DL) has received much attention from the community [1][2][3][4][5][6][7]. Recently, convolutional neural network (CNN)-based models have consistently produced significant improvements in SR generation. For example, the first CNN-based SR method, SRCNN [4], generated more accurate SR images than traditional methods. In general, many high-resolution (HR)-low-resolution (LR) image pairs are the building blocks for DL-SR methods trained in a supervised way. The SR training uses the HR image as the supervised information to guide the learning process. Nevertheless, in practice, we can barely collect enough external information (HR images) for training under severe conditions [8][9][10], e.g., medical images, old photos, and disaster monitoring images. On the other hand, most DL-SR methods train on datasets with a fixed kernel between HR and LR images. In fact, this fixed-kernel assumption creates a fairly unrealistic situation limited to certain circumstances. When a picture violates the fixed spread kernel of the training data, the final performance decreases by a large margin. This phenomenon is also highlighted in ZSSR [11]. In addition, if there are some artifacts, e.g., kernel noise or compression loss, a pre-trained DL model with a fixed kernel relationship will generate rather noisy SR images. As a result, we claim that synthesizing the SR image from a single input may become a solution to the problematic situation mentioned above. Theoretically, SR is an ill-posed inverse problem: many different SR solutions are suitable for one LR input. Intuitively, the more internal information of the LR input is involved in the generation process, the better the result that can be expected. The evolution of DL-SR shows that various carefully designed strategies have been introduced to improve the learning ability. However, as a typical supervised problem, supervised DL-SR models train on limited HR-LR image pairs, and the model is restricted by the training data. In contrast, our method is conducted on single-input SR, i.e., designing an SR model for the one-image-input condition. We define this special condition as the unsupervised SR task following [11]. A new structure is proposed in our model. Moreover, to learn the global feature [12][13][14], we introduce the style loss to the SR task, i.e., the gram loss from style transfer. Experimental results show that the well-designed integrated loss can contribute to better performance in visual perception, as depicted in [15].
Taking advantage of the new structural design and loss functions, we can acquire considerably high-quality SR images both in accuracy and in texture details. Specifically, accuracy refers to pixel alignment, which is commonly measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [2,4,5,7,16,17]. Moreover, texture details are highlighted in some SR methods, such as [3,8,18,19], which try to generate satisfying images in visual perception by minimizing the feature distance between the SR image and its HR counterpart in some specific pre-trained CNN layers. To sum up, in this paper, we propose a new unsupervised single-image DL-SR method with multi-gram loss (UMGSR) (our code is available at: https://github.com/qizhiquan/UMSR). To address the aforementioned issues and improve visual performance, we make three main modifications to the existing approaches. Firstly, we implement a specific unsupervised mechanism. Based on the self-similarity in [20], we denote the original input image as G HR . Then, a degradation operation is applied to obtain the corresponding G LR counterpart. The training dataset is constituted of the G HR -G LR pairs. Secondly, we build a highly efficient framework with residual neural network blocks [21] as building blocks and introduce a two-step global residual learning to extract more information. The experimental results confirm that our approach performs well at texture generation. Thirdly, we introduce the multi-gram loss following [22], which is commonly used in texture synthesis. Accordingly, we form the loss function in UMGSR by combining the MSE loss, the VGG perceptual loss, and the multi-gram loss. Benefiting from these modifications, our model eventually achieves better performance in visual perception than both existing supervised and unsupervised SR methods. A comparison of SR images with different DL-SR methods is shown in Figure 1. There are two main contributions in this paper: • We design a new neural network architecture, UMGSR, which leverages the internal information of the LR image in the training stage. To stably train the network and convey more information about the input, UMGSR combines the residual learning blocks with a two-step global residual learning. • The multi-gram loss is introduced to the SR task, cooperating with the perceptual loss. In detail, we combine the multi-gram loss with the pixel-level MSE loss and the perceptual loss as the final loss function. Compared with other unsupervised methods, our design can obtain satisfying results in texture details and approach SR image generation quality similar to the supervised methods. Figure 1. A comparison of some SR results. The figure shows the generations of ZSSR (an unsupervised DL-SR method), EDSR (a supervised method with the best PSNR score), SRGAN (a method good at perceptual learning), ResSR (the generator of SRGAN), and our proposed method with three different loss functions. From the details, we can infer that more pleasant details are shown in the last pictures. The generations with different loss functions further illustrate the changing route of details. Related Work SR is one of the basic computer vision tasks. In the realm of SR, there are mainly three distinct regimes: interpolation-based methods [23,24], reconstruction-based methods [25], and pair-learning-based methods [1][2][3][4][5]7,11,20,26]. A lot of work has been done to address this issue, e.g., [27][28][29].
Recently, DL models have achieved great success in many CV areas, e.g., [14,[30][31][32]. In the SR area, DL-SR methods have become hugely successful in terms of performance both in accuracy and in perceptual feeling. Most recent achievements come from outstanding DL-based approaches and can be divided into three branches: supervised SR methods, unsupervised methods, and Generative Adversarial Network (GAN)-related methods. Supervised SR methods. After AlexNet [33] first demonstrated the enormous advantage of DL over shallow methods in image classification, a large body of work applied deep CNNs to traditional computer vision tasks. Regarding SR, the first DL-SR method was proposed by Dong et al. in [4,34], which is a predefined upsampling method. It scales up the LR image to the required size before training. Firstly, a traditional SR method (bicubic) is used to get the original scaled SR image. Then, a three-layer CNN is employed to learn the non-linear mapping between the scaled SR image and the HR one. Note that although only three convolutional layers are involved, the result demonstrates a massive improvement in accuracy over traditional methods. Later, researchers succeeded in building sophisticated SR networks to strive for more accurate performance with relatively reasonable computational resources. For example, a new upsampling framework, the Efficient Sub-Pixel Convolutional Neural Network (ESPCN) with five layers, is proposed in [7]. Information from different layers is mixed to obtain the SR result. Meanwhile, the training process works with the small-size LR input, and the scale-up layer is based on a simple but efficient sub-pixel convolution mechanism. Because most layers deal with small feature maps, the total computational complexity of ESPCN drops considerably. The sub-pixel scaling strategy is widely used in subsequent algorithms, such as SRGAN [3] and EDSR [1]. On the other hand, as mentioned in SRCNN, while it is common sense that a deeper model is accompanied by better performance, increasing the number of layers might result in non-convergence. To bridge this gap, Kim et al. designed a global residual mechanism following the residual neural network [21], to obtain a stable and deeper network. This mechanism eventually developed into two approaches: Very Deep Convolutional Networks (VDSR) [5] and the Deeply Recursive Convolutional Network (DRCN) [35]. Due to the residual architecture, both networks can be stacked with more than 20 convolution layers, while the training process remains reasonably stable. The following SR research mostly focuses on designing new local learning blocks. To build a deep and concise network, the Deep Recursive Residual Network (DRRN) is proposed in [6], which replaces the residual block of DRCN with two residual units to extract more complex features. Similar to DRCN, by rationally sharing the parameters across different residual blocks, the total number of parameters of DRRN is kept small, while the network can be further extended to a deeper one with more residual blocks. In DenseSR [36], new feature-extracting blocks from DenseNet [37] contribute to fairly good results. To leverage hierarchical information, Zhang et al. propose the Residual Dense Block (RDB) in the Residual Dense Network (RDN) [17]. Benefiting from the learning ability of local residual and dense connections, RDN achieves state-of-the-art performance.
Besides, the Deep Back-Projection Network (DBPN) [2] employs mutual up-and-down sampling stages and an error feedback mechanism to generate more accurate SR images. Features of the LR input are precisely learned by several repetitive up and down stages. DBPN attains stunning results, especially for large scale factors, e.g., 8×. Unsupervised SR methods. Instead of training on LR-HR image pairs, unsupervised SR methods leverage the internal information of a single LR image. In general, a large body of classical SR methods follows this setting. For example, [38,39] make use of many LR images of the same scene that differ in sub-pixel shifts. If the images are adequate, the point-spread function (PSF) can be estimated to generate the SR image. The SR generations come from a set of LR images with blurs, where pixels in a fixed patch follow a given function. However, in [40], the maximum scale factor of these SR methods is proved to be less than 2. To overcome this limitation, a new approach trained with a single image is introduced in [20]. As mentioned in that paper, there are many similar patches of the same size or across different scales in one image. These similar patches then build the LR-HR image pairs, according to the single input and its scaled derivatives, for PSF learning. The data pre-processing in our work is similar to their idea. However, we adopt a DL model to learn the mapping between LR and SR images. In addition, Shocher et al. introduce "Zero-Shot" SR (ZSSR) [11], which combines a CNN with the single-image scenario. Firstly, the model estimates the PSF as traditional methods do. Then, a small CNN is trained to learn the non-linear mapping from the LR-HR pairs generated from the single input image. In the paper, they prove that ZSSR surpasses other supervised methods in non-ideal conditions, such as old photos, noisy images, and biological data. Another unsupervised DL-SR model is the deep image prior [26], which builds on the assumption that the structure of the network can be viewed as certain prior information. Based on this assumption, the initialization of the parameters serves as the specific prior information in the network structure. In fact, this method suffers from an over-fitting problem if the total number of epochs goes beyond a small limit. To our knowledge, the study of unsupervised DL-SR algorithms has hardly received enough attention, and there is still much room for improvement. GAN-related methods. Generative Adversarial Networks (GANs) [41] commonly appear in image reconstruction tasks, such as [3,19,42,43], and are widely used for more realistic generation. The most important GAN-SR method is SRGAN [3], which aims to generate 4× upsampled photo-realistic images. SRGAN combines the content loss (MSE loss), perceptual loss [43], and adversarial loss in its final loss function. It can obtain photo-realistic images, although its performance on the PSNR and SSIM indexes is relatively poor. In fact, our experiments also support their controversial discovery: a higher-PSNR image does not have to deliver a better perceptual feeling. Besides, in [19], the FAN (face alignment network) is introduced into a well-designed GAN model to yield better facial landmark SR images. Their experiments demonstrate significant improvements both in quantity and quality. Due to the restriction of facial image size, they use 16 × 16 input to produce 64 × 64 output images. However, the FAN model is trained on a facial dataset, and it is only suitable for the facial image SR problem.
Inspired by the progress in GAN-based SR, we combine SRGAN and Super-FAN in our architecture. We also make refined modifications to address the unsupervised training issue. Methodology In this section, all details of the proposed UMGSR are presented in three parts: the dataset generation process, the proposed architecture, and the total loss. For training a DL-SR model under unsupervised conditions, how to build the training data solely from the LR image is the primary challenge of our work. Moreover, we propose a novel architecture to learn the mapping between the generated LR and HR images. We also introduce a new multi-gram loss to obtain more spatial texture details. The Generation of Training Dataset How to generate LR-HR image pairs from one LR input I in is the fundamental task for our unsupervised SR model. Indeed, our work is a follow-up of the unsupervised SR learning in [11,20,44,45]. To generate satisfactory results, we randomly downscale I in within a specific limited scale range, which exploits the low visual entropy inside one image. Therefore, we obtain hundreds of I HR images of different sizes and perform further operations based on these HR images. Most supervised SR methods learn from datasets involving various image contents. The training data acts as a pool of small patches. There are some limitations to this setting: (1) the pixel-wise loss leads to over-smooth performance in the details; (2) supervised learning depends on specific image pairs and performs poorly when applied to significantly different images, such as old photos, noisy photos, and compressed phone photos; (3) no information of the test image is involved in the training stage, although it is crucial for the SR generation. Therefore, supervised SR models access a collection of external references without the internal details of the test image. Figure 2 shows the mentioned drawbacks of supervised methods. It can be inferred from the comparison that the handrails in the SR image of Glasner's method [20] look better than their counterpart from VDSR [5]. There are several similar repetitive handrails in the image, and details of different parts or across various scales can be shared due to their similarity. Training with these internal patches obtains better generations than training with external images. Normally, the visual entropy of one image is smaller than that of a set of different images [46]. Moreover, as mentioned in [11,46], lower visual entropy between images leads to better generation. Based on this consideration, learning with one image will result in an equal or better qualitative result than learning with diverse LR − HR image pairs. In our work, we continue this line of research by training with internal information, as well as incorporating more features. From Figure 1, we can see that our unsupervised method achieves a similar result to the state-of-the-art SR method in common conditions. For non-ideal images, it performs better. Normally, the objective of the SR task is to generate I SR images from I LR inputs, and information from I HR acts as the supervised information during training. However, there are no or few available I HR images for training in some specific conditions, so unsupervised learning seems to be a decent choice. In this circumstance, how to build the HR-LR image pairs upon a single image is a fundamental challenge. In our work, we formulate the dataset from the LR image by a downsampling operation and a data augmentation strategy. This maximized use of internal information contributes to a better quality of I SR .
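A minimal sketch of this kind of single-image pair generation is given below, assuming PyTorch and bicubic interpolation. The function name, the scale range, the pair count, and the use of bicubic resampling are illustrative assumptions consistent with the description, not the authors' exact code.

```python
import random
import torch
import torch.nn.functional as F

def make_pairs(img, n_pairs=1008, sr_factor=4, scale_range=(0.5, 1.0)):
    """img: input LR image tensor of shape (1, C, H, W), values in [0, 1].
    Returns a list of (LR, HR) training pairs built from the single input."""
    pairs = []
    for _ in range(n_pairs):
        # Step 1: randomly down-scale the input to obtain an HR "father" image.
        s = random.uniform(*scale_range)
        hr = F.interpolate(img, scale_factor=s, mode="bicubic", align_corners=False)
        # Step 2: further down-scale by the fixed SR factor to get the LR counterpart.
        lr = F.interpolate(hr, scale_factor=1.0 / sr_factor, mode="bicubic",
                           align_corners=False)
        # Step 3: augment with horizontal/vertical mirroring and 90-degree rotations.
        if random.random() < 0.5:
            hr, lr = torch.flip(hr, dims=[-1]), torch.flip(lr, dims=[-1])
        if random.random() < 0.5:
            hr, lr = torch.flip(hr, dims=[-2]), torch.flip(lr, dims=[-2])
        k = random.randint(0, 3)
        hr, lr = torch.rot90(hr, k, dims=[-2, -1]), torch.rot90(lr, k, dims=[-2, -1])
        pairs.append((lr, hr))
    return pairs
```

The key design point is that every pair is derived from the same input image, so the network only ever sees the internal statistics of that image.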
Based on the generated training dataset, the loss function is described later in this section. Figure 2. The comparison of supervised and unsupervised SR learning under a "non-ideal" downscaling kernel condition. The unsupervised DL-SR method (ZSSR) first estimates the PSF and learns internal information with a small CNN. The supervised method is EDSR, one of the best, which is trained with a large number of image pairs. The comparison shows that the unsupervised method surpasses the supervised method in the repetitive details, which potentially indicates the validity of internal recurrence for SR generation. To obtain a comprehensive multi-scale dataset, we implement the data augmentation strategy on the input image, which is further down-scaled within a certain range. The process is as follows. Firstly, an input image I acts as the father of the I HR images. To use more spatial structure information, we introduce a down-scaling method to produce HR images at various scales, I HR i , i = 1, 2, · · · , n, obtained with several different ratios. Secondly, we further downscale these I HR i with a fixed factor to get their corresponding LR images I LR i (i = 1, 2, . . . , n). Lastly, all these image pairs are augmented by rotation and mirror reflections in both vertical and horizontal directions. The final dataset contains image pairs with different shapes and contents. More information about changes in pixel alignment comes from images at a variety of scales. In summary, all training pairs contain a similar content architecture. Hence, the more pixel-level change information among images of different sizes is involved, the better the result that will be yielded. Unsupervised Multi-Gram SR Network Based on ResSR, our model incorporates a two-step global learning architecture inspired by [19]. Some specific changes are implemented for the unsupervised SR purpose. Architectures of our UMGSR, ResSR, and Super-FAN are shown in Figure 3. There is limited research on unsupervised DL-SR. To our knowledge, ZSSR [11] obtains significant success along the accuracy-pursuing route. They introduced a smaller and simpler image-specific CNN SR model, exploiting the fact that the I HR i and I LR i pairs derived from the same father image have lower diversity than any supervised training image pairs. They announced that a simple CNN was sufficient to learn the SR map. At the same time, to some extent, the growth track of supervised methods with better PSNR indicates an obvious affinity between network complexity and SR generation accuracy. For example, EDSR [1] reports that its performance is significantly improved by extending the model size. Therefore, we propose a more complex unsupervised model, UMGSR, shown in Figure 3c. The total architecture of UMGSR. Generally speaking, the SR network can be divided into several blocks according to the diverse image scales during training. Taking 4× as an example, there are three different inner sizes: the original input, the 2× up-scaling, and the 4× up-scaling. For simplicity, we define these intermediate blocks as L s1 , L s2 , and L s4 . Several blocks are stacked to learn the specific scale information in the corresponding stage. ResSR leverages 16 residual blocks as L s1 for hierarchical convolution computation. The final part contains a 2× scaled block L s2 and a final 4× scaled one L s4 . In general, the total architecture of ResSR can be denoted as 16 − 1 − 1 (i.e., L s1 − L s2 − L s4 ).
From the comparison in Figure 3a-c, the architectures of the three methods are 16 − 1 − 1, 12 − 4 − 2, and 12 − 4 − 2, respectively. The first part of the network contains one or two layers to extract features from the original RGB image. To this end, former methods mostly use one convolutional layer. By contrast, we use two convolutional layers to extract more spatial information, as in DBPN [2]. The first layer leverages a 3 × 3 kernel to generate input features for the residual blocks. It is worth pointing out that there are more channels in the first layer for abundant features. To serve as the source of the global residual, a convolutional layer with a 1 × 1 kernel is applied to resize the layers to match the output of the branch. For the middle feature-extracting part, the total number of residual blocks in all three models is similar. The main difference is the number of scaled feature layers. In fact, as pointed out in Super-FAN, using only a single block at higher resolutions is insufficient for sharp detail generation. Based on Super-FAN, we build a similar residual architecture for better generation. In detail, the middle process is separated into two subsections, and each subsection focuses on learning information at a specific 2× scale. Inheriting the features from the first part, layers in the first subsection extract features at the input size. Because more information of the input is involved here, more layers (12 layers) are employed in the first subsection, which aims at extracting more details of the image and producing sharper details. In contrast to the first subsection, the second one contains three residual blocks for the further 2× scale generation. Global residual learning. Another important change is a step-by-step global residual learning structure. Inspired by ResNet, VDSR [5] first introduced global residual learning in SR, which succeeded in stably training a network with more than 20 CNN layers. Typically, global learning can transmit the information from the input or a low-level layer to a fixed high-level layer, which helps solve the problem of non-convergence. Most of the subsequent DL-SR models introduce a global learning strategy in their architectures to build a deep and complicated SR network. As shown in Figure 3a, the information from the layer just before the local residual learning and the last output layer of the local residual learning are combined in the global residual frame. However, only one scaling block for SR image generation is not enough for the large-scale issue. Therefore, in UMGSR, we arrange the global residual learning in each section: two functional residual blocks with two global residual learning frames. In fact, the first global learning frame ensures stable training, and the closely adjacent second section can leverage similar information of the input image. Local residual block architecture. Similar to SRGAN, all local parts are residual blocks, which have proved to achieve better feature learning results. During the training stage, we also explore the setting in EDSR [1] of abandoning all batch normalization layers. In general, the local residual block contains two 3 × 3 convolutional layers with a ReLU activation layer following each of them. Results of ResSR and EDSR elucidate the superior learning ability of this setting. Pixel, Perceptual, and Gram Losses In the realm of SR, most DL-SR methods train models with the pixel-wise MSE loss.
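Before turning to the losses, the two-step architecture described above can be summarized in a compact PyTorch-style sketch. The block counts (12 and 3) follow the paper's description; the channel counts, the pixel-shuffle upsamplers, and the placement of the 1 × 1 convolution are illustrative assumptions based on ResSR/EDSR conventions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Local residual block: two 3x3 convs, each followed by ReLU, no batch norm."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return x + self.body(x)

class TwoStepSR(nn.Module):
    """Two-step 4x SR: each 2x stage has its own residual blocks and its own
    global residual connection, followed by a sub-pixel (pixel-shuffle) upsampler."""
    def __init__(self, ch=64, n_stage1=12, n_stage2=3):
        super().__init__()
        self.head = nn.Sequential(            # two shallow feature-extraction layers
            nn.Conv2d(3, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 1),             # 1x1 conv feeding the global residual
        )
        self.stage1 = nn.Sequential(*[ResBlock(ch) for _ in range(n_stage1)])
        self.up1 = nn.Sequential(nn.Conv2d(ch, 4 * ch, 3, padding=1), nn.PixelShuffle(2))
        self.stage2 = nn.Sequential(*[ResBlock(ch) for _ in range(n_stage2)])
        self.up2 = nn.Sequential(nn.Conv2d(ch, 4 * ch, 3, padding=1), nn.PixelShuffle(2))
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.head(x)
        f = self.up1(f + self.stage1(f))      # first global residual, then 2x up-scaling
        f = self.up2(f + self.stage2(f))      # second global residual, then 2x up-scaling
        return self.tail(f)
```

The point of the two separate skip connections is that each 2× stage both trains stably and keeps direct access to features close to the input scale.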
The pixel-wise MSE loss is popular because there is a direct relationship between it and the standard PSNR index, which commonly measures the final performance. In [43], a novel perceptual loss is proposed to learn texture details. The new loss calculates the Euclidean distance between specially chosen layers of a pre-trained VGG19 [47] network. In SRGAN [3], the perceptual loss is first introduced to SR, and it shows great power in the generation of photo-realistic details. Another loss for feature learning is the gram loss [13], which is widely used in the realm of style transfer. The gram loss acts as a global evaluating loss, which measures style consistency. To extract more information about spatial structure, we use the multi-gram loss in this paper. Ultimately, the loss function of UMGSR combines the MSE loss, the perceptual loss, and the multi-gram loss. More details are given in the following. Pixel-level loss. The pixel-level loss is used to recover high-frequency information in I SR i with the supervised I HR i . Normally, a traditional l 1 or l 2 norm loss is widely used in DL-SR models, and they can produce results with satisfactory accuracy. In our UMGSR, the MSE loss is also introduced as the principal pixel-level loss for high accuracy. It is defined over all pixels, where W and H are the shape factors of the input and s is the scale factor. The MSE loss contributes to finding the least pixel-level distance among all possible solutions. When measuring accuracy, models achieve the best PSNR and SSIM without using other losses. However, the resulting I SR s suffer from an over-smoothing issue, which leads to an unreal visual feeling. A detailed illustration is given in the experimental part. To deal with this problem, we further employ the perceptual loss and the multi-gram loss. Perceptual loss. To obtain more visually satisfying details, we apply the perceptual loss [43] as in SRGAN [3], which minimizes the Euclidean distance of a pre-trained VGG19 [47] layer between the corresponding HR and SR images. It aims at visually better results, at the cost of reducing the PSNR. To facilitate understanding, we illustrate the architecture of VGG19 in Figure 4. In SRGAN, only one specified layer of VGG19 is involved in the perceptual loss, i.e., VGG 5,4 (the fourth convolution before the fifth pooling layer). Different layers of the network represent various levels of features. In other words, the earlier part learns dense local features, and the later part learns information with larger coverage. As a result, we argue that one layer for the perceptual loss is not enough. To fix this weakness, we propose a modified perceptual loss by mixing perceptual losses from several different layers of VGG19. In our experiments, we use the combination of VGG 2,2 , VGG 2,3 , VGG 3,4 , and VGG 5,4 with different trade-off weights. In fact, this new loss helps us abstract feature information at different feature sizes. Although it is proved in [7] that the perceptual loss in a high-level layer promotes better texture details, we still insist that the training of a DL-SR network is a multi-scale learning process, and more involved information can potentially lead to better results. In our experiments, this combined perceptual loss generates better visual transition details from high-frequency information. Multi-gram loss. In style transfer, the gram matrix measures the relationship among all inner layers in a chosen channel. It supplies the global difference information of all image features.
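For reference on the pixel-level term mentioned above: the paper's own MSE formula is not reproduced here, but a standard per-pixel formulation consistent with the stated shape factors W, H and scale factor s (the SRGAN-style convention this description appears to follow, with G denoting the SR network) would be

```latex
\mathcal{L}_{MSE} \;=\; \frac{1}{s^{2}WH}\sum_{x=1}^{sW}\sum_{y=1}^{sH}
\left( I^{HR}_{x,y} - G\!\left(I^{LR}\right)_{x,y} \right)^{2}
```

i.e., the mean squared difference over all pixels of the up-scaled output.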
The gram loss was first introduced to DL in [13], which trains a DL network with the gram loss as a style loss and the MSE loss as a content loss between two images. In SR, I HR i and I SR i share a similar spatial architecture and features. More spatially invariant information can be extracted from the feature correlations at different sizes. We introduce the multi-gram loss [22] in UMGSR to generate better visual details; [22] first proposed the multi-gram loss, computed from a Gaussian pyramid in a specific layer. Our redesign of the multi-gram loss for the SR purpose is as follows. In detail, the first function calculates the gram matrix in a specific layer. The indices i, j, r, s represent different feature maps: i, j in the r-th layer and the s-th scale octave of the Gaussian pyramid. The second function measures the gram loss between the source image and its counterpart. The last function refers to the specially chosen layers where we expect to extract the gram loss. The values of v and w are chosen from 1 or 0, to keep or abandon the gram loss of one certain scale layer, respectively. The multi-gram loss determines the overall global texture of the image, in contrast to the perceptual loss on local features. Each of them serves as a complement to the other. The experiments show their positive effect on the details of the final SR output. In general, the final loss of UMGSR is constituted by summing up all three losses with specific trade-off factors. Experiments In this part, we conduct comparison and ablation experiments to evaluate our proposed UMGSR. All of our models are trained on an NVIDIA TITAN XP GPU with a 4× scale factor. There are three parts as follows: Setting Details Because just one image acts as the input of UMGSR, we choose all input images I in from three different benchmark datasets (the Set14 dataset [48], the DIV2K dataset [49], and the PIRM dataset [15]), to conduct a fair comparison with other supervised and unsupervised methods. Images whose content corresponds to various complicated conditions are qualified as realistic ones. Training setting details. As mentioned in the methodology part, we first apply the data augmentation strategy to form the training dataset from I in . To obtain I HR i (i = 1, 2, . . . , n), we randomly scale I in in the range of 0.5 to 1, followed by rotations of I HR i in both horizontal and vertical directions. In addition, we do not apply random cropping, so that more information of I in can be kept. The initial learning rate is set to 0.001 and is halved each time the number of remaining epochs drops by half. We use Adam (β 1 = 0.9, β 2 = 0.999) to optimize the objective. The patch size is 30 × 30, and the corresponding HR size is 120 × 120. The I LR i (i = 1, 2, . . . , n) images are smaller since they are 4×-8× down-scaled from the I in images. We set the total number of training epochs to 4000. Ablation setting. In the following, we demonstrate the influence of the proposed changes in UMGSR by ablation analysis. To this end, firstly, we train our model only with the MSE loss. Secondly, we use both the MSE loss and the perceptual loss. Here, we also compare the single perceptual loss with the combined one to evaluate its influence. Finally, we investigate the performance with the total loss, combining the MSE, the perceptual, and the multi-gram losses. Except for the loss function, all other settings are kept consistent.
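As a concrete illustration of the multi-gram loss described earlier, the sketch below computes gram matrices of a chosen VGG feature map over a small pyramid of scales and sums their distances. The pyramid is approximated here with average pooling rather than a true Gaussian pyramid, and the function names, scale count, and 0/1 weights are illustrative assumptions; as noted later in the paper, the chosen layers must be large enough to survive repeated down-scaling.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """feat: (B, C, H, W) feature map -> (B, C, C) channel-correlation (gram) matrix."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def multi_gram_loss(feat_sr, feat_hr, n_scales=5, weights=None):
    """Sum of gram-matrix distances over a pyramid of scales of one feature layer;
    a weight of 0 or 1 drops or keeps the gram loss of a given scale."""
    if weights is None:
        weights = [1.0] * n_scales
    loss = 0.0
    for s in range(n_scales):
        if weights[s]:
            loss = loss + F.mse_loss(gram_matrix(feat_sr), gram_matrix(feat_hr))
        # down-scale both feature maps for the next pyramid octave
        feat_sr = F.avg_pool2d(feat_sr, kernel_size=2)
        feat_hr = F.avg_pool2d(feat_hr, kernel_size=2)
    return loss
```

Because the gram matrix discards spatial position and keeps only channel correlations, this term constrains the global texture statistics, complementing the spatially local perceptual loss.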
We compare in parallel the generations of UMGSR (with different loss functions and structures), EDSR (https://github.com/thstkdgus35/EDSR-PyTorch), SRGAN (https://github.com/tensorlayer/srgan), and ZSSR (https://github.com/assafshocher/ZSSR). All generations are obtained with the pre-trained models from the URL links. All results are compared in PSNR (Y channel), which measures the accuracy in pixels, and in another total distribution index: the spectral image. Moreover, we further present a detailed comparison of the same patch from all generations. Structure setting. UMGSR with 15 residual blocks is shown in Figure 3. In detail, the former 12 blocks are used to extract the first 2× features from the input. The remaining three residual blocks inherit information from the previous 2× scaled blocks and achieve the 4× up-scaling. All filter sizes are equal to 30 × 30, and all residual blocks include 64 channels for feature learning, in contrast to 256 channels in the deconvolutional part. We train the model with the 1008 HR-LR image pairs generated from one image. Ablation Experiments Training when β and γ are equal to zero. As in most DL-SR methods, we use the MSE loss as the basic loss function. In this setting, our model is similar to ResSR except for a single difference in the total architecture. To show the effect of the new structure, we compare the two with only this structural difference. The final results of these two methods are shown in Figure 5. From the results, we can see that our two-step network produces pictures with a more natural feeling than ResSR. In addition, the spectral comparison in Figure 5 shows that the two-step network generates more accurate features. There is less blur information in the red rectangular area when the two-step strategy is used. Training when γ equals zero. In this part, we introduce the perceptual loss into the loss function. To be specific, layers VGG 2,2 and VGG 4,3 of VGG19 are used in the final loss function by fixing α 1 = 0.3 and α 3 = 0.7 in (3). Here, to comprehensively distinguish the effect of the perceptual loss, we display the comparison between training with only the perceptual loss and training with the combination of MSE and perceptual loss in Figure 6. From the detail contrast, we can tell that with the single perceptual loss, many features in local blocks are missing. In our opinion, this phenomenon is due to the upsampling stage, where the input must be enlarged by bicubic interpolation to the required input size of the VGG network, i.e., 224 × 224. However, the I SR and I HR images in UMGSR are 120 × 120. As a result, a lot of unfitting information appears in the up-scaled images. This local mismatching information further results in poor generations. Training with all loss settings. In this part, we use the loss incorporating the MSE loss, the perceptual loss, and the new multi-gram loss. With the multi-gram loss, the network learns the feature map in both global and local aspects. Because the multi-gram loss measures spatial style losses, it leads to visually better results both in details and in shapes. Regarding the hyper-parameters, α = 1 and β = γ = 2 × 10 −6 . This setting was proved to be useful by SRGAN. In general, the final loss function is: Loss total = Loss mse + 2 × 10 −6 Loss vgg 5,4 + 2 × 10 −6 Loss gram . The multi-gram loss is somewhat similar to the perceptual loss. Both compute a loss from inner layers of a pre-trained VGG network with the final SR image and its corresponding HR image as the inputs. For the multi-gram loss, VGG 2,1 and VGG 3,2 are chosen as the specified loss layers.
All chosen layers are down-scaled to five pyramid sizes for spatial adaptation. The size of the chosen layers must be large enough. Then, five different pyramid-like sub-layers are used to calculate gram losses as mentioned in Section 3.3. Similar to the perceptual loss, extra noise appears in the SR results if the model is trained only with the multi-gram loss. The final PSNR values of the images are summarized in Table 1, and the visual comparison is shown in Figure 7. With the introduction of the multi-gram loss, more pleasant features appear in the generations, which can be clearly observed in Figure 1. Furthermore, the MSE change chart in Figure 8 shows the advantage of the final loss (the combination of MSE, perceptual, and multi-gram losses). Discussion In this paper, we compare the proposed UMGSR with other state-of-the-art supervised and unsupervised methods using both the traditional PSNR value and the power-spectrum image contrast. Regarding the unsupervised setting of UMGSR, more analysis needs to be involved to better evaluate its performance. On the other hand, the latest research in [50] suggests that there is a trade-off between distortion and perception. Our research pays much attention to visually satisfactory generation, which hurts the PSNR to some extent. Hence, traditional accuracy measurements, such as MSE, PSNR, and SSIM [51], cannot justify the advantage of our method properly. We exhibit the SR results of five different methods, EDSR, ZSSR, SRGAN, UMGSR (MSE), and UMGSR (total loss), together with the HR images, in Figure 9. The PSNR scores are shown in Table 1. In detail, image 1 is from DIV2K [49]; it acts as a training image of EDSR. According to the PSNR values, EDSR achieves the best result. On the other hand, from Figure 7, we can infer that UMGSR produces SR images with more carved details, leading to a better visual feeling than EDSR. This conclusion is in keeping with the viewpoint of SRGAN: a higher PSNR does not guarantee a better perceptual result. This phenomenon is fairly obvious in the comparison between UMGSR with MSE loss and with total loss. In unsupervised SR learning, the PSNR of ZSSR is much higher than ours, while its SR images show worse visual details. To highlight the difference among these methods, we compare the SR images by their 3D power-spectra [52] in Figure 9. From the spectrum distribution, we can clearly see the distribution of the whole image. It distinctly shows that our method is much better than ZSSR and EDSR, which generate obvious faults. We assume that this is due to the mixture loss leading to a better texture generation ability in our model. Figure 7: HR, EDSR, ZSSR, UMGSR with MSE loss, and UMGSR with total loss. A smooth edge of the spectrum reflects more colorful details, and a sharp fault means the lack of some color range. Even though an abundant power spectrum does not mean accuracy, it does indicate more vivid details in the image. As a result, our model can generate more dramatic features than accuracy-pursuing models (EDSR, ZSSR). To better evaluate these models, we show the generations of the same chosen patch in Figure 7. These results show that traditional accuracy-pursuing SR methods generate rough details and better shape lines, while UMGSR (total loss) results in satisfactory performance in image details, which is even better than the supervised SRGAN. This is also verified in the 3D power-spectrum image, where our result is quite similar to the HR. In general, high-frequency information (like shape lines) is more sensitive to accuracy-driven methods, such as EDSR.
Meanwhile, SR images generated by these methods hardly provide a pleasant visual feeling; their overall appearance is like drawn or cartoon images. For example, the Roma Desert scene (the second test image, 3rd and 4th rows in Figure 7) generated by EDSR shows sharper edges but an untrue effect. Perception-pursuing models (like SRGAN and UMGSR) generate more photo-realistic features accompanied by inaccurate pixel-level information. For example, SRGAN introduces rough details in local parts far away from the ground truth, especially in large flat areas. In our opinion, this is a common weakness of GAN-related SR methods. In particular, our two-step learning partly overcomes it. Accordingly, the SR images of UMGSR show better shapes than SRGAN along with a better visual feeling than EDSR. Conclusions and Future Work In this paper, we propose a new unsupervised SR method, UMGSR, for the scenario where no supervised HR image is involved. Compared with former supervised and unsupervised SR methods, UMGSR mainly introduces both a novel architecture and a new multi-gram loss. With these modifications, our UMGSR can address the SR problem with a single input in any condition. Experimental results show that UMGSR can generate better texture details than other unsupervised methods. In future work, we will pay more attention to combining our model with GANs on supervised SR problems. Author Contributions: Project administration, funding acquisition, guidance for the research, and revision of the paper, Y.S.; writing-original draft preparation, data curation, software, methodology, writing-review and editing, and supervision, B.L.; supervision, writing-review and editing, and funding acquisition, B.W.; guidance for the research, conceptualization, software, validation, and supervision, Z.Q.; visualization, supervision, J.L. Conflicts of Interest: The authors declare no conflict of interest.
9,002
2019-07-26T00:00:00.000
[ "Computer Science" ]
A Study of the Impact of Digital Technology Capabilities on Firm Performance—A Moderated Mediation Model : Based on dynamic capability theory and resource-based theory, this study analyzes the mechanism by which digital technology capability affects enterprise performance, using 556 questionnaires from grass-roots employees and middle managers in technology-based enterprises as the data sample, and explores the mediating role of strategic flexibility and the moderating roles of industry technological opportunities and organizational absorptive capacity. The study shows that: first, digital technology capability contributes significantly to strategic flexibility and to enterprise performance; second, strategic flexibility has a mediating effect between digital technology capability and enterprise performance; third, industry technological opportunities play a positive moderating role between digital technology capability and strategic flexibility; and fourth, organizational absorptive capacity has a positive moderating effect on the relationship between strategic flexibility and enterprise performance. By exploring the mechanism linking digital technology capability and firm performance, this study is of great significance in guiding firms to improve their performance and emphasize organizational absorptive learning capability. Introduction The emergence of artificial intelligence, cloud computing, distributed computing, and other underlying digital technologies has fueled the rapid development of new forms of the digital industry such as digital transformation, intelligent supply chains, and intelligent manufacturing, and the industrial system has entered the era of the digital economy. While digital technology supports the booming development of the digital economy, it is also gradually being introduced into production operations and daily management by enterprises. According to dynamic capability theory, in the face of an ever-changing external environment, enterprises must cultivate dynamic capabilities that can effectively respond to environmental changes and resource consumption in order to maintain competitive advantages. Relying on information technology and digital platforms, enterprise organizations can keep abreast of the environment and market information, and can quickly reorganize and optimize the allocation of resources to cope with environmental changes, so as to enhance their competitive advantages and industry status [1]. Digital technology capability is an extended capability based on digital technology and digital platforms, specifically referring to the ability of enterprises to apply digital technology and management expertise in the process of developing new digital products [2]. At present, more and more enterprises utilize digital technology to improve enterprise performance. Research on the mechanism linking digital technology capability and enterprise performance is still in the exploratory stage.
Digital technology capability has a driving effect on enterprise performance, manifested in the following aspects. First, digital technology capability enhances an enterprise's access to external resources and the information retrieval advantage of individuals within the organization, accelerating the updating of the organization's internal knowledge accumulation while vigorously promoting the integration and coherence of knowledge accumulated in different fields [3]. Second, the vigorous promotion and application of digital technology can help enterprises and organizations realize the sharing and flow of internal and external resources, thus avoiding problems such as the lack of core competitiveness and the lack of sustained capacity, and creating new business development models [4]. Third, digital technology capabilities can reduce the hierarchical dispersion of enterprise organizations, create a flat structure, and provide a more flexible and efficient organizational management model for the business of enterprise organizations. Finally, the application of digital technology capabilities can penetrate deeper into the whole process of the supply chain, including upstream cooperative suppliers and downstream customers, and extend the improvement of organizational performance to the entire supply chain, which not only helps enterprises to form new thinking modes and solutions in the process of development, but also points out the direction of the enterprise's future growth and reduces hindering factors. The greater the degree of development of digital technology, the greater the likelihood that an organization will create valuable products that contribute to improved business performance.
Strategic flexibility is an enterprise's ability to flexibly mobilize and allocate internal resources and to construct agile production and operation processes, reflecting the ecological adaptability of an enterprise in a compact and changeable competitive market environment. With the intensification of competition in digital technology, strategic flexibility has become an important ability for enterprise organizations to deploy all kinds of internal resources. According to resource-based theory and dynamic capability theory, the technical capability of an enterprise is an important source of sustainable competitive advantage. In the face of a turbulent survival environment, if Chinese enterprises want to obtain more production materials, grasp a larger market share, and reap excess market profits in the process of survival and development, they have to improve their technical capability and form and maintain their own competitive ability, which requires them to reallocate resources, rapidly adapt to the changing market environment, and quickly adjust their original strategies within a short time [5]. Therefore, the improvement of enterprise digital technology capability requires strategic flexibility, which can not only help enterprises integrate and utilize the various resources needed for internal and external survival, but also enable enterprises to utilize and configure social networks, thus promoting the formation of their core competitiveness [6]. As a new type of knowledge and technology resource, digital technology capability can help enterprises try and develop various strategic approaches more conveniently. For enterprises, the digital technological capabilities they acquire must be flexibly adjusted before they can be further transformed into performance-enhancing dynamics [7]. Enterprises with high digital technological capabilities are able to recombine their own resources more quickly; at the same time, enterprises with high strategic flexibility can efficiently and promptly adjust the state of the enterprise organization according to the different characteristics of the competitive market environment, so as to achieve performance improvement through resource integration. Therefore, this study hypothesizes that strategic flexibility plays a mediating role between digital technology capability and firm performance.
Industry technological opportunity refers to the degree of stability and potential for performance reform that the entire industry in which a company is located can provide to the company in terms of performance technology development and digital-technology-related resource stockpiling, which influences the company's ability to recognize new technological capabilities ahead of time and the efficiency with which it carries out corresponding performance activities [8]. In an industry with low technological opportunities, the process of identifying cutting-edge digital technological capabilities and changes in the market environment will be slower, and the development of emerging digital technological capabilities and R&D investment will be hindered; in an industry full of technological opportunities, the enterprise will be able to detect market technological trends in a timely and keen manner, make sound technological reserves for the relevant performance technologies, and carry out corresponding performance technology activities. Digital technology capability can innovate the enterprise's organizational coordination ability, and against the background of a higher degree of industry technological opportunity, the enterprise will be more sensitive to the development trends of digital technology capabilities, thereby increasing its ability to use its own digital technology capabilities to coordinate strategic flexibility [9]. Therefore, this study hypothesizes that industry technological opportunities have a positive and significant moderating effect on the relationship between digital technology capabilities and strategic flexibility.
Organizational absorptive capacity refers to the ability of an organization to learn, absorb, and transform the application of new technological capabilities by making use of its own relevant knowledge reserves or resource advantages. Resource-based theory and organizational learning theory suggest that enterprises can acquire resources with competitive advantages through organizational learning, the use of cutting-edge knowledge and technology, and the construction of resource-based external networks, and that by effectively managing the use of these special resources they can gain a better development advantage [10]. In order to link resources and the organization more closely, the enterprise needs to improve organizational absorptive capacity, linking people and knowledge together through organizational learning and turning advantageous resources outside the organization into the enterprise's internal personal and organizational skills and reserves. Enterprises with strong absorptive capacity tend to have rich learning ability, a flexible organizational structure, an efficient absorption and conversion mechanism, and strong practical implementation ability, which can improve the enterprise's awareness of new technologies, new products, and performance, and then convert it into efficiency and sustainable profits [5]. The strategic flexibility of an enterprise determines the degree of flexibility and coordination of digital technology capabilities, and the absorptive capacity affects the efficiency with which the enterprise converts digital technology capabilities into performance effectiveness. The stronger the absorptive capacity, the better the enterprise is able to dismantle and analyze digital technology capabilities, the greater the breakthrough effect on performance improvement, and the higher the enterprise's corporate performance [2]. Therefore, this study hypothesizes that organizational absorptive capacity has a positive and significant moderating effect on the relationship between strategic flexibility and corporate performance.
In summary, this paper focuses on the following issues: exploring the impact of digital technology capabilities on enterprise performance with strategic flexibility as a mediating variable; studying the moderating effect of industry technological opportunity on the relationship between digital technology capability and strategic flexibility; and exploring the moderating effect of organizational absorptive capacity on the relationship between strategic flexibility and enterprise performance, so as to enrich the theoretical results and provide references and lessons for the practice of enterprise performance enhancement. The research model of this paper is shown in Figure 1. Sample Characteristics To reduce the potential impact of common method bias, this study adopted a two-stage longitudinal research design with data collection at two time points, March 2023 and June 2023. All questionnaires were distributed on-site and returned on the spot. At time point 1 (March 2023), we invited research respondents to evaluate digital technology capabilities, strategic flexibility, and firm performance, and to report demographic variables. A total of 675 questionnaires were distributed and 612 valid questionnaires were returned, a valid return rate of 90.67%. Two months later, at time point 2 (June 2023), in order to obtain data on industry technological opportunities and organizational absorptive capacity for the research sample, we again distributed questionnaires containing the industry technological opportunity and organizational absorptive capacity variables to the 612 respondents who had completed the first questionnaire. In the end, 556 valid questionnaires were obtained, for a validity rate of 90.85%. Measurement of Variables The questionnaires in this study were based on well-established domestic and international scales, and the variables were measured on a classic 5-point Likert scale, with "1" meaning "not at all consistent" and "5" meaning "fully consistent". Organizational absorptive capacity (OAA) was measured using the five-item Organizational Absorptive Capacity Scale developed by Wang et al.; Cronbach's α = 0.852. Descriptive Statistics Pearson's coefficient was used to indicate the correlation between the variables, and the specific correlation coefficients are shown in Table 1. As can be seen in Table 1, digital technology capability is significantly positively correlated with strategic flexibility (r = 0.479, p < 0.01) and with corporate performance (r = 0.481, p < 0.01); meanwhile, strategic flexibility and enterprise performance are also significantly positively correlated (r = 0.460, p < 0.01). The results of the above correlation analysis provide preliminary evidence for the subsequent hypothesis testing.
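For readers unfamiliar with the reliability coefficient reported above, the sketch below shows the standard Cronbach's α formula for a multi-item scale. The variable names and the random demo data are purely illustrative and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert responses.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative example: random 5-point responses for a five-item scale
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(556, 5))
print(round(cronbach_alpha(demo), 3))
```

Values above roughly 0.7-0.8, such as the 0.852 reported for the absorptive capacity scale, are conventionally taken to indicate acceptable internal consistency.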
Validity Analysis Validity reflects whether the measurement instrument accurately measures the construct of interest. In this study, convergent validity was tested by confirmatory factor analysis (CFA) in Mplus 8. The factor loadings of each question item on its scale were derived from the CFA, and the composite reliability (CR) and average variance extracted (AVE) of each scale were calculated to assess convergent validity, as shown in Table 2. The composite reliability (CR) of all variables was above the acceptable level of 0.80, and the average variance extracted (AVE) also exceeded the acceptable level of 0.50. Therefore, the measurement items of each variable reflect the same construct and the convergent validity of the variables is good. To test the discriminant validity among the variables, this study used CFA in Mplus 8 to compare alternative factor models in terms of goodness of fit; the results are shown in Table 3. Compared with the four-factor, three-factor, two-factor and one-factor models, the five-factor model has the best goodness of fit, with χ²/df = 1.549 (less than 3), CFI = 0.985 and TLI = 0.982 (both greater than 0.9), and RMSEA = 0.031 and SRMR = 0.027 (both less than 0.08). Therefore, the five-factor model fits best and the discriminant validity among the variables is good. Hypothesis Testing To test the mediating role of strategic flexibility between digital technology capabilities and firm performance, model 4 of the SPSS macro program PROCESS was used. The results showed that digital technology capability significantly predicted strategic flexibility (a = 0.505, SE = 0.039, p < 0.001); when digital technology capability and strategic flexibility entered the regression equation simultaneously, digital technology capability still significantly predicted firm performance (c' = 0.346, SE = 0.138, p < 0.001) and strategic flexibility significantly predicted firm performance (b = 0.289, SE = 0.039, p < 0.001). The bias-corrected percentile Bootstrap test indicates that the mediating effect of strategic flexibility between digital technology capabilities and firm performance is significant, ab = 0.146, Boot SE = 0.029, with a 95% confidence interval of [0.091, 0.206]. The mediating effect as a proportion of the total effect is ab/(ab + c') = 29.61%. 
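The percentile-bootstrap test of an indirect effect reported above can be illustrated with a minimal ordinary-least-squares sketch. This is only an illustrative equivalent of what PROCESS model 4 computes, assuming standardized variables and no covariates; the variable names and simulated data are hypothetical, not the study's data.

```python
import numpy as np

def ols_slope(y, X):
    """Return coefficients of y ~ X (X already includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def bootstrap_mediation(x, m, y, n_boot=5000, seed=1):
    """Percentile-bootstrap estimate of the indirect effect a*b for X -> M -> Y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab_samples = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        ones = np.ones(n)
        a = ols_slope(mb, np.column_stack([ones, xb]))[1]       # X -> M
        b = ols_slope(yb, np.column_stack([ones, xb, mb]))[2]   # M -> Y controlling for X
        ab_samples[i] = a * b
    lo, hi = np.percentile(ab_samples, [2.5, 97.5])
    return ab_samples.mean(), (lo, hi)

# Hypothetical standardized data: capability (x), strategic flexibility (m), performance (y).
rng = np.random.default_rng(0)
x = rng.normal(size=556)
m = 0.5 * x + rng.normal(scale=0.8, size=556)
y = 0.35 * x + 0.3 * m + rng.normal(scale=0.8, size=556)
ab, ci = bootstrap_mediation(x, m, y)
print(f"indirect effect ab = {ab:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```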
Second, model 21 of the SPSS macro program PROCESS was used to test the moderating roles of organizational absorptive capacity and industry technology opportunity. None of the variance inflation factors of the predictor variables in this study exceeded 5, indicating no multicollinearity problem. As shown in Table 4, in Equation 1 digital technology capability and industry technology opportunity significantly and positively predict strategic flexibility (β = 0.572, SE = 0.028, p < 0.001), indicating that industry technology opportunity positively moderates the relationship between digital technology capability and strategic flexibility. In Equation 2, strategic flexibility and organizational absorptive capacity significantly and positively predict firm performance (β = 0.469, SE = 0.021, p < 0.001), indicating that organizational absorptive capacity positively moderates the relationship between strategic flexibility and firm performance. To explain the nature of the interaction between digital technology capability and industry technology opportunity more clearly, industry technology opportunity was split into high and low subgroups at one standard deviation above and below the mean, a simple slope test was performed and a simple effect plot was drawn (Figure 2). The results show that for the low subgroup, i.e. low industry technology opportunity, digital technology capability significantly and positively predicts strategic flexibility (Bsimple = 0.153, SE = 0.035, p < 0.001); for the high subgroup, i.e. high industry technology opportunity, the positive predictive effect of digital technology capability on strategic flexibility is significant and larger (Bsimple = 1.068, SE = 0.41, p < 0.001). To explain the nature of the interaction between strategic flexibility and organizational absorptive capacity, organizational absorptive capacity was likewise split into high and low subgroups at one standard deviation above and below the mean, a simple slope test was conducted and a simple effect plot was drawn (Figure 3). The results show that for the low subgroup, i.e. enterprises with low organizational absorptive capacity, strategic flexibility significantly and negatively predicts enterprise performance (Bsimple = -0.085, SE = 0.033, p < 0.01); for the high subgroup, i.e. enterprises with high organizational absorptive capacity, strategic flexibility significantly and positively predicts enterprise performance (Bsimple = 0.970, SE = 0.042, p < 0.001). 
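The simple-slope probing described above (splitting the moderator at the mean plus or minus one standard deviation) can be sketched as follows. This is a generic illustration of the technique under the assumption of a single moderated regression with standardized variables; the variable names and simulated data are hypothetical, not the study's.

```python
import numpy as np

def simple_slopes(x, w, y):
    """Fit y ~ x + w + x*w by OLS and report the slope of x at w = mean +/- 1 SD."""
    ones = np.ones(len(x))
    X = np.column_stack([ones, x, w, x * w])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b_x, b_xw = beta[1], beta[3]
    w_lo, w_hi = w.mean() - w.std(ddof=1), w.mean() + w.std(ddof=1)
    return {"low moderator": b_x + b_xw * w_lo, "high moderator": b_x + b_xw * w_hi}

# Hypothetical standardized data: strategic flexibility (x), absorptive capacity (w),
# firm performance (y), generated with a positive interaction term.
rng = np.random.default_rng(2)
x = rng.normal(size=556)
w = rng.normal(size=556)
y = 0.2 * x + 0.1 * w + 0.4 * x * w + rng.normal(scale=0.8, size=556)
print(simple_slopes(x, w, y))
```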
Conclusion On the basis of existing studies of digital technology capabilities and firm performance, this study explores the direct influence mechanisms of digital technology capabilities on strategic flexibility and on firm performance, and further explores the moderating roles of industry technology opportunity and organizational absorptive capacity. Based on the empirical research, this paper reaches the following conclusions: (1) Digital technology capability contributes significantly to strategic flexibility. (2) Digital technology capability contributes significantly to firm performance. (3) Strategic flexibility mediates the relationship between digital technology capability and firm performance. (4) Industry technology opportunity positively moderates the relationship between digital technology capability and strategic flexibility. (5) Organizational absorptive capacity significantly and positively moderates the effect of strategic flexibility on firm performance. Research Significance and Outlook Based on dynamic capability theory, this study constructs a theoretical model of the effect of digital technology capability on enterprise performance and confirms its facilitating effect, thereby enriching research on the consequence variables of digital technology capability and broadening research on the antecedent variables of enterprise performance. Based on the resource-based theory, we systematically analyse the transmission mechanism of strategic flexibility between digital technology capability and enterprise performance from the perspective of rational resource allocation: strategic flexibility is the ability of enterprises to allocate and coordinate resources effectively, and it is also a means for enterprises to cope with an external environment full of uncertainty. The mediating role played by strategic flexibility has theoretical significance in explaining the path through which digital technology capability influences enterprise performance, and provides a more complete analytical and explanatory framework for subsequent research on the effects of digital technology capability. Finally, this study considers industry opportunity as an external factor and organizational absorptive capacity as an internal factor, treating them as important boundary conditions of the effect of digital technology capability on enterprise performance, and empirically examines their moderating roles, which helps to deepen the understanding of when and why digital technology capabilities affect enterprise performance. 
For future research, this paper offers the following outlook. Knowledge and technology are key capabilities for enterprises seeking to remain competitive in today's marketplace, and the questions of how enterprises can fully absorb new digitally based capabilities and how they can convert knowledge and technology into competitive advantage [13] have long been explored in the academic community; they will remain a research hotspot in business administration and receive further attention from relevant scholars. In addition, as enterprises develop continuously in a complex and changing market environment, the pursuit of efficiency and performance will raise endless problems and challenges, which require the joint attention and efforts of academics, industry experts and enterprise management [14]. Secondly, the economic benefits that digital technology capabilities bring to enterprises have not been addressed in this paper, and it is worth examining whether the economic benefits generated by the introduction of digital technology can offset the costs incurred in purchasing the technology and allocating related resources in the early stages [15]. Finally, this paper mentions the continuous change of the internal and external environment of the organization during the firm's decision-making process, but does not sufficiently consider environmental turbulence; future research could therefore further explore the strategic change of firms in turbulent environments through technological and product performance as well as the adoption of performance-based decision-making and proactive behaviours. Figure 2: The moderating role of industry technology opportunities. Figure 3: The moderating role of organizational absorptive capacity. Table 1: Descriptive statistics and correlation analysis. Table 4: Results of hypothesis testing.
4,837
2023-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Superentropic Black Hole Shadows in Arbitrary Dimensions We investigate the shadow behaviors of superentropic black holes in arbitrary dimensions. Using the Hamilton-Jacobi formalism, we first obtain the associated null geodesic equations of motion. With the help of a spherical stereographic projection, we discuss the shadows in terms of one-dimensional real curves. Fixing the mass parameter m, we obtain certain shapes that differ remarkably from the four dimensional geometric configurations. We then study their behaviors by varying the black hole mass parameter. We show that the shadows undergo certain geometric transitions depending on the spacetime dimension. In terms of a critical value mc, we find that the four dimensional shadows exhibit three configurations, namely the D-shape, the cardioid and the naked singularity, associated with m>mc, m = mc and m<mc, respectively. We reveal that the D-shape passes to the naked singularity via a critical curve called the cardioid. In higher dimensions, however, we show that such transitional behaviors are removed. Introduction Black hole physics has received remarkable interest for many years. It has become central to the understanding of quantum gravity models. The associated contributions have been supported by the gravitational wave detections and by the black hole imaging provided by the Event Horizon Telescope international collaboration [1][2][3]. Concretely, many works have dealt with the thermodynamic and the optical aspects of these fascinating objects. Interpreting the cosmological constant as a pressure in Anti-de Sitter (AdS) geometries, black hole thermodynamics has taken a central place in gravity model investigations. This has provided new developments by unveiling certain transitions sharing similarities with Van der Waals fluids. In particular, the Hawking-Page transition has been examined for four and higher dimensional gravity theories [4][5][6], and it has been revealed that such a transition generates certain universalities [7,8]. Moreover, the optical aspect has been approached by investigating the deflection angle of light rays and the shadow behaviors [9][10][11][12][13][14][15][16][17][18][19][20][21]. In four dimensions, the shadows of various black holes have been engineered using one-dimensional real curves [22][23][24][25]. In particular, the visualization of the shadow casts is obtained from the null geodesic equations. This has been supported by the study of geometrical observables carrying information about the size and the shape of such closed real curves. For non-rotating black holes, it has been revealed that the black hole shadows exhibit circular geometric configurations, whose size can be controlled by internal and external moduli including the dark field sector [10]. The circular geometry can be distorted by introducing the rotation parameter, which generates non-trivial geometries involving either D or cardioid shapes [22,23,[25][26][27][28]. The latter has appeared in the study of AdS black holes obtained from superstring and M-theory scenarios using brane physics [27,29]. Certain distorted geometrical behaviors have been observed for rotating stringy solutions exhibiting cardioid shapes as the brane number is varied. 
Most recently, the pulsar SGR J1745-2900 near the supermassive black hole Sgr A* has been investigated, providing physical aspects of the involved horizon and horizonless events [30,31]. The relation between the thermodynamic volume and the black hole entropy, including the associated area, has been investigated with the help of the Reverse Isoperimetric Inequality [32]. Concretely, black holes in AdS spacetime provide interesting results. For generic values of the cosmological constant Λ, the domain of outer communication is bounded by a cosmological horizon. This has been considered a relevant relation, which has been exploited to link the optical properties of the black hole with the associated spacetime. A special interest has been devoted to four dimensional superentropic black holes, a fascinating solution with non-compact horizon topology whose entropy exceeds the maximal entropy bound [32][33][34]. Up to certain limits, the superentropic black holes have been obtained by taking an ultra-spinning limit of the Kerr-Newman-AdS solutions [35][36][37]. The associated optical and thermodynamic aspects have been studied. Precisely, the thermodynamic behaviors of such black holes have been examined by exploiting ultra-spinning approximation limits [38,39]. Concerning the optical aspect, the four dimensional shadows have been studied using the Hamilton-Jacobi formalism [22]. Among others, many geometrical configurations have been found, including ellipse-shaped ones and naked singularity behaviors. Motivated by various activities including the optical properties in higher dimensional supergravity models [9,33], the AdS space could open many interesting roads, since it opens windows associated with the gauge-gravity duality in string theory and related topics. Among others, it has been remarked that the involved size parameter is linked to the mass parameter of the superentropic black hole and to its charge Q. In this way, the mass behaves differently, which could bring different optical behaviors compared to trivial black hole solutions. Moreover, the intrinsic symmetries of the associated metric give a complete integrability of the geodesic motion, including the separation of the Hamilton-Jacobi equations. Constraints on such black holes could make contact with current or future observations given by the EHT collaborations. The aim of this work is to investigate the shadows of the superentropic black holes in arbitrary dimensions. We obtain the corresponding null geodesic equations of motion using the Hamilton-Jacobi formalism. Exploiting a spherical stereographic projection, we approach the shadows in terms of one-dimensional real curves. Fixing the mass parameter, we find certain shadow shapes that differ remarkably from the four dimensional geometric configurations. We then examine their behaviors by varying the black hole mass parameter. Concretely, we reveal that the shadows undergo transitions depending on the spacetime dimension. Varying the mass with respect to a critical value m_c, we show that the four dimensional shadows exhibit three configurations, namely the D-shape, the cardioid and the naked singularity, associated with m > m_c, m = m_c and m < m_c, respectively. Precisely, we observe that the D-shape passes to the naked singularity via the cardioid critical curve. In higher dimensions, however, we reveal that such behaviors are removed. This paper is organized as follows. 
In section 2, we present a concise review on superentropic black holes in higher dimensions. In section 3, we investigate the shadow behaviors in arbitrary dimensions. In section 4, we reconsider the study of four dimensions by showing a possible geometric transitions in shadow behaviors. In section 5, we provide a study for dimensions more than four. The last section is devoted to concluding discussions. Superentropic black holes in higher dimensions We start by exposing a concise review on higher dimensional superentropic neutral black hole solutions. Certain physical aspects of these solutions have been dealt with in arbitrary dimensions [36]. They could be considered as new solutions to the Einstien-Maxwell equations being supported by supergravity models relying on extra dimensions. In particular, we consider single rotating black hole solutions. Following [36,38], the associated metric line element corresponding to a d dimensional spacetime reads as where is a length parameter linked to the cosmological constant. ∆ and ρ 2 are relevant functions taking the following form where m is the mass parameter. dΩ 2 d−4 denotes the line element of the (d − 4)-dimensional unit sphere. To obtain a compact black hole object, one should introduce a new chemical potential K in order to consider a periodic direction φ as follows φ ∼ φ + α, where α is a dimensionless parameter [35,36,38]. A close inspection shows that the existence of the black hole horizon depends on the spacetime dimension d as well as on the involved moduli space. In d = 4, for instance, the horizon existence generates a constraint between the black hole parameters including m and . It is given by where the critical mass parameter will be involved in the discussion of the shadow behaviors of such black holes. For d 5, however, the above constraint reduces to a simple one provided by m > 0. Having elaborated the essential backgrounds, we move to investigate the superentropic neutral black hole shadow behaviors in higher dimensions. Varying the mass parameter, we will show that the shadow geometries undergo certain geometric transitions. This could be interpreted as possible transitions in the optical aspect going beyond to the ones observed in thermodynamics. This feature could be illustrated in terms of one dimensional real curves embedded in a two-dimensional plane supported by the above metric form. Shadows in arbitrary dimensions Motivated by string theory and related supergravity models, we would like to study the shadow behaviors of superentropic neutral black holes in arbitrary dimensions. Before elaborating shadow geometries in arbitrary dimensions, we establish first the null geodesic equations of motion. Employing the Hamilton-Jacobi formalism, we write down the equations of the photons near the superentropic black hole horizons. Following [40], certain relations are needed. Indeed, one has where τ is an affine parameter along the geodesics. The action S is proposed to take the following form where one has used E = −p t and L = p φ being the total energy and the angular momentum of the photons, respectively. In this regards, S r (r), S θ (θ) and S ψ i (ψ i ) represent functions depending on r, θ and ψ i variables, respectively. It is worth noting that the variables ψ i and the functions S ψ i (ψ i ) are associated with the extra dimensions. Sending these extra dimensional functions to zero, we recover the expressions of the four dimensional action reported in [22]. 
Using the separation method and the Carter constant, we can get the complete null geodesic equations. Precisely, they are given by where one has used λ = (r 2 + 2 ) − ξ. The quantities ξ = L φ E and η = K E 2 , representing the impact parameters, have been introduced in such equations. K is a separable constant [10,40]. The computation shows that R(r), Θ(θ), and the extra dimension functions Ψ i (ψ i ) take the following forms At this level, it is interesting to comment these equations. Taking ψ i = 0, we recover the four dimensional geodesic equations reported in [22]. The functions Ψ i (ψ i ) and ∆ share data on the shadow behaviors in higher dimensions. The radial and the polar contributions in Ψ i could be understood in terms of the fibration properties used in the compactification scenarios of higher dimensional supergravity models including superstrings, M-theory, and related topics. This means that four dimensional models could be considered as a base space where the (d − 4)-dimensional real sphere moves on it. Roughly, the unstable circular of the photons around the black hole horizon can be obtained by solving the following equations where r s is the circular orbit radius of the photon [10,[24][25][26]. The computations provide Fixing the observer distance r ob , we can find the shadow behaviors in the domain of outer communications (∆ > 0) [23][24][25]. In this way, the corresponding vectors of the observer are needed to get the associated null geodesic equations of motion. The extra dimensions push one to introduce new vectors from four dimensional point of views. These vectors are given by , (3.14) where one has used i = 1, . . . , d − 4. The timelike vector e 0 indicates the four-velocity of the observer and e 3 represents the third vector along the spatial direction pointing toward the center of the black hole. Here, e 0 ± e 3 are considered as tangent directions to the one of principal null congruences where r ob and θ ob are the distance and the angle of the observer, respectively. The vectors e i+3 are associated with the higher extra dimensions. Taking d = 4 and evincing such vectors, we recover the four dimensional ones proposed in [22]. In generic configurations, the light equation being tangent to the observer position can be defined via the relationλ where the (d − 1) vectors of the spacelike can be represented in a basis corresponding to the spherical coordinates. This can be exploited to establish the tangent equation in terms of the orthonormal vectors {e 0 , . . . , e d−1 } and the celestial coordinates (γ, δ, σ i ). Using the spherical coordinates in higher dimensions, we obtaiṅ λ = β − e 0 + cos δe 3 + sin δ cos γe 1 + sin δ sin γ cos σ 1 e 2 + sin δ sin γ where β is a scalar factor. Combining the equations of the light rays and Eq.(3.20), we get . (3.21) An examination reveals that the celestial coordinates are functions only of the impact parameters ξ and η needed to illustrate one-dimensional real curves describing the associated shadow behaviors. Exploiting the null geodesic equations and comparing the coefficients of Eq.(3.19) with Eq.(3.20), we can express the celestial coordinates in terms of ξ and η. Indeed, it follows that the spheric coordinates should verify . (3.24) Applying the S d−2 sphere stereographic projections, we can get the local cartesian coordinates of the coordinate system (x, y, z 1 , . . . , z d−4 ). These coordinates could be exploited to represent the shadow geometries in an appropriate spheric projection. 
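The final step named above, projecting points of a sphere onto a plane, can be illustrated with a minimal, generic sketch of a stereographic projection of the two-sphere from its pole. This is not the paper's exact S^{d−2} projection nor its celestial-coordinate conventions; the parametrization and the sample directions below are assumptions made purely for illustration.

```python
import numpy as np

def stereographic_project(points: np.ndarray) -> np.ndarray:
    """Project unit-sphere points (x, y, z) from the north pole onto the plane z = 0.

    A point p = (x, y, z) with z < 1 maps to (X, Y) = (x, y) / (1 - z).
    """
    xyz = np.asarray(points, dtype=float)
    return xyz[:, :2] / (1.0 - xyz[:, 2:3])

# Hypothetical celestial directions parametrized by two angles (delta, gamma):
# p = (sin(delta) cos(gamma), sin(delta) sin(gamma), cos(delta)).
delta = np.linspace(0.1, 1.2, 50)
gamma = np.linspace(0.0, 2.0 * np.pi, 50)
p = np.column_stack([
    np.sin(delta) * np.cos(gamma),
    np.sin(delta) * np.sin(gamma),
    np.cos(delta),
])
xy = stereographic_project(p)  # a one-dimensional curve in the (x, y) plane
print(xy.shape)
```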
The computations give where one has used j = 1, . . . , d − 5. In extra dimensions, it has been remarked that the equations needed to obtain such geometries, describing the optical aspects of the neutral superentropic black holes, involve a factor given by cos²θ. Placing the observer in the equatorial plane, we get Up to the periodicity conditions, it is obvious that these constraints are solved by These conditions automatically impose z_i = 0, i = 1, . . . , d − 4. It is interesting to note that the remaining information on the extra dimensions is now hidden only in ∆. This allows one to consider only the cartesian coordinates (x, y) to visualize the shadow behaviors in arbitrary dimensions by exploiting one-dimensional real curves. This position of the observer matches perfectly with the stereographic projection procedure S^{d−2} → R². In this way, the relevant parameter in the shadow discussion is the dimension d. In what follows, we inspect such optical behaviors by varying two essential parameters, m and d. First, we consider the variations of the dimension d. After that, the mass variation will be discussed. In Fig. (1), we illustrate the shadow behaviors for different values of d by fixing m and varying ℓ. The left panel presents the shadow behaviors of the special case d = 5 for different values of ℓ. It has been remarked that the shadow size increases by decreasing ℓ. The right panel shows the effect of the spacetime dimension d on the shadow aspect. It has been observed that the shadow size decreases by increasing d. An examination reveals that the geometric configurations of the higher dimensional black hole solutions are different from the ones obtained in d = 4 [22]. In particular, the elliptic geometry has been modified. It should be interesting to make contact with the shadow observation of the supermassive black hole M87*, obtained by the EHT international collaboration. Indeed, the observational data can put certain constraints on the relevant black hole parameters. Motivated by such activities, we could compare the shadow of the superentropic black hole with such observational data by taking M = 1 in units of the M87* black hole mass, given by M_BH = 6.5 × 10⁹ M_⊙ and r_0 = 91.2 kpc. According to [1,2,41], it has been remarked that the experimental shadow size is around 5.19. However, the shadow size of the superentropic black hole is around 2 for generic regions of the moduli space. The shadow size given by the EHT is thus bigger than that of the black holes studied here. We believe that such a difference is due to the involved geometry. Indeed, the shadow shape given by the EHT is almost a D-shaped circle, whereas the shadow shape of the superentropic black holes involves an elliptic geometrical form. This could be supported by the relation between the mass parameter m and ℓ of the superentropic black holes. For higher dimensional theories, we could speculate on a possible link with primordial black holes having certain relations with extra dimension models. This could be supported by the fact that such black holes, involving a small mass parameter, also exhibit a small length scale. We could expect that such black holes could find a place in future observational data associated with the EHT collaborations, including the most recent one. Shadow transitions in four dimensions A close inspection of the study of black hole shadows shows that the involved geometries exhibit several configurations, including circular and D-shapes. 
In AdS backgrounds, an unexpected geometry called the cardioid has been found [25,27,28]. We will show that this geometry is relevant to unveil a nice phenomenon in the shadow behaviors, which could be understood as a transition in the optical aspect of the neutral superentropic black holes in certain dimensions. Moreover, the elliptic geometry arises naturally in the domain of horizon existence [22][23][24][25]. In horizonless situations, however, a naked singularity has been observed for certain black hole solutions [22,25,42]. It is worth noting that it appears when ∆ involves complex roots. Motivated by the non-trivial horizon geometries of the superentropic black holes, we study the associated shadow behaviors by varying the mass parameter, which was kept fixed in the previous investigations. Various dimensions can be dealt with. In this way, the roots of the equation will be needed in the elaboration of the shadow behaviors. The geometry and the mass constraints of such black holes push one to unveil new data on the associated shadow aspects. To show that, we first reconsider the study of four dimensions. Precisely, we investigate the shadow geometrical configurations of the superentropic black holes by varying the mass parameter m with respect to the critical value m_c in certain ranges of the AdS radius ℓ. Three situations, m > m_c, m = m_c and m < m_c, will be examined. In Fig. (2), we present the associated behaviors. It follows from this figure that the shadows of these black holes exhibit an interesting phenomenon depending on the value of the mass with respect to the critical mass parameter. It has been remarked that for m < m_c, we obtain the naked singularity. For m > m_c, however, the horizon of the black hole exists and the corresponding shadow involves either the elliptic or the D-shaped elliptic geometry, for mass values well above or slightly above m_c, respectively, for different values of ℓ. Considering a generic value of ℓ and identifying m with m_c, an unusual cardioid shadow geometry appears. The latter could be supported by limaçon approximations, in which such a geometry can be considered as a special case (see the short sketch after this passage). It is noted that this approximation provides similar configurations using other methods [25,28]. Varying the mass, these three different geometric configurations can be obtained by passing from one shape to another. We refer to this as a transition in the optical aspect. To check this phenomenon, we consider higher dimensional black holes. Shadow behaviors of higher dimensional solutions It seems possible to extend the previous analysis to higher dimensions. The extension of the four dimensional transition picture to higher dimensions is based on Eq. (4.1). A rapid examination shows that one should consider two situations, associated with d = 5 and d > 5, respectively. Five dimensional behaviors We first engineer the shadow shapes for five-dimensional solutions by using the above mentioned stereographic projection. Considering the constraint m > 0 and taking different values of m and ℓ, we can approach the shadow behaviors. Solving ∆ = 0 in five dimensions, we find that one has only two geometric configurations, based on the following constraints. The associated shadows are presented in Fig. (3). It has been remarked that the cardioid geometry disappears. In this way, the shadow geometry passes directly from a naked singularity to an elliptic geometry. An examination shows that these behaviors are different from the four dimensional ones. 
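The limaçon family referred to above can be visualized with a short sketch. The polar curve r(φ) = b + a cos φ is a limaçon, and the cardioid is the special member a = b. This is a generic illustration of that family of curves, not the fitted shadow boundaries of the paper; the parameter values below are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def limacon(a: float, b: float, n: int = 1000):
    """Return Cartesian points of the limaçon r(phi) = b + a*cos(phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    r = b + a * np.cos(phi)
    return r * np.cos(phi), r * np.sin(phi)

# Hypothetical parameters: a < b gives a dimpled oval, a = b gives the cardioid,
# the critical member of the family.
for a, b, label in [(0.4, 1.0, "a < b (dimpled)"), (1.0, 1.0, "a = b (cardioid)")]:
    x, y = limacon(a, b)
    plt.plot(x, y, label=label)

plt.gca().set_aspect("equal")
plt.legend()
plt.title("Limaçon family: cardioid as the special case a = b")
plt.show()
```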
Fixing , indeed, the naked singularity arises for small mass values contrary to four dimensions in which large values are needed. Similar aspects are observed in elliptic geometries. Behaviors in more than five dimensions Here, we study the shadow behaviors for d > 5. Concretely, we show that the naked singularity should be evinced. To reveal that, we need to solve the equation of ∆ = 0 in higher dimensions. It has been remarked that it is complicated to provide analytical solutions. However, we can reveal that this equation involves at least a real root which removes the naked singularity behaviors. For generic values of r, it is obvious that ∆ is a continuous radial function. Taking into account the limits lim r→0 + ∆ = −∞ and lim r→+∞ ∆ = +∞, we can safely say that ∆ = 0 has a real solution. Based on this argument, the naked singularity has been removed for d > 5 due to the existence of the real roots. To inspect the associated behaviors, we illustrate the six dimensional shadow geometries in terms of m and . In particular, we fix one parameter and vary the remaining one. Fig.(4) provides the performed computations. The left panel illustrates the mass variations, while the right one gives variations. For a fixed value of , we remark that the shadow size increases with m contrary to . Fixing m, the shadow size decreases by increasing being an expected behavior. 6 Discussion and concluding remarks The study of the optical properties of the black holes in higher dimensions could bring global pictures associated with non-trivial gravity models including superstrings and Mtheory. Motivated by such supergravity extended models, we have investigated the shadow behaviors of the superentropic black holes in arbitrary dimensions. Applying the Hamilton-Jacobi method, we have first obtained the null geodesic equations of motion in terms of the space-time dimension d where a discussion on the extra direction contributions has been elaborated. Considering a mass constraint in arbitrary dimensions, we have generalized the shadow equations by exploiting the higher dimensional spheric coordinates. Applying a spheric stereographic projection, we have studied shadow behaviors in terms of one-dimensional real curves. The present results recover the previous findings obtained in four dimensions [22]. Fixing the mass parameter, we have investigated the shadow shapes in arbitrary dimensions. We have shown that the shadow size decreases by increasing d. For a fixed value of d, the size increases by decreasing . In addition, we have remarked that d and involve the same effect. Varying the mass parameter, we have found nice optical properties. In four dimensions, for instance, we have shown that the optical behaviors exhibit transitions from the D-shaped elliptic geometry to the naked singularity via the cardioid curve associated with a critical mass value. It has been remarked that the metric of the elliptic geometry depends on certain parameters. According to theirs values, we have discussed event horizon and horizonless behaviors [43]. A close inspection shows that a similar critical geometry has been obtained being called critical curve [44]. Concerning five dimensions, this intermediate critical geometry has been evinced, due to the absence of the mass parameter in terms of the involved remaining ones. In dimensions more than five, however, we have revealed that the naked singularity has been disappeared. For higher dimensions, the four dimensional transition behaviors have been removed. 
The present work could be compared with certain results associated with non-rotating and rotating black hole solutions. First, for small values of ℓ, the shadow of the superentropic black hole could involve certain similarities with non-rotating black holes in non-trivial backgrounds. A close examination shows that the obtained shadows could be compared with those of black holes in the presence of strong magnetic fields [45,46]. Motivated by these activities, this could open new windows for a comparative discussion with Kerr and Kerr-like black holes. Certain distinctions associated with the shadow size have been observed. Such distinctions could be understood from the involved solutions, where the mass parameter is constrained by the other involved parameters and is therefore not considered a free parameter. These constraints may provide certain differences compared to known black hole properties, including thermodynamic and optical aspects. Coming back to the transition of the superentropic black holes in four dimensions, the cardioid geometry has appeared as a relevant shadow configuration linked to the critical value of the mass parameter. This mass is considered a primordial parameter controlling the thermodynamic aspect. Indeed, many works show that the thermodynamic behaviors of the superentropic black holes differ from those of trivial solutions [32,[35][36][37][38][39]. This difference also appears in the optical behaviors. Precisely, we have remarked a distinction in the shadow geometries compared with ordinary black hole solutions [47,48]. In this way, the optical transition in four dimensions could be considered a relevant difference. In addition, this optical transition of the superentropic black holes could be exploited to unveil the optical behaviors of certain black holes in non-trivial solutions. We hope to link such a transition with future EHT data to support the present results. This work comes up with certain open questions. A possible issue concerns the observational data supporting the four dimensional shadow geometric transitions. It should be interesting to examine the present behaviors by considering external field contributions, including dark energy and dark matter, and to make contact with the associated findings. Moreover, the shadows of higher dimensional multi-center black objects, obtained from supergravity theories, have been explored in [49]. It would therefore be of interest to try to make contact with that work. We hope to address these open questions in future works.
6,042.4
2022-03-13T00:00:00.000
[ "Physics" ]
Whole genome and transcriptome analyses of environmental antibiotic sensitive and multi-resistant Pseudomonas aeruginosa isolates exposed to waste water and tap water The fitness of sensitive and resistant Pseudomonas aeruginosa in different aquatic environments depends on genetic capacities and transcriptional regulation. Therefore, an antibiotic-sensitive isolate PA30 and a multi-resistant isolate PA49 originating from waste waters were compared via whole genome and transcriptome Illumina sequencing after exposure to municipal waste water and tap water. A number of different genomic islands (e.g. PAGIs, PAPIs) were identified in the two environmental isolates beside the highly conserved core genome. Exposure to tap water and waste water exhibited similar transcriptional impacts on several gene clusters (antibiotic and metal resistance, genetic mobile elements, efflux pumps) in both environmental P. aeruginosa isolates. The MexCD-OprJ efflux pump was overexpressed in PA49 in response to waste water. The expression of resistance genes, genetic mobile elements in PA49 was independent from the water matrix. Consistently, the antibiotic sensitive strain PA30 did not show any difference in expression of the intrinsic resistance determinants and genetic mobile elements. Thus, the exposure of both isolates to polluted waste water and oligotrophic tap water resulted in similar expression profiles of mentioned genes. However, changes in environmental milieus resulted in rather unspecific transcriptional responses than selected and stimuli-specific gene regulation. Introduction The increasing numbers of infections by multi-resistant bacteria turn out to be a great threat to our daily life. Bacteria develop resistance against antibiotics used in human health care, agriculture and animal husbandry or against pollutants from industry by accumulating genetic adaptations or acquisition of mobile genetic elements via horizontal gene transfer. To overcome multi-drug resistance, we have to undergo a thorough study to unravel how bacteria adapt to different habitats, to finally discover novel strategies to handle such infections. One of the most prominent bacterial pathogens that is infamous for its high potential to develop multi-drug resistance is Pseudomonas aeruginosa, a Gram-negative, ubiquitous opportunistic bacterium that can cause acute and chronic infections especially in patients in intensive care or suffering from predisposing conditions like cystic fibrosis. The rate of infections in human body differs according to the site of infection as 2% on skins, 3.3% on nasal mucosa, 6.6% for the throat, 24% for fecal samples (Morrison and Wenzel, 1984). Pseudomonas aeruginosa is found in hospital waste water, respiratory equipment, solutions, medicines, disinfectants, sinks, mops, food mixtures and vegetables (Trautmann et al., 2005). An important characteristic of P. aeruginosa is its ability to form biofilms as an adaptation to adverse environmental conditions. The microbes attach to the surface and embed themselves in extracellular polymeric substances such as proteins (e.g. extracellular enzymes), lipids and nucleic acids (Flemming and Wingender, 2001), usually leading to increased resistance towards harsh conditions such as temperature changes, pH fluctuations, presence of antibiotics (Kwon and Lu, 2006) and immune cells of humans (Donlan and Costerton, 2002). Sequencing of several P. 
aeruginosa strains genomes revealed that a large fraction (around 10%) of the genome is dedicated to gene regulation, which is consistent with its high versatility (Stover et al., 2000;Mathee et al., 2008). This high versatility enables evolutionary adaptations and facilitates the bacterium to colonize vigorous and diverse ecological niches. The core genome is usually highly conserved between different Pseudomonas strains (Mathee et al., 2008;Klockgether et al., 2011). It has a disparate variety of metabolism; it can degrade very distinct compounds such as alcohols, fatty acids, sugars, di-and tri-carboxylic acids, aromatics, amines and amino acids, which can be used up as sources of carbon. Pseudomonas aeruginosa has both aerobic and anaerobic metabolism. It is capable of anaerobic metabolism by converting nitrate to nitrite (Schreiber et al., 2007). Additionally, the genome harbours a huge repertoire of enzymes and efflux pumps that contribute to a high intrinsic resistance towards different classes of antibiotics. Additional resistance can easily develop by mutation or horizontal gene transfer, rendering P. aeruginosa a common cause of multi-drug resistant infections (Breidenstein et al., 2011). Regular use of high amounts of antibiotics in hospitals and other practices were assumed to be the sources of origin of antibiotics in the waste water systems and responsible for supporting emergence of multi-resistant bacteria (Rizzo et al., 2013). These resistances may not only develop from chromosomally encoded genes but also from mobile genetic elements like plasmids or integrons (Merlin et al., 2011). Not only waste water systems contribute to the development of resistance in bacteria, but also pollutants from industries and agricultural activities where the antibiotics and pollutants are directly released into the environmental water like rivers and lakes, creating selective pressure on these bacteria and making them evolve as resistance strains. The sensitive strains accept the resistant genes from these resistant donors and propagate as resistant strains. The concentrations of antibiotics in waste water might not be high enough to stimulate inhibitory effects but stimulate stress response mechanisms, which contribute to horizontal gene transfer and relevant transcriptional activities. It has also been proven by mutant investigations that sub-inhibitory concentrations of antibiotics can drive the evolution of antimicrobial resistance (Pedró et al., 2011). It all depends on the substance, concentration and strain present in the waste water systems. There is no final suggestion about long terms effects of subinhibitory concentration antibiotics and other micropollutants. It is commonly accepted that beside the linkage between antibiotics and antibiotic resistance, co-selection and the presence of heavy metal ions in the environments contributes to increasing resistance mechanisms due to the localization of resistance genes in close neighborhood on genetic mobile elements (Seiler and Berendonk, 2012). Beside antibiotic and heavy metal stress, starvation is another widespread adverse stimulus present in many aquatic environments where P. aeruginosa is found in nature (Bernier et al., 2013). Tap water represents an oligotrophic matrix with very low organic matter, and P. aeruginosa has recently been shown to persist and proliferate as biofilms in municipal drinking water distribution systems (Wang et al., 2012). The molecular responses of P. 
aeruginosa strains to starvation stress in tap water are so far unknown. In this study, we compared the transcriptional response of an antibiotic sensitive and a multi-resistant P. aeruginosa waste water isolate cultivated in municipal waste water and tap water focusing on regulatory mechanisms that could promote the development of antibiotic resistance. Results and discussion Bacteria have developed highly orchestrated processes to respond to environmental stresses, which when elicited alter the cellular physiology in a manner that enhances the organism's survival and its ability to cause disease. This study focused on the behaviour of two natural isolates of P. aeruginosa as a Gram-negative bacterium exposed to municipal waste water containing complex mixtures of xenobiotics and, as a second scenario, exposed to tap water simulating nutrient limitation (starvation). Since bacteria have to deal with unfavourable growth conditions in addition to diverse stresses in nature, bacteria that reached the stationary growth phase were used to imitate this environment and then exposed to stress. During transition from exponential growth to stationary phase, growth becomes unbalanced especially in laboratory systems, i.e. the synthesis of different macromolecules and cell constituents do not slow down synchronically (Nyström, 2004). Thus, stationary phase is an operational definition and does not describe a specific and fixed physiological state or response of the bacteria. It is more or less a change in physiology due to, e.g. phosphate limitation or accumulation of toxic waste products. Beside the changes in morphologies of bacteria, the gene expression pattern could be altered in stationary phase. In consequence, transcriptome analyses were run with ribonucleic acid (RNA) extracted from early stationary growth phase. In the present study, two different P. aeruginosa isolates were exposed to water matrices containing quite different compositions. Waste water from the influent of a municipal waste water treatment plants (WWTP) is composed of complex mixtures of xenobiotics like antibiotics, other pharmaceuticals, biocide etc., whereas tap water, in opposite, contains very low level of organic matter (including xenobiotics) as a result of the intensive drinking water conditioning processes at waterworks. We analysed the transcriptional responses from two P. aeruginosa isolates: the antibiotic sensitive strain PA30 and the multi-resistant strain PA49. Both P. aeruginosa strains did neither show any differences in growth in diluted brain heart infusion (BHI) or BM2 broth nor in yields of extracted total RNA after exposure in tap water or waste water. Genome analyses Large fractions of the P. aeruginosa genome belong to the highly conserved core genome containing only few highly variable genes (Dötsch et al., 2010), while most of the genetic variation between species is restricted to the so-called accessory genome organized in various regions of genomic plasticity (RGPs) (Mathee et al., 2008). Most of these RGPs represent mobile elements originating from horizontal gene transfer and include transposons, phages, plasmids and genomic islands, which are a major source of resistance genes (Battle et al., 2009;Kung et al., 2010;Klockgether et al., 2011). The large amount of homology between the core regions of different P. aeruginosa strains enabled us to employ the genomic sequences of strain PAO1 chromosome and a selection of genomic islands as a blueprint for de novo assembly. 
The resulting draft genomes consist of 207 contigs with a total length of 6.77 Mb for the strain PA30 and 269 contigs with 7.01 Mb for strain PA49 respectively (Table S1). An alignment of the contigs with P. aeruginosa reference strain PAO1 showed a huge overlap of 95.8% for PA30 and 96.4% for PA49 ( Fig. 1; Table S2), reflecting the highly conserved character of the P. aeruginosa core genome. Comparing the contigs with the genome islands that were used in the alignment process revealed a distinct pattern of accessory genomic elements for the two strains covering large fractions of the various genomic islands ( Fig. 1; Table S2). Strain PA30 contains full length or near-full length sequences of PAGI-5 to PAGI-11, larger fractions of PAGI-1 and PAGI-2 and several regions of PAGI-3, whereas only insignificant fractions of the remaining genomic elements occurred. In case of the multiresistant strain PA49, all genomic islands except the smaller PAGI-9 to PAGI-11 were covered at varying percentages (Table S2). The scattered distribution of regions within the genomic islands that actually showed homology with PA49 contigs may be partially explained by incomplete sequence assembly. However, the fact that both the contigs and the genomic island reference sequences contained a large amount of non-overlapping regions (data not shown) suggests that at least in some cases, the accessory elements found in PA30 and PA49 only partially contain sequences that are homologous to the genomic islands and also include a substantial amount of new and previously uncharacterized sequences. Since the genomes of PA30 and PA49 were nearly completely covered, the sequence types according to the multi-locus sequence typing (MLST) scheme by Curran and colleagues (2004) could be determined, enabling a phylogenetic classification of the two strains. As demonstrated by the phylogenetic tree (Fig. 2), PA30 and PA49 are members of the lineage that includes the type strain PAO1 and some recently sequenced strains. The question about their origin is open, since the sampling sites Fig. 1. Coverage of genomic reference sequences by the newly assembled genomes. The reference sequences that were used for the hybrid de novo assembly are displayed on the outer circles (diagram to the right) with an additional display of the accessory elements alone (missing the PAO1 chromosome, diagram to the left). Regions that are covered by the contigs of strains PA30 and PA49 or overlap with the chromosome of another reference strain PA14 are highlighted by coloured areas in the concentric inner circles as specified by the color legend. were influenced by hospital and housing waste waters. Selective pressures like the presence of antibiotics and other environmental criteria are a general concept that refers to many factors that create an evolutionary landscape and allow organisms with novel mutations or newly acquired characteristics to survive and proliferate (Kümmerer, 2009). There is evidence that even in subinhibitory concentration, antibiotics or other xenobiotics may still exert their impact on microbial communities (Goh et al., 2002;Davies et al., 2006). The direct link between antibiotics or heavy metal ions and development/selection of resistance mechanisms is obvious and manifold described (Seiler and Berendonk, 2012;Rizzo et al., 2013). The impact of other harsh environmental conditions on resistance activities, recombination and horizontal gene transfer remains to be determined. 
Long-term effects of environmental exposure to low levels of antibiotics like these present in surface waters or in the outflow of sewage plants are also still unknown. The prediction of protein coding sequences (CDS) yielded for both strains a comparatively large number of genes, about 99% of which were successfully annotated according to their best-hit BLAST alignments (Table S1). The vast majority of the predicted genes were found in both strains and also in the PAO1 reference genome (5262 genes), representing the conserved core genome of P. aeruginosa. Regarding the development of multidrug resistance, P. aeruginosa is known for its high intrinsic resistance that is caused by a combination of low membrane permeability, efflux pumps and resistance genes encoded in the core genome (Nikaido, 2001;Schweizer, 2003), together with the potential to develop high-level resistance by accumulation of small mutations (Fajardo et al., 2008;Dötsch et al., 2009;Martinez et al., 2009;Alvarez-Ortega et al., 2010;Breidenstein et al., 2011;Bruchmann et al., 2013). However, the most obvious cause of multi-drug resistance is the acquisition of resistance genes by horizontal gene transfer (Davies and Davies, 2010). Therefore, we performed a blast search of the predicted genes of the two strains in the Comprehensive Antibiotic Resistance Database (CARD) (McArthur et al., 2013) and scanned both genomes for genetic variations of intrinsic resistance determinants. In a previous work, strain PA49 was found to be resistant towards the antibiotics gentamicin (GM), amikacin (AN), azlocillin (AZ), ceftazidime (CAZ), piperacillin/tazobactam (PT), ciprofloxacin (CIP) and imipenem (IPM) (Schwartz et al., 2006). Searching its genome for resistance determinants revealed the presence of one aminoglycoside acetyltransferase of the AAC(6')-type, two aminoglycoside adenylyltransferases of type ANT(2'') and ANT(3'') and one VIM metallo-beta-lactamase (Table 1). Two additional genes were annotated as beta-lactamases in PA49 only by the BLAST search in the National Center for Biotechnology Information (NCBI) non-redundant (nr) protein database but not found in the CARD database Multi-locus sequence typing phylogenetic tree. Phylogenetic relations of the two newly sequenced strains PA30 and PA49 (bold) with eight previously published genomes of P. aeruginosa. The phylogenetic tree is based on seven genes that are commonly used for MLST scheme by Curran and colleagues (2004). Table 1. Comparison of antibiotic resistance determinants found in the genomes of PA30 and PA49. Identifiers state PAO1 gene IDs or RefSeq Accession where applicable. Genotypes refer to presence or absence or genes or specific alleles with 'wt' indicating the genotype found in the reference strains PAO1. Gene ID/accession Gene name Resistance type PA30 genotype PA49 genotype Affected antibiotics a gi|32470063 aac(6')-Ib ( Fig. 3A). Taken together, these genes confer resistance towards a wide range of aminoglycosides and betalactam antibiotics, explaining the resistance towards GM, AN, AZ, CAZ and PT. Fluoroquinolones like CIP target the DNA gyrase and Topoisomerase IV enzyme complexes, and high-level resistance towards these antibiotics is often caused by sequence variations of the two subunits GyrA (gyrase) and ParC (topoisomerase) (Ruiz, 2003) and indeed, both proteins contained a single amino acid exchange in the resistance determining region (Table 1). 
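As a rough illustration of the resistance-gene screening step described earlier in this section (the BLAST search of predicted proteins against the CARD database), the sketch below runs blastp in tabular output mode and keeps high-identity, high-coverage hits. The file names, identity and coverage thresholds, and the pre-built CARD protein BLAST database are assumptions for illustration, not the authors' actual pipeline.

```python
import subprocess
import csv

# Assumed inputs: predicted proteins of one isolate and a pre-built CARD protein BLAST DB.
QUERY = "PA49_predicted_proteins.faa"   # hypothetical file name
CARD_DB = "card_protein_db"             # hypothetical makeblastdb output

# Tabular output: query, subject, %identity, alignment length, e-value, bitscore, query length.
fields = "qseqid sseqid pident length evalue bitscore qlen"
subprocess.run(
    ["blastp", "-query", QUERY, "-db", CARD_DB,
     "-evalue", "1e-10", "-outfmt", f"6 {fields}",
     "-out", "card_hits.tsv"],
    check=True,
)

# Keep hits with >= 80% identity over >= 80% of the query protein (hypothetical cut-offs).
with open("card_hits.tsv") as handle:
    for qseqid, sseqid, pident, length, evalue, bitscore, qlen in csv.reader(handle, delimiter="\t"):
        if float(pident) >= 80.0 and int(length) / int(qlen) >= 0.8:
            print(qseqid, sseqid, pident, evalue)
```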
These two mutations represent the most common type of variations found in fluoroquinolone resistant isolates of P. aeruginosa and have recently been shown to be sufficient for the development of high-level resistance towards CIP (Bruchmann et al., 2013). Finally, a frameshift mutation in the outer membrane porin OprD was found that is likely to cause misfolding or decreased functionality of the protein. Defective mutations of OprD are known to cause resistance towards carbapenems including IPM in combination with intrinsic beta-lactamases and efflux pumps (Pirnay et al., 2002). Of note, the strain PA30 that is sensitive towards all these antibiotics did not contain any known horizontally acquired resistance genes and harboured wild-type alleles of the target genes gyrA, parC and oprD (Table 1). In summary, these results provide a comprehensive explanation for the resistance phenotype covering all the antibiotics that were tested, since all resistance determining genes and alleles (besides the ones intrinsic to P. aeruginosa) were exclusively found in PA49 ( Fig. 3A; Table 2). Both PA30 and PA49 harbour a set of genes involved in metal ion resistance that are not found in the Comparison of specific gene classes found in the genomes of PA30 and PA49 with type strain PAO1. The circles of this Venn Diagram contain the numbers of genes that were predicted from the genome sequence of the two newly sequenced strains, in comparison with the known genes of the PAO1 reference genome. PAO1 genome annotation was taken from www.pseudomonas.com (Winsor et al., 2011). A. Genes involved in antibiotic resistance (excluding efflux pumps). B. Genes involved in metal ion resistance. C. Gene involved in genetic mobilitytransposases, integrases, recombinases and conjugation-related proteins. P. aeruginosa core genome. Strain PA30 contains several genes encoding resistance genes related to copper, mercury and arsenic/arsenate, while most of the genes could not be found in PA49 ( Fig. 3B; Table 3). The extent of the accessory genomes found in PA30 and PA49 point towards a high incidence of horizontal gene transfer in the evolutionary history of these strains. Therefore, we also searched the annotated genomes for genes associated with genomic mobility, mostly classified as recombinases, transposases, integrases or conjugative elements. Since mobile genetic elements per definition belong to the accessory genome, it is not surprising that nearly all genes associated with genetic mobility that were found in PA30 and PA49 are not present in the genome of PAO1 (Fig. 3C). Both strains contain a large number of mobility genes (75 in PA30, 103 in PA49) ( Table 4). Transcriptome analyses In order to investigate the impact of waste water and tap water on the transcriptional activities, we performed RNA sequencing on both the sensitive and multi-resistant P. aeruginosa strain. The de novo assembled genomes were used as references for the mapping of reads obtained from RNA sequencing. In total, 95% of the reads mapped to the genome reference, which is comparable to results for RNA sequencing of known genomes and indicates a high quality and completeness of the two assembled genomes (Table S3). Between 1.5 and 3.6 million reads mapped uniquely to coding regions, yielding a median read count per gene of 54 to 130 and was sufficient for an in depth analysis. 
Both strains, PA30 and PA49, were exposed to tap water and waste water, and differential gene expression was analysed between the different water matrices as well as between the two strains. Upon exposure to waste water, 222 genes were at least fourfold differentially expressed in strain PA30 (94 upregulated, 128 downregulated) as compared with tap water exposure (Table S4). Most of the differentially expressed genes encode hypothetical proteins. To investigate whether any functional groups of genes were significantly overrepresented among the differentially expressed genes, we performed an enrichment analysis of gene ontology (GO) terms. Genes associated with 'copper ion binding' (GO:0005507) and 'potassium-transporting ATPase activity' (GO:0008556) were significantly overrepresented, with six (out of 17) and three (out of three) genes being differentially expressed respectively. In strain PA49, 144 genes (51 upregulated, 93 downregulated; Table S5) showed differential expression upon exposure to the different water matrices, but no significant enrichment of GO terms was observed. A comparison of the expression of orthologous genes between PA30 and PA49 revealed differential expression of 32 genes in tap water (e.g. some phenazine biosynthesis genes and a potassium-transporting ATPase, kdpABC, were upregulated in PA30), while only five genes coding for hypothetical proteins were found to be differentially expressed in waste water. This low number of differentially expressed genes between the two strains indicates a high similarity in their response to these specific environments. The four horizontally acquired antibiotic resistance genes found in strain PA49 (Table 1) were transcriptionally active independent of the water matrix and are therefore most likely a main cause of the observed resistance towards a wide spectrum of aminoglycoside and beta-lactam antibiotics (Table 2). Genes associated with antibiotic resistance (not including multi-drug efflux pumps) showed a higher average expression as compared with the rest of the genome in PA49 (Fig. 4), which is evidently a result of the generally high expression of horizontally acquired resistance genes (Table 2). This tendency was independent of the water matrix and not found in the transcriptome of PA30 (Fig. 4), which lacks such additional resistance genes (Fig. 2). A common cause of antibiotic resistance in P. aeruginosa is the overexpression of multi-drug efflux pumps (usually termed 'Mex' pumps). Indeed, the genes encoding the MexCD-OprJ efflux pump were overexpressed in PA49 in response to waste water (Table 5 and Table S5). This pump system can confer resistance towards a broad spectrum of antibiotics (Poole et al., 1996) and thus may further contribute to the multi-resistance phenotype of PA49. The induction of this efflux pump specifically in waste water indicates a specific stimulation, presumably by one or several of the antibiotics present in the waste water used, or by as yet unknown waste water components. However, since the expression of specific resistance genes and the presence of resistance-related target mutations already sufficiently explain the broad resistance phenotype of PA49 (Table 1), the exact contribution of MexCD-OprJ overexpression remains unclear. It should again be pointed out that the expression of resistance genes (with the exception of MexCD-OprJ) in PA49 was independent of the water matrix.
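A minimal sketch of the selection criterion described here (at least fourfold change with adjusted P < 0.05), applied to a DESeq-style results table; the column names follow DESeq conventions, but the input file name is hypothetical:

```python
# Sketch: select differentially expressed genes from a DESeq-style results table using
# |log2 fold change| > 2 (i.e. more than fourfold) and adjusted P < 0.05, as in the text.
import pandas as pd

res = pd.read_csv("PA30_wastewater_vs_tapwater_deseq.csv", index_col=0)  # hypothetical file
de = res[(res["log2FoldChange"].abs() > 2) & (res["padj"] < 0.05)]

up = de[de["log2FoldChange"] > 0]
down = de[de["log2FoldChange"] < 0]
print(f"{len(de)} differentially expressed genes "
      f"({len(up)} upregulated, {len(down)} downregulated)")
```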
Similarly, the antibiotic-sensitive strain PA30 does not show any difference in expression of the intrinsic resistance determinants. Thus, the exposure of both strains to polluted waste water and oligotrophic tap water resulted in similar expression profiles of resistance genes. Changes in environmental milieus thus appear to result in rather unspecific transcriptional responses rather than in selective, stimulus-specific gene regulation. A small set of genes associated with heavy metal tolerance was also found in the genomes of PA30 and PA49 (Fig. 3B; Table 3). However, no differential expression in the two water matrices was detected. Comparing the average expression of these genes with genes not related to metal tolerance also showed no general difference, independent of strain background and water matrix (Fig. 4). Waste waters are already known to stimulate genetic transfer due to the sublethal noxae of pharmaceutical residues (e.g. antibiotics, heavy metal ions) or other xenobiotics. However, the expression of mobile genetic elements was also found to be induced after exposure to tap water. Here, physiological shifts to oligotrophic habitats and/or starvation might be responsible for the genetic activities and might contribute to horizontal gene transfer, as discussed in Davies and colleagues (2006). The genomic analysis identified a large number of mobile genomic elements in the genomes of both PA30 and PA49 (Fig. 3C; Table 4). [Figure caption: Absolute gene expression values are depicted as box plots for the different samples of strains PA30 and PA49 cultivated in tap water (T) and waste water (W). Genes were manually selected by their functional classification as resistance (associated with modification and deactivation of antibiotics), mobility (associated with horizontal gene transfer and recombination), metal (associated with heavy metal tolerance), efflux (associated with multi-drug efflux pumps) or other (not included in any other class). Asterisks indicate a significant difference in the medians of the particular gene class and the other genes determined with the Mann-Whitney-Wilcoxon test (*P < 0.05; **P < 0.001).] The genes that can be directly associated with horizontal gene transfer and recombination (recombinases, integrases, transposases and genes related to conjugative transfer) were mostly found to be actively expressed in both strains, independent of the water matrix. On average, these 'mobility genes' were expressed at a lower level than the 'other' genes of the genome (Fig. 4). However, their expression is insensitive to the strain background and to the water matrix. In conclusion, the multi-drug resistance of strain PA49 can be attributed to the presence and expression of genes encoding a set of antibiotic-modifying enzymes located both in the core genome and on mobile genetic elements that were presumably acquired by horizontal gene transfer. Thus, the multi-drug resistant phenotype of PA49 seems directly linked to this set of resistance determinants. The impact of the one overexpressed efflux pump induced in waste water on the resistance characteristics of PA49 remains an open question. Both the antibiotic-resistant and the sensitive strain showed similar transcriptomic responses to the different water matrices, with no strain-specific stress responses to either matrix (with the exception of one efflux pump). Isolation and cultivation of P.
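A short sketch of the kind of comparison behind the described box plots (median expression of one functional gene class against all other genes, tested with the Mann-Whitney-Wilcoxon test); the expression table and its column names are assumptions for the example:

```python
# Sketch: compare the expression of one functional gene class against all other genes
# with the Mann-Whitney-Wilcoxon test, as done for the box-plot comparison described above.
import pandas as pd
from scipy.stats import mannwhitneyu

def class_vs_rest(expr: pd.DataFrame, gene_class: str):
    """Return medians of the class and of the remaining genes, plus the two-sided P value."""
    in_class = expr.loc[expr["class"] == gene_class, "expression"]
    rest = expr.loc[expr["class"] != gene_class, "expression"]
    stat, p = mannwhitneyu(in_class, rest, alternative="two-sided")
    return in_class.median(), rest.median(), p

# Example usage with a hypothetical table of absolute expression values:
# expr = pd.read_csv("PA49_wastewater_expression.csv")
# med_class, med_rest, p = class_vs_rest(expr, "resistance")
# print(f"median(class)={med_class:.1f}, median(rest)={med_rest:.1f}, P={p:.3g}")
```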
aeruginosa strains PA30 and PA49 Bacterial strains were enriched and isolated from a German waste water treatment plant compartment as described in a previous study (Schwartz et al., 2006). For routine culturing, bacteria were grown on agar plates containing BM2 minimal medium (Yeung et al., 2009) supplemented with 15 g l −1 agar (Merck, Darmstadt, Germany). For overnight cultures, a colony from the agar plate was inoculated in BM2 minimal medium as well as BHI (Merck, Darmstadt, Germany) broth (1:4 diluted) and incubated at 37°C. The growth behavior of the strains was observed by diluting overnight cultures to an optical density (OD) of 0.1 in BM2 and BHI medium, incubation at 37°C with gentle agitation for a time span of 24 h and monitoring the OD over time for each strain (Infinite 200 PRO,Tecan,Männedorf,Switzerland). No difference in growth behavior between the two isolates was observed in BM2 and BHI broth respectively (data not shown). Incubation in tap water and waste water Distinct colonies of each strain were inoculated in 25 ml BHI medium (Merck, Darmstadt, Germany) diluted 1:4 with distilled water in a 50 ml sterile tube (Falcon, Nürtingen, Germany) and incubated on a shaker at 37°C at 100 rpm overnight. A volume of 2.5 ml of this overnight culture was used to inoculate 25 ml of 1:4 diluted BHI medium and incubated on a shaker at 37°C at 100 rpm. At an optical density (OD600nm) of 1.0 (early stationary growth phase), bacterial suspension were pelleted at 5000 g at 20°C for 15 min. Pellets were re-suspended in 20 ml sterile tap water (T) or sterile filtered waste water (W) collected from the influent of a municipal WWTP. The OD of these suspensions with PA30 and PA49 were adjusted at 0.5. The samples were incubated on a shaker (80 rpm) at 22°C for 3 h. The tap water conditioned from groundwater at the municipal waterworks met the requirements of the German drinking water guideline. The average total organic carbon value was measured as 0.9 mg l −1 . The chemical and physical characteristics of the final conditioned drinking water are listed in Jungfer et al. (2013;see reference waterworks). The used waste water originated from the effluent of a municipal waste water treatment plant of a city with 445 000 inhabitants and is equipped with a conventional three treatment process (nitrification, denitrification, phosphor elimination). Chemical analyses demonstrated the presences of different classes of antibiotics (e.g. clarithromycin, roxithromycin, erythromycin, sulfamethoxazol, and trimehoprim) in a range of 0.5-1.5 μg l −1 (unpublished data). DNA extraction and purification Previous to the DNA extraction 25 ml BHI was inoculated with a single colony of PA30 and PA49, respectively, and cultivated at 37°C and 150 rpm on a rotary shaker until ODs reached 1.0 value. An aliquot of 5 ml of each culture was pelleted at 3000 g for 10 min. Subsequent DNA extraction was performed according to the protocol of QIAGEN Genomic-tip 100/G kit system (Qiagen, Germany). The concentration and purity of the obtained DNA was determined using the NanoDrop 1000 Spectrophotometer (Thermo Scientific, Germany). The quality of the genomic DNA was also controlled by agarose gel electrophoresis. RNA extraction and purification Ribonucleic isolation of the samples was performed in quadruplicates that were pooled before sequencing. 
One millilitre of each of the four independent bacterial suspensions (T or W) was mixed with 1 ml of RNA protect (Qiagen, Hilden, Germany) and incubated for 5 min at room temperature. The bacteria were pelleted at 12.000 g for 10 min, and the supernatant was discarded. Prior to RNA extraction from bacteria, four replicate cultures from parallel experiments (tap water and waste water) from each type (PA30 and PA49) were combined. Ribonucleic acid isolation was performed using the RNeasy extraction kit (Qiagen, Hildern, Germany) according to the manufacturer's protocol, and the RNA was eluted in 50 μl RNase-free water. To eliminate residual DNA contamination, the RNA was treated with TURBO Desoxyribonuclease (DNase, Ambion Inc., Kaufungen, Germany). Five microlitres of 10× TURBO DNase buffer and 1 μl of TURBO DNase were added to 50 μl RNA solution and incubated at 37°C for 30 min. Desoxyribonuclease inactivation reagent (5 μl) was added to the RNA solution and incubated under occasional mixing for 5 min. The sample was centrifuged at 10 000 rpm for 1.5 min, and the RNA was transferred to a new tube, and RNA concentration was measured in triplicate using the Nanodrop ND1000 spectrophotometer (PeqLab Biotechnology GmbH, Erlangen, Germany). The integrities of all RNA samples were tested using the Agilent 2100 Bioanalyzer (Agilent Technologies Sales & Services GmbH & Co.KG, Waldbronn, Germany). Removal of the ribosomal RNA Removal of ribosomal RNA (rRNA) was performed with each sample. Fourteen microlitres of total RNA was mixed with 1 μl of RNase inhibitor SUPERase IN (Ambion). Ribosomal RNA was removed with the MICROBExpress KIT (Ambion) according to the manufacturer's protocol. Purified RNA was re-suspended in 25 μl TE buffer (1 mM EDTA, 10 mM Tris, pH: 8.0). The resulting purified mRNA yields were quantified with Nanodrop ND1000. Library preparation and Illumina sequencing Deoxyribonucleic acid sequencing libraries were produced from 1 μg of genomic DNA and RNA libraries from 50 ng of rRNA depleted RNA, following the recommendations of the TruSeq DNA and TruSeq RNA protocols (Illumina) respectively. Briefly, the quality and quantity of ribosomal depleted RNA were assessed with the Bioanalyzer 2100 (Agilent), and the RNA-seq libraries were fragmented chemically, purified with AMPure XP beads (Beckman Coulter) and ligated to adapters with specific DNA barcode for each sample following the Illumina protocol. For DNA-seq libraries, genomic DNA was sheared to 200 bp fragments by sonication with a Covaris S2 instrument using the following settings: peak incidence power 175 W, duty factor 10%, cycle per burst 200, time 430 s. Sizes and concentrations of both RNA and DNA sequencing libraries were determined on a Bioanalyzer 2100 (DNA1000 chips, Agilent). Paired-end sequencing (2 × 50 bp) was performed on two lanes on a Hiseq1000 (Illumina) platform using TruSeq PE Cluster KIT v3 -cBot -HS and TruSeq SBS KIT v3 -HS. Cluster detection and base calling were performed using RTAv1.13 and quality of reads assessed with CASAVA v1.8.1 (Illumina). The sequencing resulted in at least 40 million pairs of 50 nt long reads for each sample, with a mean Phred quality score > 35 (Tables S1 and S3). These sequence data have been submitted to the GenBank Sequence Read Archive and are available under the accession numbers SRP041029 (PA30 genome), SRP041030 (PA49 genome), SRP041150 (PA30 transcriptomes) and SRP041151 (PA49 transcriptomes). 
Genome assembly and annotation Raw sequence reads were trimmed and filtered using the fastq-mcf tool of the EA-UTILS software package (Aronesty, 2011) with a Phred quality cut-off of 20 and the appropriate Illumina TruSeq adapter sequences to detect and remove adapters. The genomes of strains PA30 and PA49 were assembled independently using the idba_hybrid assembler (Peng et al., 2012) with a range of k-mer sizes from 20 to 50 bp (step size 10 bp). The genome reference used to guide the hybrid assembly included the full genomic sequence of P. aeruginosa PAO1 and the sequences of 13 genomic islands and one plasmid (Table S2). To confirm the resulting contigs, the filtered and trimmed reads were mapped against the contigs using BOWTIE 2 (Langmead and Salzberg, 2012), and the read coverage of each contig was calculated by dividing the number of reads mapping to that contig by its length. Contigs with a length of less than 250 bp or with a read coverage of less than one read per base pair were discarded for being too short and/or potentially artefacts. Thereby, 207 (out of 775) and 269 (out of 887) contigs remained for the genomes of PA30 and PA49 respectively ( Table S1). The overlap of the contigs with the reference sequences was determined by multi-sequence alignment using the PROMER function of the MUMMER 3.0 software package (Kurtz et al., 2004). The filtered contigs were scanned for protein coding genes using METAGENEMARK (Trimble et al., 2012) with default parameters yielding 6572 and 6781 genes for PA30 and PA49 respectively (Table S1). The genes were annotated in two steps by using a BLASTP search (Camacho et al., 2009) first against the P. aeruginosa PAO1 genome with protein sequences taken from the Pseudomonas genome database (Winsor et al., 2011) and then for the remaining unidentified protein sequences against the database of nr protein sequences available for download from the NCBI FTP site (ftp://ftp.ncbi.nlm.nih.gov/blast/db). Thereby, 65 and 97 of the predicted genes of PA30 and PA49 remained unidentified. Additionally, the predicted protein sequences were compared by BLASTP to the CARD (McArthur et al., 2013) to identify putative resistance genes. To enable direct comparisons between the genomes of PA30 and PA49, the orthologous genes were determined by reciprocal alignment of the protein sequences using BLAST. Genes in both genomes that reciprocally yielded the highest BLAST score for each other in the alignment with at least 80% sequence identity were considered to be orthologs. Analysis of gene expression Raw reads generated from complementary DNA were trimmed and filtered using the fastq-mcf tool of the EA-UTILS software package with a Phred quality cut-off of 20 and the appropriate Illumina TruSeq adapter sequences to detect and remove adapters. Gene expression was determined by mapping the reads to the newly assembled and contigs using BOWTIE 2 and counting the reads that uniquely overlapped with the annotated genes. Differential gene expression was analysed using the R-package DESEQ (Anders and Huber, 2010). Gene were considered to be differentially expressed, when the absolute log2 fold change was greater than 2 (equivalent to fourfold upregulation or downregulation) with a P-value (adjusted for multiple hypothesis testing) below 0.05. 
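A minimal sketch of the contig filter described above (length of at least 250 bp and at least one mapped read per base pair); the input file names, the use of Biopython, and the idxstats-style read counts are assumptions, not the exact tooling of the study:

```python
# Sketch: filter assembled contigs by the criteria given in the text
# (length >= 250 bp and read coverage >= 1 read per base pair).
# Read counts per contig could, for instance, come from "samtools idxstats" on the
# BOWTIE 2 alignment; file names below are hypothetical.
from Bio import SeqIO

def filter_contigs(fasta_path, read_counts, min_len=250, min_cov=1.0):
    kept = []
    for rec in SeqIO.parse(fasta_path, "fasta"):
        length = len(rec.seq)
        coverage = read_counts.get(rec.id, 0) / length  # reads per base pair
        if length >= min_len and coverage >= min_cov:
            kept.append(rec)
    return kept

# Example usage (hypothetical file names):
# counts = {line.split()[0]: int(line.split()[2]) for line in open("PA30.idxstats")}
# kept = filter_contigs("PA30_contigs.fasta", counts)
# SeqIO.write(kept, "PA30_contigs.filtered.fasta", "fasta")
```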
A GO enrichment analysis was performed using the BLAST2GO software (Conesa et al., 2005;Götz et al., 2008), testing for the enrichment of GO terms in the set of differentially expressed genes with a false discovery rate (FDR) of less than 0.05. Multilocus Sequence Typing The sequence types of the strains PA30 and PA49 were determined following the multilocus sequence typing (MLST) scheme described by Curran and colleagues (2004), by comparing the sequences of seven variable genes commonly found in P. aeruginosa (acsA, aroE, guaA, mutL, nuoD, ppsA, trpE) with the public online database PubMLST, http:// www.pubmlst.org (Jolley and Maiden, 2010). The sequences of the seven MLST regions were concatenated for both strains and aligned with the concatenated MLST sequences of eight previously published genomes (P. aeruginosa strains PAO1, PA14, PA7, PASC2, LESB58, NCGM2, DK2 and M18, all taken from the Pseudomonas genome database http:// www.pseudomonas.com (Winsor et al., 2011) ) using CLUSTALW (Larkin et al., 2007). A phylogenetic tree was constructed from the alignment using CLUSTALW and TREEVIEW (Page, 1996). Supporting information Additional Supporting Information may be found in the online version of this article at the publisher's web-site: Table S1. Genome sequencing and assembly statistics. Table S2. Coverage of references sequences. Percentages indicate the fraction of the references sequences that overlapped with the newly assembled contigs. See also Fig. 1. Table S3. Ribonucleic acid sequencing statistics. Table S4. MS Excel table file (.xslx) of genes that were differentially expressed comparing the transcriptomes of isolate PA30 in waste water and tap water. The file contains two table sheets listing the genes that were upregulated or downregulated upon exposure to waste water as compared with exposure to tap water. Table S5. MS Excel table file (.xslx) of genes that were differentially expressed comparing the transcriptomes of isolate PA49 in waste water and tap water. The file contains two table sheets listing the genes that were upregulated or downregulated upon exposure to waste water as compared with exposure to tap water.
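Referring back to the GO term enrichment analysis described above, the following sketch illustrates the kind of term-by-term test involved (Fisher's exact test per GO term with Benjamini-Hochberg control of the false discovery rate). It is not BLAST2GO's own code, and the gene-to-term mapping and gene sets are assumed inputs:

```python
# Sketch: GO term enrichment by Fisher's exact test with Benjamini-Hochberg FDR.
# "go2genes" maps each GO term to its set of annotated genes; "de_genes" and
# "all_genes" are the differentially expressed and background gene sets (assumed inputs).
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def go_enrichment(de_genes, all_genes, go2genes, fdr=0.05):
    de_genes, all_genes = set(de_genes), set(all_genes)
    terms, pvals = [], []
    for term, members in go2genes.items():
        members = set(members) & all_genes
        a = len(members & de_genes)                  # DE and annotated to the term
        b = len(de_genes) - a                        # DE, not in term
        c = len(members) - a                         # not DE, in term
        d = len(all_genes) - len(de_genes) - c       # not DE, not in term
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        terms.append(term)
        pvals.append(p)
    reject, padj, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return [(t, p, q) for t, p, q, r in zip(terms, pvals, padj, reject) if r]
```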
8,449
2014-09-03T00:00:00.000
[ "Biology", "Engineering", "Environmental Science" ]
Radioiodine Thyroid Remnant Ablation after Recombinant Human Thyrotropin or Thyroid Hormone Withdrawal in Patients with High-Risk Differentiated Thyroid Cancer To supplement limited relevant literature, we retrospectively compared ablation and disease outcomes in high-risk differentiated thyroid carcinoma (DTC) patients undergoing radioiodine thyroid remnant ablation aided by recombinant human thyrotropin (rhTSH) versus thyroid hormone withdrawal/withholding (THW). Our cohort was 45 consecutive antithyroglobulin antibody- (TgAb-) negative, T3-T4/N0-N1-Nx/M0 adults ablated with high activities at three referral centers. Ablation success comprised negative (<1 μg/L) stimulated serum thyroglobulin (Tg) and TgAb, with absent or <0.1% scintigraphic thyroid bed uptake. “No evidence of disease” (NED) comprised negative unstimulated/stimulated Tg and no suspicious neck ultrasonography or pathological imaging or biopsy. “Persistent disease” was failure to achieve NED, “recurrence,” loss of NED status. rhTSH patients (n = 18) were oftener ≥45 years old and higher stage (P = 0.01), but otherwise not different than THW patients (n = 27) at baseline. rhTSH patients were significantly oftener successfully ablated compared to THW patients (83% versus 67%, P < 0.02). After respective 3.3 yr and 4.5 yr mean follow-ups (P = 0.02), NED was achieved oftener (72% versus 59%) and persistent disease was less frequent in rhTSH patients (22% versus 33%) (both comparisons P = 0.03). rhTSH stimulation is associated with at least as good outcomes as is THW in ablation of high-risk DTC patients. Introduction Postsurgical thyroid remnant ablation with radioiodine (131-iodine, 131 I) in low-risk patients with differentiated thyroid carcinoma (DTC) has engendered considerable controversy [1]. However, current guidelines and consensus strongly favor the procedure in high-risk patients [2,3]. Numerous published comparisons [4,[9][10][11][12][13][14][15][16] have confirmed that rhTSH-aided ablation achieves high remnant eradication rates that are not statistically inferior to those attained with THW-assisted ablation. At the same time, relative to THW, rhTSH use avoids hypothyroid morbidity, improving patient quality-of-life [4,14,15,[17][18][19]. Compared to THW, rhTSH use also lessens extra-thyroidal radiation exposure [20,21], improving safety [22]. Additionally, a number of published comparisons have documented statistically not different, modest DTC recurrence rates after rhTSH-or THW-aided ablation [9-11, 14, 16, 23]. rhTSH has a relatively high acquisition cost. However, the literature suggests that from the societal and patient/family perspectives, this cost may be balanced by the benefits of shorter hospital length-of-stay (where this variable is determined by whole-body dose rate), shorter absence from work, and improved on-the-job performance. These advantages are related to the preservation of euthyroidism and hence, of cognitive and physical function, when rhTSH is used [24][25][26][27][28]. One study also suggests that from an institutional perspective, the rhTSH acquisition cost may at least partly be offset by increased "patient throughput," that is, more efficient use of radioiodine treatment rooms [28]. 
However, the preponderance of patients in publications regarding rhTSH-assisted versus THW-assisted ablation had low-intermediate postsurgical DTC recurrence risk; only two groups have published comparisons of the two modalities with respect to remnant eradication and disease persistence or recurrence focusing all or in part on high-risk DTC [9,[29][30][31]. The larger, more invasive primary tumors often characterizing high-risk disease might render complete cancer excision more difficult. Higher stage DTC also might be associated with increased risk of occult malignancy. Because of these challenges, it is important to compare outcomes in the postsurgical high-risk setting with rhTSH-aided versus with THW-aided ablation. We therefore undertook the present retrospective analysis. Endpoints, Patients, and Ethics. We examined rates of ablation success and of disease outcomes after mediumterm follow-up according to the TSH preparation method for ablation in 45 consecutive adults ablated at any of three Argentine referral centers from March 2002 to June 2009. This cohort had initial T3-T4/N0-N1-Nx/M0 staging according to the American Joint Committee on Cancer/Union Internationale Contre le Cancer (AJCC/UICC) system, 6th edition [32], with undetectable antithyroglobulin antibodies (TgAb) by immunometric assay at the time of ablation. All T3 patients had gross invasion and the entire cohort had high recurrence risk according to the Latin American Thyroid Society (LATS) classification [3] and intermediate or high risk according to the American Thyroid Association (ATA) classification [2]. M0 status was confirmed by postablation whole-body scintigraphy (WBS). All patients were totally thyroidectomized in a specialized center. Thirty-six (80%) also underwent lymph node dissection. In 10 of the 36 (28% of the subgroup), central and lateral neck dissection was performed when intrasurgical anatomopathological frozen section analysis verified lymph node metastasis. In the remaining 26 of the 36 (72% of the subgroup), central neck (level VI) dissection was mostly indicated after T3 status confirmation, when suspicious lymph nodes were noted during surgery, or when both conditions pertained. Based on postsurgical histological analysis, 7 of these 26 patients eventually were found to have microscopic central lymph node metastasis. Therefore, of the 36 patients undergoing lymph node dissection, 17, or 47%, ultimately had confirmed nodal involvement. Among the 45 patients in the overall cohort, ablation was aided by rhTSH (Thyrogen, thyrotropin alfa, Genzyme, Cambridge, MA, USA) in 18 (40%) and by THW in 27 (60%). The choice between rhTSH and THW was individualized according to physician and patient preferences and the patient's circumstances. Among the 18 rhTSH patients, indications for such preparation included poor general physical condition or advanced age (n = 6), patient preference (n = 6), generally due to desire to avoid hypothyroid morbidity or to decrease time missed from work or study, depression (n = 5), or cardiac disease (n = 1). Tables 1(a) and 1(b) summarize key baseline patient characteristics by treatment group. rhTSH patients were on average a decade younger at DTC diagnosis than were their THW counterparts, a statistically significant difference. Nonetheless, the rhTSH group had a significantly greater proportion of patients ≥45 years old (Table 1(a)). 
Moreover, although there were no significant intergroup differences when T or N classifications were considered as individual categories for patients of all ages (Table 1(a)), patients aged ≥45 years tended to have more advanced T and N classifications, and thus later AJCC/UICC stages, in the rhTSH group than in the THW group (Table 1(b)). [Table 1(b) fragment, reconstructed from the extracted text (TNM classification, AJCC/UICC stage, and patient counts in the two treatment groups, apparently rhTSH and THW respectively): Age < 45 years — T3N0, I, 2, 3; T3Nx, I, 0, 2; T3N1a, I, 2, 2; T3N1b, I, 0, 4; T4aN0, I, 0, 4; T4bN1a, I, 0, 1; T4bN1b, I, 0, 1. Age ≥ 45 years — T3N0, III, 7, 4; T3Nx, III, 0, 2; T3N1a, III, 2, 3; T4aN0, IVa, 0, 1; T4bN0, IVb, 3, 0.] The rhTSH patients were similar to their THW counterparts with respect to all other tested baseline variables. Institutional review board approvals were obtained for the study. Ablation Protocol. Our ablation protocol used fixed radioiodine activities based on the extent of initial disease, without adjustment according to the TSH preparation. Patients typically received 3.70 GBq (100 mCi) 131 I for T3 disease with gross extension beyond the thyroid capsule and N0 status, 5.55 GBq (150 mCi) for T3/N1a-N1b disease, and 7.40 GBq (200 mCi) for T4 tumor. T4 patients (n = 12) received a second therapeutic 131 I activity (mean ± standard deviation [SD] 3.37 ± 0.74 GBq [91 ± 20 mCi]), rhTSH-aided in all cases, a mean ± SD 9 ± 3 months after ablation. A low-iodine diet was prescribed from one week before radioiodine administration through two days afterwards. Pretherapeutic urinary iodine testing was not routine; however, patients were queried about exposure to possible sources of iodine excess, which was not reported in any case, but would have been grounds to delay 131 I administration. To reduce the risk of actinic thyroiditis from administering large amounts of radioiodine to bulky residues, we assessed preablation thyroid remnant size in some patients (n = 6) operated on by surgeons with whose thyroid procedure expertise we were unfamiliar. In those patients, a 3.7 MBq (100 µCi) 131 I tracer activity was given the day preceding, and cervical percentage uptake was ascertained just before ablative activity administration [33]. Uptake above our 3%-5% norm would have warranted consideration of a reduced ablative activity or referral for reoperation, but was not seen. Posttherapy WBS was performed 5-7 days after ablation and any second radioiodine therapy. TSH Preparation. rhTSH was given as two consecutive daily 0.9 mg intramuscular injections, with the tracer activity (when applied) administered at the time of the second injection and the ablative activity or second radioiodine therapy (T4 patients only) administered ∼24 hr after the second injection. rhTSH was given while patients were euthyroid with suppressed TSH and receiving levothyroxine, except that this hormone was briefly withdrawn, from 2 days before through 2 days after radioiodine therapy administration, to reduce the risk of iodine interference [34]. THW comprised at least 3 weeks without thyroid hormone, starting from thyroidectomy. Radioiodine was administered following that interval, in all cases with TSH levels above 50 mIU/L. Thyroglobulin (Tg)/TgAb Measurement. Samples for Tg and TgAb measurement were taken on day 5 after the first rhTSH injection in the rhTSH group and on the day of ablative radioiodine administration in the THW group. Tg and TgAb levels were assessed in one of three reference laboratories (depending on the center), using either of two commercial immunometric assays; the same laboratory and assay were used throughout a patient's follow-up.
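For clarity, the fixed-activity scheme stated above can be restated as a small lookup; this is purely a restatement of the described protocol for illustration, not a clinical tool, and the classification strings are hypothetical labels:

```python
# Sketch: the fixed ablative 131-I activity scheme described in the text, as a lookup.
def ablative_activity_gbq(t_class: str, n_class: str) -> float:
    if t_class == "T4":
        return 7.40                      # 200 mCi for T4 tumour
    if t_class == "T3" and n_class in ("N1a", "N1b"):
        return 5.55                      # 150 mCi for T3/N1a-N1b disease
    if t_class == "T3" and n_class == "N0":
        return 3.70                      # 100 mCi for T3 with gross extension, N0
    raise ValueError("classification outside the protocol described in the text")

print(ablative_activity_gbq("T3", "N1b"))  # -> 5.55
```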
Tg assays comprised the Elecsys Tg Electrochemiluminescence Immunoassay (Roche Diagnostics GmbH, Mannheim, Germany), which has a 0.5 µg/L detection limit, or the Immulite 2000 Tg Chemiluminiscence Assay (Siemens Corp., Los Angeles, CA, USA), with a 0.2 µg/L analytical sensitivity. TgAb assays comprised the Elecsys Anti-Tg Electrochemiluminescence Immunoassay (RSR Ltd., Pentwyn, Cardiff, UK), or the Immulite 2000 Anti-TG Ab chemiluminescent immunometric assay method (Siemens). For both TgAb assays, values >20 IU/mL were considered to be positive, and to render Tg measurements uninterpretable. Ablation status was assessed using rhTSH-stimulated Tg testing and rhTSH-stimulated diagnostic WBS (dxWBS; 150 MBq [4 mCi] activity) performed 6-12 (mean ± SD 9 ± 4) months after ablation. Ablation success was defined as negative stimulated Tg (<1 µg/L) in the absence of TgAb, plus absent or <0.1% thyroid bed uptake on dxWBS. Follow-Up Including Ablation Success Assessment. Neck ultrasonography (US) using an 11 MHz linear array transducer was performed every 6 months after ablation. Patients with measurable stimulated or unstimulated Tg, suspicious neck US findings, or both during follow-up underwent morphological or functional imaging or both, including computed tomography (CT) (n = 7 [39%] in the rhTSH group; n = 13 [48%] in the THW group) or 18fluorodeoxyglucose positron emission tomography (FDG-PET) (n = 6 [33%] in the rhTSH group; n = 4 [15%] in the THW group). All ultrasonographically suspicious nodules ≥1 cm in diameter underwent fine needle aspiration with measurement of Tg in the aspirate. Disease Status Definitions. We defined disease status according to the latest LATS [3] and ATA guidelines [2]. Patients had "no evidence of disease" (NED) when unstimulated and stimulated Tg were negative (<1 µg/L), TgAb were negative (<20 IU/mL), neck US was free of suspicious signs, and there were no pathological findings on any other imaging (WBS, radiography, CT, FDG-PET, or any other modality) or in any biopsy specimen. However, patients with a single stimulated Tg measurement ≥1 to ≤2 µg/L without additional signs of DTC were considered to have indeterminate status until a subsequent stimulated Tg measurement became negative or exceeded 2 µg/L; the latter increase was considered a sign of disease. Patients who never attained NED status were classified as having "persistent disease," while those who lost NED status were defined as having "recurrent disease." Disease sites were classified as local (thyroid bed), lymph node (metastasis confirmed by fine-needle aspiration biopsy with positive cytology), distant (metastasis confirmed by imaging), or unknown ("biochemical only") (stimulated Tg >2 µg/L without structural evidence of disease). 2.7. Statistics. Data are expressed as mean ± SD unless otherwise noted. Categorical comparisons were made using chi-square testing with the Fisher's exact test when appropriate. Analysis was performed using SPSS software (version 15.0.0: SPSS, Inc., Chicago, IL, USA). P values ≤ 0.05 were considered to be statistically significant. Table 2 provides key data regarding ablation and ablation success for both treatment groups. On average, the treatment groups did not differ regarding the ablative activity, the proportions of patients in different activity categories or receiving a second therapy, the cumulative activity, or the interval between ablation and ablation success assessment. 
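To illustrate the kind of 2×2 categorical comparison used here (chi-square or Fisher's exact test), a short sketch follows. The counts are reconstructed from the reported group sizes and success percentages (roughly 15/18 rhTSH versus 18/27 THW) purely for illustration; they are not taken from the paper's tables and will not necessarily reproduce the published P values:

```python
# Sketch: 2x2 categorical comparison of ablation success between treatment groups.
from scipy.stats import fisher_exact, chi2_contingency

table = [[15, 3],    # rhTSH: ablation success, failure (reconstructed, illustrative)
         [18, 9]]    # THW:   ablation success, failure (reconstructed, illustrative)

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"Fisher exact P = {p_fisher:.3f}; chi-square P = {p_chi2:.3f}")
```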
However, successful remnant ablation was observed in a significantly greater proportion of the rhTSH group than of the THW group (83% versus 67%, P = 0.02). [Table footnote, abbreviations: dxWBS, diagnostic whole-body scintigraphy; rhTSH, recombinant human thyroid-stimulating hormone; rxWBS, posttherapy whole-body scintigraphy; SD, standard deviation; Tg, serum thyroglobulin; THW, thyroid hormone withdrawal or withholding; TSH, thyrotropin.] Table 3 highlights the disease status at the end of follow-up according to the treatment group. The mean duration of follow-up was approximately 3 1/3 years in the rhTSH group and a year longer than that in the THW group, a statistically significant difference (P = 0.02). The rhTSH patients achieved NED status significantly more often and showed persistent disease significantly less often than did their THW counterparts. The groups did not differ with respect to recurrence rates. No patient developed TgAb or received more than the planned one (T3 patients) or two (T4 patients) ablative radioiodine therapies during follow-up. Sites of persistence and recurrence appeared to have a roughly similar distribution in the treatment groups; all recurrences affected lymph nodes, and there was no structural evidence of distant recurrence. The times to recurrence detection were 22 months in the rhTSH patient and 19 and 20 months, respectively, in the THW patients who lost NED status. Patients with neck lymph node metastases or recurrence in the neck lymph nodes were reoperated. The patients were reevaluated at 6 ± 4 months after reoperation with rhTSH-stimulated Tg testing and all had biochemical persistence without structural evidence of disease at the latest follow-up. Discussion This retrospective analysis in DTC patients with a high recurrence risk but no known distant metastases had three main findings regarding rhTSH versus THW stimulation of radioiodine thyroid remnant ablation. rhTSH use was associated with, firstly, a significantly higher ablation success rate and, secondly, significantly more frequent favorable medium-term disease outcomes, that is, more frequent NED status and less frequent persistent disease. Thirdly, despite our cohort's high-risk status, both TSH preparation methods appeared to be associated with modest medium-term recurrence rates that did not differ statistically. It is worth noting the context of these observations: more than double the proportion of the rhTSH group than of the THW group was ≥45 years old (78% versus 37%, Table 1(a)), and the rhTSH patients ≥45 years old tended to have more advanced T and N stages than did their THW counterparts (Table 1(b)), even though the groups did not differ significantly when T or N stages were considered as individual categories for patients of all ages (Table 1(a)). Our findings confirm those of the Santa Casa do Belo Horizonte [9] and Memorial Sloan Kettering Cancer Center (MSKCC) [29][30][31] groups of numerically similar or statistically not different or superior ablation success and disease outcome rates associated with rhTSH versus THW preparation of ablation in initial high-risk DTC patients. Regarding ablation success, the Belo Horizonte investigators noted respective 80% versus 79% rates (68% and 67% in patients with Tg > 1 µg/L at ablation) in the rhTSH (n = 77) and THW subgroups (n = 198) of a slightly lower risk cohort than ours (T3/N0-N1 but no T4 patients) [9].
The MSKCC investigators reported respective 16% versus 9% rates of "excellent" response to initial therapy and respective 13% versus 7% rates of "acceptable" response in their rhTSH (n = 69) and THW subgroups (n = 92) of patients with initial ATA high-risk classification; these rates did not significantly differ [30]. Our ablation success rates were 83% for the rhTSH group (n = 18) versus 67% for the THW group (P = 0.02) ( Table 2). It should be noted that the MSKCC "response to initial therapy" variable encompassed a longer postablation follow-up (2 years) than did our or the Brazilian investigators' "ablation success" variables (6-12 and 9-12 months, resp.). Additionally, both the Belo Horizonte and MSKCC groups included ultrasonographic findings in their assessment of response to ablation or "initial treatment," while we did not. With respect to disease outcome, the Belo Horizonte investigators observed numerically similar, low 9-12-month disease persistence rates in rhTSH patients (n = 70) and THW patients (n = 169) whose M0 status was confirmed by the postablation WBS: 7.1% versus 7.7%. After a presumably much longer follow-up (median not reported for the initial ATA high-risk patients, but 9 years for their overall cohort [N = 586]), the MSKCC group noted statistically not different disease outcomes for rhTSH and THW patients with initial ATA high-risk status: 17.1% NED, 82.6% disease persistence, and 0% recurrence rates for the rhTSH patients versus 15.2%, 83.7%, and 1.1%, respectively, in their THW counterparts. In our study sample, after a mean followup of ∼3.3 years in the rhTSH patients and ∼4.5 years in the THW patients, the NED rate was significantly higher (72% versus 59%, P = 0.03) and the disease persistence rate was significantly lower (22% versus 33%, P = 0.03) in the rhTSH group (Table 3). The differences in ablation success and disease outcomes among the Belo Horizonte, MSKCC, and our groups may be attributable to one or more of (1) different follow-up durations; (2) noninclusion of US findings among our ablation success criteria; (3) lack of T4 patients in the Belo Horizonte group; (4) inclusion of only N1 patients with especially extensive neck nodal involvement in the MSKCC cohort; (5) somewhat higher (by ∼700 MBq, ∼19 mCi) mean cumulative 131 I activities in our patients than in their Belo Horizonte counterparts-and perhaps also the MSKCC initial ATA high-risk patients (mean cumulative activity not reported for the latter subgroup, but ∼500 MBq [∼14 mCi] lower in the overall MSKCC study sample than in our cohort) [9,30]. Limitations of our study should be noted. Among these were its retrospective nature (shared with the Belo Horizonte and MSKCC studies), relatively small patient cohort, and discrepant mean follow-up durations for the treatment groups. Regarding the first of these limitations, our rhTSH and THW patients nonetheless had quite similar tested baseline and treatment characteristics; key characteristics that differed significantly (proportion of patients ≥45 years old, AJCC/UICC stage) presumably would have favored the THW group. Regarding the follow-up length, which was on average significantly shorter in rhTSH patients (Table 3), one would expect this difference to be relevant mainly to the observed DTC recurrence rate, which did not differ between the treatment groups, rather than to the NED or disease persistence rates, which differed in favor of rhTSH. 
Conclusion In a cohort of adult referral center patients with LATS and ATA initial high-risk DTC without known distant metastases, rhTSH-stimulated radioiodine thyroid remnant ablation was associated with significantly greater ablation success rates than was THW-aided ablation. Additionally, after mediumterm follow-up, rhTSH stimulation was associated with significantly more frequent NED status, significantly less frequent disease persistence, and statistically not different, low DTC recurrence rates. These results suggest that patients and clinicians do not have to consider initial disease classification in their choice of TSH preparation for postsurgical ablation in DTC without distant metastasis; these observations should be prospectively confirmed.
4,449.2
2012-12-06T00:00:00.000
[ "Medicine", "Biology" ]
Two Total Syntheses of Trigoxyphins K and L Two total syntheses are presented for trigoxyphins K and L, tricyclic terpenoids from Trigonostemon xyphophylloides. The first proceeds via electrophilic cyclization in A/C-ring substrates to close the B ring at C4–C5 and then 1O2-mediated hydroxybutenolide formation to trigoxyphin L, with Luche reduction leading to trigoxyphin K. The second route develops from tetralone ring expansion to a B/C-ring intermediate that, by one-step O-demethylation–lactonization–isomerization, affords trigoxyphin K and then trigoxyphin L following enolate oxygenation. General methods Procedures are presented in the order given in the Schemes, except for the natural products, which are described at the end of the section. All solvents for anhydrous reactions were obtained dry from Grubbs solvent dispenser units after being passed through an activated alumina column under argon. THF was additionally distilled from sodium/benzophenone ketyl under argon. Commercially available reagents were, in general, used as supplied; amines and dipolar aprotic solvents were purified by standard methods before use. "Petrol" refers to the fraction of light petroleum ether boiling in the range of 30-40 °C; "ether" refers to diethyl ether. Unless stated otherwise, all reactions were carried out in oven-dried glassware and under an inert atmosphere (N2 or Ar as specified); reactions performed above ambient temperature were heated using a thermostatically-controlled oil bath. Silica gel chromatography was carried out using Geduran Silicagel 60, particle size 40-63 µm. Thin-layer chromatography (TLC) was conducted after all reactions whenever practical, using Merck aluminium-backed Silicagel 60 F254 fluorescent-treated silica; visualisation was enabled by UV light (λmax = 254 nm) and staining with potassium permanganate or phosphomolybdic acid solution to give the retention factors (Rf) quoted. Compound names are as generated by PerkinElmer ChemDraw Professional 22.2. Melting points (mp) were recorded (uncorrected) in degrees Celsius (°C), using a Griffin MFB-700-010U melting point apparatus. IR spectra were recorded on a Bruker Tensor 27 FT-IR spectrometer as a thin film on a diamond ATR module. Only selected absorption maxima (νmax) are reported, in wavenumbers (cm−1). 1H and 13C NMR spectra were recorded using a Bruker AVIIIHD-400 spectrometer using the solvents specified. Chemical shifts are quoted in ppm downfield of tetramethylsilane (δ = 0) and referenced in MestReNova to the appropriate solvent peak: CDCl3, 7.26/77.16; C6D6, 7.16/128.06; (CD3)2CO, 2.05/29.84; CD3OD, 3.31/49.00. Coupling constants (J) are quoted in Hz, rounded to the nearest 0.5 Hz. All 1H NMR spectra are reported as follows: ppm (number of protons, multiplicity, coupling constants). High-resolution mass spectra (HRMS) were recorded by the staff at the Chemistry Research Laboratory (University of Oxford) using a Bruker Daltonics MicroTOF spectrometer; mass-to-charge ratios (m/z) are reported in Daltons.
The crude product was purified by flash chromatography (petrol/ether, 9:1 to 4:1) to afford the title compound (6) as a Iodomethane (1.18 mL, 19.0 mmol) was then added and the mixture was stirred for 14 h.The reaction was quenched with water (10 mL) and the mixture extracted with ether (3 × 20 mL).The combined organic extracts were washed with water (20 mL) and brine (20 mL), and were then dried (Na2SO4), filtered, and concentrated.The crude product was purified by silica gel chromatography (pentane/ethyl acetate, 9:1), affording ketone 20 as a colourless solid (1.05 g, 89%
723.8
2023-10-06T00:00:00.000
[ "Chemistry" ]
State-changing processes for ions in cold traps: LiH− molecules colliding with He as a buffer gas We report in the present study a quantum analysis of the collisional dynamics involving a negative ion, LiH− in its 2Σ+ ground electronic state, and He as a buffer gas in the environment of cold ion traps. The work focuses on the evaluation of the internal cooling collisional rates, treating both the anion's rotational quantum numbers and the spin-changing processes. The calculations are carried out over a range of energies capable of yielding the corresponding rates for state-changing events over a rather broad interval of temperatures, thus covering those usually reached in the cold traps experiments and even beyond to lower temperatures. Introduction The buffer-gas technique for cooling trapped molecular ions has been successfully established as a convenient method for producing internally cold molecular species, as well as for cooling their translational degrees of freedom. To be able to generate a broad variety of such cold species, which are then trapped for further processing, has provided new opportunities for a range of important applications: from precision measurements [1], to quantum information models [2], quantum-controlled chemistry [3] and the formation of such molecules in the interstellar medium (ISM) [4]. Although the above schemes are applicable to either cations or anions, the latter types of charged molecular species play a central role in a wide range of areas, from the chemistry of highly correlated systems [5] to various aspects of atmospheric science [6]. They also turn out to be relevant for a broad variety of molecular processes involving the ISM environments [7]. On the other hand, it is currently not as yet simple to experimentally investigate negative ions in a controlled manner at the cold and ultracold temperatures which are expected to be relevant for many of the processes mentioned above. Indeed, by using supersonic beam expansion methods or by trapping the particles and then acting on them, the use of electron cooling, buffer gas cooling or resistive cooling [8][9][10][11] have already achieved temperatures of a few kelvin, while the ability to cool molecular anions to subkelvin temperatures is still in the planning [12]. To achieve it would finally allow the investigation of their chemical and physical properties to be extended over a much enlarged range of possibilities. We already know that most temperature measurements usually rely on high-resolution spectroscopy which can then resolve the molecular Doppler profile in order to determine the kinetic temperature of the trapped ions [13], while the corresponding effective temperatures for their internal degrees of freedom have also been measured using rotationallyresolved spectroscopy for rotational temperatures [14] and sometimes also vibrational hot band analysis for the vibrational temperatures [15]. On the other hand, the recent work done in the Innsbruckʼs group on trapped molecular anions clearly demonstrates that the near-threshold, bound-free photodetachment spectroscopy can act as a 'thermometry scheme' for specific molecular anions which are kept into a multipole radiofrequency ion trap [16][17][18]. In such a scheme the ions are further subjected to buffer-gas cooling using cold helium gas, and the temperature of the latter can be varied between a few kelvin and room temperature. 
The above experiments indeed point at the real possibility of investigating a new class of chemical processes involving molecular anions where one would then be able to prepare the ionic partner in a specific initial, selected internal state. The latter can then be used to give us more specific information on the chemical paths along which the experiments could develop [18]. To complete the task as best as possible, however, one needs to collect additional data on the details of the collisional dynamics in the traps: How is the buffer gas effectively cooling the translational motions? How large would the internal rotational cooling rates be for a specific initial population and how efficiently are the internal states excited to higher levels by collisional excitations from the buffer gas, in clear competition with the collisional cooling paths? To try to answer such questions for the case of the OH − anion trapped with either He [18] or Rb [19] atoms, we have already studied in detail the forces at play and further employed them to obtain quantum data on the dynamics of state-changing collisions in the trap environments [18][19][20]. Any extension of such studies to other possible molecular anions, as those discussed in [12], would therefore require all of the above steps and the modelling of the ensuing collisional state-changing efficiency via realistic quantum calculations. In the supplementary material of [12], one of the light anionic molecules which was suggested to be suitable for internal states cooling down to the ground rotovibrational levels, under the conditions of cold ion traps, was the LiH − ( 2 Σ + ) system which has been known for a while as an interesting and stable anion. It has also been studied by us by looking at the dynamics of it internal rotational quenching at the microkelvin regimes under the collisional action of 3 He and 4 He projectiles [21]. In the present study we want to extend the analysis of [21] to more realistically higher temperatures which could then model and match the actual environment that was observed in the work on the OH − anion in [18]. We therefore intend to provide realistic data for the internal cooling of rotations in the case of LiH − anion trapped at a few kelvin of translational temperature and interacting with 4 He as the buffer gas uploaded into the trap. Hence, we shall span a more extended energy range with respect to our earlier study of [21] and provide new data on the efficiency of the production of LiH − ions in their ground rotovibrational state under the same experimental conditions as those reported by [18]. It is also interesting to note that a very recent computational study [22] on the dynamics of LiH − formation surmised that the processes of electron transfer and electron detachment were not expected to be very efficient, thus indicating that several paths to LiH − formation in its ( 2 Σ + ) electronic states could be used experimentally to successfully stabilize the anion. The next section 2 will briefly review the feature of the relevant potential energy surface for the LiH − as a rigid rotor and the 4 He projectile in its ground electronic state. The computational quantum dynamics will be summarized in section 3 and our results will be given by section 4. The final conclusions will be in section 5. The anisotropic interaction potential The original calculations of the interaction potential between the LiH − ( 2 Σ + ) anion and 4 He( 1 S) neutral atom have been described in detail in our earlier publication [21]. 
We therefore report here only a brief outline of its anisotropic features. The actual ab initio calculations were carried out using the MOLPROʼs suite of codes [23] and all the details over the basis set expansion choice, the number of points of the 2D potential energy surface, defined as ( | ) q V R r , eq , where the bond distance of the target anion was kept fixed at 1.697 Å, are all given in [21]. Here R and θ indicate, using Jacobi coordinates, the distance between the 4 He atom and the molecular centre-of-mass, while θ is the angle formed between that distance and the molecular bond: the q =  0 corresponds to the Li-H-He configuration. The system, as expected, turned out to be very anisotropic and with only a very shallow attractive well of about −15 cm −1 , located around q =  40 . The corresponding interaction was further extended to the long-range (LR) region by smoothly connecting the actual ab initio points with the expression: where the ( ) b f R n are the Tang-Toennies damping functions, discussed in detail in our earlier work [24]. The C 4 , C 5 , coefficients correspond to the dipole polarizability term and to the dipole-induced dipolar term, respectively [21]. To provide an overall view of the interaction forces, we present in figure 1 the shape of the corresponding multipolar coefficients for the full rigid-rotor (RR) potential: where: RR One clearly sees from the radial strength of the coefficients that the strong anisotropy of this potential is revealed by the fact that the isotropic polarization is only very weekly attractive (not visible on this energy scale) while the dipoledriven polarization term, depending on the ( ) q P cos 1 angular function, is the only one showing a marked attractive interaction in the short-range of the potential. As a result, we expect strong effects from this dominant coupling term (together with the l = 3 contribution) on the relative sizes of the rotationally inelastic, state-changing, collisions which we shall describe below. An outline of the quantum scattering equations The time-independent scattering eigenstates , as yet without explicit inclusion of spin-rotation contributions, ( ) Y a + L , , are usually expanded in terms of diabatic asymptotic target eigenstates: where α is the collective index of the molecular initial state while a¢ runs over all the included asymptotic states of the diatomic target. L is the relative angular momentum between the atom and the target. The ( ) terms are the asymptotic eigenstates of the whole system (the channel eigenstates). The G terms are the channel components of the scattering wave function, which must be determined by solving the Schrödinger equation, subject to the boundary conditions which we have extensively discussed in our earlier work. The scattering algorithm we are using in the present study is described in [25]. Once the dynamics is cast in the conventional, timeindependent formulation (e.g. see [26]), then the coupled equations in the space-fixed (SF) reference frame of the multichannel scattering problem for M channels can be expressed in atomic units as is the diagonal matrix of the asymptotic (squared) wavevectors, and G a column vector for the radial ¢ G s channel components of the scattering wave function. The parameter V, m = V U 2 , represents the potential coupling matrix. In the case of atom-RR diatomic scattering is given by where α denotes the diatom initial states, ( ) l V R the multipolar coefficients already shown in equation (3) and by figure 1. 
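To make the two standard ingredients mentioned in this section concrete — Tang–Toennies damping of long-range multipolar terms and the Legendre projection that yields the multipolar coefficients V_λ(R) — a small numerical sketch follows. The model potential used here is a placeholder and is not the actual LiH−–He ab initio surface:

```python
# Sketch: Tang-Toennies damping and Legendre projection onto multipolar coefficients.
import numpy as np
from math import factorial
from scipy.special import eval_legendre
from numpy.polynomial.legendre import leggauss

def tang_toennies(n, x):
    """Tang-Toennies damping f_n(x) = 1 - exp(-x) * sum_{k=0}^{n} x^k / k!."""
    return 1.0 - np.exp(-x) * sum(x**k / factorial(k) for k in range(n + 1))

def v_lambda(v_of_theta, lam, npts=64):
    """V_lambda(R) = (2*lam+1)/2 * Int_{-1}^{1} V(theta) P_lam(cos theta) d(cos theta)."""
    x, w = leggauss(npts)  # Gauss-Legendre nodes/weights in cos(theta)
    return 0.5 * (2 * lam + 1) * np.sum(w * v_of_theta(np.arccos(x)) * eval_legendre(lam, x))

# Placeholder anisotropic potential at one fixed R (arbitrary units):
# V(theta) = -10 - 25 P1(cos theta) + 5 P2(cos theta), so the projection should
# recover V_0 = -10, V_1 = -25, V_2 = 5, V_3 ~ 0.
model = lambda th: -10.0 - 25.0 * np.cos(th) + 5.0 * (3 * np.cos(th)**2 - 1) / 2
for lam in range(4):
    print(f"V_{lam} =", round(v_lambda(model, lam), 3))
print("f_6(2.5) =", round(float(tang_toennies(6, 2.5)), 4))
```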
The f_λ terms represent the integrals over the angular coordinates of the coupling potential in the relevantly required basis set. The special case where spin-rotation coupling appears was discussed by the earlier work of Corey and McCourt [27] and will be further discussed in more detail in the subsection below. The scattering equations of (5) were solved by employing our in-house scattering code ASPIN [28]. Numerical convergence of the final S-matrix elements, and hence of the final state-to-state inelastic cross sections, was achieved to better than 1%. Some of the specific parameters employed will be further discussed in the following section. In the present case we further need to take into account the doublet electronic state of the target molecule. Hence, we have to consider two levels for each total angular momentum, since the rotational levels are now split by spin-rotation coupling contributions. In the pure Hund's case (b) the electronic spin momentum S couples with the nuclear rotational angular momentum N (N = R for a Σ state) to form the total angular momentum J, given by J = N + S (equation (7)). Here each level (with N ≥ 1) is split into two sublevels F1 and F2, and the corresponding rotational wave functions are the coupled states |N S J m⟩ of equation (8), with F1 corresponding to J = N + 1/2 and F2 to J = N − 1/2, where m is the projection of the J quantum number along the SF z axis. For a given N value, the energies of the two levels in equation (8) are [29] E_F1(N) = B N(N + 1) + (γ/2) N and E_F2(N) = B N(N + 1) − (γ/2)(N + 1) (equations (9)), where B is the rotational constant and γ is the spin-rotation interaction. In this representation each state has a definite parity with respect to the inversion of the SF axis [27]. The energies of some of the fine-structure levels are given in table 1, where the columns are labelled by the quantities defined via equations (8). The actual values of the target parameters were B = 6.657 cm−1 and γ = 6.124 × 10−4 cm−1. The error bars for each parameter involve the third significant figure. The spin-parity index is defined as 'even' or 'odd' for the two spin states |+1/2⟩ and |−1/2⟩ indicated in equation (9). We checked the importance of centrifugal distortion on the rotational energy of LiH− by extracting the corresponding constants from the computed potential energy curve of the molecular anion. We used a complete active space self-consistent field wave function, an aug-cc-pV5Z basis set for the H atom and a cc-pV5Z basis set for the Li atom. The CAS used in the calculation comprised 3 active electrons within 5 active orbitals. We found a slight change in the value of B, now 6.659 cm−1, and a distortion correction D = −8.36 × 10−4 cm−1. Given the smallness of the latter, we did not include the centrifugal distortion and employed instead the simpler formulas (9), as already discussed in [21]. As mentioned before, we give in table 1 some of the numerical values associated with the fine-structure level splitting of equation (8) and the corresponding (N, J) values associated with them. We further summarize in figure 2 a pictorial view of the levels, where the smallness of the energy splitting due to the fine-structure effects is also clearly displayed. From the energy spacings of both the table and the figure we see that the ΔN ≠ 0 transitions are obviously much larger than the spin-rotation splitting values, which correspond to spin-reorientation processes within each term of the N manifold.
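A brief numerical sketch evaluating the Hund's case (b) level energies written above, using the B and γ values quoted in the text; the printed values are illustrative of the Table 1 entries rather than copied from it:

```python
# Sketch: fine-structure level energies of a 2-Sigma rotor in Hund's case (b),
# using B = 6.657 cm^-1 and gamma = 6.124e-4 cm^-1 as quoted in the text.
B = 6.657        # rotational constant, cm^-1
gamma = 6.124e-4 # spin-rotation constant, cm^-1

def level_energies(N):
    """Return (E_F1, E_F2) in cm^-1 for rotational level N (N >= 1)."""
    e_rot = B * N * (N + 1)
    e_f1 = e_rot + 0.5 * gamma * N          # F1: J = N + 1/2
    e_f2 = e_rot - 0.5 * gamma * (N + 1)    # F2: J = N - 1/2
    return e_f1, e_f2

for N in range(1, 4):
    e1, e2 = level_energies(N)
    print(f"N={N}: E(F1, J=N+1/2) = {e1:.4f} cm^-1   E(F2, J=N-1/2) = {e2:.4f} cm^-1"
          f"   splitting = {e1 - e2:.2e} cm^-1")
```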
If one further recognizes that the present interaction potential essentially describes electrostatic interactions, without direct magnetic terms, one already sees that spin reorientation can only come in via the anisotropy of the angular momentum coupling, with the spin term acting essentially as a spectator. This feature has been discussed many times in the relevant literature (e.g. see [30]) and will be taken up again below when analysing our present findings.

Computed inelastic cross sections

It is important to note at this point, as already discussed in the relevant literature, that the present coupling situation includes an additional degree of freedom which does not exist in the simpler case of a ¹S atom and a ¹Σ molecule [27, 30, 31], namely the parity index of the initial and final states described by equation (8). Collisions can be inelastic in S and elastic in J, and vice versa; they can also be inelastic in both S and J. We know, in fact, that in the present case N and S are coupled (see equation (7)) to form the total angular momentum J. Thus neither N nor S is fixed in direction: both precess around J. For J = N − 1/2, S and N form an obtuse angle, N·S < 0; the opposite holds for J = N + 1/2. Hence, since there are no spin-dependent terms in the interaction potential, the collisions in the trap cannot directly change either the magnitude or the orientation of S. They can, however, change the N vector, and therefore, after the collision, the recoupling of the new N′ with S to form J can change both |J| and the spin parity from |+1/2⟩ to |−1/2⟩. We therefore see that the anisotropy of the (fully electrostatic) interaction potential can modify the collisional torque and thereby cause 'spin-flip' effects through the recoupling of angular momenta after the collision has occurred. This analysis was clearly presented years ago by Alexander et al [30, 32], who introduced the recoupling formalism that expresses the cross section for a particular NJ → N′J′ transition as a sum of irreducible tensor opacities P^K (equation (10)), in which the weights involve a 6-j symbol. In that expression K is a momentum-transfer index which can be interpreted classically as the degree of reorientation of the nuclear rotational angular momentum during the collision. Equation (10) can therefore be regarded as a weighted sum of probabilities associated with increasingly large transfers of angular momentum. The low-order tensor opacities, and therefore the small collisional transfers of angular momentum, correlate with small reorientation angles of the nuclear rotational angular momentum, as mentioned earlier. The contribution of each term decreases as K increases, which is in fact a manifestation of the propensity towards conservation of the orientation of N during collisions. Various aspects of this analysis will be brought to light by the calculations of the present study. Three specific examples of their behaviour are given by the inelastic cross sections shown in figures 3-5, where three different types of processes are reported starting from three different initial N values, N = 1 to N = 3 from figure 3 to figure 5. All the initial states are associated with the even spin parity of the S = +1/2 orientation.
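The recoupling weights in equation (10) involve 6-j symbols. As an illustration only (the paper's exact expression was lost in extraction), such coefficients can be evaluated with standard angular-momentum routines:

```python
# Illustrative only: evaluating the 6-j recoupling coefficients that enter a
# tensor-opacity expansion such as equation (10); the exact weighting factors
# of the paper's formula are not reproduced here.
from sympy import Rational as R
from sympy.physics.wigner import wigner_6j

S = R(1, 2)          # electron spin of the 2-Sigma target
N, J = 1, R(3, 2)    # initial level |N = 1, J = N + 1/2>
Np, Jp = 1, R(1, 2)  # final level   |N' = 1, J' = N - 1/2> (pure spin-flip)

# K = 0 is forbidden by the triangle rule when J != J', so pure spin-flip
# transitions necessarily involve an angular momentum transfer index K >= 1.
for K in (1, 2):
    sixj = wigner_6j(J, Jp, K, Np, N, S)
    print(f"K = {K}:  {{J J' K; N' N S}} = {float(sixj):+.6f}")
```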
The quantities reported in all three figures represent the possible outcomes of the inelastic collisions outlined in the previous qualitative discussion. (i) 'Pure spin-flip' processes are those in which the transition starts from |N, +1/2⟩ and ends in |N′ = N, −1/2⟩: through the recoupling terms of equation (10) the reorientation of J controls the spin-parity change while the orientation of N is conserved, and the P^K contribution with K = 0 increases since the angular momentum transferred to overall reorientation vanishes. (ii) The processes labelled 'rotational cooling' describe rotational state-changing collisions in which both N and S can change: all transitions from the selected initial N go down to all available N′ < N levels and to their S-controlled sublevels. For the level structure of figure 2, only N′ = 0 is available from the N = 1 initial state. (iii) The processes labelled 'rotational heating' describe the opposite collisional events, in which transitions occur from the initially selected state to all accessible higher final states (excluding the purely elastic channels), again including spin-parity changes.

The overall behaviour of these three types of inelastic processes is fairly similar in the three figures and can be summarized as follows. (1) In the low-collision-energy regime (roughly from 10⁻³ cm⁻¹ to about 40 cm⁻¹) the parity-reorientation process dominates the collisions, since it corresponds to the recoupling situation in which no angular momentum is transferred and the electronic spin parity changes because the anisotropy of the interaction reorients the rotational angular momentum N. (2) As the collision energy increases into the region of realistic trap conditions, the collisional cooling of molecular rotations involves changes in both N and S through the reorientation of J, and the sum of these contributions overtakes the pure 'spin-flip' processes since it also includes cases in which the spin parity changes from +1/2 to −1/2. These contributions increase at higher collision energies because the corresponding cross sections extend their range of contributing relative angular momenta L, so that a larger number of L values feed the P^K probabilities of equation (10). In the lower energy range, in fact, fewer L values contribute and the sum in equation (10) has fewer terms with K ≠ 0, while the latter dominate over the spin-parity-changing processes [30, 32] as the collision energy increases. (3) Another interesting point is the large size of the 'rotational heating' cross sections beyond about 100 cm⁻¹ of collision energy. From the level positions of figure 2 we see that the accessible excitation processes become more numerous, and the variety of transitions involving both rotational-state changes and spin-parity changes increases, so that the excitation channels come to dominate over the cooling channels, which are instead larger at lower energies.

One should also keep in mind that the present study treats the molecular target as a rigid rotor in its ground vibrational level. The weakness of the interaction with He atoms, and the energy range considered, make vibrational excitations insignificant for the present study.
The RR approximation further implies that the anharmonicity of the ground-state vibrational wavefunction is very small; we have tested this assumption and found it to hold for the target anion. We therefore consider the present treatment to handle realistically the collisional energy-transfer processes in the title system.

Investigating the resonant features

The calculations reported and discussed in figures 3-5 show several marked structures in the computed cross sections that can be attributed to resonant behaviour and are worth analysing more closely, in order to gain further insight into the quantum dynamics of the light system at hand. The most prominent of them appears in all types of cross sections and is located at low collision energies, around 30 × 10⁻³ cm⁻¹. As is well known, resonant structures can originate either from closed-channel effects, as in Feshbach resonances, or from dynamical trapping behind centrifugal barriers in open channels, as in shape resonances [33]. To discriminate between these options, we repeated the calculations around that range of collision energies after excluding all the closed channels that had been included in the previous calculations: the data in figures 3-5 included at least four closed J states to reach convergence at all energies. The results are compared in the two panels of figure 6, where we show as an example one of the initial molecular states discussed in the previous figures, i.e. the transitions from the |N = 3, S = +1/2⟩ initial state to all the lower ones. The following information can be gathered from the modified behaviour of the cross sections in the two panels of figure 6: (i) as expected, the spin quantum number plays the role of a simple spectator: parity-changing and parity-conserving processes do not affect the coupled-channel dynamics, and all cross-section curves shown in the present work are essentially the same in both panels of that figure; (ii) since we are considering in this example the 'rotational cooling' processes, we see that as −ΔN increases along each series of cross sections their values decrease quite markedly; around the resonance peak, in fact, the cross sections change by more than three orders of magnitude in going from −ΔN = 0 to −ΔN = 3, and the possible role of closed-channel dynamics becomes more marked as −ΔN increases; (iii) additionally, spin-conserving cross sections are uniformly larger than those in which both N and S change: in the former case the larger momentum-transfer terms in equation (10) play a greater role than in the latter, where spin-parity changes restrict the K values contributing to the probabilities of equation (10). This result is similar to what we discussed for figures 3-5; (iv) a remarkable finding is that essentially no change in size is seen in the ΔN = 0 and −1 cross sections when the closed channels are excluded from the computed dynamics. The resonances in these channels are therefore largely due to open-channel trapping of the complex of light atoms behind the centrifugal barriers of the few partial waves contributing at such low energies (shape resonances).
As −ΔN increases, however, the changes in the cross sections grow, suggesting that closed-channel dynamics plays a larger role for the transitions with −ΔN > 1. The opacity function was also computed at the energy of another resonant structure appearing in the 'rotational cooling' processes reported earlier in figure 3. The opacity data as a function of the total relative angular momentum L (classically speaking, the collisional impact parameter) are reported in figure 7. In this case only the ΔN = −1 transition is possible, and we see once more that no visible changes in the cross sections occur when all the closed channels are excluded from the dynamics. Furthermore, the dominant relative angular momenta cover a broad range of opacity values and increase their contributions steadily as L increases, with the largest contributions clustered around L = 25. These values indicate that mainly one type of shape resonance contributes to the resonant cross section, while the lower L values tunnel through the barrier more rapidly and therefore form a trapped complex less efficiently. In other words, the resonant features appear to be linked to centrifugal-trapping dynamics between weakly interacting light particles rather than to virtual excitations into closed rotational states of LiH⁻, the latter being the signature of a Feshbach resonance.

Another example of the above behaviour is provided by yet another cross-section feature, the one appearing in figure 4 around 60 cm⁻¹ for the cross sections associated with the 'spin-flip' changes driven by the recoupling dynamics. The relevant behaviour is reported in figure 8, which shows the opacity functions for processes that change only the spin symmetry and leave the nuclear rotational angular momentum N of the initial LiH⁻ target unchanged. The coupled-equation expansion involves a set of energetically closed J values extending up to J = 13/2 in order to reach numerical convergence. When the number of closed channels is reduced to J_max = 9/2, the spin-changing opacity changes very little, while it is modified more markedly when the spin-changing transitions involve only the levels up to J = 7/2. Here, again, the coupling is modified not by state-changing dynamics (here ΔN = 0) but by the extension of the recoupling terms in equation (10): when only two J values are available, the reorientation processes between N and J are artificially reduced and the opacity function for that process changes. The closed channels are therefore needed not to control the dynamics (as shown by the very similar L-dependence of all the cross sections) but to describe correctly the recoupling between N and J when S changes parity.

Calculations of state-changing rates

The previous cross sections allow us to obtain the corresponding state-changing rates over a range of temperatures that realistically describes a cold ion trap. We computed the state-changing rates by convolving the cross sections over the Boltzmann distribution of relative velocities at the trap's translational temperature,

k_{i→f}(T) = (8 / (π μ k_B³ T³))^{1/2} ∫₀^∞ σ_{i→f}(E) E exp(−E / k_B T) dE,

where k_B is Boltzmann's constant, E the relative collision energy, μ the reduced mass of the collision partners and T the trap's translational temperature.
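A minimal numerical sketch of this convolution follows; the cross-section table used here is a hypothetical placeholder, not the computed data of the paper.

```python
# Sketch of the Boltzmann average of a state-to-state cross section into a rate
# coefficient k(T). The cross-section values below are placeholders only.
import numpy as np

KB = 1.380649e-23           # J/K
CM1_TO_J = 1.986445857e-23  # 1 cm^-1 in joules
AMU = 1.66053907e-27        # kg
mu = (4.0026 * 8.0239) / (4.0026 + 8.0239) * AMU   # reduced mass of He + LiH- (approx.)

# Hypothetical cross sections (cm^2) on an energy grid in cm^-1.
E_cm1 = np.logspace(-3, 3, 400)
sigma_cm2 = 1e-14 / (1.0 + E_cm1)   # placeholder shape only

def rate_coefficient(T):
    """Thermally averaged rate constant in cm^3/s at temperature T (K)."""
    E_J = E_cm1 * CM1_TO_J
    sigma_m2 = sigma_cm2 * 1e-4
    pref = np.sqrt(8.0 / (np.pi * mu * (KB * T) ** 3))
    integrand = sigma_m2 * E_J * np.exp(-E_J / (KB * T))
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E_J)))
    return pref * integral * 1e6     # m^3/s -> cm^3/s

for T in (10, 20, 50):
    print(f"T = {T:3d} K   k = {rate_coefficient(T):.3e} cm^3/s")
```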
We have calculated the above rates from a few kelvin up to a trap temperature of 50 K, employing cross-section values up to 1000 cm⁻¹. For the convergence of the integration we started from E values just above threshold and employed, for each state-to-state rate, an unevenly spaced grid of energy values; numerical convergence was checked to be better than 10⁻². The data presented below cover both 'rotational cooling' and 'rotational heating' processes and include all cross sections to and from all available N values for each initial state. The rates for the pure spin-changing process (ΔN = 0) are also reported. The results for the lowest three nuclear rotational angular momenta, N = 1, 2 and 3, are given in figures 9-11; in each figure the lower panel reports the 'rotational heating' processes with and without spin-symmetry changes. The following comments help in reading the data of figure 9: (i) as expected, the ΔN = 0 rates are the largest and describe the reorientation of N around J after the spin-parity change; as discussed before, K goes to zero and the recoupling effects in equation (10) dominate the process; (ii) the state-changing 'rotational cooling' rates are also fairly large and become comparable in size as the trap temperature approaches 50 K, the largest differences being seen around 5-10 K, where fewer L values contribute to the reorientation of N and J; (iii) the 'rotational heating' processes are largest for ΔN = +1 and favour the situation where ΔN = ΔJ and the spin symmetry does not change [30, 32]. They increase slowly with temperature and, at 50 K, still remain smaller by one or more orders of magnitude than the 'rotational cooling' processes of the upper panel. When we examine the results of figures 10 and 11, the general trend remains essentially the same as in figure 9, although more transitions are now possible since the initial states selected for the LiH⁻ target are |N = 2, S = +1/2⟩ in figure 10 and |N = 3, S = +1/2⟩ in figure 11. What we basically see, therefore, is that the state-changing processes with ΔN = −n are largely dominated by the ΔN = −1 transition in both sets of data. Furthermore, the corresponding ΔN = +1 excitation processes remain much less efficient than the cooling processes, while dominating the 'rotational heating' events. As noted before, the ΔN = ΔJ processes remain the preferred ones.

Conclusions and outlook

In the present study we have analysed in some detail the quantum dynamics of state-changing collisions of a simple molecular anion, LiH⁻ in its ²Σ⁺ ground state, at the relatively low temperatures of a cold trap, with ⁴He(¹S) as the buffer-gas partner. The special situation created by the doublet electronic configuration of the molecular partner has been taken into account in order to discuss, at least from the computational viewpoint, the fine-structure effects in the ensuing dynamics. The resonance features appearing in the various types of cross sections examined were briefly analysed and indicate that they arise chiefly from open-channel trapping by the relative angular momentum of the colliding light partners, rather than from virtual excitations of the molecular anion's states during the collisions.
It therefore seems that the fairly weak interaction forces of the system are chiefly responsible for this behaviour. The behaviour of the rotational cooling rates also indicates general similarities between the present system and one analysed earlier experimentally, OH⁻(¹Σ⁺) with ⁴He(¹S) [18]. In that case the computed Δj = 1 state-changing rates turned out to depend very little on trap temperature in the range between 10 and 50 K and to be of the order of 5 × 10⁻¹¹ cm³ s⁻¹. Our corresponding rates for ΔN = −1 (see figure 9) start around 5 × 10⁻¹¹ cm³ s⁻¹ at 10 K but vary markedly with temperature up to 50 K, roughly doubling to about 10⁻¹⁰ cm³ s⁻¹. The rotational-heating, state-changing cross sections also follow comparable patterns: for OH⁻ with He, the rotational excitation rates from the |j = 0⟩ level varied from about 10⁻¹³ cm³ s⁻¹ at 10 K to about 5 × 10⁻¹¹ cm³ s⁻¹ at 50 K, while our calculations of figure 9 (lower panel) show the ΔN = +1 excitation rates from |N = 1, S = +1/2⟩ varying from 5 × 10⁻¹³ cm³ s⁻¹ at 10 K up to about 5 × 10⁻¹¹ cm³ s⁻¹ at 50 K. The two systems therefore exhibit very similar behaviour, at least when compared through computational models. Since the experiments of [18] further indicate remarkable agreement between measurements and calculations around a trap temperature of 15 K, it could be interesting to design similar experiments for the LiH⁻ anion under trap conditions: the calculations suggest, in fact, that the size of the state-changing probabilities might turn out to be rather similar and thus also be observable.
Neural Semantic Role Labeling with Dependency Path Embeddings

This paper introduces a novel model for semantic role labeling that makes use of neural sequence modeling techniques. Our approach is motivated by the observation that complex syntactic structures and related phenomena, such as nested subordinations and nominal predicates, are not handled well by existing models. Our model treats such instances as sub-sequences of lexicalized dependency paths and learns suitable embedding representations. We experimentally demonstrate that such embeddings can improve results over previous state-of-the-art semantic role labelers, and showcase qualitative improvements obtained by our method.

Introduction

The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., "who" did "what" to "whom"). Semantic roles provide a layer of abstraction beyond syntactic dependency relations, such as subject and object, in that the provided labels are insensitive to syntactic alternations and can also be applied to nominal predicates. Previous work has shown that semantic roles are useful for a wide range of natural language processing tasks, with recent applications including statistical machine translation (Aziz et al., 2011; Xiong et al., 2012), plagiarism detection (Osman et al., 2012; Paul and Jamal, 2015), and multi-document abstractive summarization (Khan et al., 2015).

The task of semantic role labeling (SRL) was pioneered by Gildea and Jurafsky (2002). In their work, features based on syntactic constituent trees were identified as most valuable for labeling predicate-argument relationships. Later work confirmed the importance of syntactic parse features (Pradhan et al., 2005; Punyakanok et al., 2008) and found that dependency parse trees provide a better form of representation to assign role labels to arguments (Johansson and Nugues, 2008).

Most semantic role labeling approaches to date rely heavily on lexical and syntactic indicator features. Through the availability of large annotated resources, such as PropBank (Palmer et al., 2005), statistical models based on such features achieve high accuracy. However, results often fall short when the input to be labeled involves instances of linguistic phenomena that are relevant for the labeling decision but appear infrequently at training time. Examples include control and raising verbs, nested conjunctions or other recursive structures, as well as rare nominal predicates. The difficulty lies in that simple lexical and syntactic indicator features are not able to model interactions triggered by such phenomena. For instance, consider the sentence He had trouble raising funds and the analyses provided by four publicly available tools in Table 1 (mate-tools, Björkelund et al. (2010); mateplus, Roth and Woodsend (2014); TensorSRL, Lei et al. (2015); and easySRL, Lewis et al. (2015)).

Table 1: Outputs of SRL systems for the sentence He had trouble raising funds. Arguments of raise are shown with predicted roles as defined in PropBank (A0: getter of money; A1: money). Asterisks mark flawed analyses that miss the argument He.

Despite all systems claiming state-of-the-art or competitive performance, none of them is able to correctly identify He as the agent argument of the predicate raise. Given the complex dependency path relation between the predicate and its argument, none of the systems actually identifies He as an argument at all.
In this paper, we develop a new neural network model that can be applied to the task of semantic role labeling. The goal of this model is to better handle control predicates and other phenomena that can be observed from the dependency structure of a sentence. In particular, we aim to model the semantic relationships between a predicate and its arguments by analyzing the dependency path between the predicate word and each argument head word. We consider lexicalized paths, which we decompose into sequences of individual items, namely the words and dependency relations on a path. We then apply long short-term memory networks (Hochreiter and Schmidhuber, 1997) to find a recurrent composition function that can reconstruct an appropriate representation of the full path from its individual parts (Section 2). To ensure that representations are indicative of semantic relationships, we use semantic roles as target labels in a supervised setting (Section 3).

By modeling dependency paths as sequences of words and dependencies, we implicitly address the data sparsity problem. This is the case because we use single words and individual dependency relations as the basic units of our model. In contrast, previous SRL work only considered full syntactic paths. Experiments on the CoNLL-2009 benchmark dataset show that our model is able to outperform the state of the art in English (Section 4), and that it improves SRL performance in other languages, including Chinese, German and Spanish (Section 5).

Dependency Path Embeddings

In the context of neural networks, the term embedding refers to the output of a function f within the network, which transforms an arbitrary input into a real-valued vector output. Word embeddings, for instance, are typically computed by forwarding a one-hot word vector representation from the input layer of a neural network to its first hidden layer, usually by means of matrix multiplication and an optional non-linear function whose parameters are learned during neural network training.

Here, we seek to compute real-valued vector representations for dependency paths between a pair of words w_i, w_j. We define a dependency path to be the sequence of nodes (representing words) and edges (representing relations between words) to be traversed on a dependency parse tree to get from node w_i to node w_j. In the example in Figure 1, the dependency path from raising to he is raising -NMOD-> trouble -OBJ-> had <-SBJ- he. Analogously to how word embeddings are computed, the simplest way to embed paths would be to represent each sequence as a one-hot vector. However, this is suboptimal for two reasons: firstly, we expect only a subset of dependency paths to be attested frequently in our data, and therefore many paths will be too sparse to learn reliable embeddings for them; secondly, we hypothesize that dependency paths which share the same words, word categories or dependency relations should impact SRL decisions in similar ways. Thus, the words and relations on the path should drive representation learning, rather than the full path on its own. The following sections describe how we address representation learning by means of modeling dependency paths as sequences of items in a recurrent neural network.
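As a concrete illustration of the path definition above (our own sketch, not code from the paper), the following snippet extracts the lexicalized path between a predicate and an argument head from a toy dependency tree for He had trouble raising funds:

```python
# Sketch: lexicalized dependency path between two tokens in a toy parse of
# "He had trouble raising funds". Tree structure and labels are illustrative.
HEADS = {"He": "had", "trouble": "had", "raising": "trouble", "funds": "raising", "had": None}
RELS = {"He": "SBJ", "trouble": "OBJ", "raising": "NMOD", "funds": "OBJ"}

def ancestors(tok):
    chain = [tok]
    while HEADS[tok] is not None:
        tok = HEADS[tok]
        chain.append(tok)
    return chain

def dependency_path(source, target):
    """Items on the path source -> ... -> target, alternating words and relations."""
    up, down = ancestors(source), ancestors(target)
    common = next(t for t in up if t in down)          # lowest common ancestor
    path = []
    for tok in up[:up.index(common)]:                  # climb from source to the ancestor
        path += [tok, RELS[tok] + "^"]                 # "^" marks an upward edge
    path.append(common)
    for tok in reversed(down[:down.index(common)]):    # descend towards the target
        path += [RELS[tok] + "v", tok]                 # "v" marks a downward edge
    return path

print(dependency_path("raising", "He"))
# ['raising', 'NMOD^', 'trouble', 'OBJ^', 'had', 'SBJv', 'He']
```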
Recurrent Neural Networks

The recurrent model we use in this work is a variant of the long short-term memory (LSTM) network. It takes a sequence of items X = x_1, ..., x_n as input, recurrently processes one item x_t ∈ X at a time, and finally returns one embedding state e_n for the complete input sequence. For each time step t, the LSTM model updates an internal memory state m_t that depends on the current input as well as the previous memory state m_{t−1}. In order to capture long-term dependencies, a so-called gating mechanism controls the extent to which each component of a memory cell state will be modified. In this work, we employ input gates i, output gates o and (optional) forget gates f. We formalize the state of the network at each time step t through the usual gate and memory update equations (1)-(5). In each equation, W describes a matrix of weights to project information between two layers, b is a layer-specific vector of bias terms, and σ is the logistic function. Superscripts indicate the corresponding layers or gates. Some models described in Section 3 do not make use of forget gates or memory-to-gate connections. In case no forget gate is used, we set f_t = 1. If no memory-to-gate connections are used, the terms in square brackets in (1), (2), and (4) are replaced by zeros.

Embedding Dependency Paths

We define the embedding of a dependency path to be the final memory output state of a recurrent LSTM layer that takes a path as input, with each input step representing a binary indicator for a part-of-speech tag, a word form, or a dependency relation. In the context of semantic role labeling, we define each path as a sequence from a predicate to its potential argument. Specifically, we define the first item x_1 to correspond to the part-of-speech tag of the predicate word w_i, followed by its actual word form, and the relation to the next word w_{i+1}. The embedding of a dependency path corresponds to the state e_n returned by the LSTM layer after the input of the last item, x_n, which corresponds to the word form of the argument head word w_j. An example is shown in Figure 2.

The main idea of this model and representation is that word forms, word categories and dependency relations can all influence role labeling decisions. The word category and word form of the predicate first determine which roles are plausible and what kinds of path configurations are to be expected. The relations and words seen on the path can then manipulate these expectations. In Figure 2, for instance, the verb raising complements the phrase had trouble, which makes it likely that the subject he is also the logical subject of raising.

By using word forms, categories and dependency relations as input items, we ensure that specific words (e.g., those which are part of complex predicates) as well as various relation types (e.g., subject and object) can appropriately influence the representation of a path. While learning corresponding interactions, the network is also able to determine which phrases and dependency relations might not influence a role assignment decision (e.g., coordinations).
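Since equations (1)-(5) did not survive extraction, the following is a minimal sketch (our reconstruction, not the paper's exact formulation) of an LSTM step with an optional forget gate and optional memory-to-gate ("peephole") connections, as described above:

```python
# Minimal LSTM step with optional forget gate and memory-to-gate ("peephole")
# connections; a reconstruction consistent with the description, not the
# paper's exact equations. Weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_mem = 8, 16   # input and memory dimensionalities (illustrative)

def init(shape):
    return rng.normal(scale=0.1, size=shape)

W = {g: init((d_mem, d_in)) for g in ("i", "o", "f", "c")}   # input-to-gate/cell weights
U = {g: init((d_mem, d_mem)) for g in ("i", "o", "f", "c")}  # recurrent weights
P = {g: init(d_mem) for g in ("i", "o", "f")}                # peephole (memory-to-gate) weights
b = {g: np.zeros(d_mem) for g in ("i", "o", "f", "c")}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, e_prev, m_prev, use_forget=True, use_peephole=True):
    peep = (lambda g, m: P[g] * m) if use_peephole else (lambda g, m: 0.0)
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ e_prev + peep("i", m_prev) + b["i"])
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ e_prev + peep("f", m_prev) + b["f"]) if use_forget else 1.0
    c_t = np.tanh(W["c"] @ x_t + U["c"] @ e_prev + b["c"])
    m_t = f_t * m_prev + i_t * c_t                      # memory update
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ e_prev + peep("o", m_t) + b["o"])
    e_t = o_t * np.tanh(m_t)                            # emitted embedding state
    return e_t, m_t

# Encode a path of n items and keep the final state e_n as the path embedding.
e, m = np.zeros(d_mem), np.zeros(d_mem)
for x in rng.normal(size=(5, d_in)):                    # placeholder path items
    e, m = lstm_step(x, e, m)
print("path embedding e_n:", e[:4], "...")
```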
Joint Embedding and Feature Learning

Our SRL model consists of four components depicted in Figure 3: (1) an LSTM component takes lexicalized dependency paths as input, (2) an additional input layer takes binary features as input, (3) a hidden layer combines dependency path embeddings and binary features using rectified linear units, and (4) a softmax classification layer produces output based on the hidden layer state as input. We therefore learn path embeddings jointly with feature detectors based on traditional, binary indicator features. Given a dependency path X, with steps x_k ∈ {x_1, ..., x_n}, and a set of binary features B as input, we use the LSTM formalization from equations (1-5) to compute the embedding e_n at time step n, and we formalize the state of the hidden layer h and the softmax output s_c for each class category c accordingly.

System Architecture

The overall architecture of our SRL system closely follows that of previous work (Toutanova et al., 2008; Björkelund et al., 2009) and is depicted in Figure 4. We use a pipeline that consists of the following steps: predicate identification and disambiguation, argument identification, argument classification, and re-ranking. The neural-network components introduced in Section 2 are used in the last three steps. The following sub-sections describe all components in more detail.

Predicate Identification and Disambiguation

Given a syntactically analyzed sentence, the first two steps in an end-to-end SRL system are to identify and disambiguate the semantic predicates in the sentence. Here, we focus on verbal and nominal predicates but note that other syntactic categories have also been construed as predicates in the NLP literature (e.g., prepositions; Srikumar and Roth (2013)). For both identification and disambiguation steps, we apply the same logistic regression classifiers used in the SRL components of mate-tools (Björkelund et al., 2010). The classifiers for both tasks make use of a range of lexico-syntactic indicator features, including the predicate word form, its predicted part-of-speech tag as well as dependency relations to all syntactic children.

Argument Identification and Classification

Given a sentence and a set of sense-disambiguated predicates in it, the next two steps of our SRL system are to identify all arguments of each predicate and to assign suitable role labels to them. For both steps, we train several LSTM-based neural network models as described in Section 2. In particular, we train separate networks for nominal and verbal predicates and for identification and classification. Following the findings of earlier work (Xue and Palmer, 2004), we assume that different feature sets are relevant for the respective tasks and hence different embedding representations should be learned. As binary input features, we use the following sets from the SRL literature (Björkelund et al., 2010). Other features: relative position of the candidate argument with respect to the predicate (left, self, right); sequence of part-of-speech tags of all words between the predicate and the argument.
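A minimal sketch of components (3) and (4) above, combining the path embedding e_n with binary features B through a ReLU hidden layer and a softmax output, written as we understand the description (the exact equations were not preserved in extraction):

```python
# Sketch of the combination layer: ReLU hidden layer over the path embedding and
# binary features, followed by a softmax over role labels. Weights are placeholders.
import numpy as np

rng = np.random.default_rng(1)
d_emb, d_bin, d_hid, n_labels = 16, 32, 24, 5   # illustrative sizes

W_e = rng.normal(scale=0.1, size=(d_hid, d_emb))
W_b = rng.normal(scale=0.1, size=(d_hid, d_bin))
b_h = np.zeros(d_hid)
W_s, b_s = rng.normal(scale=0.1, size=(n_labels, d_hid)), np.zeros(n_labels)

def classify(e_n, binary_feats):
    h = np.maximum(0.0, W_e @ e_n + W_b @ binary_feats + b_h)   # rectified linear units
    z = W_s @ h + b_s
    p = np.exp(z - z.max())
    return p / p.sum()                                          # softmax over role labels

probs = classify(rng.normal(size=d_emb), rng.integers(0, 2, size=d_bin).astype(float))
print("label distribution:", np.round(probs, 3))
```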
Reranker

As all argument identification (and classification) decisions are independent of one another, we apply a global reranker as the last step of our pipeline. Given a predicate p, the reranker takes as input the n best sets of identified arguments as well as their n best label assignments and predicts the best overall argument structure. We implement the reranker as a logistic regression classifier, with hidden and embedding layer states of identified arguments as features, offset by the argument label, and a binary label as output (1: best predicted structure, 0: any other structure). At test time, we select the structure with the highest overall score, which we compute as the geometric mean of the global regression score and all argument-specific scores.

Experiments

In this section, we demonstrate the usefulness of dependency path embeddings for semantic role labeling. Our hypotheses are that (1) modeling dependency paths as sequences will lead to better representations for the SRL task, thus increasing labeling precision overall, and that (2) embeddings will address the problem of data sparsity, leading to higher recall. To test both hypotheses, we experiment on the in-domain and out-of-domain test sets provided in the CoNLL-2009 shared task (Hajič et al., 2009) and compare results of our system, henceforth PathLSTM, with systems that do not involve path embeddings. We compute precision, recall and F1-score using the official CoNLL-2009 scorer. The code is available at https://github.com/microth/PathLSTM.

Model selection

We train argument identification and classification models using the XLBP toolkit for neural networks (Monner and Reggia, 2012). The hyperparameters for each step were selected based on the CoNLL-2009 development set. For direct comparison with previous work, we use the same preprocessing models and predicate-specific SRL components as provided with mate-tools (Bohnet, 2010; Björkelund et al., 2010). The types and ranges of hyperparameters considered are as follows: learning rate α ∈ [0.00006, 0.3], dropout rate d ∈ [0.0, 0.5], and hidden layer sizes |e| ∈ [0, 100], |h| ∈ [0, 500]. In addition, we experimented with different gating mechanisms (with/without forget gate) and memory access settings (with/without connections between all gates and the memory layer, cf. Section 2). The best parameters were chosen using the Spearmint hyperparameter optimization toolkit (Snoek et al., 2012), applied for approximately 200 iterations, and are summarized in Table 2.

Results

The results of our in- and out-of-domain experiments are summarized in Tables 3 and 5; they improve over the best previously reported results by 0.4 and 0.2 percentage points, respectively. At an F1-score of 86.7%, our local model (using no reranker) reaches the same performance as state-of-the-art local models. Note that differences in results between systems might originate from the application of different preprocessing techniques, as each system comes with its own syntactic components. For direct comparison, we evaluate against mate-tools, which use the same preprocessing techniques as PathLSTM. In comparison, we see improvements of +0.8-1.0 percentage points absolute in F1-score.
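A small sketch of the reranker's selection rule described above (a geometric mean of the global score and the argument-specific scores); the candidate structures and score values below are made-up placeholders:

```python
# Sketch of reranker scoring: pick the candidate argument structure whose
# geometric mean of global and argument-specific scores is highest.
from math import prod

candidates = [
    {"global": 0.70, "arg_scores": [0.90, 0.85, 0.60]},
    {"global": 0.60, "arg_scores": [0.95, 0.92, 0.88]},
]

def overall_score(cand):
    scores = [cand["global"]] + cand["arg_scores"]
    return prod(scores) ** (1.0 / len(scores))   # geometric mean

best = max(candidates, key=overall_score)
print("best structure:", best, "score =", round(overall_score(best), 4))
```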
In the out-of-domain setting, our system achieves new state-of-the-art results of 76.1% (single) and 76.5% (ensemble) F1-score, outperforming the previous best system by Roth and Woodsend (2014), which relies on the same preprocessing methods. Table 4 presents in-domain test results for our system when specific feature types are omitted. The overall low results indicate that a combination of dependency path embeddings and binary features is required to identify and label arguments with high precision.

Figure 5 shows the effect of dependency path embeddings at mitigating sparsity: if the path between a predicate and its argument has not been observed at training time, or only infrequently, conventional methods will often fail to assign a role. This is represented by the recall curve of mate-tools, which converges to zero for arguments with unseen paths. The higher recall curve for PathLSTM demonstrates that path embeddings can alleviate this problem to some extent. For unseen paths, we observe that PathLSTM improves over mate-tools by an order of magnitude, from 0.9% to 9.6%. The highest absolute gain, from 12.8% to 24.2% recall, can be observed for dependency paths that occurred between 1 and 10 times during training.

Figure 7 plots role labeling performance for sentences with varying numbers of words. There are two categories of sentences in which the improvements of PathLSTM are most noticeable. Firstly, it better handles short sentences that contain expletives and/or nominal predicates (+0.8% absolute in F1-score). This is probably due to the fact that our learned dependency path representations are lexicalized, making it possible to model argument structures of different nominals and to distinguish between expletive occurrences of 'it' and other subjects. Secondly, it improves performance on longer sentences (up to +1.0% absolute in F1-score). This is mainly due to the handling of dependency paths that involve complex structures, such as coordinations, control verbs and nominal predicates.

We collect instances of different syntactic phenomena from the development set and plot the learned dependency path representations in the embedding space (see Figure 6). We obtain a projection onto two dimensions using t-SNE (Van der Maaten and Hinton, 2008). Interestingly, we can observe that instances of related syntactic phenomena form distinguishable clusters in this space. Finally, PathLSTM improves in terms of recall for proto-agent (A0) and proto-patient (A1) roles, with slight gains in precision for the A2 role. Overall, PathLSTM does slightly worse with respect to modifier roles, which it labels with higher precision but at the cost of recall.

Path Embeddings in other Languages

In this section, we report results from additional experiments on Chinese, German and Spanish data. The underlying question is to what extent the improvements of our SRL system for English also generalize to other languages. To answer this question, we train and test separate SRL models for each language, using the system architecture and hyperparameters discussed in Sections 3 and 4, respectively. We train our models on data from the CoNLL-2009 shared task, relying on the same features as one of the participating systems (Björkelund et al., 2009), and evaluate with the official scorer. For direct comparison, we rely on the (automatic) syntactic preprocessing information provided with the CoNLL test data and compare our results with the best two systems for each language that make use of the same preprocessing information.

The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Björkelund et al.
(2009) in all cases; for German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively.

Related Work

Neural network architectures have previously been applied to semantic role labeling. One early line of work developed a feedforward network that uses a convolution function over windows of words to assign SRL labels; apart from constituency boundaries, that system does not make use of any syntactic information. Foland and Martin (2015) extended that model and showcased significant improvements when including binary indicator features for dependency paths. Similar features were used by FitzGerald et al. (2015), who include role labeling predictions by neural networks as factors in a global model. These approaches all make use of binary features derived from syntactic parses, either to indicate constituency boundaries or to represent full dependency paths. An extreme alternative has recently been proposed by Zhou and Xu (2015), who model SRL decisions with a multi-layered LSTM network that takes word sequences as input but no syntactic parse information at all. Our approach falls in between the two extremes: we rely on syntactic parse information, but rather than solely making use of sparse binary features, we explicitly model dependency paths in a neural network architecture.

Other SRL approaches. Within the SRL literature, recent alternatives to neural network architectures include sigmoid belief networks (Henderson et al., 2013) as well as low-rank tensor models (Lei et al., 2015). Whereas Lei et al. only make use of dependency paths as binary indicator features, Henderson et al. propose a joint model for syntactic and semantic parsing that learns and applies incremental dependency path representations to perform SRL decisions. The latter form of representation is closest to ours; however, we do not build syntactic parses incrementally. Instead, we take syntactically preprocessed text as input and focus on the SRL task only.

Apart from more powerful models, most recent progress in SRL can be attributed to novel features. For instance, Deschacht and Moens (2009) and Huang and Yates (2010) use latent variables, learned with a hidden Markov model, as features for representing words and word sequences. Zapirain et al. (2013) propose different selectional preference models in order to deal with the sparseness of lexical features. Roth and Woodsend (2014) address the same problem with word embeddings and compositions thereof. Roth and Lapata (2015) recently introduced features that model the influence of discourse on role labeling decisions. Rather than coming up with completely new features, in this work we proposed to revisit some well-known features and represent them in a novel way that generalizes better. Our proposed model is inspired both by the necessity to overcome the problems of sparse lexico-syntactic features and by the recent success of SRL models based on neural networks.

Dependency-based embeddings. The idea of embedding dependency structures has previously been applied to tasks such as relation classification and sentiment analysis. Xu et al. (2015) and Liu et al. (2015) use neural networks to embed dependency paths between entity pairs. To identify the relation that holds between two entities, their approaches make use of pooling layers that detect parts of a path that indicate a specific relation. In contrast, our work aims at modeling an individual path as a complete sequence, in which every item is of relevance. Tai et al. (2015) and Ma et al.
(2015) learn embeddings of dependency structures representing full sentences in a sentiment classification task. In our model, embeddings are learned jointly with other features, and as a result problems that may result from erroneous parse trees are mitigated.

Conclusions

We introduced a neural network architecture for semantic role labeling that jointly learns embeddings for dependency paths and feature combinations. Our experimental results indicate that our model substantially increases classification performance, leading to new state-of-the-art results. In a qualitative analysis, we found that our model is able to cover instances of various linguistic phenomena that are missed by other methods. Beyond SRL, we expect dependency path embeddings to be useful in related tasks and downstream applications. For instance, our representations may be of direct benefit for semantic and discourse parsing tasks. The jointly learned feature space also makes our model a good starting point for cross-lingual transfer methods that rely on feature representation projection to induce new models (Kozhevnikov and Titov, 2014).

Figure 1: Dependency path (dotted) between the predicate raising and the argument he.

Figure 2: Example input and embedding computation for the path from raising to he, given the sentence he had trouble raising funds. LSTM time steps are displayed from right to left.

Figure 3: Neural model for joint learning of path embeddings and higher-order features: the path sequence x_1 ... x_n is fed into an LSTM layer, a hidden layer h combines the final embedding e_n and binary input features B, and an output layer s assigns the highest probable class label c.

Figure 4: Pipeline architecture of our SRL system.

Figure 6: Dots correspond to the path representation of a predicate-argument instance in 2D space. White/black color indicates A0/A1 gold argument labels. Dotted ellipses denote instances exhibiting related syntactic phenomena (see rectangles for a description and dotted rectangles for linguistic examples). Example phrases show actual output produced by PathLSTM (underlined).

Figure 5: Results on in-domain test instances, grouped by the number of training instances that have an identical (unlexicalized) dependency path.

Figure 7: Results by sentence length. Improvements over mate-tools shown in parentheses.

Table 2: Hyperparameters selected for best models and training procedures.

Lexico-syntactic features: word form and word category of the predicate and candidate argument; dependency relations from predicate and argument to their respective syntactic heads; full dependency path sequence from predicate to argument. Local context features: word forms and word categories of the candidate argument's and predicate's syntactic siblings and children words.

Table 3: Results on the CoNLL-2009 in-domain test set. All numbers are in percent.

Table 4: Ablation tests in the in-domain setting.

Table 5: Results on the CoNLL-2009 out-of-domain test set. All numbers are in percent. (Footnote 3: results are taken from Lei et al. (2015).)

Discussion: To determine the sources of individual improvements, we test PathLSTM models without specific feature types and directly compare PathLSTM and mate-tools, both of which use the same preprocessing.
Table 6: Results by word category and role label. Table 6 shows results for nominal and verbal predicates as well as for different (gold) role labels. In comparison to mate-tools, we can see that PathLSTM improves precision for all argument types of nominal predicates; for verbal predicates, improvements can be observed in terms of recall, as discussed above.

Table 7: Results (in percentage) on the CoNLL-2009 test sets for Chinese, German and Spanish.
A Method to Measure the Diffusion Coefficient in Liquids

Molecular diffusion in liquids is a key process in numerous systems: it is often the rate-limiting factor in biological or chemical reactions. Molecular diffusion has been recognized as the ultimate mechanism by which substance concentrations become homogenized and, thus, by which their mixing and dilution occur. Here, we propose a novel method to directly measure the diffusion coefficient D of solutes or suspensions in liquids. Differently from current methods, such as Dynamic Light Scattering or Fluorescence Correlation Spectroscopy, our method does not rely on previous knowledge of the fluid or tracer properties, but is based on directly measuring the spatial concentration profile of a considered tracer with optical techniques within a diffusion chamber. We test this novel method on a sample of mono-dispersed suspension of spherical colloids for which an estimate of D can be made based on the Einstein-Stokes relation. We then use this technique to measure the diffusion coefficient of a non-spherical tracer. We further quantify mixing of the considered tracers in the confined domain of the diffusion chamber: we show that, since diffusion-limited mixing (quantified in terms of the dilution index) in a confined space happens faster than in an unconfined domain, the finite size of the diffusion chamber must be taken into account to properly estimate D and the tracer mixing degree.

Introduction

Molecular diffusion in liquids is a key process in numerous natural and engineering systems (Graham 1849; Dentz et al. 2011). It is often the reaction rate-limiting factor in biological or chemical reactions (de Anna et al. 2014a, b). Generally, it is the ultimate mechanism by which substance concentrations become homogenized and, thus, by which their mixing and dilution occur (Ottino 1989; Villermaux 2019; Le Borgne et al. 2015). Molecular diffusion of a given dissolved or suspended compound originates from the motion of its individual molecules (or particles) associated with their thermal agitation: a famous example is the early observation of the movement of pollen grains in water (Pas 1971). The macroscopic consequence of this microscopic phenomenon is that the mass of the compound spreads in space as time passes.

The description of the macroscopic spreading of a compound of concentration c under dilute conditions is given by Fick's first law: in analogy with Fourier's law of thermal conduction, the diffusive mass flux J(x) at a location x is proportional to the concentration gradient,

J(x) = −D ∂c/∂x,    (1)

where the constant of proportionality D is the so-called diffusion coefficient. The negative sign implies that mass moves from locations of higher concentration towards areas of lower concentration. Since the gradient changes with time as the substance diffuses, mass conservation must be invoked to describe the spatio-temporal dynamics of the concentration,

∂c/∂t = −∂J/∂x,    (2)

which states that, at a given location x, a change in the mass flux is associated with a change of concentration in time. Combining the two Fick's laws, we obtain the well-known diffusion equation describing the spatio-temporal distribution of a diffusing substance,

∂c/∂t = D ∂²c/∂x².    (3)

The knowledge of D is crucial to describe the fate of a diffusing substance and all diffusion-related phenomena, like mixing or reactions.
For spherical objects, the value of the diffusion coefficient can be theoretically derived from the well-known Stokes-Einstein relation (Reif 1985), which compares the velocity associated with the kinetic energy of the particles to the viscous drag they experience while moving within a fluid of viscosity η. It reads

D = k T / (6 π η r),    (4)

where k is the Boltzmann constant [J/K], T is the absolute temperature [K], η is the dynamic viscosity [Pa·s] and r is the particle radius [m]. For objects of approximately spherical shape (e.g. many types of molecules, colloids or bacteria) for which the radius is known, several methods have been developed in the past decades to measure the value of D, based either on the microscopic (individual motion) or macroscopic (concentration distribution) properties of the process. Dynamic Light Scattering (DLS) measures the intensity fluctuations of light scattered by particles and relates them to the particle velocity. It is a technique typically used to determine the size distribution of particles in suspension; it assumes quasi-elastic scattering of light by a homogeneous set of spherical objects of similar diameter. Once the size of the diffusing particles is measured, the diffusion coefficient can be calculated from Eq. (4) (Stetefeld et al. 2016). Fluorescence Correlation Spectroscopy (Yu et al. 2021) is another widely used method to estimate D: it measures the temporal autocorrelation of the fluorescence signal emitted by a tiny, controlled volume (typically defined via confocal microscopy) of liquid containing a well-diluted compound. Due to the extremely short time range over which the autocorrelation is measured (from microseconds to seconds) and the tiny signal emitted, the light must be detected by a fast acquisition device such as a photomultiplier, an avalanche photodiode or a superconducting nanowire single-photon detector. The measured decay of the signal autocorrelation reveals the time needed by a molecule to diffuse through the observation volume of linear size a: this time scale is expected to be τ_D = a²/(4D). If a is known and τ_D is measured, D can be estimated.

Other methods to measure the diffusion coefficient D in liquids are based on macroscopic mass transfer. One, for instance, is based on Taylor dispersion within a Poiseuille flow: a pulse of a substance is injected into a tube stream and the concentration is measured at the outlet. The obtained profile is then fitted to the solution of the dispersion equation, whose proportionality constant D_t is the Taylor diffusivity. The value of the diffusion coefficient D can then be back-computed knowing the tube radius r and the mean flow velocity u through D_t = r²u²/(48 D) (Alizahed et al. 1980; Ouano 1972). Another method exploits the diaphragm cell (Northrop and Anson 1928; Gordon 1945; Lozar et al. 1975): two reservoirs of volume V are separated by a porous membrane, and a solute diffuses from one to the other through the membrane. The concentration is measured in one reservoir at time intervals dt, and the rate of change of solute concentration in that reservoir, dc/dt = (c₂ − c₁)/(t₂ − t₁), is given by Fick's law and depends on the membrane width l and effective porosity A, from which the value of D is determined. A calibration with a solute of known diffusion coefficient is required to determine A. All these methods are (i) based on indirect measurements, or (ii) require previous knowledge of both solute and solvent properties, or (iii) require expensive instrumentation that is hard to use and calibrate.
We propose, here, a novel and simple method to measure the diffusion coefficient D that, with the proposed set-up, has an uncertainty of about 3% and requires no prior knowledge of either the target substance or its solvent.

Method

Let us consider a tracer of concentration c dissolved, or suspended, in a given liquid. The main idea behind our method is to measure the spatio-temporal evolution of the concentration profile c(x, t) with optical techniques, under initial and boundary conditions for which an analytical solution of the diffusion equation, Eq. (3), depending only on D, is known. Fitting this analytical solution c(x, t) to the measured concentration profile then provides an estimate of the diffusion coefficient D. To validate our experimental set-up, we use a tracer for which the diffusion coefficient can be predicted by the Stokes-Einstein relation (Reif 1985): we choose a mono-dispersed suspension of fluorescent micro-spheres. We then apply the same methodology to a colored tracer whose molecule is non-spherical.

Fluorescent Spheres as Tracer

We use polystyrene fluorescent micro-spheres (Fluoro-Max, Thermo Fisher B150) of radius r = 0.075 μm, provided at 1% solid concentration. From the original suspension, we prepare a concentration c₀ diluted 20 times in a mixture of milliQ water and heavy water (D₂O) of density 1.05 g/ml, matching that of the micro-spheres to avoid their sedimentation. With the optical system used, the particles are too small to be individually detected. Instead, we observe the overall fluorescent signal emitted by the suspension within the field of view. A calibration procedure showed us that the amount of light detected and recorded by our acquisition system is proportional to the tracer concentration in the range [0, c₀]. The light detected by the camera is recorded as a greyscale image and stored as a matrix im of integer values between 0 (black) and 2^bit − 1 (white), where bit represents the color depth of the camera. We used a Nikon DS-Qi2, which is equipped with a CMOS full-frame sensor recording at 12 bit. If the tracer is not so concentrated as to block part of the incoming light and of its own emitted light, the value of this matrix im is proportional to the tracer concentration as im = s c + im_B, where s is a proportionality constant and im_B represents the background signal detected in the absence of tracer (c = 0). Thus,

c = (im − im_B)/s,    (5)

where the value of s can be found via a calibration procedure collecting pictures of samples of known concentration. We verified via a calibration that the tracer at the adopted concentration satisfies Eq. (5); however, to avoid propagation of the error associated with the estimation of the parameter s, we express the concentration c relative to its initial value c₀,

c/c₀ = (im − im_B)/(im₀ − im_B),    (6)

where im₀ is the matrix representing the image collected when only the tracer at concentration c₀ is present, so that c/c₀ depends neither on the estimation of the parameter s nor on the initial concentration c₀.

Colored Dye as Tracer

The second tracer we use is a solution of methyl blue dye (Sigma-Aldrich) at concentration c₀ = 0.15 mg/l. The solution is prepared with a mixture of milliQ water (80%) and glycerol (20%). When a sample of this solution is irradiated with light (bright-field microscopy), a portion of the signal passes through while the rest is absorbed: the more concentrated the tracer, the more light is absorbed and the less is transmitted and detected.
The light absorbance, i.e. the logarithm of the ratio between incoming and transmitted light, is a linear function of the tracer concentration according to the Beer-Lambert law (Bouguer 1729). The exponential dependence of the transmitted light on the concentration can be approximated as linear at low concentrations, so that

c/c₀ = (im − im_B)/(im₀ − im_B),    (7)

where im = im(c) is the transmitted light intensity through the tracer at concentration c (as detected by the camera), im₀ = im(c = c₀) and im_B = im(c = 0).

Diffusion Chamber

In order to reproduce conditions in which a tracer diffuses along one dimension, so that its concentration profile can be compared to the solution of Eq. (3), we built a microfluidic device used as a diffusion chamber. In it, we continuously inject, side by side, the considered tracer solution/suspension and its solvent (in the following called the blank solution). To this end, we designed a channel mold with a rectangular cross section and a parallel injection entrance (Fig. 1). In this flow cell, the solutions flow along the longitudinal (main) direction of the channel only, and the mass transfer mechanism acting along the transverse direction is molecular diffusion alone. The cell geometry was printed onto transparent glass at high resolution in chrome (JD Photodata, UK). Micro-channels were fabricated using standard techniques of soft lithography and PDMS molding, and then plasma-bonded to a 1-mm-thick soda-lime glass slide. The resulting channel has width w = 1 mm, thickness h = 0.08 mm (thus a rectangular cross-section area A = 0.08 mm²) and length L = 40 mm (see Fig. 1).

Flow System

Each inlet is connected with Tygon tubing (internal diameter 0.5 mm) to a reservoir (a 15 ml Falcon tube). One contains 4 ml of the blank solution, the other 4 ml of the tracer solution. The outlet is connected to a waste reservoir containing 4 ml of water. The tubing connecting the microfluidic chip to the reservoirs can be opened and closed at will by means of 2-way microfluidic valves (MaxWire from Elveflow), and all three reservoirs are pressurized using a pressure controller (OB1 MK3+ from Elveflow) so that the flow is established by a pressure drop of Δp = 50 mbar between inlet and outlet. Once the flow is interrupted (by closing all valves simultaneously and removing the pressure drop), the tracer diffuses transversely towards the blank solution. In this configuration, the one-dimensional tracer concentration profile along the transverse direction of the channel is the solution of Eq. (3).

Optical System and Image Processing

The microfluidic device is placed under a microscope (an inverted Nikon Eclipse Ti-E2) equipped with a low numerical aperture (NA = 0.3) objective in order to observe in focus the whole depth of the channel. On the one hand, for imaging the fluorescent particles, excitation and emission light are selected using a filter cube (Nikon, DAPI, excitation bandpass 395 ± 10 nm and emission bandpass 475 ± 11 nm). On the other hand, for imaging the methyl blue solute, a custom filter selects the irradiating white light (Semrock single-band bandpass filter, 662 ± 11 nm), so that only near-blue light, the most strongly absorbed, reaches the sample. In all cases, greyscale images are captured and stored using a Nikon DS-Qi2 camera. Each image is composed of 4908 × 3264 pixels whose physical size on the camera sensor is 7.3 μm; thus, considering the objective magnification used (10X objective plus the internal 1.5X extra magnification of the microscope, for a total of 15X), the field of view has an overall size of 2.3 × 1.6 mm.
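A minimal sketch of the image-processing step implied by Eqs. (6)-(7), converting the recorded intensity matrices into a relative concentration profile by averaging along the flow direction; here im, im0 and imB are hypothetical placeholder arrays, not the experimental frames:

```python
# Sketch: relative concentration profile c/c0 from intensity images, Eqs. (6)-(7).
# im, im0 (tracer at c0 everywhere) and imB (blank) are placeholder arrays here;
# in practice they would be the cropped regions of interest of the recorded frames.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 300, 1000                      # rows along flow (y), columns across channel (x)
x = np.linspace(0, 1, nx)
imB = 100 + rng.normal(0, 1, (ny, nx))                      # background
im0 = imB + 2000                                            # uniform tracer at c0
im = imB + 2000 * (x < 0.5) + rng.normal(0, 5, (ny, nx))    # sharp front as a fake frame

def relative_concentration_profile(im, im0, imB):
    """Average intensities along y, then apply c/c0 = (im - imB)/(im0 - imB)."""
    p, p0, pB = (a.mean(axis=0) for a in (im, im0, imB))
    return (p - pB) / (p0 - pB)

profile = relative_concentration_profile(im, im0, imB)
print(profile[:3], "...", profile[-3:])   # ~1 on the tracer side, ~0 on the blank side
```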
The images acquired are matrices of pixels whose values range from 0 to 2^12 − 1. We crop each image (im, im_0 and im_B) to a desired region of interest (rectangles in Fig. 2a and b), which goes from wall to wall of the microfluidic channel and spans 300 pixels longitudinally (along the flow direction, y), and we compute its profile (Fig. 2c) by averaging values along the y-direction. Finally, for both tracers (fluorescent spheres and colored dye), the concentration profiles are obtained from the above equations, where im, im_0 and im_B are the profiles of the considered pictures.

Theoretical Estimate of D

The polystyrene particles' diffusion coefficient D can be theoretically estimated with the Stokes-Einstein relation (Reif 1985), Eq. (4). Working at T = 293 K with a suspension of viscosity μ = 1.06 × 10⁻³ Pa s and particles of radius r = 7.5 × 10⁻⁸ m, the diffusion coefficient is estimated as D_b = 2.7 × 10⁻⁶ mm²/s. Methyl blue molecules present a non-spherical structure, closer to a sheet (much thinner than wide); thus, we define an effective radius r = 6.5 × 10⁻¹⁰ m (Kipling and Wilson 1960; Hang and Brindley 1970; Taylor 1985). The dynamic viscosity of the water-glycerol mixture is μ_m = 1.98 × 10⁻³ Pa s and, thus, we estimate D_m = 1.7 × 10⁻⁴ mm²/s.

Solution of Diffusion Equation

Since the fluid flow is stopped by closing the valves, the only mechanism of mass transfer is molecular diffusion. For the experimental configuration chosen, the only spatial direction along which a nonzero macroscopic tracer gradient and, thus, a mass flux takes place is the transverse one (denoted as x in Fig. 1). The tracer spreads between the microfluidic boundaries, i.e. the PDMS walls: the concentration profile that we measure is the solution of the one-dimensional diffusion equation (Eq. (3)), with no-flux boundary conditions at x = 0, L, as derived in (Hamada et al. 2020): c(x, t) = c_f + Σ_{m≥1} B_m cos(mπx/L) exp(−m²π²Dt/L²), where c_f = 1/L is the homogeneous concentration reached at times much larger than the characteristic diffusive time scale over the channel width, t > τ_D = L²/D, and B_m is a coefficient that depends on the initial concentration distribution f_0(x): B_m = (2/L) ∫_0^L f_0(x) cos(mπx/L) dx. Note that the initial condition f_0(x) corresponds to the concentration profile collected at any given time t_0 (f_0(x) = c(x, t_0)), for which it is imposed that t_0 = 0: the initial profile can be chosen at convenience. The characteristic time by which molecular diffusion smooths the concentration gradient, homogenizing the concentration field over the length scale L, is commonly taken to be τ_D = L²/D. However, for a confined condition, i.e. no-flux at the boundaries, the solution of the diffusion equation is Eq. (8), which is a superposition of modes m that decay exponentially fast as exp(−π²m²Dt/L²). For the case of a diffusive front, as considered here, the smallest nonzero mode is m = 1 (for a pulse it would be m = 2): thus, at times larger than L²/(π²D) = τ_D/π², the whole solution Eq. (8) is dominated by the first mode and it decays exponentially fast with time. Physically, this means that the concentration profile experiences the presence of the no-flux boundary conditions, which prevent the solute mass from exploring more space. In this context, this is relevant since the exponential decay of the solution fixes a temporal scale, which is τ_D/π². In other words, for larger times, we should expect the concentration profile to be well homogenized within the diffusion chamber.
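The series solution and its coefficients can be evaluated numerically once an initial profile has been measured. The sketch below is ours (the truncation at `n_modes` and the trapezoidal projection are implementation choices, not taken from the paper) and follows the cosine-mode form quoted above for no-flux boundaries; because each mode decays as exp(−m²π²Dt/L²), the m = 1 term dominates for t ≳ τ_D/π².

```python
import numpy as np

def diffusion_profile(x, t, D, f0, n_modes=50):
    """1-D diffusion between no-flux walls at x = 0 and x = L:

        c(x, t) = c_f + sum_m B_m cos(m*pi*x/L) exp(-m^2*pi^2*D*t/L^2)

    x  : positions across the channel (same grid as the initial profile f0)
    t  : time elapsed since the initial profile was recorded
    D  : diffusion coefficient
    f0 : measured initial concentration profile c(x, t0)

    B_m is obtained by projecting f0 onto the cosine modes; c_f is the
    spatial mean of f0 (the well-mixed value).
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f0, dtype=float)
    xi = x - x[0]
    L = xi[-1]
    c_f = np.trapz(f0, xi) / L                       # homogeneous concentration
    c = np.full_like(xi, c_f)
    for m in range(1, n_modes + 1):
        mode = np.cos(m * np.pi * xi / L)
        B_m = 2.0 / L * np.trapz(f0 * mode, xi)      # cosine-series coefficient
        c += B_m * mode * np.exp(-(m * np.pi) ** 2 * D * t / L ** 2)
    return c
```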
Thus, the measurement must be performed over a shorter time scale: for the beads, τ_D/π² is about 10 hours, and for the dye solution, it is about 10 minutes. We fit, for each time step t_j, the analytical solution c(x_i, t_j), Eq. (8), to the measured concentration profile, which we label c_M(x_i, t_j), by varying the only parameter D until the minimum of the mean-squared error, MSE = (1/N) Σ_i [c(x_i, t_j) − c_M(x_i, t_j)]², is reached, where N is the number of points over which the concentration profile is detected (the number of pixels along the transverse direction within the region of interest). To rigorously assess the uncertainty on D, we should estimate how the uncertainty on c propagates to the parameter D, by inverting c(x, t, D) into D(x, t, c) and computing ∂D/∂c, so that σ_D = σ_c |∂D/∂c|. Unfortunately, for two reasons, this is not possible. First, the analytical expression Eq. (8) cannot be inverted due to the sum over the modes m, thus the derivative ∂D/∂c cannot be computed analytically. Second, to estimate the derivative of the inverse of a function as the reciprocal of the function's derivative, it is necessary that the function's derivative is nonzero. Therefore, the derivative ∂D/∂c cannot be estimated as the reciprocal (∂c/∂D)⁻¹, since for m > 1 the derivative ∂c/∂D = 0 at x = L/m within the domain boundaries 0 < x < L. Therefore, we estimate the measurement uncertainty on the value of D as the ratio between the standard deviation σ and the mean value of D, with σ defined as σ = sqrt[(1/(n − 1)) Σ_j (D_j − D̄)²], where n is the number of time steps (or samples collected) and D_j is the fitted value of D at time t = t_j.

Dilution Index

Once the value of the diffusion coefficient D has been correctly estimated, one can predict the concentration profile c(x, t) at any time and for any initial condition f_0 using Eq. (8). The degree of mixing reached by the diffusive system can be described in terms of the system entropy or dilution index (Kitanidis 1994), Eq. (12). The dilution index E increases as the system homogenizes. According to (Hamada et al. 2020), for the initial condition considered, in an unconfined system E should increase indefinitely following the scaling exp(√t). However, in a confined scenario, as in a single pore or in our microfluidic system, as soon as the concentration profile experiences the presence of the impermeable boundaries (i.e. for times larger than τ_D/π², as discussed in (Hamada et al. 2020)), E should deviate from this scaling and plateau at √2 (or ln(E) ∼ 0.346) once the concentration profile is homogenized and the system is well mixed.
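A possible implementation of the per-profile fit and of the spread-based uncertainty described above is sketched below. This is our illustration only: it reuses the diffusion_profile() helper from the previous sketch and a bounded scalar minimizer, neither of which is prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_D(x, profiles, times, f0, D_bounds=(1e-8, 1e-2)):
    """Fit D to each measured profile by minimizing the mean-squared error.

    profiles : list of measured concentration profiles c_M(x, t_j)
    times    : elapsed times t_j of each profile (t = 0 for f0)
    Returns the per-profile estimates, their mean, and the relative spread
    (standard deviation / mean) used as the measurement uncertainty.
    """
    estimates = []
    for c_meas, t in zip(profiles, times):
        mse = lambda D: np.mean((diffusion_profile(x, t, D, f0) - c_meas) ** 2)
        res = minimize_scalar(mse, bounds=D_bounds, method="bounded")
        estimates.append(res.x)
    estimates = np.asarray(estimates)
    D_mean = estimates.mean()
    rel_uncertainty = estimates.std(ddof=1) / D_mean
    return estimates, D_mean, rel_uncertainty
```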
Polystyrene Fluorescent Particles

We record images of a diffusive front of polystyrene particles over seven hours (about τ_D/π², the confined mixing time predicted by (Hamada et al. 2020)) at a rate of one image per hour. The measured concentration profiles are shown as dots in Fig. 3a; as time increases, the profiles go from light to dark color. The fit of these profiles is superposed as solid lines, while the initial condition f_0(x) is shown as black dots. For this data set, the MSE between fitted and measured profiles is, on average over all times, 2.3 × 10⁻⁴, corresponding to an average deviation of the analytical model from the measured data of 2√(2.3 × 10⁻⁴) ∼ 0.03, about 3%. Figure 3b (diamonds) shows the fitted values of D for each profile; the average value is D = 2.6 × 10⁻⁶ mm²/s (Fig. 3b, black dotted line). The standard deviation among fitted values, given by Eq. (11), is σ = 8.52 × 10⁻⁸ mm²/s, which indicates a deviation around the mean of 3.3%. The average value of the measured molecular diffusion coefficient is consistent, within 3%, with the theoretical estimate from the Stokes-Einstein relation, showing that the novel method proposed is accurate. We compute the temporal evolution of the Dilution Index E, quantifying the overall mixing degree, as defined in Eq. (12). Figure 3c shows the temporal evolution of log[E(t)] versus rescaled time t/τ_D for the measured (diamonds) and fitted (solid line) profiles. Note that the system entropy increases as the particles diffuse, and it eventually reaches a plateau when the system is completely homogeneous, since no more macroscopic gradients are present and, thus, no more mixing can happen (Hamada et al. 2020). The asymptotic value that E approaches turns out to be √2 (or ln(E) ∼ 0.346), as predicted by inserting a stationary, flat concentration profile into Eq. (12). This means that diffusion efficiently mixed the tracer within the confined space of the microfluidic device.

Methylene Blue Dye

Images of a methyl blue diffusive front are recorded over 11 minutes (about τ_D/π², the confined mixing time predicted by (Hamada et al. 2020)) at a rate of about one image per minute. The measured concentration profiles are shown in Fig. 4a; as time increases, the profiles go from light to dark color. We use the first profile (black dots in Fig. 4a) as the initial condition f_0(x) for the fit of Eq. (8). The MSE turns out to be 8.7 × 10⁻⁵, corresponding to an average deviation of the analytical model from the measured data of 2√(8.7 × 10⁻⁵) ∼ 0.02, about 2%, and we obtain ten values of the fitted diffusion coefficient, one per profile, as shown in Fig. 4b (diamonds). The average value (dotted line) is D = 2.4 × 10⁻⁴ mm²/s and the standard deviation among these fitted values, as defined in Eq. (11), is σ = 4.89 × 10⁻⁶ mm²/s, which indicates a deviation around the mean of 2%. The measured value of the diffusion coefficient is 70% higher than the prediction of the Stokes-Einstein relation using the effective radius r. On the one hand, we argue that one of the benefits of the proposed method is that the measured D can be used to estimate the effective molecule radius r via the Stokes-Einstein relation, Eq. (4), which is a robust physical model for D of spherical objects suspended in a liquid bulk. On the other hand, we have to realize that the effective radius of a non-spherical object can fail to represent it in several applications and, sometimes, it is necessary to avoid that approximation. We compute the temporal evolution of the Dilution Index E for this tracer, as defined in Eq. (12). Figure 4c shows the temporal evolution of log[E(t)] versus rescaled time t/τ_D for the measured (diamonds) and fitted (solid line) profiles. The result is the same as for the fluorescent micro-spheres: the system's mixing degree, its entropy, increases as the tracer diffuses, and it eventually reaches a plateau when the microfluidic channel is completely well mixed. The asymptotic value that E approaches turns out to be √2 (or ln(E) ∼ 0.346), as predicted: diffusion efficiently mixed the tracer within our diffusion chamber.
We verified that the method of fitting the analytical solution of the 1d diffusion equation to the measured concentration profile is robust: we repeated the measurement, for both the particle suspension and the dye solution, using a different camera, withdrawing or infusing the fluids with a syringe pump connected to two separate reservoirs instead of using a pressure controller, and using stainless steel valves (from Swagelok) instead of microfluidic valves, obtaining the same results (the same value of D within the 3% uncertainty). On the one hand, the main limitation of the proposed methodology is that the substance of interest, the one for which an estimate of D is desired, must be detectable and distinguishable from the fluid in which it is dissolved/suspended with optical methods. On the other hand, the scientific cameras used to detect the tracer signal are increasingly affordable and relatively easy to use: this makes the proposed method suitable for several applications and, likely, open to development for measurements in the field. As a final remark, it is better to collect pictures under experimental conditions such that the front is neither too sharp nor too flat, to avoid large fit uncertainty. If a picture is collected after the solute/suspension has diffused far across the channel, the tracer profile's dependence on space is weak (flat profile). Under these conditions, at a given time t*, a variation in space x produces a small variation in c(x, t*): a variation in the guess value of D during the fit procedure then also produces a small variation in c(x, t*) and, thus, a large uncertainty. Note that images collected next to the inlet, or with a strong flow rate, also appear flat over a significant portion (next to the boundaries). In other words, the diffusive front, the portion of space where the profile is not flat, should be as large as possible.

Conclusions

The novel method presented here allows one to measure the diffusion coefficient D of a tracer (dissolved or suspended) through direct visualization of the concentration profile and its dynamics. The profiles measured at different times are fitted with the analytical solution of the diffusion equation, with the single fitting parameter D. We tested the method by measuring the diffusion coefficient of a mono-dispersed suspension of spherical particles for which the Stokes-Einstein relation provides a theoretical estimate. Our results show that the method is accurate: for our test tracer, the discrepancy between the measured and theoretical values of the diffusion coefficient is smaller than the method uncertainty of 3%. We show, as reasonably expected, that for non-spherical particles using an effective radius to theoretically estimate the diffusion coefficient can lead to a substantial error: in the case of methyl blue, the value of D would be underestimated by 70%. Measuring the concentration profile as it spreads over time, we could also estimate the mixing degree of the system: we show experimentally that, as predicted theoretically by (Hamada et al. 2020), under confinement (as in porous systems) diffusion enhances mixing. In the absence of confinement (for a large domain or for very short time scales), E would keep increasing, slowly diluting the tracer; instead, as predicted by (Hamada et al. 2020), in a confined scenario molecular diffusion is more efficient at mixing.
This is quantified by the mixing time scale, the time needed for the growth of the dilution index to stop, which is reduced by a factor 1/π² ∼ 1/10 with respect to the characteristic diffusion time τ_D = L²/D over the confinement length scale L.

Acknowledgements The research leading to these results has received funding from the Swiss National Science Foundation, project no. 200021_172587, "Flows in confined micro-structures: coupling physical heterogeneity and bio-chemical processes".

Funding Open Access funding provided by Université de Lausanne.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Comparing Low and High-Temporal Resolution DCE-MRI Texture Analysis for Discrimination of Breast Lesions from Background Enhancement

Yufeng Liu (Zhejiang Chinese Medical University), Jiaying Li (Zhejiang Chinese Medical University), Jingjing Qu (Zhejiang Chinese Medical University), Rui Tang (Zhejiang University School of Medicine Second Affiliated Hospital), Kun Lv (Huashan Hospital, Fudan University), Chundan Wang (Zhejiang Chinese Medical University), Fengchun Xiao (Zhejiang Chinese Medical University), Jiali Zhou (Ningbo First Hospital), Min Ge (Zhejiang Chinese Medical University), Xuewei Ding (Zhejiang Chinese Medical University), Hong Ding (Zhejiang Chinese Medical University), Shiwei Wang (Zhejiang Chinese Medical University), Maosheng Xu (<EMAIL_ADDRESS>; The First Affiliated Hospital of Zhejiang Chinese Medical University; The First Clinical Medical College of Zhejiang Chinese Medical University; https://orcid.org/0000-0002-2396-1600)

Background

Breast cancer is the second leading cause of cancer death among women and the most common malignant tumor in women worldwide [1]. In China, its incidence and mortality are increasing while the age of onset is decreasing [2,3]. Around 1.6 million people are diagnosed each year and 1.2 million succumb to their illness. Therefore, early and accurate diagnosis is essential to guide the treatment of breast cancer. Traditional contrast-enhanced magnetic resonance imaging (CE-MRI) plays a central role in the diagnosis of breast pathologies due to its non-invasive and radiation-free nature. Yet, background enhancement (BE) may obscure enhanced lesions or contribute to false-positive readings [4][5][6]. Routine breast CE-MRI has high sensitivity and low specificity, which limits its diagnostic utility. In view of the heterogeneity of benign and malignant diseases and BE tissues, high-resolution dynamic contrast-enhanced MRI (DCE-MRI) can better evaluate the tissue microenvironment and texture characteristics. DCE-MRI can predict the biological characteristics of breast cancer, the tumor environment and situation [16][17][18], molecular receptor subtypes [10,[17][18][19][20], and even genomics [9,21,22]. Quantitative assessment of ROI signal strength, spatial structure and other image parameter characteristics by high-throughput extraction can also offer diagnostic value [7,8]. For example, quantitative analysis of heterogeneity with the gray-level co-occurrence matrix may be a sign of metastasis and contribute to poor prognosis [9][10][11][12][13][14][15]. There is evidence to suggest that factors such as texture evaluation of vasculature, lymphatic permeation, density and angiogenesis are correlated with the prognosis of the disease [26]. Therefore, high-resolution DCE-MRI analysis of texture features can help predict disease prognosis [19,[23][24][25]. Such utility can help in the early diagnosis of breast cancers [4,10,27]. Therefore, in this study, we aimed to investigate the application value of texture analysis based on kinetic parametric maps from breast DCE-MRI for the discrimination of benign lesions, malignant lesions and background parenchymal enhancement using a two-compartment Extended Tofts model and a volume of interest. The contrast agent was gadopentetate dimeglumine (Omniscan, GE Healthcare) at a dosage of 0.1 mmol/kg, administered by intravenous bolus injection using a double-cylinder high-pressure injector (MR injection system) at a flow rate of 2.0 ml/sec, followed by a 10 ml saline flush.
Image analysis

All DCE-MRI image data were transferred into the Omni-Kinetic software (GE Healthcare, version 2.10) to obtain the K trans texture features and the pharmacokinetic parameters (K trans, K ep, V e and V p) of the lesion central area (Lesion), surrounding peripheral area (Peri) and background enhancement (BE) area in the malignant and benign groups, using a two-compartment extended Tofts model and a three-dimensional volume of interest. All data were analyzed by 2 senior imaging diagnostic radiologists (J.L. with 5 years of experience, and J.Z. with 7 years of experience in breast MRI). The ROI (region of interest) was manually selected and delineated based on enhanced T1-weighted images and the pharmacokinetic maps. In Fig. 2, a malignant lesion is clearly shown, and a polygonal ROI was drawn manually according to the shape of the lesion (lesion area), the surrounding peripheral area (Peri, with a radius of 2.5-5.0 mm depending on pixel size) and the background enhancement (BE) area (normal breast background parenchymal enhancement), avoiding necrotic areas and vascular beds. Hemodynamic parameters included K trans (ml/min, the endothelial transfer constant, i.e. the rate of blood leakage into the extravascular extracellular fluid space (EES)), K ep (ml/min, the reflux rate constant, i.e. the rate at which the contrast agent leaks from the EES back into the blood vessels), V e = K trans / K ep (ml/ml, the fractional EES volume, i.e. the fraction of tissue volume occupied by the EES available to the contrast agent), and V p (ml/ml, the fractional plasma volume of the contrast agent); these were used to quantitatively evaluate the microcirculation characteristics of the lesions. Texture feature analysis refers to image conversion and quantitative analysis. The image conversion decomposes the conventional image into basic components such as space and frequency. The quantitative analysis covers the structure, model, statistics and spectrum of the various basic components. For each patient, all texture features were obtained based on the K trans map from the DCE-MRI images. Texture feature analysis based on high-resolution DCE-MRI included first-order distribution statistics, the gray-level co-occurrence matrix (GLCM) and the gray-level run-length matrix (GLRLM). The first-order distribution statistics included energy, skew, maximum, median, mean, mean absolute deviation, range, root mean square, standard deviation, variance, etc. The gray-level histogram is a discrete function of image gray level (formula: H(i) = n_i/N, i = 0, 1, ..., L − 1, where i represents the grayscale level, L represents the number of grayscale categories, n_i represents the number of pixels with grayscale level i in the image, and N represents the total number of pixels in the image), which gives the probability of occurrence of pixels with grayscale i in the image. The GLCM is a statistical method based on a second-order joint conditional probability density function, used to describe texture, local pattern, randomness and spatial statistical characteristics, and to represent regional consistency and inter-regional relativity. The GLRLM is the joint probability density of two pixel positions, which reflects the brightness distribution characteristics and the second-order statistical characteristics of the location distribution between pixels with the same or similar brightness.
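To make the first-order and GLCM definitions above concrete, the following sketch (ours; the number of grey levels and the single horizontal offset are arbitrary choices, and the Omni-Kinetic implementation will differ in detail) computes a normalised grey-level histogram H(i) = n_i/N and a symmetric co-occurrence matrix for a 2-D parameter map:

```python
import numpy as np

def grey_level_histogram(img, levels=16):
    """First-order statistic H(i) = n_i / N after quantising to `levels` grey levels."""
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)                       # values in 0 .. levels-1
    counts = np.bincount(q.ravel(), minlength=levels)
    return counts / counts.sum()

def glcm_horizontal(img, levels=16):
    """Normalised, symmetric grey-level co-occurrence matrix for horizontally adjacent pixels."""
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    i_idx, j_idx = q[:, :-1].ravel(), q[:, 1:].ravel()   # neighbouring pixel pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (i_idx, j_idx), 1)
    m += m.T                                             # make symmetric
    return m / m.sum()
```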
Statistical analysis

All statistical analyses were carried out in IBM SPSS Statistics software (version 19.0; IBM SPSS, Chicago, IL, USA), with P < 0.05 considered statistically significant. The Student's t-test was used to compare the differences in hemodynamic parameters from high-resolution and low-resolution DCE-MRI of the lesion central area, surrounding peripheral area and background enhancement in the malignant and benign groups (multiple comparisons corrected by family-wise error rate). Logistic regression was used to build models based on texture features for the diagnosis of breast lesions and background enhancement. ROC analysis was used to determine the diagnostic performance of each parameter. Since the sample sizes in the benign and malignant groups were not uniform, over-sampling of the smaller group was conducted using the SMOTE (Synthetic Minority Oversampling Technique) method. For each sample in the smaller group, the Euclidean distance was used as the standard to calculate the distance to all samples in the small sample set, and the nearest neighbours were designated as K. A sampling ratio N was set according to the sample imbalance ratio. For each minority sample X, a number of samples were randomly selected from its K neighbours; assume the selected neighbour is Xn. For each randomly selected nearest neighbour Xn, a new sample was constructed from the original sample according to the standard SMOTE interpolation, X_new = X + rand(0, 1) × (Xn − X). A new sample of the high-resolution DCE-MRI group was constructed as follows: 62 cases → 96 cases; a new sample of the low-resolution DCE-MRI group was constructed as follows: 78 cases → 128 cases. The samples were randomly divided into training and testing groups at a ratio of 0.7:0.3. As potential correlations among many features easily affect the accuracy of the model, we conducted a three-step dimensionality reduction on the obtained features. In the first step, a single-factor method was used to analyze and select features with differences; in the second step, features related to the dependent variable were selected through a general linear model; in the third step, feature dimension reduction was performed with the LASSO (least absolute shrinkage and selection operator) method. After dimension reduction, the model was built by logistic regression, and the classification accuracy on the test group was verified. The area under the ROC curve (AUC) was calculated. The DeLong test was used to compare the ROC curves of the models.

Study population

There were a total of 62 patients who underwent preoperative high-temporal-resolution DCE-MRI (1 + 26 phases) scans. 39 malignant lesions (age range, 31-77 years; average age, 54.1 years; 17 cases located in the right breast, 22 cases located in the left) and 23 benign lesions (age range, 25-79 years; average age, 48.0 years) were identified. In the control group, 78 patients underwent preoperative low-temporal-resolution DCE-MRI (1 + 6 phases) scans, which demonstrated 46 malignant lesions (age range, 32-70 years; average age, 50.0 years; 21 cases located in the right breast, 25 cases located in the left) and 32 benign lesions (age range, 23-70 years; average age, 43.5 years). Table 1 shows the general characteristics of this study. In the malignant group, the correlation between hemodynamic parameters and pathological grade (Grade I to III) of invasive ductal carcinoma (IDC) was further analyzed. It was found that the lesion K trans-mean of the high-temporal-resolution DCE-MRI group was significantly correlated with the pathological grade (r = 0.400, P = 0.012).
In contrast, the low-temporal-resolution DCE-MRI group showed no correlation with the pathological grade (r = −0.012, P > 0.05).

Performance of K trans texture feature models based on two temporal-resolution DCE-MRI protocols

According to the classification models constructed from the texture features based on the K trans map from DCE-MRI, the AUC, accuracy, sensitivity and specificity of the models from the high-temporal-resolution DCE-MRI (1 + 26 phases) group were higher than those of the low-temporal-resolution DCE-MRI (1 + 5 phases) group (Table 3, Fig. 3).

Discussion

In this study, we conducted a randomized controlled study of preoperative high-temporal-resolution and low-temporal-resolution DCE-MRI texture features, including the lesion central area, surrounding peripheral area, and background enhancement area. The quantitative parameters were measured volumetrically, and the ROI was selected to be larger, which was more comprehensive and accurate than the data measured at a single level in most previous studies. Neoadjuvant chemotherapy patients were excluded to avoid the effect of therapy on the lesions. The results of this study indicated that high-temporal-resolution DCE-MRI may be more helpful than low-temporal-resolution DCE-MRI in the differentiation of breast disease from background enhancement. Most previous studies have focused only on the disease itself [8-18, 21, 22] or the peripheral interstitial areas [19,23,24]. The lesion center, periphery and background enhancement area may all be relevant to the differentiation of benign and malignant lesions, yet few studies have comprehensively covered these areas with volumetric measurement. The regions of interest in this study included the lesion central area, surrounding peripheral area and background enhancement area; the data were therefore more comprehensive. It was found that the hemodynamic characteristics of the three ROIs (K trans, K ep, V e, V p, TTP) obtained from the two different temporal-resolution DCE-MRI protocols showed statistical differences. In particular, the joint K trans texture features of the three areas may be helpful for the differentiation of benign and malignant breast disease. These results indicated that comprehensive measurement of the lesions and the background enhancement area could be more accurate in the diagnosis and differentiation of benign and malignant lesions. In this study, the ROC curves showed that the models from the high-temporal-resolution DCE-MRI group performed slightly better than those of the low-temporal-resolution DCE-MRI group, which means that DCE-MRI with high temporal resolution has high application value in the diagnosis of breast diseases. Conventional breast dynamic contrast-enhanced MRI mainly observes the characteristics of the time-signal intensity curve of breast disease (i.e. curve shape, time to peak, early enhancement rate, etc.) and allows semiquantitative analysis of tumor characteristics. However, these semiquantitative parameters are influenced by cardiac output, imaging sequences, contrast medium injection rate, blood flow and so on, and are prone to error [30]. This study used quantitative analysis of breast DCE-MRI with the Extended Tofts model and obtained multiple hemodynamic parameters, such as the endothelial transfer constant K trans, the reflux rate K ep, the fractional EES volume V e and the fractional plasma volume V p.
The extended Tofts model is applicable at temporal resolutions of < 12 seconds, as high temporal resolution can more accurately measure the lesion and observe subtle pharmacokinetic changes in tissue over a very short time. This allows the sequence to accurately capture characteristics of the lesion, including even subtle differences in contrast agent concentration changes. Thus, high temporal resolution may allow more texture features to be extracted than low temporal resolution for accurate diagnosis of breast lesions [31,32]. Although pathology is the gold standard for disease analysis and classification, it is limited by invasiveness and local sampling, which may cause inaccurate pathological results and limit the pathological classification of diseases. DCE-MRI has become an important method for the diagnosis and observation of breast diseases due to its noninvasive, radiation-free nature and its high soft-tissue resolution. In this study, we analyzed the correlation between DCE-MRI and the pathological grading of malignant lesions (invasive ductal carcinoma, IDC). The results showed that only the K trans-mean of the lesion central area on the high-temporal-resolution DCE-MRI was significantly correlated with pathological grade [33]. In this study, only invasive ductal carcinoma was included in the malignant group, because the number of patients with other malignant types was too small (fewer than 5 cases) and they were excluded from the correlation statistics of pathological grade. This indicates that high-temporal-resolution DCE-MRI may offer more diagnostic information within a lesion. K trans-mean reflects the hemodynamic characteristics of the disease and is related to the number and degree of immaturity of blood vessels in the central tumor and peritumoral regions at different pathological grades. Tumor angiogenesis is increased in most malignant lesions, and there are large numbers of immature tumor vessels whose walls are highly permeable. Therefore, low-molecular-weight contrast agents are able to enter the EES through the thin vessel walls. Some previous studies also support that high-temporal-resolution DCE-MRI can more accurately evaluate the hemodynamic microenvironment of tissues, through high-throughput extracted image texture features and quantitative evaluation. For example, the gray-level co-occurrence matrix, with its quantitative characterization of enhancement heterogeneity, can predict breast cancer invasion, prognosis and treatment response [7,[9][10][11][12][13][14][15]. Interestingly, this study found that the K trans texture feature models of the lesion, peripheral and background enhancement areas based on high-temporal-resolution DCE-MRI performed slightly better than those based on low resolution, with the joint textural feature model of the three areas performing better than the others. Background enhancement (BE) of the breast reflects an increase of T1 relaxation after enhancement, which directly reflects the blood supply and permeability of breast tissue. Hormones, especially estrogen, increase the microvascular permeability and vasodilation of breast tissue, causing vascular hyperplasia and ductal gland epithelial proliferation. Progesterone increases metabolic activity by promoting mitosis and leads to increased perfusion of the breast tissue, resulting in background enhancement of the breast. Tissue enhancement in MR imaging is related to vascular distribution, the permeability of contrast agents and the T1 relaxation of the tissue [28].
In breast tissue, the anatomy of the vascular system and the effect of hormones on breast tissue are factors affecting the morphology and degree of BE [29]. High-resolution dynamic contrast-enhanced MR texture analysis can quantitatively evaluate disease and tissue heterogeneity, which is helpful for accurate assessment and early diagnosis of breast diseases and BE [4,10,27]. Thus, high-temporal-resolution DCE-MRI not only provides high application value in the diagnosis of breast disease and background enhancement, but also allows joint texture features of the lesion. There were several limitations in this study. First, the sample size was relatively small; when the number of texture features to be extracted and screened is large, the sample size is challenged, and this needs further study with larger samples. Second, malignant breast lesions of other pathological types were not included in the group, which needs to be studied further. Third, the high-temporal-resolution DCE-MRI scan still needs improvement to further optimize the scanning time. Fourth, the textural feature models in this study were not further validated outside this center; therefore, multi-center validation could be implemented and clinical characteristics added to the models in the future.

Conclusions

In summary, quantitative analysis with high-temporal-resolution DCE-MRI has a slightly higher application value in the diagnosis of breast lesions and background enhancement. Joint texture analysis of the lesion, peripheral area and BE area based on high-temporal-resolution DCE-MRI may be more helpful in diagnosis, and the K trans-mean parameter may contribute to the pathological grading of malignant tumors. Thus, the use of texture analysis based on high-temporal-resolution DCE-MRI may potentially improve breast cancer diagnostic performance. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The Ethics Committee of the First Affiliated Hospital of Zhejiang Chinese Medical University approved this study, for which written informed consent was obtained and patient confidentiality was protected.

Consent for publication: Not applicable.

Availability of data and materials: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Curating a longitudinal research resource using linked primary care EHR data—a UK Biobank case study

Abstract

Primary care EHR data are often of clinical importance to cohort studies; however, they require careful handling. Challenges include determining the periods during which EHR data were collected. Participants are typically censored when they deregister from a medical practice; however, cohort studies wish to follow participants longitudinally, including those that change practice. Using UK Biobank as an exemplar, we developed methodology to infer continuous periods of data collection and maximize follow-up in longitudinal studies. This resulted in longer follow-up for around 40% of participants with multiple registration records (mean increase of 3.8 years from the first study visit). The approach did not sacrifice phenotyping accuracy when comparing agreement between self-reported and EHR data. A diabetes mellitus case study illustrates how the algorithm supports longitudinal study design and provides further validation. We use UK Biobank data; however, the tools provided can be used for other conditions and studies with minimal alteration.

Table S1 summarises data standards developed from the Clinical Practice Research Datalink (CPRD) Aurum "acceptable patient flag" [1]. Data quality was assessed against these standards by data provider. Coding completeness was also assessed and results are summarised in tables S2 to S4. Records with missing dates (as defined in table S1) or codes were excluded. Dates were estimated where records had been de-identified by UK Biobank [2]. Birth-dated records (dated 02/02/1902) were set to the estimated date of birth. Records recorded during the birth year (dated 03/03/1903) were assumed to have been recorded at age six months. Data providers 1 to 3 had large numbers of registration records with missing de-registration dates. These were assumed to be open periods of registration at the date of data extract. Provider 4 (Wales) appeared to use a future date placeholder (07/07/2037) for open periods of registration.

Table S1: Data standards developed from the CPRD Aurum "acceptable patient flag" [1] (CPRD Aurum exclusion criterion → adapted UK Biobank criterion):
- Year of birth is empty → available for all participants.
- Gender other than male, female or indeterminate → available for all participants.
- Age is greater than 115 at end of follow-up (based on registration end date, death or last collection date) → same criterion applied.
- Patients are not permanently registered → all participants with registration records were assumed to be permanently registered.
- Record date was prior to birth.

Participants were censored at the earlier of the data extract date (the start of the date range provided by UK Biobank [2]) and the date of death in linked death registry data when cleaning registration record data. To address the limitations of using practice registration histories to estimate periods of data collection, a rule-based approach was developed to determine the period of EHR data collection for each participant. This is summarised in algorithm A1 and figure S1. An R implementation is provided at https://github.com/philipdarke/ukbb-ehr-data. The algorithm is applied separately to each data provider if a participant has data from multiple providers (for example, a participant that transfers from a medical practice in England to one in Wales).
The resultant period(s) of data collection for each data provider are combined and data collection is assumed to be continuous during gaps of less than 1 year.

Algorithm A1: Algorithm used to identify periods of EHR data collection (continued).

Step 6. Identify the date of the first and last record within each gap G_i, where i = 1, ..., n, and set R_i to the period between the first and last record. (Rationale: participants with multiple periods of practice registration, e.g. as a result of changing GP on relocation, may not be fully captured in the data extract and may feature discontinuous periods of data collection; all additional periods during which EHR data may have been collected are therefore identified. The gaps between registration periods are examined to determine whether any records have been recorded outside of periods of practice registration.)

Step 7. Active EHR data collection is assumed to take place during: a) P_1; b) P_2, ..., P_n; c) all periods R_i containing at least one non-prescription record. (Rationale: active EHR data collection is assumed to take place a) during the period from the start of data collection identified in step 4; b) during subsequent periods of practice registration, i.e. it is assumed that a participant did not move from a practice using an EHR system to a paper-based one; and c) between the first and last record during periods where a participant is not registered with a GP practice. Un-registered periods that only contain prescription records are ignored, since complete data collection is unlikely to have taken place during un-registered periods that only feature prescription records.)

Step 8. Include gaps between the periods identified in step 7 if they are of length less than or equal to 1 year. (Rationale: participants that move GP practice may not have continuous periods of registration; gaps of 1 year or less are included to reflect this.)

Figure S1: Application of algorithm A1 to a synthetic participant (corresponding to example 4 in figure 1 in the main manuscript). The panels show, on a 1980-2020 timeline of GP registration, diagnosis/event, test/observation and prescription records: step 1 (ignore records dated before 1 January 1985), step 5 (identify other potential periods of data collection), step 6 (R_1 = period between first and last records during the gap between registered periods), step 7 (select each period of data collection) and step 8 (join together periods P_1 and R_1 as they are separated by less than one year).

Summary of approach

Observations and biomarkers were recorded in up to three value fields. Each data provider adopted a different approach [3], with numeric test results extracted primarily from the first and second value fields. Units, where available, were typically recorded in the value3 field. d) The median value was taken where multiple test results were recorded on the same day, e.g. blood pressure measurements. e) Measurements and biomarkers recorded at UK Biobank assessment visits were extracted, outliers removed and the values added to those from the EHR data. Unit harmonisation: f) the EHR value was discarded when both a UK Biobank and an EHR observation were recorded on the same day. Additional processing was required where multiple measurements were recorded under the same entity type in the Vision practice management system (for example, weight in value1 and BMI in value2 under code 22A..). became 0205021. All Read v2 codes were trimmed to length 5.
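The combination of provider-level periods and the bridging of gaps of up to one year (step 8 of Algorithm A1) can be illustrated with a short sketch. This is our own simplification of the published R implementation linked above; in particular, the one-year threshold is approximated as 365 days.

```python
from datetime import date, timedelta

def combine_periods(periods, max_gap_days=365):
    """Merge periods of EHR data collection, bridging gaps of <= max_gap_days.

    periods : iterable of (start, end) date pairs, e.g. from one or more providers.
    Returns non-overlapping (start, end) pairs in which consecutive periods
    separated by at most `max_gap_days` are treated as continuous collection.
    """
    merged = []
    for start, end in sorted(periods):
        if merged and (start - merged[-1][1]) <= timedelta(days=max_gap_days):
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Example: re-registration 8 months after leaving the previous practice is
# treated as one continuous period of data collection.
print(combine_periods([(date(2005, 1, 1), date(2012, 6, 30)),
                       (date(2013, 2, 1), date(2020, 1, 1))]))
```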
The majority of prescription records can only be resolved to BNF subparagraph. This is insufficient for some use cases, for example identifying atypical anti-psychotic medication, which is included under BNF subparagraph 0402010 along with other anti-psychotic medications. Tables S45 and S46 illustrate how the drug name field can be used to identify drugs where insufficient detail is included in the BNF coding.

Table S7: UK Biobank prescription data (columns: code, country, data provider and practice system, with an X indicating whether Read v2, BNF or dm+d coding terminology is used by the data provider).

Prescription records were searched for the relevant prescription codes. Figure S3 illustrates the time between prescription records for a selection of drug types. A weekly repeat pattern is present. Previous work based on EMIS data [5] used a 28-day cut-off to determine "active" prescriptions, i.e. a prescription within 28 days of the date of interest evidenced a current drug prescription. Based on figure S3, a 90-day cut-off was used for our analysis.

Figure S3: Time between prescriptions in days for a range of drugs. 28 days is the most common interval (except for anti-psychotics), but gaps of 56 days and beyond are common.

Tables S8 to S11 show the agreement between a selection of self-reported conditions and medications and the processed EHR data. Comparison is made as at the first UK Biobank study visit, as set out in the main manuscript. Hypertension, diabetes and myocardial infarction showed high levels of agreement across evaluation metrics. Transient ischaemic attack (TIA) and mental health conditions showed a high number of "false positives" (low precision), where EHR codes were present but participants did not self-report the condition. Potential reasons for this include reluctance to self-report or erroneous code recording (for example a suspected TIA, since diagnosis codes are rarely removed if a TIA is later ruled out).

Diabetes incidence criteria

Participants were assumed to enter a diabetic state at the date of the first diagnosis code. Where multiple diabetes sub-types were present in the data, participants were assumed to have type 2 diabetes, with the exception of those under age 35 with an insulin prescription within one year of diagnosis, who were assumed to have type 1 diabetes.

Diabetes remission criteria

Remission was defined as the cessation of all diabetes medication followed by two sub-diabetic blood glucose test results (glycated hemoglobin (HbA1c) < 48 mmol/mol or fasting plasma glucose < 7.0 mmol/l) separated by at least six months [7].

Pre-diabetes criteria

Normoglycaemic participants were deemed to enter a pre-diabetic state on the first date they met National Institute for Health and Clinical Excellence PH38 criteria [8]: a) HbA1c ≥ 42 mmol/mol or fasting plasma glucose ≥ 5.5 mmol/l (two-hour oral glucose tolerance test results ≥ 7.8 mmol/l were also included to capture those with impaired glucose tolerance [9]); b) no previous diabetes diagnosis; c) no diagnosis in the subsequent three months (excluding gestational diabetes), to allow for the delayed recording of a clinical code following a diagnosis based on a blood glucose test. Glucose tests during periods of gestational diabetes or anti-diabetes medication were not tested against these criteria.
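As an illustration of how the remission rule above could be applied to an individual's post-medication glucose tests, the following sketch uses the stated thresholds (HbA1c < 48 mmol/mol, fasting plasma glucose < 7.0 mmol/l) and approximates six months as 182 days. Whether intervening diabetic-range results should reset the clock is not specified in the text, so this is a simplification.

```python
from datetime import timedelta

def in_remission(last_medication_date, glucose_tests):
    """Remission rule: after all diabetes medication has stopped, two sub-diabetic
    blood-glucose results at least ~six months apart.

    glucose_tests : list of (date, hba1c_mmol_mol, fpg_mmol_l) tuples; a missing
    measurement can be passed as None.
    """
    def sub_diabetic(hba1c, fpg):
        return (hba1c is not None and hba1c < 48) or (fpg is not None and fpg < 7.0)

    ok = sorted(d for d, hba1c, fpg in glucose_tests
                if d > last_medication_date and sub_diabetic(hba1c, fpg))
    return any(b - a >= timedelta(days=182)
               for i, a in enumerate(ok) for b in ok[i + 1:])
```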
QDiabetes-2018

QDiabetes-2018 was evaluated in line with Hippisley-Cox and Coupland [5] using a study start of 1 January 2005 and the same inclusion criteria. Performance was evaluated using our algorithm to identify periods of data collection and our phenotyping approach to determine predictors and outcomes (tables S12 and S14) and, for comparison, assuming complete data collection during periods of GP registration record, with predictors/outcomes determined as in Hippisley-Cox and Coupland [5] (tables S13 and S15). Our algorithm results in broadly the same number of eligible participants (205,901 vs 205,290) but longer post-visit follow-up (mean of 10.6 vs 9.7 years). Periods of data collection identified using our algorithm and outcomes using our phenotyping approach.

Figure S5: Calibration of the QDiabetes-2018 model on UK Biobank data (all data providers). Periods of data collection identified using GP registration records and outcomes as in Hippisley-Cox and Coupland [5].

Leicester risk score

The Leicester risk score was calculated as in Gray et al. [10]. Performance was evaluated when predicting the 5-year incidence of diabetes at the first UK Biobank visit. 5-year performance was used due to the relatively small number of UK Biobank participants with 10 years of EHR follow-up after the first study visit. Our algorithm was used to identify periods of data collection and our phenotyping approach to determine outcomes (table S16). For comparison, the evaluation was also carried out assuming complete data collection during periods of GP registration record and outcomes determined as in Hippisley-Cox and Coupland [5] (table S17). Results are shown in table S18. Our algorithm results in a larger number of participants deemed to have EHR data collection from the first UK Biobank study visit (186,208 vs 181,233 when using registration record data) and longer post-visit follow-up (mean of 7.4 vs 6.9 years). Leicester score performance for the 5-year incidence of diabetes was generally higher using our algorithm and phenotyping approach.

Table S18: Confusion matrices for the 5-year incidence of diabetes following the first UK Biobank visit using the Leicester score (score < 16 vs score ≥ 16), with active data collection and diabetes outcomes determined either using our algorithm/phenotyping tool or using GP registration records/as QDiabetes [5]. For comparison, Barber et al. [11] reported sensitivity of 89.2% and specificity of 42.3% for 10-year diabetes prediction.

Where prescription records could not be resolved to the active substance, drugs were identified by searching the drug name field in all TPP records with BNF code starting 0402 for the relevant generic and brand terms; fuzzy matching was used, with case-insensitive search terms tabulated per drug.

UK Biobank coding

Demographic inputs were taken from UK Biobank visit data as these were generally unavailable in the linked EHR data. Relevant biomarkers measured by UK Biobank were used to augment those extracted from the EHR data. The fields used are summarised in table S47. Mappings for ethnicity and smoking are summarised in tables S48 to S50. The codes used to identify self-reported conditions are in table S51. Self-reported medications were identified from field 20003 by searching for drug names (both generic and brand names). The search terms are available at https://github.com/philipdarke/ehr-codesets. Any self-reported medications matching a steroid term were excluded if they also included the terms "eye", "ear" or "cream", as the aim was to identify "regular steroid tablets" as under the QDiabetes-2018 model.
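The self-reported medication search described above can be illustrated with a short sketch. The steroid terms below are an illustrative subset only; the full generic and brand-name term lists are in the linked codesets repository.

```python
import re

STEROID_TERMS = ["prednisolone", "dexamethasone", "hydrocortisone"]   # illustrative subset
TOPICAL_TERMS = ["eye", "ear", "cream"]

def is_regular_steroid_tablet(self_reported_name):
    """Flag a self-reported medication (UK Biobank field 20003 text) as a regular
    oral steroid: it must match a steroid term and must not look like an
    eye/ear/topical preparation."""
    name = self_reported_name.lower()
    if not any(term in name for term in STEROID_TERMS):
        return False
    return not any(re.search(rf"\b{t}\b", name) for t in TOPICAL_TERMS)

print(is_regular_steroid_tablet("Prednisolone 5mg tablets"))   # True
print(is_regular_steroid_tablet("Hydrocortisone 1% cream"))    # False
```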
Enantioselective Recognition of L-Lysine by ICT Effect with a Novel Binaphthyl-Based Complex

A novel triazole fluorescent sensor was efficiently synthesized using binaphthol as the starting substrate with 85% total end product yield. This chiral fluorescence sensor was proved to have high specific enantioselectivity for lysine. The fluorescence intensity of R-1 was found to increase linearly when the equivalent amount of L-lysine (0-100 eq.) was gradually increased in the system. The fluorescence intensity of R-1 in the presence of L-lysine was significantly enhanced, accompanied by a red-shift of the emission wavelength (389 nm to 411 nm), which was attributed to enhanced electron transfer within the molecular structure, resulting in an ICT effect, while the fluorescence response to D-lysine showed a decreasing trend. The enantioselective fluorescence enhancement ratio for the maximum fluorescence intensity was 31.27 [ef = |(IL − I0)/(ID − I0)|, 20 eq. Lys]; thus this fluorescent probe can be used to identify and distinguish between different configurations of lysine.

Introduction

In recent years, fluorescent sensors have gained more attention from researchers due to their better sensitivity, short recognition response time, good selectivity, and ease of operation. Among many recognition applications, the enantiomeric fluorescence recognition of chiral organic compounds has gradually become a research hotspot for fluorescent probe design [1][2][3][4][5][6]. A literature review reveals that highly enantioselective fluorescent sensors for recognizing chiral molecules (e.g., amino alcohols and amino acids and related chiral small-molecule analogs) are also being developed [7,8]. Chiral molecules are also receiving increasing attention in materials composition, organic synthesis, and agricultural products. Amino acids are used as nucleophilic reagents in various reactions occurring at carbon-carbon bonds [9]. Amino acids are also an indispensable part of the human body. In the human body, amino acids are important raw materials for protein synthesis and are equally important for human development, normal metabolism, and the continuation of life [10]. Lysine is a fatty amino acid and one of the basic essential amino acids for humans and mammals [11][12][13][14]. The body cannot synthesize it by itself, and it must be supplemented with food. L-Lys can promote skeletal muscle growth [15,16], enhance the body's immunity, resist viruses, promote fat oxidation, relieve anxiety, etc. It also promotes the absorption of certain nutrients and can work synergistically with some nutrients to improve their performance; such synergistic effects can better express the physiological functions of various nutrients [17,18]. The deficiency of D-Lys can lead to adverse effects such as uremia. The absorption efficiencies of D-Lys and L-Lys are different: D-Lys can hardly be absorbed and utilized, and the biological activity is mainly due to L-Lys. Lysine is essential for the function of proteins. As a drug target, lysine is not only versatile…

Materials and Methods

Shanghai Aladdin provided all the reagents used in this experiment, and the chemicals used were purchased from the corresponding suppliers or synthesized by known routes. 1H NMR and 13C NMR spectra were measured on a Bruker AM-400WB spectrometer, the instrument used for fluorescence experiments was a Hitachi F-7100 fluorescence spectrophotometer, and melting points were determined using an X-4 melting point tester.
The optical rotation was measured with a Rudolph AUTOPOL IV automatic polarimeter in chromatographic methanol, the ESI-MS spectral data were measured with a Bruker Amazon ion-trap mass spectrometer, and nonlinear curve fitting was performed with Origin 2021.

R-3,3'-bis(hydroxymethylene)-2,2'-bis(methoxy-methoxy)-1,1'-binaphthyl (R-b) (2.0 g, 4.6 mmol) and 10 mL of ultra-dry THF were added to a 50 mL single-neck flask; sodium hydride (0.49 g, 20.2 mmol) was then added in batches and the reaction continued for 2 h, after which 3-bromopropyne (1.8 mL, 23.0 mmol) was slowly added to the reaction system. The reaction was left overnight after the addition. After the reaction was confirmed complete by thin-layer chromatography (TLC), work-up gave 1.8 g of a yellow oily liquid, Y = 76.6%. Under ice-bath conditions, this product (1.8 g, 1.5 mmol) was added to a 100 mL single-neck flask and dissolved in 20 mL of CH3OH/THF (CH3OH:THF = 2:3); after 10 minutes, 5 mL of concentrated hydrochloric acid was slowly added dropwise and the mixture was allowed to react for a further 3 hours. After TLC showed the reaction was complete, rotary evaporation and silica gel column chromatography (eluent petroleum ether:ethyl acetate = 3:1) gave a yellow oily liquid, Y = 72.5%.

R-3,3'-bis((prop-2-yn-1-yloxy)methyl)-[1,1'-binaphthalene]-2,2'-diol (R-c) (1.0 g, 2.26 mmol) and methyl 2-azidoacetate (0.5 mL, 5.4 mmol) were added to a 100 mL flask with 20 mL of tetrahydrofuran. After the reaction system was stirred for 5 min at 0 °C, sodium ascorbate (0.98 g, 4.9 mmol), copper sulfate pentahydrate (0.56 g, 2.2 mmol) and 8 mL of water were injected into the reaction system, which was left overnight. After TLC showed the reaction was complete, the reaction was quenched in ice water; the mixture was extracted three times with dichloromethane, washed once with saturated brine, and dried over anhydrous magnesium sulfate. Filtration and silica gel column chromatography (eluent petroleum ether:ethyl acetate = 1:2) gave 1.25 g of a white solid.

S-3,3'-bis((prop-2-yn-1-yloxy)methyl)-[1,1'-binaphthalene]-2,2'-diol (S-c) (0.30 g, 0.67 mmol) and methyl 2-azidoacetate (0.16 mL, 1.60 mmol) were added to a 100 mL flask with 10 mL of tetrahydrofuran. After the reaction system was stirred for 5 min at 0 °C, sodium ascorbate (0.30 g, 1.50 mmol), copper sulfate pentahydrate (0.17 g, 0.67 mmol) and 6 mL of water were injected into the reaction system, which was left overnight. After TLC showed the reaction was complete, the reaction was quenched in ice water; the mixture was extracted three times with dichloromethane, washed once with saturated brine, and dried over anhydrous magnesium sulfate.

Preparation of Solutions Required for the Test

65 mg of probe R-1 and 65 mg of probe S-1 were placed into two 10 mL volumetric flasks, dissolved in chromatographic methanol and made up to 10 mL. At this point the concentration of the test stock solution is 0.1 M; the stock solution is then diluted to a concentration of 2.0 × 10⁻⁵ mol L⁻¹, which must be prepared fresh before use.
The commonly used amino acids (D/L-cysteine, D/L-phenylalanine, D/L-alanine, D/L-methionine, D/L-proline, D/L-lysine, D/L-glutamic acid, D/L-glutamine, D/L-arginine, D/L-serine, D/L-threonine, D/L-asparagine, D/L-aspartic acid, D/L-valine, D/L-histidine, D/L-tryptophan, D/L-leucine, D/L-tyrosine) were prepared as 0.1 M solutions in deionized water, which must be prepared fresh before use. At a room temperature of 25 °C, 2 mL of the R-1 test solution was added to a 3.5 mL high-transparency quartz fluorescence cuvette, followed by 8 µL of the amino acid to be measured, and the fluorescence spectral response was then recorded; λex = 318 nm, slits = 5.0/2.5 nm, and the fluorescence response time was 0.8 s.

Synthesis Scheme

Scheme 1 shows the synthesis procedure for probes R-1 and S-1. BINOL fluorescent probes R-1 and S-1 modified by the triazole group were synthesized using dinaphthol as the starting substrate. Propynyl derivatives R-c and S-c were synthesized by addition and substitution reactions, and the probes were then synthesized by Click reaction with methyl azidoacetate in THF as solvent at room temperature. The yield is as high as more than 80%. The synthesized compounds were validated by 1H NMR, 13C NMR, and ESI-MS.

Scheme 1. The synthesis steps of compounds R-1 and S-1.

Fluorescence Studies of Lysine

The fluorescence response of the R-1 probe to thirty-six amino acids (eighteen pairs of amino acid isomers) is as follows. The 0.1 M amino acid solutions were prepared in deionized water, and R-1 was prepared in chromatographic methanol at a concentration of 2.0 × 10⁻⁵ M. During the test, each pair of amino acid isomers (20 eq.) was added separately to the test master solution R-1 for fluorescence response testing. As shown in Figure 1a, only L-Lys produced a significant fluorescence intensity change among the thirty-six amino acids added to the test system; L-Lys significantly enhanced the fluorescence response of the R-1 probe, with an I/I0 value of 1.79. The following studies were then conducted by examining the fluorescence response of the lysine enantiomers. In the chromatographic CH3OH system, when L-Lys (20-fold equivalent) was added to the probe R-1, the fluorescence intensity at λ = 402 nm was substantially enhanced and the fluorescence emission wavelength was red-shifted (from 389 nm to 402 nm). In the same conditions, when D-Lys was added, the fluorescence intensity at λ = 389 nm was reduced and the relative change was not significant (Figure 1b); the value of the enantioselective fluorescence enhancement ratio was calculated as 31.27 [ef = |(IL − I0)/(ID − I0)|], where I0 refers to the fluorescence intensity of the fluorescent probe without the addition of the guest molecule, and IL and ID refer to the fluorescence intensities after the addition of the L-configuration and D-configuration guest molecules.
We further investigated the selective fluorescence recognition of the D-/L-Lys enantiomers by the chiral sensor R-1 (2.0 × 10⁻⁵ mol/L in chromatographic CH3OH). Referring to Figure 2a,b, the fluorescence titration spectra of R-1 with D-Lys and L-Lys verified that, with increasing L-Lys concentration (up to a 100-fold equivalent), the fluorescence intensity of probe R-1 at λ = 411 nm was enhanced to 2.41 times the initial intensity and the emission wavelength was red-shifted from 389 nm to 411 nm; under the same conditions, with the addition of D-Lys (100-fold equivalent), the fluorescence intensity at λ = 389 nm showed a decreasing trend relative to the initial intensity and the emission wavelength did not change. Analysis of the titration plots of D-/L-Lys shows that the fluorescence intensity decreases linearly with D-Lys and increases linearly with L-Lys, R = 0.9735 (as shown in Figure 2c). In order to verify the stability of the complex between L-Lys and R-1, a saturation curve of the L-Lys titration was recorded (Figure 2d), from which the complex stability constant Ka = 6.47 × 10⁶ M⁻¹ was calculated by nonlinear fitting. It is therefore speculated that this behaviour is due to the unique spatial structure of the R-1 molecule, built on the axial chirality of R-BINOL, which recognizes the two configurations of the lysine guest with different strengths.
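As a worked illustration of the nonlinear fitting mentioned above, the sketch below fits a simple 1:1 host-guest binding isotherm to fluorescence titration data with SciPy. The model form (which neglects host depletion) and the synthetic data are assumptions for demonstration, not the authors' actual fitting procedure or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_1to1(conc_guest, i_0, i_max, k_a):
    """1:1 binding isotherm: I = I0 + (Imax - I0) * Ka*[G] / (1 + Ka*[G])."""
    return i_0 + (i_max - i_0) * k_a * conc_guest / (1.0 + k_a * conc_guest)

# Synthetic titration data (guest concentration in mol/L), for demonstration only.
rng = np.random.default_rng(1)
conc = np.linspace(0.0, 2.0e-6, 20)
intensity = binding_1to1(conc, 2548.0, 6148.0, 5.0e6) + rng.normal(0, 20, conc.size)

popt, _ = curve_fit(binding_1to1, conc, intensity, p0=(2500.0, 6000.0, 1.0e6))
print(f"fitted Ka = {popt[2]:.2e} M^-1 (placeholder data)")
```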
Enantiomeric Excess Studies
We further investigated the composition of lysine enantiomer mixtures. Probe R-1 was mixed with lysine samples of different ee values [21,22], and fluorescence response measurements were performed (Figure 3). The fluorescence intensity gradually increases with increasing excess of L-Lys. Referring to Figure 3, the fluorescence of R-1 at 389 nm shows a linear increase with increasing enantiomeric excess (ee > 0), and these curves can be used to identify the enantiomeric composition of the amino acid. The trend of the fluorescence intensity of probe R-1 towards the lysine enantiomers confirms that probe R-1 can be used to determine the enantiomeric composition of lysine. It is thus speculated that the molecular structure of the R-type compound R-1 forms a space more suited for L-Lys to approach the recognition site and form R-L complexes, thereby achieving the selective recognition of the lysine enantiomers.
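The linear relationship between fluorescence intensity and enantiomeric excess described above lends itself to a simple calibration; the sketch below builds such a calibration line with NumPy and uses it to estimate the ee of an unknown sample. All numbers are illustrative placeholders, not the data behind Figure 3.

```python
import numpy as np

# Hypothetical calibration data: ee of L-Lys vs. intensity at 389 nm.
ee = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])           # ee = ([L] - [D]) / ([L] + [D])
intensity = np.array([2600, 3050, 3490, 3960, 4400, 4870])  # illustrative values

slope, intercept = np.polyfit(ee, intensity, 1)           # least-squares line I = a*ee + b

def estimate_ee(i_measured: float) -> float:
    """Invert the calibration line to estimate the ee of an unknown mixture."""
    return (i_measured - intercept) / slope

print(f"estimated ee of unknown sample: {estimate_ee(3700):.2f}")
```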
Study of the Reaction Mechanism
In order to better investigate the recognition mechanism between lysine and the fluorescent probe R-1, a 1H NMR spectroscopic titration test was performed: 6.5 mg of R-1 was dissolved in DMSO and L-Lys was dissolved in D2O (lysine is only soluble in water); 1.0 eq. of L-Lys was first added and a spectrum was recorded, followed by 2.0 eq. of L-Lys and another spectrum (Figure 4). Analysis of the 1H NMR spectra revealed that several sets of signal peaks of probe R-1 showed different trends of chemical shift change after the addition of L-Lys. Since Ha is the reactive hydrogen atom of the hydroxyl group, it may exchange with water after the aqueous solution is added, so that its signal peak at 8.4 ppm disappears. The two Hb atoms on the triazole groups act as hydrogen-bond donors and can form intermolecular hydrogen bonds with the lone pair of electrons of the amino group at the L-Lys terminus [23]; because Hb participates in the formation of these intermolecular hydrogen bonds, its chemical shift moves from 7.96 ppm to 7.77 ppm, a change of 0.19 ppm. The chemical shift of Hd also changed from 2.82 ppm to 2.65 ppm, a change of 0.17 ppm, and the chemical shift of Hc was slightly shifted to the high-field region, from 5.44 ppm to 5.36 ppm, a change of 0.08 ppm. The signal peaks on the naphthalene ring did not change significantly, and from this analysis it can be concluded that the change mainly occurs at the triazole group.
According to the structural analysis diagram in Figure 4b, the strong electron-donating ability of the lone pair of electrons of the N atom in the amino group at the end of L-Lys enhances the electron-withdrawing ability of Hb on the triazole group, which increases electron transfer within the molecular structure and causes an intramolecular charge transfer (ICT) effect [24-28], resulting in the red shift of the emission wavelength and the enhancement of the fluorescence intensity. According to the titration diagram in Figure 2b, the emission wavelength increased by 22 nm, from 389 nm to 411 nm, while the fluorescence intensity was enhanced 2.41-fold, from 2548 to 6148. No significant wavelength or fluorescence intensity changes were found during the titration of D-Lys, according to the titration diagram in Figure 2a. The literature indicates that most fluorescence recognition events that produce a red shift or blue shift involve an ICT mechanism, so the recognition of lysine can be distinguished by this red-shift phenomenon and the corresponding fluorescence intensity change.

Complexation Ratio Studies
As shown in Figure 5, to determine the complexation relationship between probe R-1 and L-Lys, a complexation ratio (continuous variation, or Job plot) experiment was performed. During the test, the total concentration of the mixed solution of probe R-1 and L-Lys was kept at 2.0 × 10⁻⁵ M. The results showed that the response reached its maximum when the molar fraction [L-Lys]/([R-1] + [L-Lys]) was 0.5, indicating that probe R-1 binds L-Lys as a 1:1 complex. It is thus known that the H atoms on the two triazole groups of the R-1 structure form intermolecular hydrogen bonds with the N atom of the L-Lys end group.
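The continuous-variation analysis described above can be reproduced numerically; the sketch below locates the maximum of a Job plot to infer the binding stoichiometry. The data points are synthetic placeholders chosen to mimic a 1:1 complex, not the measurements behind Figure 5.

```python
import numpy as np

# Mole fraction of L-Lys, x = [L-Lys] / ([R-1] + [L-Lys]), at constant total concentration.
x = np.linspace(0.0, 1.0, 11)
# Synthetic Job-plot response for a 1:1 complex: proportional to x * (1 - x), plus noise.
response = x * (1.0 - x) + np.random.default_rng(0).normal(0, 0.005, x.size)

# Fit a parabola and locate its maximum; x_max near 0.5 indicates 1:1 binding.
a, b, c = np.polyfit(x, response, 2)
x_max = -b / (2.0 * a)
ratio = x_max / (1.0 - x_max)   # guest:host stoichiometry implied by the maximum
print(f"maximum at x = {x_max:.2f}  ->  approximate L-Lys : R-1 ratio = {ratio:.2f} : 1")
```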
Study of the Specific Reaction with Lysine
Through experiments such as the NMR titration and the complexation-ratio titration, it was found that the recognition group of the synthesized probe is the triazole group: Hb at the C5 position of the triazole group binds to the lone pair of electrons of the N atom of the amino group at the L-Lys end position. Considering the structures of the eighteen pairs (thirty-six) of amino acids examined, four amino acids carry amino-type groups at the end of their side chains, namely asparagine, glutamine, arginine, and lysine. Their structures are compared as follows. (1) Although the side-chain termini of Asn, Gln, and Arg also contain amino groups, these amino groups are connected to a carbonyl group or an imine, so the lone pair of electrons of the N atom forms p-π conjugation with the carbonyl group or imine, which weakens its electron-donating ability; the side chain of Lys is just an ordinary alkyl chain, and its electron-donating ability is relatively strong. (2) Structural analysis also shows that the side chains of Asn, Gln, and Arg have greater steric hindrance than that of Lys, which has only a single alkyl chain and therefore smaller steric hindrance. In summary, this probe shows fluorescence recognition only for Lys.

Recognition of Enantiomers S-1 and R-1
In the same test environment, a series of tests was also carried out on S-1, the enantiomer of R-1. According to Figure 6, both probes show the same fluorescence response with specific recognition of lysine (see the Supporting Materials for the specific test results).
Fluorescence Titration of R-1 and S-1 by L-Lys
The results of titrating R-1 and S-1 with L-Lys are shown in Figure 7. When the titration with L-Lys is started, the fluorescence emission at λ = 389 nm is enhanced and the emission wavelength shows a significant red shift. However, the fluorescence enhancement rate of R-1 is higher than that of S-1, indicating that the recognition is enantioselectively differentiated; according to the linear Benesi-Hildebrand equation, the fluorescence titration data can be analysed further.

Fluorescence Titration of R-1 and S-1 by D-Lys
The results of titrating R-1 and S-1 with D-Lys are shown in Figure 8. When the titration with D-Lys is started, the fluorescence emission at λ = 389 nm does not change significantly, and the fluorescence intensity decreases slightly. Similarly, the fluorescence intensity of R-1 varies more than that of S-1, indicating that the recognition can be selectively distinguished.

Conclusions
In summary, a novel chiral BINOL fluorescent probe linked through triazole groups was synthesized by a click reaction and a nucleophilic substitution reaction. When 36 amino acids (18 amino acids, each in two configurations) were added to the probe, it specifically identified and distinguished the different configurations of lysine. The study of the different configurations of lysine showed that L-Lys notably enhanced the fluorescence intensity and red-shifted the emission wavelength; this phenomenon is attributed to the stronger electron-donating ability of the lone pair of electrons of the N atom in the amino group at the L-Lys end position, which enhances the electron-withdrawing ability of the H on the triazole group, improves the electron-transfer ability within the molecular structure, and causes an intramolecular charge transfer (ICT) effect. Among the 36 amino acids added, four carry amino-type groups at the end of their side chains, namely Asn, Gln, Arg, and Lys. Because lysine has a simple alkyl chain, it has a stronger electron-donating ability and less steric hindrance, so the probe shows strong fluorescence recognition of lysine. We also explored the enantiomer S-1 (as shown in the Supplementary Materials) and found that its distinguishing recognition of chiral amino acids shows the same recognition preference.
As amino acids are essential to the human body and different configurations of an amino acid show different activities, the use of chiral fluorescent sensors to recognize the different configurations of amino acids is one of the necessary methods. Conflicts of Interest: The authors declare no conflict of interest.
6,967.8
2023-02-21T00:00:00.000
[ "Chemistry", "Biology" ]
Automatic Defect Detection in Spring Clamp Production via Machine Vision
There is an increasing demand for automatic online detection systems, and computer vision plays a prominent role in this growing field. In this paper, an automatic real-time detection system for spring clamps based on machine vision is designed. Its hardware is composed of a specific light source, a laser sensor, an industrial camera, a computer, and a rejecting mechanism. The camera starts to capture an image of the clamp once triggered by the laser sensor. The image is then sent to the computer for defect judgment and location through gigabit Ethernet (GigE), after which the result is sent to the rejecting mechanism through RS485 and the unqualified clamps are removed. Experiments on real-world images demonstrate that the pulse coupled neural network can extract the defect region and judge defects. The system can recognize any defect greater than 10 pixels at a speed of 2.8 clamps per second. Segmentations of various clamp images are implemented with the proposed approach, and the experimental results demonstrate its reliability and validity.

Introduction
With increasing demands on production quality, system performance, and economic requirements, modern industrial processes are becoming more complicated both in structure and in degree of automation. The reliability and safety of these complicated industrial processes have become the most critical aspects of system design and are receiving considerable attention nowadays [1,2]. The spring clamp detection system discussed in this paper also faces the problems mentioned above.

A spring clamp is formed by rolling spring steel into a ring, with two ears left around the circle. To install it, the ears are pressed together firmly, which enlarges the opening so that the clamp can be set onto the inner tube; the grip of the hand can then be released. These are the simple steps for using a spring clamp. The adopted material has high elasticity, good physical properties, and strong firmness, which makes the clamp suitable for connecting the pipe systems used for vehicle cooling, heating, and ventilation. The high elasticity of the clamp compensates for pipe shrinkage caused by temperature changes or degradation of the pipe. In its natural state, the clamp does not exert a clamping force; it does so only when fitted onto a larger pipe. The clamp guarantees the reliability of the connection by generating a permanent deformation through the uniform pressure around the circle, which keeps the fastening force and service life within a reasonable range.

The clamp detected in this paper is widely used. It is one of the essential accessories used to fix vehicle pipes, and its quality, security, and service life are of the utmost importance to vehicle performance. As a result, clamp detection is a problem of great concern to manufacturers. At present, detection still relies on manual inspection, which causes problems such as high labor intensity, low speed, and misdetection. Some foreign institutes started relevant research to solve these problems more than 10 years ago and have made progress in areas such as bottle defect detection, fabrication defect detection, vehicle headlamp lens defect detection, and web defect detection. Domestic research on automatic detection technology began in the 1990s, with examples including the detection of presswork quality and the detection of float glass fabrication.
The production line introduced in this paper is in a non-enclosed space, so the spot lighting is affected by natural light and indoor illumination, and serious background noise is produced. According to these peculiarities, we designed a spring clamp real-time detection system based on machine vision. Built on both sides of the production line, this system consists of a special light source to weaken the light interference, a camera with high resolution, an external trigger device which triggers the camera to acquire an image, a host computer which runs a program to extract an image, recognize it, give a YES/NO judgment, and display the final results on the screen, and a rejecting mechanism which is used to reject nonconforming products. This system makes it possible to detect the spring clamps automatically and greatly improves the rates of defect identification and defect removal, thereby achieving the goals of increasing product quality and avoiding actual losses.

Problem Description
According to the detection characteristics of the production line, the defect detection system is built on both sides of the line. The online real-time detection device depends on the external trigger device to trigger the camera to acquire an image, so it has no influence on the real production line. There are four parts in this system: optical illumination, image acquisition, image processing, and the rejecting mechanism.

For an irregular circular clamp with a smooth curved surface, the image quality is affected by the illumination mode. Back lighting makes the clamp contour clear, but it also makes it hard to distinguish details. Front parallel lighting submerges the defects by forming highlights on the other side of the hollow when the light shines through the hollow in the clamp, as shown in Figure 1. In order to solve these problems, a new illumination mode, back lighting combined with lateral lighting, is proposed in this paper. The details are as follows: (1) front: a white frosted glass is set as the background and a quadrilateral light source is placed in parallel under the glass; (2) lateral: three cameras are placed above the glass with a 120° angle between each other, and each camera obtains an image of 1/3 of the clamp after the relative positions are fixed, as illustrated in Figure 2. Images from these three cameras constitute the entire outside surface of the clamp, after which the capturing process is finished. The detection process is shown in Figure 3 (a sketch of the corresponding control loop is given below): when the clamp passes trigger 1, trigger 1 enables camera A to capture a backlit image of the clamp, from which the shape and ears are detected. When the clamp passes trigger 2, trigger 2 enables cameras B, C, and D to each capture an image of the clamp. These three images, which together constitute the whole external surface of the clamp, are then sent to the computer to detect its color, scratches, height, and so on. Once a defect appears, a signal is sent to the rejecting mechanism through RS485 to remove the defective clamp.
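To make the trigger-capture-judge-reject flow concrete, the following Python sketch outlines the control loop just described. The functions wait_for_trigger, grab_image, is_defective, and send_reject_signal are hypothetical placeholders standing in for the laser-trigger input, the GigE camera SDK, the image-processing routine, and the RS485 interface; they are not APIs from any specific library.

```python
import random

# --- Hypothetical stand-ins for the hardware interfaces (not real library APIs) ---
def wait_for_trigger(n): pass                                    # block until laser trigger n fires
def grab_image(camera): return object()                          # fetch one frame over GigE
def is_defective(image, checks): return random.random() < 0.01   # placeholder judgment
def send_reject_signal(port): print("reject ->", port)           # RS485 reject command

def inspect_one_clamp(cameras, rs485_port):
    """One pass of the two-stage inspection flow described in the text."""
    # Stage 1: backlit image from camera A for shape/ear checks.
    wait_for_trigger(1)
    defect = is_defective(grab_image(cameras["A"]), checks=("shape", "ears"))
    # Stage 2: lateral images from cameras B, C, D cover the full outer surface.
    wait_for_trigger(2)
    for name in ("B", "C", "D"):
        defect = defect or is_defective(grab_image(cameras[name]),
                                        checks=("color", "scratch", "height"))
    # Reject over RS485 if any check failed.
    if defect:
        send_reject_signal(rs485_port)

inspect_one_clamp({"A": None, "B": None, "C": None, "D": None}, "COM3")
```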
Image Processing Algorithm
Image processing is the core technique of this system. With the development of related techniques, many methods have been presented, such as neural networks (NN) and wavelets, but the applicability of each algorithm is narrow and effective only for its particular scene. The detection algorithm is the kernel of this system. The process is as follows: read the images from the four cameras, respectively, and preprocess them; detect the height, color, shape, cracks, and so on; locate the defects, assess them, display the results on the computer, and remove the defective clamps with the rejecting mechanism.

Image Preprocessing. During the production of the spring clamps, image noise exists in the original images captured under the limited environmental conditions; as a result, the original images must be preprocessed. The algorithm is used to reduce noise and delete unwanted regions.

Region of Interest (ROI). The field of view of the captured pictures is fixed because the relative position of the camera and the clamp is fixed in the mechanical design of the system. In order to increase the detection rate and the real-time performance of the system, it is necessary to locate the ROI before image segmentation. We can judge the image based on abrupt changes in grayscale, which determines the ROI effectively and quickly. Summing the gray values of each column of the image in the spatial domain (as shown in Figure 4), we obtain C(j) = Σ_i f(i, j), where f(i, j) is the gray value of the pixel at coordinate position (i, j). In a similar way, summing the gray values of each row gives R(i) = Σ_j f(i, j). As can be seen, abrupt changes in grayscale cause corresponding changes of C and R. Subtracting two adjacent values of C, we obtain D(j) = C(j + 1) − C(j). Considering the characteristics of the clamp, we set a maximum tolerance Δmax = 10 pixels and set up a flag array with flag[j] = 1 if |D(j)| > Δmax and flag[j] = 0 otherwise. In Figure 5, applying the above algorithm, the first nonzero value in the flag series gives index X5 and the last nonzero value gives index X3, which determine the upper and lower edges; in a similar way, the left and right edges Y2 and Y4 are obtained. Thus, the upper-left corner (X5, Y2) and lower-right corner (X3, Y4) are generated, and finally the ROI area is fixed, as shown in Figure 5 (see the sketch at the end of this subsection).

Median Filtering Based on Weight Values. The complicated environment of the factory causes noise in the captured images. Meanwhile, uneven tiny scratches and spots can also be caused in industrial processes. Image filtering solves these problems effectively. In order to eliminate noise while protecting the details of the image, a median filter is implemented to change the values of those pixels which have a large gray-level difference from the surrounding pixels, thereby eliminating isolated noise. In order to overcome the drawbacks of a simple local median filter, different weight values are given to the pixels involved in the operation, which gives the weighted median filter. Normally the weight values are determined according to the following principles: (1) assign a larger weight to the pixel being processed and smaller weights to the rest; (2) determine the weights of the other pixels according to their distance to the pixel being processed, the closer the larger; (3) determine the weights of the other pixels according to the closeness of their gray values to that of the pixel being processed.
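The gray-sum difference search described in the ROI paragraph above can be written compactly with NumPy; the sketch below is a minimal illustration of that idea (the threshold and the test image are placeholders), not the authors' implementation.

```python
import numpy as np

def locate_roi(img: np.ndarray, delta_max: float = 10.0):
    """Locate a rectangular ROI from abrupt changes in row/column gray-value sums."""
    col_sum = img.sum(axis=0).astype(float)   # C(j): sum of each column
    row_sum = img.sum(axis=1).astype(float)   # R(i): sum of each row

    col_flag = np.abs(np.diff(col_sum)) > delta_max   # flag array for columns
    row_flag = np.abs(np.diff(row_sum)) > delta_max   # flag array for rows

    cols = np.flatnonzero(col_flag)
    rows = np.flatnonzero(row_flag)
    # (top, left) and (bottom, right) corners of the ROI bounding box.
    return (rows[0], cols[0]), (rows[-1], cols[-1])

# Small synthetic test image: a bright rectangle on a dark background.
img = np.zeros((100, 120), dtype=np.uint8)
img[30:70, 40:90] = 200
print(locate_roi(img, delta_max=10 * 255))   # threshold scaled for the synthetic image
```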
Height Detection. The height of the clamp can be calculated from the three lateral images. The detection method is as follows. Firstly, detect the edges within the ROI and obtain the straight edge lines L1 and L2 by fitting straight lines to the edge points. Secondly, draw a perpendicular bisector H in the ROI and calculate the intersection points of H with L1 and L2, (x1, y1) and (x2, y2). Finally, the height can be calculated as the absolute difference between the two intersection points. The site test pattern is displayed in Figure 6. As can be seen, this method can calculate not only the height of the clamp but also the height of the hollow part in the middle of the clamp, which provides a basis for detailed height detection.

Defect Detection. The first step is always the location of the defect, the theory of which is the same as the ROI location. After locating the defects come defect segmentation and recognition. At present, there are several segmentation methods, such as the threshold method, the maximum entropy method, and the histogram threshold method [3-5]. Due to the inherent characteristics of the product, defects occupy only a small proportion of the whole image. Choosing an algorithm that can segment defects from clamps well is one of the difficulties in this research. After positioning, as in the right image in Figure 7, some burrs are removed and the defect area is enlarged, which makes it easier to improve the processing speed of the software. In order to obtain a good segmentation effect and improve the proportion of defects in the histogram of the image to be segmented, segmentation algorithms based on thresholding, region segmentation, morphological segmentation, and so forth were tested. In the end, the PCNN method was adopted, and some improvements were made to the original. The PCNN (pulse coupled neural network) is derived from Eckhorn et al.'s [6] research on nerve cells in the cat visual cortex. In the PCNN model, neurons with similar inputs generate impulses at the same time, which reduces local gray-level differences as well as making up for minor local disconnections in an image; in this respect the PCNN method is unrivalled by other segmentation methods. This property has also been used in fields such as shadow removal, image denoising, and edge extraction. On the basis of Eckhorn's research, Johnson explained the PCNN model with circuit theory, which is illustrated in Figure 8, with parameter definitions the same as Johnson and Padgett's [7]. The formulas derived from the circuit in Figure 8 are listed below. In the actual derivation, the same mistakes were found in the documents of Johnson and Padgett [7] and Yide et al. [8]; the differences between our formulas and theirs are roughly listed below, and the detailed derivation of the formulas is given in the Appendix. In these equations, v1 and v2 denote the electrical potentials of the neuron membranes. Equation (5) indicates that the pulse signal is related not only to the transfer conductances between synapses, g1 and g2, but also to the equivalent leakage conductance, the equivalent capacitance, and the intrinsic capacitance, which indicates that whether a neuron can be fired depends on the exterior input from its neighboring neurons as much as on its own internal activity. The pulse function Y(t) is generated when the neuron potential v1 becomes larger than the threshold potential; no signal is generated when the internal activity reaches balance, that is to say, Y is zero.
From Equation (5), Equation (6) can then be derived. Equation (7) is in fact the functional relation between the neuron membrane potentials in equilibrium. It indicates that the nonlinear multiplicative modulation coupling property of the PCNN is caused by the conductance between the axon of a neuron (presynaptic) and the axon of a neighboring neuron (postsynaptic). While the input conductances of the neurons are controlled by the pulse voltage, the characteristics of neuronal synapses with this character are transferred to the adjacent neurons through the conductances.

Each neuron includes three parts: the dendrite, the nonlinear connection modulation, and the pulse generation element. The model of a PCNN neuron is illustrated in Figure 9 [7]. The dendrite is used to receive information from the neighboring neurons through the linear linking input channel and the feedback input channel. The nonlinear connection modulation, namely the internal activity U of the neuron, is obtained by multiplying the linking part, which has an offset, with the feedback input part. The generation of the pulse depends on whether the internal activity can exceed the dynamic threshold, and the threshold value Θ is modified with the output state of the neuron. When Θ is lower than U, the neuron is stimulated (Y = 1), which is called firing. Then Θ suddenly increases because of the feedback of the output, and the neuron is immediately suppressed (Y = 0). The output yields a pulse signal which connects to the inputs of the neighboring neurons with weight coefficients, so as to influence the stimulus state of those neurons.

The model can be described with discrete functions as given in Equations (9)-(13), where ⊗ is convolution, Θ_j(t) is the temporal response kernel of the j-th neuron, W and M are the weight coefficient matrices of the linear linking input channel and the feedback input channel, S denotes the constant (external stimulus) input, F is the feeding input, L is the linking input, U is the total internal activity, Y is the output pulse, Θ is the neuron threshold, i, j, k represent different neurons, α_F and α_L are the dendrite state parameters, α_Θ and V_Θ are the time constant and magnification coefficient of the threshold value (V_Θ is usually set to a value larger than U), and β is the linking strength coefficient. In this discrete mode, Equations (9)-(13) must be calculated in sequence in the listed order, and Θ and Y use their values as evaluated at the previous time step. (A sketch of the standard discrete PCNN iteration is given below.)

In light of the inherent characteristics of the clamps, a weighted median filter is adopted in the image processing of this system. At the same time, the PCNN algorithm is used to enhance the filtering effect before segmentation. From Equation (11), the value of the linking term influences the filtering effect: a low-pass filter can be used to constrain the internal activity so as to reduce noise. Equation (11) can therefore be slightly modified into Equation (14) by introducing a filtering coefficient that varies from 0 to 1; this gives a significant decrease in both the temporal and the spatial noise when it is turned on during the equilibrium operation of the PCNN. This PCNN-based idea is used to realize the image segmentation in this paper, in which further analysis and modification have also been done.
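For reference, the sketch below implements the widely used discrete PCNN iteration (feeding, linking, internal activity, firing, and threshold update) on a grayscale image with NumPy and SciPy. It follows the standard textbook form of Equations (9)-(13) rather than the authors' exact parameterization, and the constants are placeholder choices.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(img, n_iter=8, beta=0.2,
                 alpha_f=0.1, alpha_l=1.0, alpha_t=0.3,
                 v_f=0.5, v_l=0.2, v_t=20.0):
    """Standard discrete PCNN; returns a firing map usable for segmentation."""
    s = img.astype(float) / img.max()          # normalized external stimulus S
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])       # local coupling weights (M = W here)
    f = np.zeros_like(s); l = np.zeros_like(s)
    y = np.zeros_like(s); theta = np.ones_like(s)
    fire_map = np.zeros_like(s)

    for n in range(n_iter):
        coupling = convolve(y, kernel, mode="constant")
        f = np.exp(-alpha_f) * f + v_f * coupling + s      # feeding input F
        l = np.exp(-alpha_l) * l + v_l * coupling           # linking input L
        u = f * (1.0 + beta * l)                             # internal activity U
        y = (u > theta).astype(float)                        # output pulse Y
        theta = np.exp(-alpha_t) * theta + v_t * y           # dynamic threshold update
        fire_map += y * (n + 1)                              # record when each pixel fired

    return fire_map

# Tiny synthetic example: a bright square (object) on a dark background.
img = np.zeros((64, 64)); img[20:44, 20:44] = 200
print(np.unique(pcnn_segment(img)))
```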
System Implementation
The detection system was built according to the design idea. In present machine vision systems, the CCD camera is mostly the main module used to capture images. JAI concentrates on providing customers with a wide range of products, from line scanning to area scanning, from analog to digital technology, and from Camera Link to gigabit Ethernet (GigE) Vision. This research applied a digital camera of type JAI CM-030GE/CB-030GE, which adopts a Sony IC424 CCD sensor with a size of 1/3 inch, 656 × 494 effective pixels, a maximum frame rate of 90 fps under the maximum resolution and continuous acquisition mode, and a GigE interface for communication. The load of one GigE camera is 656 × 494 × 1 × 90 ≈ 28 MByte/s, while the maximum load of a GigE link is 120 MByte/s. Therefore, the 4 cameras applied in this research meet the load requirement.

For the purpose of testing the accuracy of the algorithm, one thousand clamps were detected at random, among which 100 were unqualified and the other 900 were qualified. Four images of each clamp were captured by the image acquisition system, which was triggered by the external trigger, and then conveyed to the host computer for defect detection. The processing time of each image is 0.087 s, which means the time to detect one clamp is 0.348 s. The results show that there were no misjudgments during one year of debugging.

A two-dimensional image with a size of M × N can be considered as a PCNN network with M × N neurons, and each pixel corresponds to a unique neuron input. In the first step, the internal activity is equal to the external stimulus; if the output of the neuron is 1, it is naturally fired. At the same time, the threshold value increases sharply and then decays exponentially with time. It is obvious that if the pixels corresponding to a neighboring neuron and the fired neuron at the last iteration are of similar intensity, the neighboring neuron is easier to capture and fire. This shows that the natural firing of one neuron can lead to the collective firing of similar neurons around it. Image segmentation can be realized based on the property that the neuron groups formed by a naturally fired neuron correspond to a small area of the image with similar characteristics.

In the right image of Figure 10, it is obvious that even small detailed information, like the wrinkle on the inner side of the clamp in the lower-right corner of the image and the trail at the overlap region in the center-left of the image, is segmented clearly and completely. Without doubt, the segmentation results of the middle image in Figure 10, obtained with the threshold segmentation method, are also good, except that some details are missed in this case. The reason why the PCNN method can realize ideal edge detection is that it has two significant characteristics: firstly, the output of the PCNN is binary; secondly, the output is an area with a single gray level, which ensures the successful detection of the real image edges and provides a basis for segmentation.
Figure 11 shows several segmentation images with different values of the filtering coefficient in Equation (14), where the edges are similar. The value for the middle image is 0.3 and the value for the right image is 0.8. We find that the middle image is already good enough. More interesting results can be obtained by decreasing the coefficient: the image will be segmented further, based on the former coarse segmentation, by the firing in different iterations of the PCNN. However, it is also found that excessive segmentation increases the difficulty of target recognition (the middle image of Figure 11). Thus, the number of iterations determines the capacity of the PCNN to recognize edges with different gray levels.

Derived from research on the target recognition mechanism of mammalian visual neurons, the PCNN model can extract the edge and region information after only a few iterations, instead of requiring extensive image training processes. Figure 12 shows several segmentation images with different iteration numbers, where the edges are similar. The iteration numbers of Figures 12(b)-12(f) are 4, 6, 7, 9, and 10. We find that Figure 12(d) is already good. More interesting results can be obtained by increasing the number of iterations: the image will be segmented further, based on the former coarse segmentation, by the firing in different iterations of the PCNN. However, it is also found that excessive segmentation increases the difficulty of target recognition (Figures 12(e) and 12(f)). Thus, the number of iterations determines the capacity of the PCNN to recognize edges with different gray levels.

Conclusions
A spring clamp detection system based on machine vision is built in this paper and installed on both sides of the production line. The system can detect accurately at a speed of 2.8 clamps per second and can detect any defect over 10 pixels. The rates of defect recognition and defective removal have reached a high level, which achieves the goals of improving the overall quality of the manufactured products and avoiding actual losses. The major work of this paper is as follows: (1) an automatic ROI location algorithm is designed, which can quickly locate the ROI in an image; (2) the most concise geometric principle is used in the design of the height detection algorithm, which achieves fast measurement on the premise that the angle between the camera and the detected object is 90° or 0°; (3) some mistakes in the equivalent circuit in Johnson's and Ma Yide's documents are corrected; (4) the original PCNN algorithm is improved and applied in this detection task. This system is an application of machine vision in the online detection of spring clamps, which is capable of automatically inspecting the surface of the clamp. Of course, the detection system still needs to be optimized in many aspects, for example, the distortion-free source coding problem of how to remove unimportant information from an image during transmission in order to save cost and improve the compression effect, which will be the key point of further research.

Figure 1: Images with various lighting illumination patterns.
Figure 2: The illustrated diagram of the clamps lateral image acquisition.
Figure 4: A spatial domain of the image.
Figure 6: The height calculation diagrams of the three lateral images.
Figure 8: The equivalent circuit of the neuron.
Figure 10: The original image (a), the output image after threshold segmentation (b), and the output image after PCNN segmentation (c).
5,315.8
2014-07-09T00:00:00.000
[ "Engineering", "Computer Science" ]
Optimal Controller Design for Ultra-Precision Fast-Actuation Cutting Systems
Fast-actuation cutting systems are in high demand for the machining of freeform optical parts. The design of such motion systems requires a good balance between structural hardware and controller design. However, the controller tuning process is mostly based on human experience, and it is not feasible to predict positioning performance during the design stage. In this paper, a deterministic controller design approach is adopted to preclude the uncertainty associated with controller tuning, which results in a control law minimizing positioning errors based on plant and disturbance models. Then, the influences of mechanical parameters such as mass, damping, and stiffness are revealed within the closed-loop framework. The positioning error was reduced from 1.19 nm RMS to 0.68 nm RMS with the new controller. Under the measured disturbance conditions, the optimal bearing stiffness and damping coefficient are 1.1 × 10⁵ N/m and 237.7 N/(m·s⁻¹), respectively. We also found that greater moving inertia helps to reduce all disturbances at high frequencies, in agreement with the positioning experiments. A quantitative understanding of how plant structural parameters affect positioning stability is thus presented in this paper, which is helpful for understanding how to reduce error sources from the design point of view.

Introduction
Ultra-precision freeform surfaces are widely used in advanced imaging and illumination systems, high-power beam-shaping applications, and other high-end scientific instruments [1]; they give designers greater ability to cope with the performance limitations commonly encountered in simple-shape designs. However, the stringent requirements for surface roughness and form accuracy of freeform components pose significant challenges for current machining techniques, especially in the optical and display markets, where large surfaces with tens of thousands of micro-features need to be machined [2-4]. The machining of such microstructures requires ultra-precision fast-motion systems. Typically, PID control laws are used; however, the PID control algorithm has only four free parameters that can be tuned, while the real-world situation is much more complex. The control algorithm and gains are often selected based on human experience through a "trial and error" process. It is possible to optimize the controller gains given a mathematical model of the system. E. A. Padilla-Garcia proposed a concurrent multi-objective dynamic optimization method to optimize the selection of controllers and motors; the optimization objectives were the energy consumption, the tracking error, and the motor weight, and the efficiency of the proposed methodology was validated by simulations of an industrial robot [5]. Alter et al. applied an H∞ robust control algorithm to control a linear motion stage, and the resulting dynamic stiffness was improved by 27-46% compared to PD control [6,7]; they further developed a stiffness-enhancement control law for optimal control, and the dynamic stiffness was improved by around 100% [8]. Dumanli utilized a linear-quadratic regulator (LQR) to achieve optimal placement of controller poles and zeros with acceleration feedback; he applied this algorithm to the control of a ball screw feed drive and achieved active damping with a higher bandwidth [9]. Previous studies have mostly focused on the enhancement of dynamic stiffness.
For precision fast-actuation cutting systems, the bandwidth and the positioning error are also very important. Wei-Wei Huang et al. developed a novel robust dual-loop control scheme with a Kalman-filter-based extended state observer and H∞ control for nano-positioning stages to implement high-bandwidth tracking operations [10]; they applied the control scheme to a piezo-driven stage, and the positioning bandwidth was improved from 3.6 kHz to 5.52 kHz. However, the positioning noise at this bandwidth is 20 nm, which is not sufficient for ultra-precision cutting systems. The positioning noise is mostly caused by environmental vibrations and noise in the electronics. Feinan Zhu developed an improved reset control strategy to control the positioning of the read head in an HDD; in his model, he included the external disturbances and successfully reduced the tracking error in finite time [11]. Parameter uncertainty can also be treated as a kind of disturbance. F. Mendoza-Mondragón proposed a two-degrees-of-freedom controller for robust speed regulation in permanent-magnet synchronous motors (PMSMs) [12]; the experimental results showed better robustness and disturbance rejection compared with traditional PI control. Chunhong Zheng proposed a simple but effective nonlinear proportional-derivative (PD) control strategy for faster, high-precision positioning [13]. It can be seen that the performance of positioning systems can be improved by optimizing the controller, but it is still difficult to predict the minimum tracking error before the hardware is built. Another issue with the design of fast positioning systems is that the tracking bandwidth and the following error are greatly affected by structural parameters such as the moving mass or damping. Li Zelong used multi-objective optimization and finite element simulation to design a flexure-hinge servo turret with a high natural frequency for fast tool servo applications [14]. It has been proven that the plant moving mass affects the minimum achievable positioning error [15]. Therefore, a system model that reveals the influences of the structural parameters is necessary in order to achieve a quantitative understanding of how to reduce errors from the design point of view.

In this paper, the uncertainty associated with controller tuning is precluded by adopting an H2 optimal control algorithm, which results in a control law minimizing positioning errors based on the plant model and the measured disturbances. The minimum positioning errors are predicted for different structural parameters. A deterministic model that optimizes the structural parameters to minimize the following error is proposed for the first time. Then, the influence of each structural parameter is analyzed. The results of our analysis reveal the optimal structural parameters and provide guidance on improving the dynamic performance of the tool positioning system. The control effects of the optimal controller and the PID controller are compared through a series of positioning tests.

Materials and Methods
In this section, we describe the optimal control strategy that was used to control a fast positioning system. A model to predict the static following errors is proposed based on the optimized controller. The results are used to study the influences of different structural parameters on positioning stability.

Experimental Setup
A custom-built fast tool servo cutting device for freeform turning is shown in Figure 1a. This configuration is based on a flat Lorentz actuator.
Short-stroke high-frequency motions are achieved in the Z direction with flexure guidance. The motion along the X direction is driven by a linear motor and guided by a ball-bearing linear slide. A metrology straight edge is used as the position reference, and a capacitive displacement sensor is used to measure against it. The diamond cutter is fixed on the same line as the displacement sensor and the motor center. In this way, the force passes through the center of gravity and the Abbe principle is obeyed, which is very important in reducing machine tool errors [16,17]. The detailed assembly design of the motor and the bearing structure is shown in Figure 1b.

Experimental Determination of the System Model
An accurate mathematical model of the mechanical and electrical systems was established prior to the controller design. The lumped-parameter model of the mechanical system is established as shown in Figure 2. The m1, k1, c1 block represents the tool tip mounting flexibility, which reflects the dynamic performance of the tool holder and the coil support. m2 is the mass of the movable body, including the coil assembly and the sensor. k2 and c2 are the stiffness and damping of the flexure bearing, respectively. m3 represents the mass of the X carriage. m4 is any flexible mass that will disconnect from m2 at high frequency. k5 and c5 are the stiffness and damping of the motor coil with respect to the magnets, respectively. A sketch of how such a lumped model can be assembled in software is given below.
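As an illustration of how a lumped-parameter stage model can be turned into a transfer function for controller synthesis, the sketch below builds a reduced single-mass version (only m2, k2, c2, i.e., the moving body on its flexure bearing, driven by the actuator force) with the python-control package. It is a stand-in for the full multi-mass model of Figure 2; the stiffness and damping values are taken from the optimum quoted in the abstract, while the mass is a placeholder rather than an identified parameter from Table 1.

```python
import math
import control

# Reduced model parameters (mass is a placeholder; k2 and c2 from the abstract's optimum).
m2 = 0.5        # moving mass, kg (placeholder)
k2 = 1.1e5      # flexure bearing stiffness, N/m
c2 = 237.7      # bearing damping, N/(m/s)

# Force-to-displacement transfer function X(s)/F(s) = 1 / (m2*s^2 + c2*s + k2).
plant = control.tf([1.0], [m2, c2, k2])

print(plant)
print("undamped natural frequency [Hz]:", math.sqrt(k2 / m2) / (2 * math.pi))
```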
Sweep sinusoidal signals are selected to test the response of the built positioning system. Sweep sinusoidal commands are sent from the D/A converter, and the response of the open-loop system is measured by the capacitive displacement sensor. The lumped-parameter model of the system is known, and this mathematical model is used as a grey-box model. The model parameters are then estimated by fitting the grey-box model to the experimental data. The estimated parameters are listed in Table 1.

According to [18], the closed-loop control system can be represented by a transfer matrix G and a controller K, as shown in Figure 3. Disturbances are modelled as the input w, while the output performance to be evaluated is modelled as z. The controller senses the output y of the plant and then generates a control signal u to the plant. The column number of the input w represents the number of disturbances. The transfer matrix G can be partitioned into four submatrices. Submatrix A represents the characteristic matrix of the plant in state-space notation. The B1 block is the input matrix for all of the disturbances, while the last column (B2) corresponds to the control input u. The block C1 is the output matrix for the errors to be minimized, and the last row (C2) corresponds to the output measurement.

The H2-norm of a SISO system with transfer function J(s) is defined as [18]
||J(s)||_2 = ( (1/2π) ∫_{−∞}^{+∞} |J(jω)|² dω )^{1/2}.
For a multivariable system with a transfer function matrix J(s) = [j_mn(s)], the definition can be generalized to
||J(s)||_2 = ( (1/2π) ∫_{−∞}^{+∞} Σ_m Σ_n |j_mn(jω)|² dω )^{1/2}.
The matrix J(s) is the cost function, which is to be minimized in the optimization process. The selection of the cost function depends on the application requirements.
The matrix J(s) is the cost function, which is to be minimized in the optimization process. The selection of the cost function depends on the application requirements. In this case, the positioning error is to be minimized; thus, the transfer function matrix formed by the transfer functions from each disturbance source to the tool position was selected as the cost. The controller output u is also included in the cost function to be constrained, because there are hardware limits on the maximum controller output. The optimal control calculation is equivalent to solving a Riccati equation [19], and finally a controller transfer matrix K is calculated. The input disturbances w in Figure 3 usually have colored spectrum characteristics, so each input can be modelled as white noise passing through a particular weighting filter. The transfer functions of the filters are then integrated into the plant model, and an augmented transfer matrix G is formed, as shown in Figure 4. W_e and W_u are the weighting filters for the positioning error and the control output in the cost function, respectively. W_1, W_2, and W_3 are the weighting filters for the disturbances.
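A minimal sketch of this Riccati-based step is given below. Dynamic weighting filters are omitted and constant error/effort weights are used instead, so the H2-optimal output-feedback controller reduces to the familiar LQG form that scipy can solve through two Riccati equations; the model, weights, and noise channels are illustrative assumptions, not the paper's augmented plant or its 27-state controller.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder flexure-stage model (not the identified parameters):
#   m*x'' + c*x' + k*x = u + d,   measurement y = x + n
m, c, k = 0.05, 240.0, 1.1e5
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B2 = np.array([[0.0], [1.0 / m]])              # control input u
B1 = np.array([[0.0, 0.0], [1.0 / m, 0.0]])    # disturbances w = [d, n]
C2 = np.array([[1.0, 0.0]])                    # measured position
D21 = np.array([[0.0, 1.0]])                   # sensor noise enters the measurement

# Constant weights standing in for W_e (error) and W_u (control effort).
q_err, r_u = 1.0e6, 1.0e-6
Q = q_err * np.array([[1.0, 0.0], [0.0, 0.0]])  # penalize position error only
R = np.array([[r_u]])

# Control Riccati equation -> state-feedback gain F.
X = solve_continuous_are(A, B2, Q, R)
F = -np.linalg.solve(R, B2.T @ X)

# Filter (dual) Riccati equation -> estimator gain L.
W = B1 @ B1.T                                   # process-noise covariance surrogate
V = D21 @ D21.T                                 # sensor-noise covariance surrogate
Y = solve_continuous_are(A.T, C2.T, W, V)
L = -Y @ C2.T @ np.linalg.inv(V)

# H2-optimal (LQG-type) output-feedback controller in state-space form:
#   xhat' = (A + B2 F + L C2) xhat - L y,   u = F xhat
Ak = A + B2 @ F + L @ C2
print("Controller A-matrix eigenvalues:", np.linalg.eigvals(Ak))
```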
Modelling of Disturbances and Weighting Functions
The weighting function for the following errors, W_e, controls the shape of the closed-loop sensitivity function. Since the sensitivity of a closed-loop system is always close to unity at high frequencies, W_e mainly controls the low-frequency shape of the sensitivity. According to [18], the weighting function can be expressed in terms of three parameters: M_s, which limits the peak response near the crossover frequency; ω_b, the intended closed-loop bandwidth; and ε, which is introduced to make the weighting function strictly proper and whose value should be selected according to the allowable static-state following error under cutting force, namely, the static stiffness. In this analysis, M_s is set to 1.2721 for critical damping, and ε is set to 1 × 10^−7. The weighting function for the output of the controller, W_u, controls how much output will be commanded to achieve the desired performance. At frequencies above the intended bandwidth, W_u limits the control output by adding a pole, so that the response falls off quickly at high frequencies in order to suppress sensor noise. Its parameters M_u and ω_bu limit the control output, and ε_1 is introduced to make the weighting function strictly proper. M_u and ω_bu are set to large numbers (1 × 10^8) to indicate that the motor power is sufficient for static positioning.

The disturbances are measured separately at each error source using a data acquisition board and a capacitive sensor. The disturbances are assumed to be stationary stochastic processes. As shown in Figure 5a, the capacitive sensor noise is modelled as independent band-limited white noise. The weighting filters are valued as the square root of the signal's average PSD value. The weighting filter for sensor noise is modelled as W_1 = 1.36 × 10^−6 (constant). The current-loop noise shown in Figure 5b is modelled with large amplitudes at the low frequencies, and the peak at 7748 Hz is modelled by a poorly damped second-order peak filter with centre frequency ω = 2π × 7748. The environmental disturbance vibrations are modelled by the third weighting filter, W_3.

Modelling of Following Errors
With the system model, the frequency response function from each disturbance input to the tool position, FRF_i(υ), can be obtained. The error power contribution PSD_i from each disturbance to the final position can be calculated as shown in Equation (7), where i indicates the disturbance source number (from 1 to 3), P_1, P_2, and P_3 are the PSDs of each error source, and υ is the frequency. Since these disturbances are assumed to be mutually uncorrelated, their powers can be combined to reflect the total error power [20], giving the synthesized PSD of the tool position. In order to estimate the time-domain error magnitude from the PSD values, the cumulative amplitude spectrum (CAS) is derived: CAS_i(υ) is the square root of PSD_i(f) integrated from 0 Hz to υ Hz, and the following error is evaluated at υ_Nyquist, the Nyquist frequency, namely, the frequency span.
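Written out from the definitions in this section (a reconstruction consistent with the text rather than a verbatim quotation of the paper's Equation (7) and related expressions), the error budget reads:

```latex
\mathrm{PSD}_i(\upsilon) = \lvert \mathrm{FRF}_i(\upsilon) \rvert^{2} \, P_i(\upsilon), \qquad
\mathrm{PSD}_{\mathrm{total}}(\upsilon) = \sum_{i=1}^{3} \mathrm{PSD}_i(\upsilon),

\mathrm{CAS}_i(\upsilon) = \sqrt{\int_{0}^{\upsilon} \mathrm{PSD}_i(f)\, df}, \qquad
e_{\mathrm{RMS}} = \mathrm{CAS}_{\mathrm{total}}(\upsilon_{\mathrm{Nyquist}}).
```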
Closed-Loop Response with Optimal Control
Using the above model, the solved optimal controller that minimized the following errors was a 27 × 27 matrix in state-space form, that is, a controller with 27 state variables. The open-loop and closed-loop transfer functions of the modelled system with the calculated optimal controller are shown in Figure 6a. In comparison, the same crossover frequency was achieved with a PID algorithm, and the calculated controller functions are shown in Figure 6b. There exists a structural resonance at 1645 Hz, which can cause trouble when the PID gains are further increased. This peak is successfully compensated by the optimal controller, because the controller has more degrees of freedom. The low-frequency control actions are also different, in that the optimal controller is fully determined by the disturbance strengths, while the PID controller is calculated according to the phase margin set by the user.

The measured following errors with the optimal controller are shown in Figure 7. The position bandwidth (−3 dB) is found to be around 1.1 kHz. The RMS value is 0.68 nm and the peak-to-valley value is 5.38 nm. The positioning error predicted by the model is 0.23 nm RMS, so the peak-to-valley value should be around 1.4-2.3 nm. In comparison, the PID controller is tuned with the same sampling rate and a similar bandwidth (1.5 kHz), and the following errors are measured as shown in Figure 8. Because the sampling rate is kept the same, the measuring-error contribution from the feedback sensor should be the same. The following errors are larger when the bandwidth is increased, with an RMS value of 1.19 nm and a peak-to-valley value of 7.65 nm with PID control. These results show that the optimal controller indeed helps to achieve better positioning stability. The FFT spectra of the two error signals are shown in Figure 9. The error spectrum is more evenly distributed when the optimal controller is utilized, and the high-frequency noise is higher when the PID controller is used.

Figure 9. FFT spectra of the two error signals with different control algorithms.

Study on the Influences of Structural Parameters on Positioning Following Error
From the control point of view, the controller has theoretically done its best to suppress outside disturbances. If the disturbances cannot be reduced at their roots, it is worthwhile to study how to reduce the system's response to the disturbances by changing structural parameters such as mass, damping, etc.
Several selected parameters were studied for their influence on the positioning following errors based on the closed-loop model, including the total mass of the moving part, the flexure stiffness and damping, and the motor force constant. The effects of changing plant parameters need not be linear; thus, the current system design is used as the "operating point". The structural parameters are changed, and the resulting RMS following errors are used to compare the sensitivities. This helps to identify the most effective way to optimize the performance.

Influence of Moving Mass
When the total moving mass m_1 + m_2 is doubled relative to the current configuration, all high-frequency noises are reduced, as shown in Figure 10a. The current-noise transfer functions are notably reduced at high frequencies, and the base-vibration errors are also reduced at high frequency. The side effect of increasing the moving mass is that more force is needed to achieve the same acceleration; this means bigger motors will be used and, thus, more heat will be generated. The CAS plot in Figure 10b also shows that the positioning errors are lower.

The minimum achievable positioning errors with different moving masses are shown in Figure 11a. The errors decrease with larger mass, as does the bandwidth (−3 dB). The minimum achievable error is plotted in Figure 11b. The RMS error also decreases with increased moving mass, but it is larger than that in Figure 10a because it deviates from the optimal bandwidth.
Influence of Flexure Bearing Stiffness and Damping
The flexure bearing in the designed system is the only path through which external vibrations can travel to the tool. Therefore, the stiffness k_2 and damping c_2 affect the degree to which environmental vibrations are transferred to the tool tip. Meanwhile, they also affect the rejection of force disturbances; this can be seen in Figures 12a and 13a. When the flexure stiffness k_2 increases, the transfer function from base vibrations is raised at low frequency. Meanwhile, when the damping c_2 increases, more high-frequency base vibration is transmitted to the tool. However, the errors caused by the current-stage noise are reduced in both cases, while the optimal closed-loop bandwidth decreases slightly. The CAS functions are plotted in Figures 12b and 13b. The total following error is decreased because the contribution of the base vibration is so small. This is not always true; when the stiffness or damping is increased to such a level that the base-vibration contribution becomes comparable to the reduction in the current-noise contribution, the total error will increase.

There exists an optimal pair of flexure stiffness k_2 and damping c_2 coefficients under this disturbance situation which minimizes the following errors, as can be seen in Figure 14. In this analysis, the stiffness k_2 and damping c_2 are adjusted over a large range, and the total RMS following error is plotted. As the stiffness and damping coefficients are increased, the following error first decreases and then starts to rise. The minimum RMS following error (0.96 nm) is achieved when k_2 is equal to 1.1 × 10^5 N/m and c_2 is equal to 237.7 N/(m·s^−1). This optimal stiffness and damping pair depends, of course, on the relative strength of the disturbances.
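The parameter study described above amounts to a grid search over stiffness and damping with the closed-loop error model in the loop. The sketch below shows that search pattern; the function rms_following_error is a toy surrogate with arbitrary coefficients standing in for the real error budget, so its numbers are not comparable with those in Figure 14.

```python
import numpy as np

def rms_following_error(k2: float, c2: float) -> float:
    """Toy surrogate for the closed-loop error budget (illustrative only).

    A real evaluation would rebuild the lumped-parameter model with the trial
    (k2, c2), re-solve the optimal controller, propagate the disturbance PSDs
    through the closed-loop FRFs, and return CAS at the Nyquist frequency.
    The two competing terms below only mimic the trade-off described in the
    text: stiffer or more damped bearings pass more base vibration but reject
    current noise better. Units: metres RMS.
    """
    base_vibration = 1.0e-9 * (k2 / 1.0e5) + 2.0e-10 * (c2 / 100.0)
    current_noise = 5.0e-10 * (1.0e5 / k2) + 3.0e-10 * (100.0 / c2)
    return float(np.hypot(base_vibration, current_noise))

k2_grid = np.logspace(4, 6, 41)   # N/m
c2_grid = np.logspace(1, 4, 61)   # N/(m/s)

best = min(((rms_following_error(k2, c2), k2, c2)
            for k2 in k2_grid for c2 in c2_grid), key=lambda t: t[0])
print("Minimum RMS error %.2f nm at k2 = %.3g N/m, c2 = %.3g N/(m/s)"
      % (best[0] * 1e9, best[1], best[2]))
```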
Conclusions
In this paper, optimal control was achieved for a fast-actuating motion system, and the influences of mechanical parameters such as mass, damping, and stiffness were investigated. The following conclusions can be drawn:
1. The positioning error was reduced from 1.19 nm RMS to 0.68 nm RMS with the new controller, showing the benefits of a deterministic controller design approach;
2. Under the given disturbances, there exist optimal bearing stiffness and damping coefficients that result in minimal following errors; here they are 1.1 × 10^5 N/m and 237.7 N/(m·s^−1), respectively;
3. Increasing the moving mass helps to reduce the following errors, but the optimal bandwidth becomes smaller.

Future Work
The current analysis studied the positioning stability of the tool while holding its position, which is applicable to the cutting of flat surfaces. When the tool starts to follow high-frequency profiles, other disturbances appear, such as inertia forces and cutting forces. More detailed modelling of such disturbances can therefore be added in future work.
7,333.2
2021-12-27T00:00:00.000
[ "Engineering", "Materials Science" ]
Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.

Introduction
Large distributed memory parallel machines are becoming increasingly available. To efficiently use such large machines to solve an application problem, the computation must first be divided into parallel actions. These parallel actions are then mapped and scheduled onto processors. Static, compile-time allocation is one way to accomplish this. As a rather simple example, consider the problem of multiplying two 64 × 64 matrices on 16 processors. One may decide that each processor will compute a 16 × 16 submatrix of the result matrix by using the appropriate rows and columns from the original matrices. This leads to 16 sub-computations, as desired, and either an automatic scheduler or a programmer can specify the appropriate data movement and computations.

Such static scheduling schemes cannot be used when the size of the sub-computations cannot be accurately determined. In fact, in many computations, the sub-computations themselves are not known at compile time. Combinatorial search problems encountered frequently in AI provide an extreme example. Exploring a node in the search tree may lead to a large sub-tree search, may quickly lead to a dead end, or may lead to a solution. Even with deterministic computations, data dependencies and variable computational costs of operations lead to programs in which the detailed structure of the computation cannot be predicted in advance. In such computations, one cannot divide the work into N equal parts, where N is the number of processing elements (pes) in the system, because the computational costs of subtasks cannot be predicted accurately. A reasonable strategy for such computations is to divide the work at runtime into many (≫ N) smaller granules and attempt to dynamically distribute them across the processors of the system. The grainsize must be large enough to offset the overhead of parallelization. There are systems, such as the chare kernel described in the next section, which can support a grainsize as small as a few milliseconds. Partitioning an application with a small grainsize provides a large pool of work. Thus, even if the amount of computation within individual granules varies unpredictably, it at least becomes possible to move these granules among processors to balance the load.
A scheduling scheme in such a context must deal with dynamic creation of work. It must cope with work generation and consumption rates that vary from processor to processor and from time to time. It cannot be a centralized scheme, as it must work with a large number of processors and must scale up to larger future systems. Rather, it must be a distributed scheme, in which each processor participates in realizing the load balancing objectives.

Obviously, a static scheduling scheme cannot be used for a computation that involves dynamic creation of work. However, a dynamic scheduling scheme can also be used for statically allocatable computations, such as the matrix multiplication problem mentioned above. In fact, a good dynamic scheduler may perform better than static schedulers even in some statically schedulable computations, because it will automatically adapt to variable speeds of processors and to variable numbers of processors.

In this paper we describe a dynamic and distributed scheduling scheme called Adaptive Contracting Within Neighborhood (ACWN). The next section discusses the background and context in which the scheme is to operate, and outlines the basic issues. Section 3 describes some algorithms with similar objectives. Section 4 presents the ACWN algorithm and compares three different scheduling algorithms. Performance evaluation is given in Section 5, showing that ACWN maintains good load balance with low overhead. In Section 6, we discuss why the ACWN algorithm outperforms the other algorithms.

Background
The chare kernel is a runtime support system that is designed to support machine-independent parallel programming [1,2,3,4]. The kernel is responsible for dynamically managing and scheduling parallel actions, called chares. A chare (the word stands for a small chore or task) is a process with some specific properties. Programmers use kernel primitives to create instances of chares and send messages between them, without concerning themselves with mapping these chares to processors or deciding which chare to execute next. Chares have some properties that distinguish them from processes in general. Upon creation, and upon receipt of a message, chares usually execute for a relatively short time. They may create other chares or send messages to existing ones. Having processed a message, the chare suspends, to be awakened by another message meant for it. These characteristics simplify the scheduling of chares considerably.

We will use the chare kernel concepts and terminology in discussing dynamic scheduling strategies. However, it should be clear that the scheduling strategies that are applicable in this context can also be used in other contexts that involve dynamic creation of small-grained tasks. For example, the REDIFLOW system [5] for applicative programming, other parallel implementations of functional languages, rewrite systems and logic languages, and actor-based languages such as Cantor [6] can all benefit from such strategies.
Many previous research efforts have been directed towards task allocation in distributed systems [7,8,9,10,11,12,13,14,15,16,17]. Although some basic ideas can be shared, we cannot simply apply these strategies to multicomputer networks. A recent comparative study of dynamic load balancing strategies on highly parallel computers is given by Willebeek-LeMair and Reeves [18]. Work with assumptions similar to ours includes the Gradient Model developed by Lin and Keller [19]. Athas and Seitz also point out that random placement can be a quite simple and effective strategy [20,21]. These strategies are discussed in the next section.

A chare instance goes through three phases in its life-cycle: the allocating phase, the standing phase, and the active phase. It is said to be in the allocating phase from its creation until it enters a pool of chares at some pe, and to be in the standing phase until it starts execution for the first time. Then the active phase begins. Opportunities for chare scheduling exist in all three phases, but with different cost and effectiveness. The allocating-phase strategies as well as the standing-phase strategies are instances of placement strategies. The active phase can also be used for scheduling. Strategies that move a chare in the active phase are called migration strategies. Since the grainsize of chares is not large, migration is expensive and not necessary for load balance. Hence, this strategy is not considered in this paper.

Scheduling strategies can also be classified based on the amount of load information they use. The "load" measure may include the number of messages waiting to be processed, the number of active chares, available memory, etc., possibly in a weighted combination. For the following discussion, the specific load measure is unimportant. The scheduler at a pe may periodically collect information from other pes to calculate its own "status" information, on which the scheduling decision is based. The strategies can be classified as follows: type-i strategies use no status information; type-ii strategies calculate the status information by using local load information only; type-iii strategies calculate the status information by collecting load information from neighbors; type-iv strategies calculate the status information by collecting status information from neighbors; and type-v strategies calculate the status information by collecting load information from all the pes in the system.

Type-i and type-ii strategies typically have low overhead. The randomized allocation to be discussed in Section 3 is an example of a type-i strategy. It is believed that a strategy that adapts to variations in the system is necessary, and using local information alone is not sufficient to judge such variations. Type-v strategies, on the other hand, are expensive in large systems and not scalable.

The algorithm developed in this paper is a type-iii strategy, in which the status information of a pe may be determined based on load information from itself and from its neighbors. The gradient model to be described in the next section is a type-iv strategy. The status information of a pe is determined from its neighbors' status information. Thus, the status of a pe depends on its neighbors, and theirs, in turn, depends on their neighbors. However, the time required to exchange information causes the status to be dependent on possibly outdated information.
Randomized Allocation and Gradient Model
Athas and Seitz have proposed a global randomized allocation algorithm [20,21]. The randomized allocation is an allocating-phase scheduling strategy and no standing-phase action is involved. A randomized allocation algorithm dictates that each pe, when it generates a new chare, should send it to a randomly chosen pe. One advantage of this algorithm is simplicity of implementation. No local load information needs to be maintained, nor is any load information sent to other pes. Statistical analysis shows that the randomized allocation has a respectable performance as far as the number of chares per pe is concerned. However, a few factors may degrade the performance of the randomized allocation. First, the grainsize of chares may vary. Even if each pe processes about the same number of chares, the load on each pe may still be uneven. Second, the lack of locality leads to large overhead and communication traffic. Only 1/N of the subtasks stay on the creating pe, where N is the number of pes in the system. Thus, most messages between chares have to cross processor boundaries. The average distance traveled by messages is the same as the average internode distance of the system. This leads to a higher communication load on large systems. Since the bandwidth consumed by a long-distance message is certainly larger, the system is more likely to be communication bound compared to a system using other load balancing strategies that encourage locality. Eager et al. [8] have modified the naive randomized allocation algorithm. They use a threshold, a kind of local load information, to determine whether to process a chare locally or to place it randomly.

The gradient model [19] is mainly a standing-phase scheduling strategy. As stated by Lin [22], instead of trying to allocate a newly generated chare to other pes, the chare is queued at the generating pe and waits for some pe to request it. A separate, asynchronous process on each pe is responsible for balancing the load. This process periodically updates the state function and the proximity on each pe. The state of a pe is decided by two parameters, the low water mark and the high water mark. If the load is below the low water mark, the state is idle. If the load is above the high water mark, the state is abundant. Otherwise, it is neutral. The proximity of a pe represents an estimate of the shortest distance to an idle pe. An idle pe has a proximity of zero. For all other pes, the proximity is one more than the smallest proximity among the nearest neighbors. If the calculated proximity is larger than the network diameter, the pe is in saturation and the proximity is set to the network diameter + 1, to avoid unbounded increase in proximity values. If the calculated proximity is different from the old value, it is broadcast to all the neighbors. Based on the state function and the proximity, this strategy is able to balance the load between pes. When a pe is not in saturation and its state is abundant, it sends a chare from its local queue to the neighbor with the least proximity.
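The proximity update just described can be sketched as follows. This is a hypothetical reconstruction computed centrally for clarity (the real gradient model updates proximity asynchronously on each pe), and the function and variable names are illustrative, not taken from the original implementation.

```python
from collections import deque

def gradient_proximity(neighbors, idle, diameter):
    """Compute gradient-model proximity for every pe.

    neighbors: dict pe -> list of adjacent pes
    idle:      set of pes whose state is 'idle'
    diameter:  network diameter
    """
    saturation = diameter + 1
    prox = {pe: saturation for pe in neighbors}
    queue = deque()
    for pe in idle:                       # idle pes are the sources (proximity 0)
        prox[pe] = 0
        queue.append(pe)
    while queue:                          # relaxation: 1 + smallest neighbor proximity
        pe = queue.popleft()
        for nb in neighbors[pe]:
            if prox[pe] + 1 < prox[nb] and prox[pe] + 1 <= diameter:
                prox[nb] = prox[pe] + 1
                queue.append(nb)
    return prox

# Example: a 4-pe ring in which pe 2 is idle.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gradient_proximity(ring, idle={2}, diameter=2))   # {0: 2, 1: 1, 2: 0, 3: 1}
```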
The gradient model may cause load imbalance. For a tree-structured computation, this behavior could cause the upper-level nodes to cluster together near the root pe. When the results need to be collected at the root of the computation tree, the computation slows down. Furthermore, the proximity information may be inaccurate because of communication delays and the nature of the proximity update algorithm: by the time the proximity information from an idle pe propagates through the majority of pes in a system, the state of the original pe may have changed.

Adaptive Contracting Within Neighborhood
Adaptive Contracting Within Neighborhood (ACWN) is a scheduling algorithm using the type-iii strategy. Here, each pe calculates its own load function by combining various factors that indicate its current load. A simple measure may be the number of messages waiting to be processed. Adjacent pes exchange their load information periodically by sending a small load message or by piggybacking the load information on regular messages. Thus, each pe maintains load information on all its nearest neighbors.

For pe k, its own load function is denoted by F(k), and its neighbors' load functions are denoted by a set of values F′(i), where dist(k, i) = 1. The value of F(k) is calculated periodically. The load information can then be used to determine a system state. For each pe k, a function B(k) is defined as the minimum of F′(i) over all i with dist(k, i) = 1, which represents how heavily its neighbors are loaded. Two predefined parameters, low mark and high mark, are compared with B(k) to ascertain the current system state, as shown in Table I. If B(k) < low mark, the system is considered to be in the light-load state. If B(k) ≥ high mark, it is in the heavy-load state. Otherwise, it is in the moderate-load state. The allocating-phase strategy is called contracting and the standing-phase strategy is called redistributing.

As mentioned before, a chare is in its allocating phase from the time it is created until it enters the local queue at a pe. The allocating-phase strategy of the ACWN algorithm is shown in Figure 1. During this phase, a newly created chare is contracted m hops, where 0 ≤ m ≤ d and d is the network diameter. We set an upper limit of traveling distance d for each allocating chare to prevent unbounded message oscillation. The contracting decision is based on the system state of each pe. The number of hops traveled so far by each chare c is recorded as c.hops. Thus, at each pe k, for an allocating chare c, which is either created by pe k or received from another pe, there exist the following cases: if the system is in the heavy-load state or c.hops ≥ d, chare c will be retained locally and added to the local pool of messages, terminating its allocating state; if the system is in the light-load state and c.hops = 0, pe k will contract chare c to its least-loaded neighbor no matter what its own load is. Otherwise, the chare will be contracted conditionally: if the load on the least-loaded neighbor is smaller than its own load, the chare is contracted out to that neighbor. In this way, the newly generated chare c travels along the steepest load gradient to a local minimum.
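The allocating-phase rules above can be read as a single decision function per pe. The following Python sketch is an illustrative paraphrase of Figure 1, not the chare kernel's actual code; helper names such as least_loaded_neighbor are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Chare:
    hops: int = 0   # hops traveled so far (c.hops in the text)

def contract(pe, chare, d, state, F, least_loaded_neighbor):
    """Allocating-phase (contracting) decision at processor `pe` for one chare.

    d: network diameter; state: 'heavy' | 'moderate' | 'light';
    F: dict pe -> load value; least_loaded_neighbor: dict pe -> neighbor pe.
    Returns the pe that should hold the chare next (pe itself means "keep").
    """
    if state == 'heavy' or chare.hops >= d:
        return pe                         # retain locally; allocating phase ends
    nb = least_loaded_neighbor[pe]
    if state == 'light' and chare.hops == 0:
        chare.hops += 1
        return nb                         # always push a brand-new chare out
    if F[nb] < F[pe]:
        chare.hops += 1
        return nb                         # follow the steepest load gradient
    return pe                             # local load minimum: keep the chare

# Example: pe 0 is moderately loaded and its neighbor 1 is lighter.
loads = {0: 5, 1: 2}
print(contract(0, Chare(), d=3, state='moderate', F=loads,
               least_loaded_neighbor={0: 1, 1: 0}))   # -> 1
```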
The standing-phase strategy of the ACWN algorithm is shown in Figure 2. Load imbalances may appear even though the allocating-phase strategy is applied. Such imbalance may appear either due to limitations of the underlying load contracting scheme, which finds only a local minimum, or due to the different rates of consumption of chares. Moreover, since each pe has its own system state, it is possible that there exist pes in the light-load state, moderate-load state, and heavy-load state at the same time in a system. During the heavy-load state, pes accumulate chares without sending them to any other pes. Thus, after a pe leaves the heavy-load state, it may own many more chares than other pes. These chares need to be redistributed to other pes, as the allocation of new chares alone may not be sufficient to correct the load imbalance. Notice that redistributing is active only when a pe is not in its heavy-load state. In the heavy-load state, since all neighbors of the pe have sufficient work to do, it is not necessary to balance load between them.

The behavior of both the contracting and redistributing scheduling strategies is affected by the system state, which is determined by the load information as well as the predefined parameters low mark and high mark. Low mark is used to switch states between light-load and moderate-load. If it is too high, chares are contracted out frequently, and the overhead of moving chares becomes higher. If it is too low, chares spread out slowly and load imbalance may occur. High mark is used to decide whether the system is heavily loaded, i.e., in saturation. If this mark is too high, the scheduling algorithm keeps moving chares among pes even when they all have sufficient work, leading to higher overhead. However, if high mark is set too low, the heavy-load state will be reached prematurely, which may cause load imbalance. Experiments suggested that performance is not sensitive as long as these parameters are in a reasonable range. As shown in Figures 3 and 4, the low mark could be about 2 to 5, and the high mark about 8. These experiments used the number of messages waiting to be processed as the measure of load. In the rest of the experiments with ACWN, we chose values of low mark and high mark of 2 and 8, respectively.
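The periodic standing-phase action can be sketched in the same style. Again this is an illustrative reconstruction of Figure 2 under the description above (move one chare to the least-loaded neighbor when that neighbor is less loaded than the local pe), not the original implementation.

```python
def redistribute(pe, state, own_load, neighbor_loads, local_queue, send):
    """Standing-phase (redistributing) action, run periodically at `pe`.

    neighbor_loads: dict neighbor pe -> last reported load value
    local_queue:    chares currently waiting at `pe`
    send:           callback send(chare, destination_pe)
    """
    if state == 'heavy' or not local_queue:
        return                                   # neighbors have enough work, or nothing to give
    target = min(neighbor_loads, key=neighbor_loads.get)
    if neighbor_loads[target] < own_load:        # only move work downhill
        send(local_queue.pop(), target)

# Example: pe 0 holds two chares and neighbor 1 is nearly idle.
sent = []
redistribute(0, 'moderate', own_load=6, neighbor_loads={1: 1, 2: 4},
             local_queue=['chareA', 'chareB'],
             send=lambda ch, dst: sent.append((ch, dst)))
print(sent)   # [('chareB', 1)]
```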
Scheduling strategies without migration can be summarized in a general model. The model consists of three functions. In the allocating phase, whether a chare is sent out depends on an allocating-phase function; if the function is true, the chare will be sent out, otherwise it is kept local. In the standing phase, whether chares are moved depends on a standing-phase function; if the function is true, the chares will be redistributed between pes. The third function is a destination function that determines which pe to send a chare to when the chare is to be allocated or redistributed. Different scheduling strategies can set different values for each of the three functions. If a scheduling strategy sets the allocating-phase function always false, it is considered to be inactive during the allocating phase. Similarly, if a scheduling strategy sets the standing-phase function always false, it is said to be inactive in the standing phase. To compare the randomized allocation, gradient model, and ACWN algorithms under this general model, we list the three functions for each of them in Table II. For the gradient model, P(k) represents the proximity at pe k and d is the network diameter. For the randomized allocation, whenever pe k generates a new chare, a random number m is obtained to determine its allocating-phase function as well as its destination function, where 0 ≤ m < N and N is the number of pes in the system. If m is equal to k, the allocating-phase function is false. Otherwise, the chare will be sent to pe m as its destination. The gradient model has virtually no allocating-phase action. When a chare is generated, it is put in the local queue. This leads to slow spreading of the load. On the other hand, the randomized allocation has no standing-phase action. It usually generates a good distribution of the load. However, when the sizes of chares vary over a wide range, this strategy is unable to redistribute the load even if some pes are busy and others are idle. ACWN conducts actions in both phases, resulting in more reliable performance.

Performance Studies
We have tested several examples on an Intel iPSC/2 hypercube to study the effectiveness of dynamic scheduling schemes on multicomputers. The machine used has a 32-node configuration with 4 megabytes of memory at each node. Three algorithms, randomized allocation, gradient model, and ACWN, were implemented. They shared most subroutines except for the allocating-phase function, the standing-phase function, and the destination function. Notice that the programs were chosen not because they are good parallel algorithms for the problems they solve, but for their suitability for illustrating different computation patterns handled by dynamic scheduling. For each program, the best sequential program written in C was also tested without changing the algorithm.
In general, the sum of the execution times of all pes can be broken into three parts: computation time, overhead, and idle time. Computation time is spent on problem solving and should be equal to the sequential execution time. This time is invariant with different scheduling strategies, different numbers of pes, and different grainsizes. Overhead includes the work of bookkeeping, communication, and load balancing. Idle time is the time in which pes have no work to do. The overhead and idle time depend on the granularity of partitioning as well as the scheduling strategy. Experiments with different grainsizes of the 10-Queen problem were conducted to analyze the factors of granularity. In Figure 5, we show the efficiency of this problem for different numbers of pes with different grainsizes. The performance of the largest grainsize slumps as the number of pes increases because the pool of available chares is not large enough to keep all the pes busy. The poor performance of the curve with the smallest grainsize is due to overhead. For example, in Figure 6 we give the components of the execution time for different grainsizes with 16 pes. Here, a small grainsize imposes a large amount of overhead. On the other hand, a large grainsize reduces overhead, but may result in longer processor idle time because of load imbalance.

For each individual chare, the system maintains a chare block, and for each message there is a message header including its source and destination chare information. The overhead of bookkeeping is about 250-400 microseconds whenever a new chare is created or a message is sent. The communication overhead consists of the time spent by the processor that deals with the sending and receiving of messages. The actual transmission time is overlapped with computation and does not need to be considered. The overhead for each communication is about 450 microseconds. The granularity also affects communication overhead, because the number of messages exchanged between pes tends to increase when the grainsize becomes smaller. Not all the messages between chares introduce communication overhead; only those going to pes other than the source pe do. Thus, the load-balancing strategies also influence the communication overhead, as different strategies have different effects on what fraction of the messages will be between local chares. Scheduling overhead can be divided into two parts: updating load information and chare placement. Time spent on chare placement is proportional to the number of chares and is determined by granularity. System load information can be exchanged periodically. As shown in Figure 7 for the 10-Queen problem on 16 pes, too short a period increases communication overhead, and too long a period leads to inaccurate load information due to sluggish updates. With a long exchanging period, the system acts unstably; we give both the worst and the best times from many repetitions of the experiments for periods beyond 256 milliseconds. In Figure 7, we show two curves, with and without piggybacking, for different exchanging periods. Piggybacking load information on regular outgoing messages can reduce the number of load-information messages exchanged. The configuration with piggybacking behaves better than the one without, since with every message we update the load information at a negligible additional cost. In Figure 8, we pick one instance with piggybacking to show the sum of overhead and the sum of idle time over all 16 pes. A short exchanging period updates the load information more frequently than necessary. However, if the period is
too long, the load is highly unbalanced, with long idle times. From the curves, it can be seen that the best period is between 50 and 150 milliseconds. In the rest of the experiments, piggybacking is applied to both the ACWN and the gradient model algorithms. The period of load information exchange is set to 100 milliseconds for ACWN, and the best value of the exchanging interval is also selected for the gradient model.

We now discuss the influence of scheduling strategies. A good scheduling algorithm must be able to balance load for different application problems. At the same time, it has to keep scheduling overhead small. Furthermore, it must keep good locality so that most chares can be executed locally to reduce communication overhead. Here we compare three scheduling algorithms: randomized allocation, gradient model, and ACWN. In Figure 9, for Fibonacci-32 on 8 pes, we list the chare distribution at each pe with the different scheduling algorithms. Each chare processed at pe k is either generated by pe k itself or received from other pes. ACWN has the most locally generated chares and only a few from other pes. At the other extreme, the randomized allocation has few local chares (about 1/N) and most chares from other pes.

The only scheduling overhead for the randomized allocation is generating random numbers whenever a chare is created. However, communication overhead is high, since most chares are sent to other pes irrespective of whether the system is heavily or lightly loaded. For the same problem as in Figure 9, we illustrate the percentages of computation time, overhead, and idle time in Figure 10. To compare the algorithms, the overhead time can be subdivided further into three sub-categories: bookkeeping overhead (T_B), communication overhead (T_C), and load balancing overhead (T_L). Figure 11 extracts the overhead parts from Figure 10 and illustrates each kind of overhead for the different algorithms. The randomized allocation has large overhead spent on communication, although its scheduling overhead is negligible. The gradient model utilizes the system status information to balance the load among pes so that the idle time is reduced. More importantly, the gradient model sends chares away only when necessary. Due to this locality property, the gradient model does not incur high communication overhead compared to the randomized allocation. However, the gradient model must exchange load information more frequently to balance the load, resulting in large load balancing overhead. ACWN exhibits better locality than the gradient model and therefore has less communication overhead. Its scheduling overhead is also small, due to the low frequency of load information exchange.
In Table III and Figures 12-15, we give the performance comparison of the randomized allocation, the gradient model, and the ACWN algorithms. Here, one instance of each program has been chosen for execution, that is, 10-Queens, Fibonacci-32, one configuration of the 15-puzzle, and the Romberg integration with 14 integrations. Characteristic features of the different problems are shown in Table IV. The granularity is between 1 and 100 milliseconds, resulting from the medium-grained partitioning. Coarse granularity causes serious load imbalance and fine granularity leads to large overhead. The Fibonacci problem is a regular tree-structured computation; the grainsizes of the leaf chares are roughly the same. In the Queen problem, the grainsize is not even, since whenever a new queen is placed, the search either successfully continues to the next row or fails. The 15-puzzle is a good example of an AI search problem solved with an iterative search; performance on this problem is therefore not as good as on the others. In the Romberg integration, the evaluation of function points at each iteration is performed in parallel. As we can see, ACWN is better than both the randomized allocation and the gradient model in all the cases.

Discussion
The ACWN algorithm outperforms the randomized allocation and the gradient model partly due to its two-phase scheduling strategy and partly due to its adaptive locality. Its good locality reduces communication overhead, whereas the randomized allocation's does not. Besides the standing-phase strategy, the allocating-phase strategy of ACWN allows load to spread out faster than in the gradient model. ACWN can adapt to different chare sizes too. Assume that at some time both pe i and pe j have m messages waiting for processing. It happens that pe i gets a message with a large amount of computation. After a while, pe i still holds m − 1 messages and pe j may have no messages left. At this time, ACWN is able to schedule messages from pe i to pe j to balance the load. In contrast, the randomized allocation cannot adapt to such a case.

For a small number of pes, the gradient model can achieve better load balance than the randomized allocation. However, since the gradient model was designed for good locality to reduce communication overhead, it does not spread the load very fast. For a large number of pes, the gradient model leads to more load imbalance than the randomized allocation does. As shown in Figure 16 for the 10-Queen problem, the idle time of the gradient model at 16 and 32 pes is longer than that of the randomized allocation. A similar conclusion is also made by Grunwald [24]. ACWN reaches the most even load distribution among the three scheduling algorithms.
From experiments, the overhead for a local chare, which does not involve communication overhead, is about 0.3 to 0.4 milliseconds, and for a remote chare, which does involve communication overhead, it is about 1.2 to 1.3 milliseconds. Thus, performance may not suffer much from bookkeeping and communication overhead if the grainsize of a chare is much larger than that. A few tens of milliseconds can therefore be counted as a reasonable grainsize. Due to the large number of remote chares, the communication overhead for the randomized allocation is large, which in turn implies a large grainsize. Does the overhead of a complicated scheduling algorithm always overwhelm the benefit it achieves? Certainly, a complex algorithm (as an extreme example, one that looks for the least loaded processor across the entire system at every scheduling decision) loses its uniform-distribution advantage to its high overhead. The randomized allocation algorithm bears negligible overhead for load balancing decisions, but its communication overhead is high and the suspension is large. We have shown that a good load balance can be obtained by a simple algorithm with low scheduling overhead. Even though ACWN pays more scheduling overhead compared to the randomized allocation, it still achieves better performance in most cases.

Overhead can be reduced by using co-processors. A co-processor can be attached to the main processor in each pe to handle all bookkeeping, load balancing, and communication activities. In the iPSC/2 hypercube, each pe has a communication co-processor which shares part of the communication overhead. Since we are not able to program the co-processors, the overhead of bookkeeping, load balancing, and part of the communication must be handled by the main processor. If the ACWN scheduling can be applied to a system with co-processors, the frequency of load information exchange can be increased and more communication activities may take place to improve load balance, as long as the load of the co-processor does not exceed the load of the main processor. The randomized allocation and the gradient model may benefit more from the co-processor than ACWN does, since the randomized allocation has more communication overhead and the gradient model has more scheduling overhead.

Conclusion
We described a scheme for dynamic scheduling of medium-grained processes on multicomputers. The scheme, called Adaptive Contracting Within Neighborhood, employs two sub-strategies: an allocating-phase strategy and a standing-phase strategy. The allocating-phase strategy moves a new piece of work along the steepest load gradient to a local minimum within a neighborhood. It estimates the system state and ensures that pieces of work are moved only when the system requires it. The standing-phase strategy corrects load imbalance by redistributing pieces of work that were initially allocated by the allocating-phase strategy. Every processor maintains load information about its neighbors only, and such information is often exchanged by piggybacking it on regular messages. Thus, the scheme incurs low load balancing overhead. As it manages to retain many pieces of work on the processor that produced them, it has low communication overhead.
ACWN was compared with two other schemes, the randomized allocation and the gradient model. The randomized allocation incurs negligible load balancing overhead and achieves a reasonably uniform distribution of work. However, it incurs much communication overhead. The gradient model, on the other hand, enforces locality at the expense of agility in spreading work out quickly to processors. All these schemes were implemented in a system called the chare kernel running on Intel's iPSC/2 hypercube. The experimental results demonstrate that ACWN performs better than the other two algorithms for many computation patterns.

Figure 1: The allocating-phase strategy for the ACWN algorithm.
Figure 2: The standing-phase strategy for the ACWN algorithm.
Figure 3: Low mark effect on the performance for the 10-Queen problem.
Figure 7: Comparison of different periods to exchange load information.
Figure 8: Total overhead and idle time at all 16 pes.
Table I: System States.
Table II: The allocating-phase, standing-phase, and destination functions at pe k.
7,529.2
1994-12-20T00:00:00.000
[ "Computer Science" ]
ACCURACY COMPARISON OF VHR SYSTEMATIC-ORTHO SATELLITE IMAGERIES AGAINST VHR ORTHORECTIFIED IMAGERIES USING GCP

The Very High Resolution (VHR) satellite imageries such as Pleiades, WorldView-2, and GeoEye-1 used for precise mapping purposes must be corrected for any distortion to achieve the expected accuracy. Orthorectification is performed to eliminate geometric errors of the VHR satellite imageries. Orthorectification requires main input data such as a Digital Elevation Model (DEM) and Ground Control Points (GCP). The VHR systematic-ortho imageries were generated using the SRTM 30 m DEM without using any GCP data. The difference in accuracy between VHR systematic-ortho imageries and VHR imageries orthorectified using GCP is currently not exactly defined. This study aimed to identify the accuracy of VHR systematic-ortho imageries compared against imageries orthorectified using GCP. The orthorectified imageries using GCP were created using a rigorous model. The accuracy evaluation is calculated using several independent check points.

INTRODUCTION
The use of very high resolution satellite imageries for large-scale mapping (1:5,000) provides many advantages over the use of aerial photography related to costs and area coverage. Before using them to develop large-scale maps, the very high resolution satellite imagery should be geometrically corrected. Most geometric correction is implemented by orthorectification. The orthorectification process requires Digital Elevation Model (DEM) data and Ground Control Point (GCP) data acquired from field survey. Previously published papers confirm that differences in DEM accuracy (Barazzetti et al., 2010; Ayhan et al., 2006) and different numbers of GCPs (Lee et al., 2008; Murthy et al., 2008; Toutin and Cheng, 2002) will affect the results (the orthorectified image). Currently in Indonesia, the usage of systematic (semi-automatic) orthorectification is increasing due to its cost-effectiveness and its simple, rapid production of ortho imageries for many practical purposes of national development, such as infrastructure development and regional development planning. Thus, it is necessary to weigh the benefits of the systematic-ortho method against its accuracy. This paper discusses the comparison of the accuracy of very high resolution (VHR) systematic-ortho imageries against imageries orthorectified using GCP data in a tropical region. The VHR systematic-ortho imageries refer to satellite imageries which are semi-automatically orthorectified by using DEM data only, without using any GCP data. Meanwhile, the orthorectified imageries are rectified by using the TerraSAR-X DEM and GCPs.

STUDY AREA
The selected research area of this paper is Bali Island, Indonesia (8°3′26″S-8°51′1″S and 114°25′59″E-115°42′40″E). Bali Island is located between Lombok Island and Java Island (Figure 1). Bali Island covers about ±5591 km² and is surrounded by beautiful tropical beaches. The island has varied topography between 0 and 2577 m above MSL, with flat areas in the southern part and mountainous areas in the middle and northern parts of the island.

Figure 1. Location of the study area on Bali Island.

Bali is a famous island for the tourism industry in Indonesia and in the world. The rapid development under way in Bali is mainly to facilitate the needs of domestic and foreign tourists during their visit. This situation has resulted in many new buildings and road accesses throughout the island, providing accessibility during GCP measurement and ease of image interpretation.
DATA There are three main data sets used in this study: satellite imageries, Digital Elevation Models and Ground Control Points. Satellite Imageries A variety of very high resolution satellite sensors were used to produce the ortho-mosaicked imageries covering the entire Bali mainland, consisting of Pleiades, WorldView-2, WorldView-3 and GeoEye. The total number of images is 57 scenes: 17 Pleiades scenes with 0.5 m spatial resolution, 22 WorldView-2 scenes with 0.5 m spatial resolution, 15 WorldView-3 scenes with 0.3 m spatial resolution, and 3 GeoEye scenes with 0.46 m spatial resolution. There are many overlapping images for certain areas (figure 2). Digital Elevation Model Two types of DEM data are used in this research: the SRTM DEM at 30 m resolution and the TerraSAR-X DEM (figure 3). The TerraSAR-X DEM is produced from the TerraSAR X-band satellite sensor, so objects on the surface of the earth are still represented, i.e. it is a Digital Surface Model (DSM). The TerraSAR-X DEM has a spatial resolution of 9 m with a vertical accuracy between 7 and 9 m in some parts of Bali. The SRTM DEM data, generated by the Shuttle Radar Topography Mission (SRTM), has a spatial resolution of 30 m with a vertical accuracy better than ±16 m. The height reference system is the Earth Gravitational Model 1996 (EGM 96). Ground Control Point The GCP data were acquired through field work using dual-frequency geodetic GPS and the static measurement method. The observation time for each GCP is 1 hour. There are 103 points, distributed evenly, used for the orthorectification of the whole of Bali Island. Each GCP is located on an object that is easily identified both in the image and in the field, usually at the corner of an identifiable, high-contrast object. These objects are mainly artificial, such as street corners, bridges or fences, corners of water drainage, volleyball courts, swimming pools, etc. (figure 4). Parts of the study area, mostly forested areas, have a less dense GCP distribution because of limited road access or because they lie in remote areas or on private property. METHODOLOGY Both the systematic orthorectification and the orthorectification using GCPs are carried out with two different algorithms, a rigorous model and an approximation model, depending on the satellite sensor. Orthorectification using the rigorous model is implemented for the Pleiades imageries. Rigorous models are applied to data that contain information about the geometry of the sensor (physical orbit parameters) during satellite data recording. The approximation model is used for the WorldView-2, WorldView-3 and GeoEye imageries. This method uses the RPC data of the corresponding satellite imageries. The RPC data provide the polynomial coefficients that relate pixels in the image to object locations on the ground (Parcharidis, et al, 2005). During the bundle adjustment process, tie points between overlapping imageries are generated with a maximum residual of 0.940 pixels in X and 0.990 pixels in Y. The final results of this process are ortho-mosaicked imageries of Bali Island with and without GCPs. VHR systematic-ortho imageries The VHR systematic-ortho imageries are produced by an orthorectification process using the SRTM DEM 30 m only.
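To illustrate why the DEM is the critical input for systematic orthorectification, the following is a minimal sketch (not from the paper) of the classical photogrammetric relief-displacement relation d = r·h/H; the function name and the numerical values are illustrative assumptions, with only the 770 km orbit altitude taken from the illustration given later in the paper.

```python
# Minimal sketch (not from the paper): why a DEM is needed for orthorectification.
# For near-nadir imagery, the classical relief-displacement relation is
#   d = r * h / H
# where r is the radial distance of the object from the nadir point,
# h is the terrain height above the reference surface, and H is the flying height.

def relief_displacement(radial_distance_m: float, terrain_height_m: float,
                        flying_height_m: float) -> float:
    """Approximate ground displacement caused by uncorrected terrain relief."""
    return radial_distance_m * terrain_height_m / flying_height_m

# Example (hypothetical): a hill 500 m high, 10 km from nadir, imaged from a 770 km orbit
print(relief_displacement(10_000, 500, 770_000))  # ~6.5 m shift if no DEM correction is applied
```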
VHR orthorectified imageries using GCP Orthorectification is implemented using the DEM from TerraSAR-X data and the 103 GCPs. Besides the tie points, the GCPs are also included in the bundle adjustment process, with a maximum residual of 0.078 seconds of arc in X and 0.076 seconds of arc in Y. The residual threshold is set by considering the minimum accuracy for the 1:5,000 topographic map scale, which is 2.5 metres. Geometric Accuracy Assessment The Root Mean Square Error (RMSE) formula can be used to determine the accuracy of the orthorectified image (ASPRS, 1998). Equations 1 and 2 are used to determine the RMSE for the X and Y coordinates, whereas equation 3 is used to determine the horizontal RMSE (X, Y) (ASPRS, 2014; FGDC, 1998).

RMSE_x = sqrt[ (1/n) Σ (x_i(map) − x_i(surveyed))² ]   (1)
RMSE_y = sqrt[ (1/n) Σ (y_i(map) − y_i(surveyed))² ]   (2)
RMSE_r = sqrt( RMSE_x² + RMSE_y² )   (3)

where: x_i(map) is the coordinate in the specified direction of the i-th check point in the data set, x_i(surveyed) is the coordinate in the specified direction of the i-th check point in the independent source of higher accuracy, n is the number of check points tested, and i is an integer ranging from 1 to n. The horizontal accuracy analysis is performed with reference to an accuracy standard that has been recognized or implemented by an independent national agency. The analysis is carried out as quality control for the generated X and Y coordinates in order to guarantee the quality of the products (FGDC, 1998). RESULTS The horizontal accuracy of each orthorectified image is evaluated based on its RMSE value. The RMSE value is obtained from the differences between object coordinates measured in the field and the corresponding object coordinates in the orthoimage. A set of 46 ICPs was used to evaluate the horizontal accuracy of each image produced. Result for VHR systematic-ortho Visually, the VHR systematic-ortho imageries look consistent and well matched across all scenes (figure 7). However, when they were evaluated using the 46 check points, displacements of 4-5 m were identified in certain parts of the image. Table 1 presents the comparison between ICP coordinates acquired from field measurement with geodetic GPS and those from the VHR systematic-ortho imageries. Table 1. The displacement values that occur in VHR systematic-ortho imageries From Table 1, it can be seen that some points have displacements of more than 4 m. Based on the calculation over the 46 ICPs, the VHR systematic-ortho imageries have an RMSE of 3.374 m; at the 90% confidence level the accuracy of these imageries is therefore 5.120 m. With reference to the Indonesian base map accuracy standards, VHR systematic-ortho imageries can be used to produce maps at the 1:25,000 (2nd class) scale. Result for VHR orthorectified image using GCP Using exactly the same ICPs, the VHR orthorectified imageries using GCPs were evaluated. The average displacements in these ortho imageries are in the range of 0.5-1.5 m, as shown in Table 2. Figure 8. Result for VHR orthorectified imageries using GCP Table 2. The displacement values that occur in VHR ortho imageries using GCP Table 2 shows that the points have nearly equal displacement values across the entire study area. The evaluation using the 46 ICPs shows that the VHR orthorectified imageries using GCPs have an RMSE of 1.550 m, with an accuracy at the 90% confidence level of 2.351 m. With reference to the Indonesian map accuracy standards, this image can therefore be used to produce, or serve as a reference for, maps at the 1:5,000 (3rd class) scale.
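The following is a minimal sketch (not the authors' code) of how the reported accuracy figures can be reproduced from check-point residuals using Eqs. (1)-(3); the factor 1.5175 is the standard FGDC/NSSDA conversion from horizontal RMSE to the 90% circular error (CE90) and is consistent with the values quoted above (RMSE 3.374 m giving about 5.12 m, and RMSE 1.550 m giving about 2.35 m). The coordinate arrays are hypothetical.

```python
import numpy as np

def horizontal_accuracy(map_xy: np.ndarray, surveyed_xy: np.ndarray):
    """RMSE per axis, horizontal RMSE and CE90 from ICP residuals (Eqs. 1-3)."""
    d = map_xy - surveyed_xy                      # residuals at each check point
    rmse_x = np.sqrt(np.mean(d[:, 0] ** 2))       # Eq. (1)
    rmse_y = np.sqrt(np.mean(d[:, 1] ** 2))       # Eq. (2)
    rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)   # Eq. (3)
    ce90 = 1.5175 * rmse_r                        # 90% circular error (FGDC/NSSDA factor)
    return rmse_x, rmse_y, rmse_r, ce90

# Hypothetical residuals for a handful of ICPs (metres)
map_xy = np.array([[100.2, 200.5], [305.1, 410.9], [512.8, 620.3]])
surveyed_xy = np.array([[98.0, 199.0], [303.5, 409.0], [510.5, 618.0]])
print(horizontal_accuracy(map_xy, surveyed_xy))
```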
Accuracy Comparison The geometric accuracy comparison between the VHR systematic-ortho and the VHR orthorectified imageries using GCPs can be performed using the same ICPs, as shown in Table 3. The comparison shows significantly different results between the two products: the use of GCPs and a more precise DEM is shown to increase the accuracy of the orthorectified imagery. The results of the comparison between VHR systematic-ortho and VHR orthorectified imageries using GCPs are as follows: 1. Significantly improved accuracy. On the VHR systematic-ortho imageries, ICPs I031 and I104 have displacements along the X axis of 7.3231 m and 5.5270 m, respectively. The low accuracy of the VHR systematic-ortho imageries is caused by inaccuracies in the satellite ephemeris and attitude information. Erroneous ephemeris and attitude information results in significant displacement of objects on the earth's surface, especially over flat areas. As an illustration, an error of 0.01" in recording one of the orientation angles of the WorldView-2 sensor at an altitude of 770 km will lead to a displacement in the image of 9.5 m on the earth's surface from the true position. The errors on the X axis being larger than those on the Y axis indicates that the inaccuracy of the satellite information is larger along the X axis. The use of GCPs in the orthorectification process ties down the area around each GCP; with this constraint, the area around the GCP is controlled. In the VHR orthorectified imageries, ICP I031 is located close to GCP G032 and ICP I104 is located close to GCP G062. The GCPs reduce the X-axis displacements at ICPs I031 and I104 to -0.7138 m and -0.4986 m, respectively. Figure 9 shows ICPs I031 and I104 together with the nearby GCPs. On the VHR systematic-ortho imageries, ICP I071 has a displacement of 1.2291 m on the X axis and 0.102393 m on the Y axis, while ICP I072 has a displacement of -1.38366 m on the X axis and -0.958370 m on the Y axis. On the VHR orthorectified imageries using GCPs, ICP I071 has a displacement of 1.06857 m on the X axis and 0.07007 m on the Y axis, and ICP I072 has a displacement of 1.553729 m on the X axis and 1.182258 m on the Y axis. These ICPs are located close to GCP G087, at different distances, over flat topography. The ICP located farther from the GCP has a larger displacement than the ICP located closer to it. This confirms that the closer an ICP is to a GCP, the smaller the displacement. Figure 10 shows ICPs I071 and I072 in relation to GCP G087. Figure 10. Location of ICPs at various distances from the GCP 2. Decreased accuracy. On the VHR systematic-ortho imageries, ICP I045 has a displacement of 1.432 m on the X axis and 0.665 m on the Y axis. However, the comparison shows that the displacement at ICP I045 becomes larger in the VHR orthorectified imageries, -1.2399 m and -1.5835 m, respectively. This decrease in accuracy can have several causes. The first possibility is a mistake in identifying the object in the image, so that the GCP was placed wrongly during the orthorectification process. A wrongly placed GCP harms the area around its coordinates, resulting in a deterioration of accuracy around that GCP.
The second possibility is the influence of GCP "attraction" from nearby points. ICP I045 lies between GCP G058 and GCP G059. When there are two GCPs near an ICP, the ICP position is influenced by the pull of both. If one of the GCPs has poor accuracy, the relative position of the ICP will be wrong. This is what likely occurred at ICP I045, so that the shift in the VHR orthorectified imageries is larger than in the VHR systematic-ortho imageries. Figure 11 shows GCPs G058 and G059 in relation to ICP I045. Figure 11. Location of ICP I045 related to GCPs G058 and G059 Therefore, in order to obtain good results, the GCPs used in the orthorectification process should be identified clearly and unambiguously and have minimal relief displacement. These two possibilities explain why some ICPs in the VHR orthorectified imageries using GCPs have a larger displacement than in the VHR systematic-ortho imageries. Table 5 presents a recapitulation of the geometric accuracy of the two generated imageries. Based on the geometric accuracy, the VHR systematic-ortho imageries have very low accuracy and cannot meet the requirements for large-scale maps. They are suitable for producing maps at the 1:25,000 scale or smaller. When used for large-scale map production, these images would raise issues of horizontal accuracy inappropriate for the map scale; that is, the map would give wrong spatial information to every user of that map. Table 5. Recapitulation of the two generated imageries CONCLUSION This research found that VHR systematic-ortho imageries have significantly different accuracy compared with VHR imageries orthorectified using GCPs. The VHR systematic-ortho imagery shows geometric displacements in the range of 3 to 4 metres. Based on the evaluation of horizontal accuracy, these images can only be used to produce medium-scale base maps at the 1:25,000 scale and are not recommended for producing large-scale maps at 1:5,000. If used to produce large-scale maps, these images would give wrong spatial information to the user, who would then be working with a map whose horizontal accuracy does not meet their needs. Any user who relies on such a map for spatial planning, such as detailed spatial planning, which should be based on large-scale base maps, will produce poor spatial plans. The GCPs used in the orthorectification process should be identified clearly and unambiguously and have minimal relief displacement. Using good GCPs in the orthorectification process can significantly increase the geometric accuracy so as to fulfil the requirements for input data for the production of 1:5,000 base maps. Figure 2. 57 scene satellite imagery coverage of Bali Island. The product level of the Pleiades sensor is Primary, while the other satellite sensors are at the Ortho-Ready Standard (OR2A) level. The satellite imagery data used in this research have incidence angles of less than 20 degrees, while the acquisition dates vary between 2013 and October 2015. Figure 3. Visualization of the TerraSAR-X DEM (a) and SRTM DEM 30 m (b) Figure 4. GCP and ICP field measurement on various objects Besides the GCPs, we prepared 46 Independent Check Points (ICPs) measured through GPS surveys during the field work. The ICPs are used to calculate the accuracy of the VHR systematic-ortho imageries and of the imageries orthorectified using GCPs. Figure 5 shows the distribution of GCPs and ICPs in the entire study area; the blue points represent the GCPs, while the red points represent the ICPs. Figure 5. Distribution of GCPs and ICPs in Bali Island Figure 9.
Location of ICPs I031 and I104 related to nearby GCPs Table 3. Comparison of ICP coordinates between the two mosaicked-ortho images
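The accuracy comparison above repeatedly observes that ICPs located closer to a GCP show smaller displacements. Below is a minimal sketch (not from the paper) of how that trend could be quantified; the helper function nearest_gcp_distance and all coordinates and displacement values are hypothetical.

```python
import numpy as np

def nearest_gcp_distance(icp_xy: np.ndarray, gcp_xy: np.ndarray) -> np.ndarray:
    """For each ICP, the distance (m) to its nearest GCP."""
    diff = icp_xy[:, None, :] - gcp_xy[None, :, :]       # pairwise differences (n_icp, n_gcp, 2)
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

# Hypothetical planar coordinates (metres) and observed horizontal displacements
icp_xy = np.array([[1000.0, 2000.0], [5000.0, 7000.0], [9000.0, 1500.0]])
gcp_xy = np.array([[1100.0, 2050.0], [8000.0, 1000.0]])
displacement = np.array([0.6, 1.4, 0.9])

dist = nearest_gcp_distance(icp_xy, gcp_xy)
# A positive correlation supports "closer ICP to GCP, smaller displacement"
print(np.corrcoef(dist, displacement)[0, 1])
```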
3,683.4
2016-06-03T00:00:00.000
[ "Environmental Science", "Mathematics", "Medicine" ]
Theoretical Analysis of Exciton Wave Packet Dynamics in Polaritonic Wires We present a comprehensive study of the exciton wave packet evolution in disordered lossless polaritonic wires. Our simulations reveal signatures of ballistic, diffusive, and subdiffusive exciton dynamics under strong light–matter coupling and identify the typical time scales associated with the transitions between these qualitatively distinct transport phenomena. We determine optimal truncations of the matter and radiation subsystems required for generating reliable time-dependent data from computational simulations at an affordable cost. The time evolution of the photonic part of the wave function reveals that many cavity modes contribute to the dynamics in a nontrivial fashion. Hence, a sizable number of photon modes is needed to describe exciton propagation with a reasonable accuracy. We find and discuss an intriguingly common lack of dominance of the photon mode on resonance with matter in both the presence and absence of disorder. We discuss the implications of our investigations for the development of theoretical models and analysis of experiments where coherent intermolecular energy transport and static disorder play an important role. • My main remark is that, although I of course appreciate the details and the overall very thoroughly executed analysis, this manuscripts lacks sometimes a clear picture or goal. If the authors could make their goal (e.g. why investigating these parameters?, how is that relevant for future theoretical and experimental investigations?, how does that go beyond past studies?) clearer in their subsections, it would benefit the manuscript greatly and make it much more appealing for a broader audience. For example a sentence like '… relevant to future theoretical and experiment investigations of polaritonic chemistry' in the conclusion would be much stronger with some concrete examples. • On page 10 the authors use photon mode and photon number interchangeable. To my understanding N_c is the number of photon modes. Do the authors consider an explicit truncation of the Fock-space i.e. each mode can include only n number of photons? A clarification would be greatly appreciated. • In Figure 2, the authors mention that the radiation field was modeled using 1601 modes and a fixed Rabi-splitting of 0.1eV. However, given they investigate different numbers of molecules (or length of the cavity) the coupling strength also changes. Should that not also change the Rabi-splitting? Can the authors clarify why different cavity lengths/number of molecules still induces the same Rabi-splitting? • Given that this is a very quantitative study, and especially because it includes a multi-mode picture with many off-resonant modes to the two level system, can the authors comment on, if the results would still hold if one would include more than just a two level system i.e. the off-resonant modes could couple to higher energies? • Additionally as the authors explicitly choose to call their matter systems molecular subsystems (and not atomic), I was wondering which concrete experiments (or molecular systems) they had in mind that are sufficiently represented by their 2-level molecular picture? More detail on that could strengthen the paper a lot and generalize its applicability. • Can the authors comment on, if the dipole approximation (i.e. the transition dipole does not need a spatial resolution) for the highest frequencies included in this multi-mode picture is still valid? 
(Given up to 1601 modes are used). • Very minor, but on page 16 in "In this subsection, we work with …" there is a double sigma_x. Reviewer: 2 Comments to the Author Aroeira et al. present a theoretical study of exciton transport of a an ensemble of two-level systems in a nano-cavity. A particular nice point is that the authors look at a Fabry-Perot type cavity that allows for periodic boundary conditions of at least one dimension. This is something that is not done in most theoretical studies, but actually matches experimental conditions. I think that the paper is a timely contribution that nicely complements the theoretical understanding of how to model ensembles under strong light-matter coupling. However, one point of criticism is the the presentation as a "molecular" ensemble. Actual molecules can not be accurately modelled as two-level systems as they are also subject to vibrational excitations and static dipole moments (which may cause additional effects in an ensemble). I recommend to change the wording from molecules to two-level systems. This would not lower the value of the study in my point of view but rather clarify the approximations involved. Author's Response to Peer Review Comments: May 29, 2023 Referee response for "Theoretical Analysis of Exciton Wave Packet Dynamics in Polaritonic Wires" We thank both referees for their insightful comments, suggestions, and pertinent questions. Our response to each item is given below in blue and examples of changes to the main manuscript are provided in italics. Reviewer 1. "My main remark is that, although I of course appreciate the details and the overall very thoroughly executed analysis, this manuscripts lacks sometimes a clear picture or goal. If the authors could make their goal (e.g. why investigating these parameters?, how is that relevant for future theo- This discussion is important for both theoretical models and the interpretation of experimental results since, as we will show, qualitative incorrect results are obtained when too few photon modes are considered Page 14. Note that the error vanishes for a sufficiently large number of photons modes, therefore verifying the convergence of our model. Recent work has shown that this is not always guaranteed. Mandal et. al. reported that using the dipole gauge and RWA, polaritonic dispersion relations do not converge with respect to the number of cavity modes unless the often neglected dipole self-energy terms are included in the model. Next, we will consider how static fluctuations in the energy gaps of the two-level systems change the results presented in Fig. 3 Page 15. In the results discussed so far in this letter, we found that quantitative converged results require many photon modes. This indicates that a broad energy range of photons contributes to exciton propagation, which is at odds with the intuition that resonant processes must dominate the dynamics over sufficiently long times. To further understand these findings, we inspect the photonic composition of the wavepacket over time. This will give us insight into the state of the radiation within the cavity which is inaccessible by single-mode models. Moreover, we will discuss how this photonic profile changes with experimentally controllable variables. To obtain this profile, we compute the wave packet via the time-averaged relative mode weight distribution, defined as follows We thank the referee for finding this issue. 
Indeed, the usage of the term "photon number" was not precise in that instance and has been corrected, as we reformulated the sentence mentioned by the referee. We work within the one-excitation manifold, i.e., the Fock space is truncated such that all states contain either 0 or 1 photon. Hence, the dynamics involve only one-photon processes. We included a new sentence in the manuscript (pg. 9) reinforcing this characteristic of our model: We investigate the dynamics within the one-excitation manifold, i.e., the Fock space is truncated to include photon modes with either 0 or 1 photons. "In Figure 2, the authors mention that the radiation field was modeled using 1601 modes and a fixed Rabi splitting of 0.1 eV. However, given they investigate different numbers of molecules (or length of the cavity), the coupling strength also changes. Should that not also change the Rabi splitting? Can the authors clarify why different cavity lengths/numbers of molecules still induce the same Rabi splitting?" The Rabi splitting is often expressed as Ω_R = g√N_M, where g is the single-molecule light-matter coupling at resonance. Hence, as noted by the reviewer, increasing the number of sites (molecules, N_M) at fixed volume will increase the Rabi splitting. However, the individual coupling strength g also depends on the quantization volume, e.g., g = µ √(ℏω/2ε_0 V). Therefore, we can write Ω_R = µ √(ℏωρ/2ε_0), where ρ is the density of the system (ρ = N_M/(L_x L_y L_z)). Since in our finite-size study we increase both N_M and L_x simultaneously (L_x = N_M·a, where a is the mean intersite distance), the density of the system remains constant, and thus we are able to maintain the same Rabi splitting for all different-sized simulations. We have made changes to the manuscript making this aspect of the model more explicit (pg. 11). The results presented in Fig. 2 were obtained by varying the number of sites (N_M) and consequently the wire length (L_x = N_M·a). This scheme keeps the density of the system constant, allowing us to fix the Rabi splitting at 0.1 eV. "Given that this is a very quantitative study, and especially because it includes a multi-mode picture with many off-resonant modes to the two-level system, can the authors comment on whether the results would still hold if one included more than just a two-level system, i.e., the off-resonant modes could couple to higher energies?" If the material system under study presented two or more bright low-lying excited states, the dynamics could be significantly changed. However, in the case where two bright excited states are separated by an energy greater than twice the Rabi splitting, we believe our results would still hold. Consider, for example, a collection of three-level systems ({ε_0, ε_1, ε_2}). Our initial state is an exciton superposition of ε_1 states. The {ε_2} states can only be accessed via the emission of a photon (ε_1 → ε_0) followed by an absorption event (ε_0 → ε_2). If the photons emitted by {ε_1} are highly off-resonant with those which efficiently induce the ε_0 → ε_2 transition, then the introduction of ε_2 states is highly unlikely to affect the dynamics of the lower-lying {ε_1}. As a numerical example, let us consider E_0→1 = 2.0 eV (as in most of our results) and E_0→2 = 3.0 eV. From Fig. 5, we see that the probability of finding photons with energy greater than 2.5 eV is generally very small. Hence, another excited state centered around 3.0 eV would have very limited participation in the overall process.
We added a comment to the main manuscript addressing this point (pg. 19): The results presented in this section also allow us to evaluate the validity of our two-level approximation for matter. Consider for example a three-level system with well-separated transition energies E_0→1 = 2.0 eV and E_0→2 = 3.0 eV. From Fig. 6, we see that the probability of finding photons with energy greater than 2.5 eV is generally very small. Hence, another excited state centered around 3.0 eV would not interact significantly with any of the populated photon modes and, therefore, would have very limited participation in the overall process. "Additionally, as the authors explicitly choose to call their matter systems molecular subsystems (and not atomic), I was wondering which concrete experiments (or molecular systems) they had in mind that are sufficiently represented by their 2-level molecular picture? More detail on that could strengthen the paper a lot and generalize its applicability." This was also pointed out by reviewer 2, and appropriate changes to the terminology were made to more rigorously reflect the approximations employed in this work. Still, we believe that, while the model used here is more suitable for atomic resonators, it can also be used to study molecules over ultrafast timescales, especially if their vibronic coupling is weak. To clarify this assumption, we added the following sentence to the theory part (pg. 6): Matter is modeled by a chain of two-level systems, which can represent atoms or molecules with weak vibronic coupling over ultrafast times. "Can the authors comment on if the dipole approximation (i.e., the transition dipole does not need a spatial resolution) for the highest frequencies included in this multi-mode picture is still valid? (Given up to 1601 modes are used)." In the computation with the largest number of photon modes (N_c = 1601), the maximum value of k is approximately 0.1 nm⁻¹, corresponding to a wavelength of roughly 62 nm, which is six times greater than the average distance a between sites, thus making the electric dipole approximation valid. Moreover, as we concluded in this work, highly off-resonant modes (e.g., modes with energy greater than 3.0 eV) give a negligible contribution to the overall dynamics, and we believe that the inclusion of corrections to the electric dipole approximation would not overcome the energetic suppression of highly off-resonant photon modes. This important observation is included in the revised manuscript (pg. 9): Since this Hamiltonian relies on the electric dipole approximation, we numerically verified that even when very high-energy photons are considered, the wavelength remains greater than the spacing between sites. Furthermore, as our results will show, these highly off-resonant modes are negligible to the exciton transport. "Very minor, but on page 16 in "In this subsection, we work with …" there is a double sigma_x." We thank the reviewer for pointing that out. This typo has been corrected. Reviewer 2. "However, one point of criticism is the presentation as a "molecular" ensemble. Actual molecules cannot be accurately modeled as two-level systems, as they are also subject to vibrational excitations and static dipole moments (which may cause additional effects in an ensemble). I recommend to change the wording from molecules to two-level systems. This would not lower the value of the study in my point of view but rather clarify the approximations involved."
We thank the reviewer for a thorough reading of our manuscript. We agree with the reviewer's suggestions and have changed various instances of "molecular" or "molecule" to "two-level systems", "sites", or "atomic resonators" when more appropriate. See also our response to Reviewer 1 addressing the same point for the included change in the manuscript. We hope that this additional content satisfactorily addresses the comments from the reviewers. Author's Response to Peer Review Comments: The supporting information statement has been corrected as requested by the editor.
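As a hedged illustration of two estimates made in the responses above, the sketch below numerically checks that the collective Rabi splitting Ω_R = g√N_M stays constant when the number of sites and the wire length grow together at fixed density, and that the shortest retained wavelength (k_max ≈ 0.1 nm⁻¹) is several times the intersite spacing. This is not the authors' code; the transition dipole, transverse dimensions and 10 nm spacing are placeholder assumptions.

```python
import numpy as np

HBAR = 1.0545718e-34      # J*s
EPS0 = 8.8541878128e-12   # F/m
EV = 1.602176634e-19      # J

def rabi_splitting(n_sites, a, ly, lz, mu, omega):
    """Omega_R = g*sqrt(N) with g = mu*sqrt(hbar*omega/(2*eps0*V)) and V = (N*a)*ly*lz."""
    volume = (n_sites * a) * ly * lz
    g = mu * np.sqrt(HBAR * omega / (2 * EPS0 * volume))
    return g * np.sqrt(n_sites)

omega = 2.0 * EV / HBAR   # 2 eV transition energy
mu = 1.0e-29              # C*m, hypothetical transition dipole
for n in (1_000, 5_000, 10_000):
    # wire length L_x = n*a grows with n, so the density and hence Omega_R stay fixed
    print(n, rabi_splitting(n, 10e-9, 400e-9, 400e-9, mu, omega) / EV, "eV")

# Wavelength check for the dipole approximation: k_max ~ 0.1 nm^-1
print("lambda_min [nm]:", 2 * np.pi / 0.1)   # ~63 nm, well above a 10 nm intersite spacing
```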
3,294.2
2023-04-22T00:00:00.000
[ "Physics" ]
Formulation of transverse mass distributions in Au-Au collisions at 200 GeV/nucleon The transverse mass spectra of light mesons produced in Au-Au collisions at 200 GeV/nucleon are analyzed in Tsallis statistics. In high energy collisions, it has been found that the spectra follow a generalized scaling law. We applied Tsallis statistics to the description of different particles using the scaling properties. The calculated results are in agreement with experimental data of PHENIX Collaboration. And, the temperature of emission sources is extracted consistently. I. INTRODUCTION Multiparticle production is an important experimental phenomenon at Relativistic Heavy Ion Collider (RHIC) in Brookhaven National Laboratory (BNL). In Au-Au collisions, identified particle yields per unity of rapidity integrated over transverse momentum p T ranges have provided information about temperature T and chemical potential µ at the chemical freeze-out by using a statistical investigation [1]. It brings valuable insight into properties of quark-gluon plasma (QGP) created in the collisions. A much broader and deeper study of QGP will be done at Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) and the Facility for Antiproton and Ion Research (FAIR) at the Gesellschaft für Schwerionenforschung mbH (GSI). In order to estimate hadronic decay backgrounds in photon, single lepton and dilepton spectra which are penetrating probes of QGP, m T spectra of identified mesons have been studied in detail [2][3][4][5][6], where m T = m 2 0 + p 2 T is transverse mass of a particle with rest mass m 0 at a given p T . In Ref. [7], m T spectral shapes of pions and η mesons in S-S and S-Au collisions are identical. Such behaviors are caused by m T scaling properties, which help us to predict new m T spectra and understand the mechanism of meson production. Statistical analysis of m T spectra is extremely useful to extract information of particle production process and interaction in hadronic and QGP phases. In the CGS (Color Glass Condensate) description, the total hadron multiplicity follows a scaling behavior motivated by the gluon saturation. Different phenomenological models of initial coherent multiple interactions and particle transport have been introduced to describe the production of final-state particles [8,9] in Au-Au collisions. With Tsallis statistics' development and success in dealing with non-equilibrated complex systems in condensed matter research [10], it has been utilized to understand the particle production in high-energy physics [11][12][13]. In our previous work [14], the temperature information of emission sources was understood indirectly by a excitation degree, which varies with location in a cylinder. We have obtained emission source location dependence of the exciting degree specifically. From central axis to side-surface of the cylinder, the excitation degree of the emission source decreases linearly with the direction of radius. In this work, we parametrize experimentally measured m T spectra of pions in Tsallis statistics. Using the m T scaling properties in the spectrum calculation, we reproduce m T spectra of other light *<EMAIL_ADDRESS><EMAIL_ADDRESS>2 mesons and obtain the temperature of emission sources directly. II. THE FORMULATION AND COMPARISON WITH PHENIX RESULTS At the initial stage of nucleus-nucleus collisions, plenty of primary nucleon-nucleon collisions happen. 
The primary nucleon-nucleon collision can be regarded as an emission source (a compound hadron fireball) at intermediate energy or a few sources (wounded partons and woundless partons) at high energy. The participant nucleons in primary collisions have probabilities to take part in cascade collisions with latter nucleons. Meanwhile, the particles produced in primary or cascade nucleon-nucleon collisions have probabilities to take part in secondary collisions with latter nucleons and other particles. Each cascade (or secondary) collision is also regarded as an emission source or a few sources. Many emission sources of final-state particles are expected to be formed in the collisions. According to Tsallis statistics [10], the total number of the mesons is given by where p, E, T , µ, V and g are the momentum, the energy, the temperature, the chemical potential, the volume and the degeneracy factor, respectively, a parameter q is used to characterize the degree of nonequilibrium. The corresponding momentum distribution is We have the transverse mass m T distribution, At midrapidity y = 0, for zero chemical potential, the transverse mass spectrum in terms of y and m T is which is only a m T distribution of particles emitted in the emission source at midrapidity y = 0. Considering a width of the corresponding rapidity distribution of final-state particles, the m T spectrum is rewritten as where C = gV (2π) 2 is a normalization constant and Y(−Y) is the maximum (minimum) value of the observed rapidity. Generally speaking, the temperature T and q can be fixed for different event centralities (or impact parameters) by fitting the experimental data of pions. The temperature T of emission sources is calculated naturally and consistently in the current formulation. The symbols represent experimental data of PHENIX Collaboration [15,16] and the curves are fitting results by using Eq.(5). By fitting the experimental data, values of T and q are given in Table I [15,16] in different centrality cuts indicated in the figure. The curves are the results obtained by fitting the data. We fit the spectra using Tsallis distributions and obtain the values of T and q which are given in Table I with χ 2 /dof. It is found that the temperature T increase with the increase of the centrality. Fig. 6 and Fig. 7 show invariant yields of K ± , J/ψ, φ, ω and η as a function of m T − m in corresponding centrality cuts. The symbols represent experimental data of PHENIX Collaboration [16-18, 20, 21]. The curves are the results calculated by using m T scaling properties. The values of χ 2 /dof are shown in Table II. For different centralities, the m T scaled results are in agreement with the experimental data of different mesons. Fig.2−Fig.4, Fig.6 and Fig.7. Table III. The ratios are helpful to understand the contribution of hadronic decay in photonic and leptonic channels. Final-state particles produced in high-energy nuclear collisions have attracted much attention, since attempt have been made to understand the properties of strongly coupled QGP by studying the possible production mechanisms [22,23]. Thermal-statistical models have been successful in describing particle yields in various systems at different energies [14,[24][25][26]. The temperature T of emission sources is very important for understanding the matter evolution in Au-Au collisions at RHIC. In the rapidity space, different sources of final-state particles stay at different positions due to stronger longitudinal flow [27][28][29] . 
In our previous work, we have studied the trans-
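Since the explicit expressions of Eqs. (1)-(5) are not reproduced in this excerpt, the following is only a sketch of a commonly used Tsallis-type transverse-mass distribution at mid-rapidity and zero chemical potential, integrated over a rapidity window |y| < Y. The exponent convention -1/(q-1), the normalization C and the parameter values are assumptions for illustration, not the paper's fitted results.

```python
import numpy as np

def tsallis_mt_spectrum(mT, T, q, Y, C=1.0):
    """Tsallis-type dN/(mT dmT) at mu = 0, integrated over |y| < Y.
    The exponent convention -1/(q-1) is an assumption; conventions differ in the literature."""
    y = np.linspace(-Y, Y, 201)
    E = mT * np.cosh(y)                                   # transverse energy at rapidity y
    integrand = E * (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))
    return C * np.trapz(integrand, y)

# Illustrative values only (GeV units); not fitted parameters from the paper
m_pi = 0.139
for pT in (0.5, 1.0, 2.0):
    mT = np.sqrt(m_pi**2 + pT**2)
    print(pT, tsallis_mt_spectrum(mT, T=0.12, q=1.07, Y=0.5))
```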
1,524.6
2013-10-01T00:00:00.000
[ "Physics" ]
A Model for a Virtual Fashion Museum The matrix of the 21 st century, characterized by a new reality resultant of the new technological and sociocultural context, seems to impose the design of new communication tools by reviewing today’s museum models based on a deep reflection on the contemporary museum studies. After this immersion in research, we believe having achieved the design of a more innovative virtual fashion museum. Introduction We have tried to look at the fashion curating discipline under a new cultural paradigm, i.e., as an open, transversal and endless practice, in order to design a multicultural conceptual board crossing diverse areas such as art, technology, design, culture and society, for which we defined the main objectives of this research as: o To make a conceptual revaluation of the past and present time so as to design multidisciplinary creative atmospheres, to test innovative exhibition models adjusted to the Digital Era besides attempting to contribute for the dissemination of knowledge and culture; o To contribute for a higher relevance of fashion and costume as anthropological and sociological tools, working on a global context; o To reinforce the obvious need of using the internet in the museum context; o To understand the new systems of cultural communication as well as the new individual and social values, using them as a starting point of the design of an effective platform to communicate fashion. Fashion and the Museum PRE 1970 Although fashion has been overlooked as a research subject in theoretical studies for several decades of the 20 th century as well as having been relegated to the field of history, during the second half of the previous century important academic and theoretical studies in the socio-cultural fields started to consider fashion as a subject of research and to see it under different approaches and perspectives, which reinforced its importance not only as a phenomenon but also as an important tool to analyze individual and social behaviors as a reflection of sociopolitical, economic and cultural changes. On the other hand, the evolution of the Visual Culture studies-the interaction between Art and Society -would necessarily connect cultural studies, art history, critical theory and other fields connected with the social sciences, enlarging its focus to the new cultural products of a renewed urban landscape contaminated by visual images (from photography to advertising, from fashion to cartoons, movies, cosmetics, design, etc.). Culture, today, is related to 'meaning practices' that allow examining the society trough the artifacts, with which the museum is deeply involved. "Museums are deeply involved in constructing knowledge in this way through those objects, narratives, and histories that they bring to visibility or keep hidden" [1]. It was also during the end of the 20 th century that fashion started to be showcased in museums beyond the historical and anthropological contexts and be regarded not merely as a consumption phenomenon but as providing cultural meanings -interpreting from the identification, evaluation and relation of the artifacts with culture through conceptual frames of beliefs and values. The roots of the concept of contemporary fashion exhibition date from the 1940s through a major traveling exhibition -Le Théâtre le la Mode -that took place in Paris, in 1945, immediately after the ceasefire. 
This interdisciplinary platform was organized by the Chambre Syndicale de la Couture Française, aiming to reaffirm the sophistication of French culture and regain clientele for the haute couture fashion houses -an important business sector before the World War II -in the new economic scenario and the distribution of money wealth. This important exhibition opened ceremoniously at the Pavillon Marsais, Musée des Arts Décoratifs, winning about 100000 visitors and later traveled through the most important capitals -from the European cities of London, Barcelona, Stockholm, Copenhagen and Vienna before heading to New York, Washington and S. Francisco. The exhibition gathered around it the 'crème de la crème' of the Parisian cultural intelligentsia, sharing the stage of an imaginary theatre with small-scale sets designed by famous artists and inhabited by miniature dummies (in a 50|100 scale) wearing the most luxurious dresses by prestigious Parisian couture houses. Curiously the exhibition was designed by a group of famous artists and metteurs-en-artists while in exile, including Christian Bérard (director), Jean Cocteau, Boris Kochno, Eliane Bonabel, Jean Saint-Martin and André Beaurepaire, among others, with the objective of raising money for the reconstruction of the Nazi devastated France. The exhibition not only achieved its goal and raised millions of dollars to help the French economy but was also the rudimentary stage of an innovative form of exhibiting through context. During the following decades Fashion as a creative discipline would always be associated with large blockbusters, especially the ones with the main goal of economic recovery or small exhibitions exclusively directed to the costume experts and historians. POST-1970 The democratization of fashion in the sixties -the transformation from exclusive and luxurious couture garments for the elite into a clothes culture accessible to the masses -led the museums to adjust the perspective [2]. Until de beginning of the 1970s garments were ignored as an exhibition subject per se and were preserved in a very orthodox way, being exhibited almost exclusively on 'costume museums' in a very discreet way, utterly conservative and historically correct, protected from the viewers' gaze through the massive window displays or simply supporting other historical subjects. Thus, the curator's role was restricted to the physical preservation of the collections rigorously inventoried and exhibited in second-class galleries. It was during the 1970s that Fashion Curating emerged as a discipline, via the relationships that important museums established with exceptional fashion and image experts -such as the legendary Diana Vreeland, a former fashion editor of Harper's Bazaar and American Vogue, who started a collaboration with the Metropolitan Museum of Art-The Costume Institute, NY, and the cosmopolitan fashion photographer and costume designer Sir Cecil Beaton, who organized the memorable exhibition 'Fashion: An Anthology by Cecil Beaton' at the Victoria & Albert Museum, London. If, for his exhibition, Beaton joined together an extraordinary and exquisite collection using his personal relationships with European royalty, international socialites, French couturiers and beautiful people -which was later donated to the V&A -, reinforcing its already important collection, presenting for the first time, all the garments contextualized through a contemporary vision, Vreeland went further. 
Diana Vreeland curated a series of eleven exhibitions for The Costume Institute at The MET, and the retrospective The World of Balenciaga (1972) was repeatedly attacked by her dissidents as having the gall to emulate the hallowed halls of the Art world [3], but was widely acclaimed by its audience at the same time. "The World of Balenciaga introduced a brand new approach to costume exhibitions. In a spectacular setting, a fashion designer for the first time was given the focus reserved in museums for great artists" [4]. In 1973 Ruth Berenson reviewed Vreeland's Balenciaga exhibition in the New York Review, remarking that the show "accomplished the not inconsiderable feat of bridging the gap between art and fashion". Berenson remarks on the general prevailing tendency of the art world to dismiss the creativity of fashion and calls Balenciaga "the last great artist of fashion" [3]. Vreeland inaugurated a new concept of lighting, which came directly from her former editorial experience. Rejecting the most conventional lighting solutions, she turned the best angle or a beam of light into an object of study and used contextual artifices, such as the enormous baroque wigs in 'The Eighteen Century Woman' (1982-83) or the armors and stage props of the acclaimed Spanish singer Lola Montes for 'The World of Balenciaga' (1972). "Her displays, which often included as many as one hundred mannequins, would appeal to the imagination and plunge the viewer into a milieu -perhaps a celebration of a great moment in Hollywood, or her versions of the eighteen century. She wanted the clothes to appear fashionable to the contemporary viewer. As she had done in her magazine pages, Diana Vreeland would give the viewer something more" [4]. Mrs. Vreeland ignored the rigorous recommendations of her conservative conservation staff, and her representations were often historically incorrect, as stated by Eleanor Dwight: "Working with Vreeland was quite frustrating. While the staff members wanted the clothes to appear as they would have in the time period they represented, Vreeland wanted the clothes to look 'now'" [5]. Contradicting the canons of that period, she promoted an intimacy between artifacts and viewers at a time when curators detested the idea of having the public too close to the exhibited pieces. Vreeland freed the clothes from the massive window displays and instead exhibited them on wood platforms covered with colored fabrics or mirrors so as to create a distance and reinforce theatricality. Diana Vreeland extended her avant-garde editorial point of view to the museum context and changed this field "with an uncanny sense of style and drama" [5]. She imprinted her personal and non-expert touch by experimenting with exhibition contexts that were theatrically dramatized, redefining a new lexicon of sensorial stimulation which went far beyond the visual sense, introducing soundtracks along with provocative color palettes, using strong scents, and subverting the exhibit supports as well as the mannequins. Both collaborations not only achieved enormous popularity and increased the range of visitors but also contributed to a further review of fashion exhibitions, clarifying the relations between Curating | Exhibition Making and Fashion Curator | Fashion Editor. The new museum formats The discussion on curating dates back to the 2001 symposium Curating Now: Imaginative Practice/Public Responsibility, organized by the American organization Philadelphia Exhibitions Initiative, as stated by the Cultural Program Director of the Pew Charitable Trusts, Marian A. Godfrey: "Curating Now has set a standard of critical dialogue that will challenge the other disciplines to create similar opportunities for artistic discourse that reaches both the Philadelphia region and beyond it" [6]. From this time on, museums have been exploring different methodologies and contents, seeking to frame themselves within the development of Technology and the Interactive Era. "A new generation of museum professionals has attempted to reinvent the museum, to bring it to the twenty-first century as a place that can compete with other recreational venues for leisure time, a place more identified with providing opportunities for celebration than for contemplation. The thrust of today's museum is to 'attraction status', to be a 'destination', and to appeal to a mass audience. To achieve this, the direction of exhibition and education programs inevitably shy away from universal ideals and moves towards the familiar and commonplace. In the battle between high and low culture, low seems to have the upper hand" [7]. So it is possible to identify now innovative models and concepts, such as: a. The Virtual Museum (referred to in the next section). b. The Object Museum, always related to extraordinary architecture masterworks, such as The Solomon Guggenheim Bilbao. c. The 'Participatory Museum', focused on the study of the motivation, availability, and relevance of the 'participant' viewers, in order to interact with the community, a concept designed by the American curator Nina Simon that has been tested in museums around the world, such as the Santa Cruz Museum of Art & History, the Dutch Museum of Natural History, the Copenhagen Museum or the Zuiderzee Open Air Museum. d. The 'Project Museum', created around expressions of popular art and exploring the effects of the mass-media culture. Unorthodox, the 'Project Museum' is populist, experimental, fun, sensorial and immersive, with the 'EMP' - Experience Museum (Seattle) as its prime example. At the same time, the most recent studies known as 'New Museum Theory' - theories that question and revise the conceptual principles of Vergo's 'New Museology', even though they admit they are much related - identify four paradigmatic archetypes of the future museum: a. 'The Sanctuary', as a result of the most traditional way to see the museum as a sacred contemplation space. b. 'The Market Directed', focused on business plans more devoted to tourism than to the community, which tries to explore very popular contents, momentary blockbusters, tied to strong advertising campaigns and merchandising. c.
'The Post Colonial Museum' that develops reconciliation discourse, redesigning the post-imperial representation to offer new approaches to the history and the cultural the identity of the colonial contexts. d. And finally, the 'Post Museum', which has not yet very defined boundaries but already points to a dynamic cultural center, active, multidisciplinary, tolerant, multicultural and careful with minorities, characterized by calendars that include, together and/or intersected, different avantgarde cultural expressions such as art, photography, video art, fashion, publicity, world music, cinema, street-art, performing arts, discussions and controversies, etc. The museum in the digital era This research focused on the Virtual Museum due to today's cultural dynamic and effervescent context. "It's cheap. It's fast. It offers great shopping, tempting food and a place to hang out. And visitors can even enjoy the art" announced the New York Times in one of 1997 issues [8] about the Virtual Museum. As an important communication tool to produce culture and disseminate knowledge, the museum faces now the new paradigm of technology and interaction. Ignored for several years by the museum, the webspace started recently beginning an important strategic media, allowing the design of different contents for different viewers, giving global access to their digital archives, virtual galleries or even to virtual contents related to real ones through Immersive Virtual Reality. As a matter of fact, in the end of last millennium, some of the most well-known museums started very carefully and timidly to use the Internet mainly as a database, rarely using it to support disciplines connected with fashion and design. Scientific issues (e.g., Virtual Museum of Bacteria). To our knowledge, no initiative focused directly on the fashion industry or design was issued until the end of 2011, with the exception of the largely acclaimed and reported 'Valentino Garavani Virtual Museum' by the Italian fashion house of Valentino. The detailed analysis of this virtual museum allowed us to identify its stronger and weaker points, as defined by Paola Moscati [9]. "The Virtual Museum is not the real museum transposed to the web. The Virtual Museum is not an archive a database of, or electronic complement to the real museum. The virtual museum finally is not what is missing from the real museum [9]. A virtual museum must support an interdisciplinary approach, through the implementation of complex semantic associations, which will allow the user to understand the culture that is behind the objects and contextualize them." [10]. Quoting Silva (2012) -who states that The Valentino Museum does not provide a surprising experience -we can conclude that: a. It is slow because visitors must download an app for every single access. b. It is distant because the immersion is limited to static 2 and 3D spaces. c. It is limited, because it is about one single designer, being thus as a mere database. d. It is not comprehensive because its aim is the sole promotion of the label and the couturier. e. It is closed and not interactive. f. It is not registrable. g. It is controlled and nondemocratic. h. It is mainstream, not experimental. Conclusion: A new format of virtual museum The result of this research is a conceptual design within the virtual territory, which is a communication platform that optimizes the Internet's potential [11]. 
We believe to having achieved an original, unique and innovative model because its design was based on the real acceptation of the concept of the virtual museum since it is not associated with an existing entity or real content, on the contrary, it aims to establish cooperative networks, bridges among institutions and curators, artists and designers in order to interact with a large spectre of Internet browsers [12]. The final result was in line with the 'W3 Consortium' recommendations' principles, especially the Web for everyone; Web for everything; Knowledge Base e Trust and Confidence, as well as two important reports connected with the relationship between Museum and Technology, namely NMC: Horizon Report>2012, 2012 to 2015 Museum Editions (New Media Consortium | Marcus Institute for Digital Education in the Arts) and Museums and the Web 2011 (Pew Research Center), which present short, medium and long-term recommendations for the museum action [13]. We have also considered the main guidelines of immateriality curating particularly to the issues involving the responsibilities and control systems in collaborative work [14]. The designed archetype aims to explore new maps, products and cultural practices using an interactive and dynamic grammar, supported by collaborative networks, which will be the source of exhibition subjects under different approaches, languages, and media, creating thus a space for a shared laboratory as well as the experimentation of ideas. Although it is not our aim to define it's technological framework, for now, it seems obvious that it should use the 'Virtual Reality' and 'Augmented Virtual Reality' technology; we designed a navigation menu as the basis for further development [15]. Furthermore, it is very important to consolidate an extensive collaborative network that includes Kurators, Curators, Web and Animation Designers as well as digital artists, video-artists, movie directors, photographers, public relations, sponsors, artists, thinkers, and theorists, with the aim of optimizing each of the future exhibition contents. We envision a design clearly inspired by the concept of MOTEL, in the sense in which is generally understood: "A hotel providing travelers with lodging and free parking facilities, typically a roadside hotel having rooms adjacent to an outside parking area or an urban hotel offering parking within the building" (dictionary.reference.com/), i.e., a very dynamic space that houses exhibition contents in alternation using a vast range of multidisciplinary languages and interacting approaches to communicate, and teach. It seems of great importance to offer a mobile website, designed to mobile devices at the same time as a traditional website directed to visitors surfing from their desktops, although it will be mainly accessed by mobile devices. Aside the main interactive fields, we aim to optimize the interaction with the viewers through a large range of applications (apps) that help to conceptualize and contextualize the exhibited objects and artifacts besides endowing exciting moments of discovery through various devices. Special consideration will be given to social network practices to allow viewers to tag, comment and share videos, images and audio contents, thus reproducing its impact and dissemination. It is also important to clarify our intention to reach an extensive range of viewers, so that our mission of sharing and teaching is ensured. 
Particular consideration will also be given to the continual development of technology connected with smart objects,
4,492.6
2018-01-22T00:00:00.000
[ "Art", "Computer Science" ]
Global teleconnectivity structures of the El Ni\~no-Southern Oscillation and large volcanic eruptions -- An evolving network perspective Recent work has provided ample evidence that global climate dynamics at time-scales between multiple weeks and several years can be severely affected by the episodic occurrence of both, internal (climatic) and external (non-climatic) perturbations. Here, we aim to improve our understanding on how regional to local disruptions of the"normal"state of the global surface air temperature field affect the corresponding global teleconnectivity structure. Specifically, we present an approach to quantify teleconnectivity based on different characteristics of functional climate network analysis. Subsequently, we apply this framework to study the impacts of different phases of the El Ni\~no-Southern Oscillation (ENSO) as well as the three largest volcanic eruptions since the mid 20th century on the dominating spatiotemporal co-variability patterns of daily surface air temperatures. Our results confirm the existence of global effects of ENSO which result in episodic breakdowns of the hierarchical organization of the global temperature field. This is associated with the emergence of strong teleconnections. At more regional scales, similar effects are found after major volcanic eruptions. Taken together, the resulting time-dependent patterns of network connectivity allow a tracing of the spatial extents of the dominating effects of both types of climate disruptions. We discuss possible links between these observations and general aspects of atmospheric circulation. Introduction The empirical analysis of climate data is fundamental to understand the evolution and develop more accurate methods for forecasting of climate phenomena like El Niño. Typically, such data sets comprise time series representing temperature, precipitation or other climate variables observed at distinct locations distributed around the globe. Their common properties include long-range spatial and often also temporal correlations (Fraedrich and Blender, 2003), interactions at and among multiple scales (Paluš, 2014) and nonlinearity (Dijkstra, 2013). With the Earth's surface being subdivided into regions for which individual "grid points" and associated localized records of climate variability are considered representative, the evolution of the climate system can be approximately described by a high-dimensional multivariate time series composed of a multitude of interdependent signals. While the analysis of such big climate data sets has been traditionally attempted by means of classical statistical approaches like empirical orthogonal function or maximum covariance analysis (von Storch and Zwiers, 2003), it has recently been realized that these methods exhibit fundamental intrinsic limitations, including their linearity and associated condition of pairwise orthogonal patterns (Gámez et al., 2004). As a consequence, the traditional view that the corresponding decompositions of global spatio-temporal covariability patterns actually provide dynamical (or at least statistical) modes that unambiguously coincide with specific key climatic processes has been abandoned (Monahan et al., 2009). Taken together, there is growing evidence that the application of traditional linear methods of signal processing and the climatic interpretation of their results are severely affected by the dynamical complexity of the involved processes. 
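To make the contrast with the network-based approach concrete, here is a minimal sketch (not from the paper) of the classical empirical orthogonal function decomposition mentioned above, computed as a singular value decomposition of the space-time anomaly matrix; the synthetic data and dimensions are purely illustrative.

```python
import numpy as np

# Minimal sketch of classical EOF analysis: SVD of the (time x grid point)
# anomaly matrix. Data dimensions and values are purely illustrative.
rng = np.random.default_rng(0)
n_time, n_grid = 365, 500
field = rng.standard_normal((n_time, n_grid))

anomalies = field - field.mean(axis=0)            # remove the climatological mean
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = vt                                         # spatial patterns (pairwise orthogonal by construction)
pcs = u * s                                       # corresponding principal-component time series
explained = s**2 / np.sum(s**2)                   # fraction of variance per mode
print(explained[:3])
```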
In recent years, complex network representations of climate variability have been developed (Tsonis et al., 2006; Donges et al., 2009a, b; Tsonis et al., 2011; Tsonis and Swanson, 2012; Steinhaeuser et al., 2012; Peron et al., 2014; Ciemer et al., 2017) and demonstrated to provide a suitable approach for relieving some of the aforementioned concerns (Donges et al., 2015b). In this nonlinear statistical framework, referred to as functional climate network analysis, the individual grid points or cells are considered as nodes of a spatially embedded graph. Connections among these nodes are established according to similarities between the individual (local) climate time series (Tsonis et al., 2006; Donges et al., 2009b; Donner et al., 2017). By construction, the network structures thus obtained highlight essential statistical interrelationships among spatio-temporal climate data (Donges et al., 2009b). The application of functional climate networks has already provided several important insights. For instance, centrality measures, such as betweenness centrality, have been found to serve as tracers of global circulation patterns in the atmosphere and oceans (Donges et al., 2009a). Moreover, climate networks have been used to identify dipole patterns which represent pressure anomalies of opposite polarity appearing in two different regions simultaneously (Kawale et al., 2011). The study of the coupling structure between interdependent climate variables (Donges et al., 2011), the temporal evolution and teleconnections of the North Atlantic Oscillation (Guez et al., 2012, 2013), the distinction of different types of El Niño phases (Radebach et al., 2013; Wiedermann et al., 2016) and the prediction of the latter (Ludescher et al., 2013, 2014) have also been subjects of corresponding recent investigations. Many of the aforementioned methodological achievements have been integrated in open source software packages (Donges et al., 2015a), contributing to the increasing use of functional network analysis in climatological studies (Donner et al., 2017). One rather fundamental property of large networks is their (possibly hierarchical) organization in terms of communities, an aspect that has also been addressed recently in the context of key patterns in climate data (Tsonis et al., 2011). Here, a community is a subset of densely connected nodes which exhibit only a few interactions with the rest of the network (Newman, 2006b; Fortunato, 2010). In a climate network context, communities would ideally have some climatological interpretation. Specifically, Tsonis et al. (2011) argued that each community in a climate network should be considered as a subsystem which operates relatively independently of the other communities. Besides corresponding connectivity structures in individual climate variables, community detection algorithms (Fortunato, 2010) can also be used to detect multi-variable clusters (Steinhaeuser et al., 2010). In this paper, we analyze global surface air temperature data in terms of functional climate networks and demonstrate the close relationship between El Niño and La Niña episodes as well as strong volcanic eruptions on the one hand, and temporal changes in the modular organization of the resulting networks on the other hand. For this purpose, we study the teleconnectivity structure in the climate system in terms of spatial fields of two network properties that represent the number of strong statistical connections, as well as the average spatial distance between the connected grid points. In addition, the associated temporal variations are traced by some scalar-valued global network characteristics.
Figure 1. Main regions of interest used within this paper. Sets of blue dots labelled with "El Chichon", "Agung" and "Pinatubo" indicate grid points within a 5° radius around the corresponding volcanoes. The numbers 3, 3.4 and 4 identify the corresponding Nino regions (cf. Tab. 1) commonly used for defining characteristic indices of ENSO variability. The region "ENSO-big" will be removed from the complete global data set when analyzing the spatial imprints of volcanic eruptions to ensure that ENSO-related effects are excluded.
The remainder of this paper is organized as follows: Sect. 2 provides brief information on the climatological background of ENSO and volcanic eruptions as the two types of major climatic disruptions studied in this work. The data and methodology used in this work are described in detail in Sect. 3. Finally, our results are presented and discussed in Sect. 4, followed by concluding remarks. El Niño-Southern Oscillation Among the dominant teleconnectivity patterns in the global climate system, the El Niño-Southern Oscillation (ENSO) is probably the most remarkable phenomenon in terms of both the magnitude of associated variations in sea-surface temperature (SST) and sea-level pressure, and its specific impacts on different aspects of regional climate variability worldwide (Trenberth, 1997). During the positive phase (El Niño) of this complex oscillation of the coupled atmosphere-ocean system in the tropical Pacific, the eastern tropical Pacific exhibits anomalous warming with respect to "normal" mean conditions, while the negative phase (La Niña) is characterized by a corresponding cooling. In comparison with the normal mean climatology, this surface temperature anomaly results in marked shifts of key atmospheric pressure systems, modifying the large-scale circulation and, thus, leading to prominent shifts of, e.g., precipitation patterns.
Table 1. Overview of the regions (with their latitude and longitude ranges) commonly used for defining characteristic temperature-based indices associated with ENSO variability. In addition, we include the definition of the "ENSO-big" region studied in this work, which corresponds to the region that is discarded in our analyses of the impacts of strong volcanic eruptions on global temperature teleconnectivity.
It has been shown that effects of both ENSO phases can be observed in remote regions including North and South America, Africa, the Indian subcontinent, and even Antarctica (Ropelewski and Halpert, 1987; Dai and Wigley, 2000; Neelin et al., 2003; Turner, 2004; Clarke, 2008; Sarachik and Cane, 2010). The long-term variability of ENSO is characterized by irregular oscillations with a period of 2 to 7 years and remarkable variations in the associated characteristic frequencies and amplitudes of both temperature and pressure anomalies. Following its prominent spatial structures in tropical SST and sea level pressure, ENSO is commonly traced by indices that capture the variability of the aforementioned observables in some key region of the tropical Pacific ocean.
Notably, a set of indices has been defined in terms of average SST anomalies taken over distinct regions in the eastern and central tropical Pacific, referred to as Nino1+2, Nino3, Nino4 and Nino3.4, respectively (Trenberth and Stepaniak, 2001) (see Fig. 1 and Tab. 1). In this work, we will utilize the so-called Ocean Niño Index (ONI) for differentiating between different phases of ENSO. It is defined as the running three-month mean SST anomaly for the Niño 3.4 region (5°N-5°S, 120°-170°W) in comparison with centered 30-year base periods that are updated every 5 years (NCEP, 2017). When the ONI exceeds 0.5 °C for at least five consecutive months, the corresponding situation is classified as an El Niño, and the magnitude of the ONI is taken as an indicator of the strength of the corresponding event. In turn, if the ONI drops below −0.5 °C for at least five consecutive months, this indicates a La Niña episode. In recent years, it has been recognized that the spatial patterns of SST anomalies commonly associated with El Niño (as well as La Niña) are far from being homogeneous across the set of observed events. Consequently, it has been suggested to further distinguish both phases into two respective flavours (see Wiedermann et al., 2016, and references therein). The first type is the classic or East Pacific (EP) El Niño (Rasmusson and Carpenter, 1982; Harrison and Larkin, 1998), which is localized in the eastern tropical Pacific and characterized by strong positive SST anomalies close to the western coast of South America. Opposed to this, the El Niño Modoki or Central Pacific (CP) El Niño exhibits marked SST anomalies in the central tropical Pacific close to the dateline. Notably, both spatial structures (EP and CP) can be observed in the context of La Niña, too. Noticing that there have been contradictory classifications in the literature for some past ENSO events, Wiedermann et al. (2016) recently presented a new indicator for the ENSO flavor based on functional climate networks. In the remainder of this paper, we will follow their classification, which is summarized in Tab. 2. Volcanic eruptions Besides distinct ENSO episodes and their known global climate impacts, another type of event that can substantially affect climate at large spatial and temporal scales is strong volcanic eruptions. Similar to El Niño and La Niña episodes, such events can result in large-scale spatially coherent cooling trends due to modifications of the radiation balance by changes in atmospheric chemistry and the shielding effect of volcanic aerosols in the stratosphere. Subsequently, such cooling can again cause changes of precipitation and temperature patterns from synoptic (weather) time scales to relatively persistent multi-annual effects (Robock, 2000) and even trigger long-lasting climate disruptions (Miller et al., 2012). In the past decades, several large volcanic eruptions have injected up to some millions of tons of sulfur dioxide into the atmosphere, which can become rapidly distributed around the globe once it has entered the stratosphere. In this study, we focus on the global effects of the three major volcanic eruptions during the second half of the 20th century.
Within this period, the largest and most influential event, the Mount Pinatubo eruption (McCormick et al., 1995), took place between April and September 1991 in the Philippines, followed by the Mount Agung eruption in Indonesia (February 1963 to January 1964) and the El Chichon eruption (March to September 1982) in Mexico (see Fig. 1). Data In this study, we use daily mean surface air temperature (SAT) data (at sigma level σ = 0.995) from the National Centers for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) Reanalysis I project (Kalnay et al., 1996; Kistler et al., 2001). The data cover the years 1948-2015 on a global grid with an equi-angular spatial resolution of 2.5° in both latitude and longitude, thus comprising 10,512 individual temperature time series. Note that we found the inclusion of the land areas important, hence we chose SAT instead of the aforementioned SST. In order to remove leading-order effects of seasonality in the temperature recordings, the long-term average temperatures for each calendar day of the year have been subtracted from the raw data independently for each grid point, resulting in so-called SAT anomalies. Equi-angular gridded data have, by construction, a higher density of grid points at the poles than around the equator, which would result in systematic biases of statistical characteristics overemphasizing the polar regions with apparently more data if not properly accounted for. For the latter purpose, area-weighted measures have been developed and subsequently applied in recent works (Tsonis et al., 2006; Heitzig et al., 2012; Wiedermann et al., 2013). As an alternative, we follow here the approach of Radebach et al. (2013), where the original data have been remapped onto a grid with a much higher spatial homogeneity. Specifically, we use an icosahedral grid as described by Heikes and Randall (1994), which finally leads to a decomposition of the Earth's surface into Voronoi cells of almost the same area. In the present case, the corresponding remapping procedure results in a set of N = 10,242 grid points that exhibit a narrowly peaked distribution of geodesic distances between direct neighbors. In Radebach et al. (2013), the time series associated with each new grid point have been determined based upon the values at the respective four surrounding grid points in the original equi-angular grid. In this work, we use a slightly different approach by taking the four closest points in space instead, which in some cases may deviate from the former setting. This modification is motivated by the fact that the consideration of the spatially closest "observational" values may provide a better approximation of climate variability at the new grid point. Moreover, these spatial neighbours can be determined efficiently using spatial search trees. Due to the commonly rather large spatial correlation length of the SAT field (as compared to other climate variables like precipitation) and its resulting spatial smoothness, we do not expect the time series resulting from both algorithmic variants to differ markedly. Finally, we note that when using the global data set as described above, the temporal correlations associated with the key ENSO region and the surrounding parts of the Pacific ocean are known to dominate climate variability globally.
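As an illustration of the preprocessing just described, the following is a minimal sketch (not the authors' published code) that removes the calendar-day climatology and remaps the equi-angular field onto a target grid by combining the four spatially closest source points found with a spatial search tree. All names are illustrative, and averaging the four neighbours is an assumption; the text only states that the four closest points are used.

```python
# Sketch: deseasonalization and nearest-neighbour remapping onto a new grid.
import numpy as np
from scipy.spatial import cKDTree

def daily_anomalies(sat, day_of_year):
    """sat: (n_days, n_points) raw SAT; day_of_year: (n_days,) integers 1..366."""
    anom = sat.astype(float).copy()
    for d in np.unique(day_of_year):
        sel = day_of_year == d
        anom[sel] -= sat[sel].mean(axis=0)   # subtract long-term mean for this calendar day
    return anom

def to_unit_vectors(lat_deg, lon_deg):
    """3-D unit vectors; chordal distance is monotonic in geodesic distance."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def remap(anom, src_lat, src_lon, dst_lat, dst_lon, k=4):
    """Combine the k spatially closest source series for every target grid point
    (simple averaging is an assumption)."""
    tree = cKDTree(to_unit_vectors(src_lat, src_lon))
    _, idx = tree.query(to_unit_vectors(dst_lat, dst_lon), k=k)
    return anom[:, idx].mean(axis=2)          # shape (n_days, n_target_points)
```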
This leads to undesired outcomes when aiming to resolve the effects of individual volcanic eruptions on global temperature patterns, since they might be masked by ENSO variability, especially in cases where the corresponding effects take place simultaneously with some El Niño (or La Niña) event. In fact, strong tropical volcanic eruptions have even been suggested to serve as triggers for El Niño phases (Khodri et al., 2017). In order to account for the problem of temporal co-occurrence between the effects of volcanic eruptions and ENSO events, we are going to use the full set of data when studying the effects of ENSO on global temperature teleconnectivity, while excluding the main ENSO region and its surroundings (referred to as "ENSO-big" in Fig. 1 and Tab. 1) when studying the impacts of specific volcanic eruptions. Note that this excluded region has been chosen rather large on purpose (as an outcome of more systematic studies with variable regions to be discarded, which are not further discussed here for brevity), so as to ensure a separation between the direct ENSO impacts and the effects of volcanic eruptions that is as complete as possible, especially in cases of simultaneous events. In fact, when considering the full global SAT data set in the context of the impacts of volcanic eruptions, only the signatures of the Mount Pinatubo eruption are clearly visible (Radebach et al., 2013). Alternative strategies for reducing the impact of ENSO variability in order to highlight the climate effects of other phenomena like volcanic eruptions might include conditioning out the effect of the ONI or other representatives of the ENSO state on the local SAT variability at each grid point prior to network analysis. We leave further investigations making use of such approaches as a subject of future research. Functional climate network analysis Functional climate networks provide a coarse-grained spatial representation of the co-variability structure among globally or regionally distributed measurements of some climate variables (Tsonis et al., 2006; Tsonis and Swanson, 2012; Donner et al., 2017). Starting from a set of records of the variable of interest, the geographical positions associated with the individual time series are identified with the N nodes of some abstract network embedded on the Earth's surface. The connectivity of this network is then formed by establishing links between pairs of these nodes that fulfill some statistical similarity criterion (see below). Thus, links in such climate networks represent strong statistical associations between climate variability at different points in space. These associations may potentially indicate certain climatic processes or mechanisms interlinking the variability at the corresponding locations. Hence, the resulting linkage structure is referred to as functional connectivity. Like other undirected and unweighted networks, functional climate networks are conveniently represented in terms of their adjacency matrix A = (A_ij), where A_ij = 1 indicates the existence of a link between node i and node j, while A_ij = 0 corresponds to the absence of such a link. In our specific case, the matrix A is time-dependent, since the spatial co-variability structure of the SAT field changes with time. In such a case, we speak of an evolving climate network (Radebach et al., 2013).
Network generation According to our considerations presented above, we take the grid points of the icosahedral grid constructed by remapping the original NCEP/NCAR reanalysis data as nodes of an evolving SAT network (i.e., we consider a fixed node set that does not change over time). For establishing the time-dependent link set, we consider sliding windows in time covering a set of days {d} = [d_0, d_0 + Δd] with a width of Δd = 365 days and a mutual offset of 183 days between subsequent windows. Each of these windows is labeled with the corresponding midpoint date d_mid = d_0 + Δd/2. Our choice of 1-year windows is motivated by the fact that seasonality may not be completely accounted for when using shorter windows, since the consideration of SAT anomalies as defined above does not exclude seasonality in the local higher-order statistical characteristics of SAT after correcting only for the mean climatology. Moreover, due to the different distributions of land and water masses on both hemispheres of the Earth (with different persistence properties of SAT), the resulting spatial covariability structure manifested in the climate network topology may undergo seasonal variations as well, which could affect the results of our analysis presented in this work. From the seasonally adjusted temperature data at each grid point during a time window (corrected for the window-wise mean), T_i({d}) (with i denoting the respective grid point), we compute the matrix of pairwise Pearson's correlation coefficients c_ij = (⟨T_i T_j⟩_{d} − ⟨T_i⟩_{d} ⟨T_j⟩_{d}) / (σ_{d}(T_i) σ_{d}(T_j)), where ⟨·⟩_{d} and σ_{d}(·) denote the mean value and standard deviation of the respective variable taken over the time window {d}. From this matrix, we identify the entries (i.e., pairs of nodes) with the highest absolute values of mutual correlations |c_ij|. Specifically, in this work, we consider the 0.5% strongest pairwise statistical similarities among all nodes per window, i.e., A_ij(d_mid) = Θ(|c_ij| − q_|c|,0.995(d_mid)) (1 − δ_ij), where Θ(·) is the Heaviside function, q_|c|,0.995(d_mid) is the 99.5-percentile of the distribution of absolute correlation values for the time window centered at d_mid, and δ_ij denotes Kronecker's delta. A(d_mid) is now the mathematical representation of our evolving climate network. In the following, we omit the explicit time dependence of A (also in all network properties) for brevity. Note that according to our construction, A is symmetric. Node degree The degree k_i of a node i is defined as the number of links connected to i, k_i = Σ_j A_ij. It represents how densely a node is connected within the network. In the case of a functional climate network, the degree can thus be considered as a proxy for the importance (or centrality) of a certain grid point in the spatio-temporal correlation structure of the variable of interest. In the following, we refer to network measures like the degree, which provide a characteristic value specific to each individual node i, as local network characteristics. We call the full set of their values taken together with the associated spatial positions of all nodes a field. Average link distance Another local network characteristic that defines a field of values upon a functional climate network is the average link distance (Donner et al., 2017) of a node i, which is defined as d_i = (1/k_i) Σ_j A_ij d_ij, with d_ij being the normalized spatial distance between two nodes i and j.
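A minimal sketch of the window-wise network construction just described (Pearson correlations of the window-demeaned anomalies, thresholding at the 99.5th percentile of |c_ij|, and the resulting node degrees). The names are illustrative and not taken from the original repository; the average link distance additionally needs the geodesic distances introduced next.

```python
# Sketch: one window's adjacency matrix at a fixed link density of 0.5%.
import numpy as np

def window_adjacency(anom_window, link_density=0.005):
    """anom_window: (n_days_in_window, N) seasonally adjusted SAT for one time window."""
    x = anom_window - anom_window.mean(axis=0)      # window-wise mean correction (corrcoef also centers)
    c = np.corrcoef(x, rowvar=False)                # pairwise Pearson correlations c_ij
    np.fill_diagonal(c, 0.0)                        # no self-links (Kronecker delta term)
    absc = np.abs(c)
    thr = np.quantile(absc[np.triu_indices_from(absc, k=1)], 1.0 - link_density)
    A = (absc >= thr).astype(int)                   # keep the 0.5 % strongest |c_ij|
    np.fill_diagonal(A, 0)
    return A

def degree(A):
    """Node degree k_i = sum_j A_ij."""
    return A.sum(axis=1)
```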
As a proper normalization, we choose here the largest possible geodesic (shortest-path) distance between two points on the Earth's surface, i.e., half of the circumference of the Earth, u_Earth, so that d_ij = 2 D_ij / u_Earth, where D_ij is the geodesic distance between nodes i and j. A low average link distance indicates that i has very localized connections, while a high value points to a node with long-ranging spatial connections. This measure is closely related to the total distance of a node with respect to the rest of the network as previously used in the context of functional climate networks. We emphasize that the average link distance must not be confused with the conceptually related average path length, where d_ij, as defined above, would be replaced by the minimum number l_ij of links separating two nodes i and j in the network (i.e., where i and j do not need to be directly connected). Taking the average of d_i over all nodes i of the network gives the global average link distance ⟨d⟩ = (1/N) Σ_i d_i. Transitivity The network transitivity quantifies how strongly the connectivity of a network is clustered. Mathematically, it describes the degree to which the network's adjacency property is transitive, i.e., the fraction of cases in which the presence of two links between nodes i and j as well as i and k is accompanied by a third link between j and k. Mathematically, this is expressed as T = Σ_{i,j,k} A_ij A_jk A_ki / Σ_i k_i (k_i − 1) (Boccaletti et al., 2006; Radebach et al., 2013). Like the global average link distance ⟨d⟩ (but unlike degree and average link distance), T does not define a field, but returns one single scalar value for each network. Modularity The concept of modularity was introduced into network science by Newman and Girvan (2004) to measure the degree of heterogeneity within the network structure, i.e., how well different groups of nodes can be distinguished that are densely connected within each group, but only sparsely among different groups. In the case of a climate network, modularity provides a single scalar-valued characteristic property that discriminates between a relatively homogeneous link placement (low modularity) and the existence of (commonly regionally confined) clusters of nodes (time series) that exhibit relatively coherent variability (high modularity). The definition of modularity relies upon a partitioning of the network into meaningful subgraphs. Up to a multiplicative constant, it counts how many links are clustered within these subgraphs and compares this value with the expected number of links inside these subgraphs if the network were random, Q = (1/(2m)) Σ_{i,j} (A_ij − k_i k_j / (2m)) Δ_ij, where m is the total number of links and Δ_ij is an indicator function informing whether or not two nodes i and j belong to the same subgraph in the considered partition. The individual subgraphs in the partition that maximizes the modularity Q are called communities. The higher the modularity, the more split-up (or modular) the network. Accordingly, community detection by modularity maximization has become a common tool for cluster analysis. While the above definition of modularity is mathematically precise, its maximization is a computationally hard problem and can in practice only be addressed by making use of suitable heuristics. Various estimation algorithms have been proposed (Fortunato, 2010). It should be emphasized that many of them can result in suboptimal solutions. Thus, a good choice of the algorithm is important for obtaining reliable results. In this work, we employ the WalkTrap method introduced by Pons and Latapy (2006).
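The node-wise and global measures defined above can be sketched as follows, assuming the adjacency matrix from the previous snippet and arrays of node latitudes and longitudes. The haversine-based distances and the handling of isolated nodes are assumptions rather than details given in the paper.

```python
# Sketch: average link distance d_i, its global mean <d>, and network transitivity T.
import numpy as np

EARTH_CIRCUMFERENCE_KM = 40_030.0   # approximately 2*pi*6371 km

def geodesic_km(lat_deg, lon_deg):
    """Pairwise great-circle distances between all grid points (haversine formula)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    dlat = lat[:, None] - lat[None, :]
    dlon = lon[:, None] - lon[None, :]
    h = np.sin(dlat / 2) ** 2 + np.cos(lat[:, None]) * np.cos(lat[None, :]) * np.sin(dlon / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(h))

def average_link_distance(A, lat_deg, lon_deg):
    d = 2.0 * geodesic_km(lat_deg, lon_deg) / EARTH_CIRCUMFERENCE_KM   # normalized d_ij in [0, 1]
    k = A.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        d_i = (A * d).sum(axis=1) / k          # NaN for isolated nodes (assumed to be ignored)
    return d_i, np.nanmean(d_i)                # node field and global mean <d>

def transitivity(A):
    """Fraction of connected triples that are closed into triangles."""
    A = A.astype(float)
    closed = np.trace(A @ A @ A)               # 6 * number of triangles
    triples = (A @ A).sum() - np.trace(A @ A)  # 2 * number of connected triples
    return closed / triples if triples else 0.0
```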
By comparing the results provided by this algorithm with those of other methods (cf. Appendix A), we have found that the WalkTrap solution exhibits comparably high values of modularity and relatively stable values in the case of strongly overlapping windows (i.e., in cases where the considered data do not change much). Regionalization of field measures As detailed above, node degree and average link distance constitute two important local network characteristics. In some of our following investigations, it will be useful to study the associated spatial fields in full detail. However, when focusing on the specific impacts of certain climate phenomena, it can be beneficial to perform a regionalization of these measures. Specifically, for a subset of nodes X ⊆ {1, …, N} representing a certain part of the globe, a regionalized version of the degree would be given as k_X = (1/|X|) Σ_{i∈X} k_i, where |X| denotes the number of nodes in the considered set. As a consequence, we can not only assign a degree value to an individual node, but also (as a mean degree) to a subgraph. Note that this regionalized degree differs from the concepts of cross-degree and cross-link density between subgraphs (Donges et al., 2011), since, unlike k_X, the latter exclude contributions due to links between nodes within X in their definition. For the average link distance, the corresponding regionalized property d_X can be defined in full analogy. Below, we detail some reasonable choices for X to be utilized in the context of the present work, which focus on specific spatially contiguous regions of the Earth's surface that are associated with ENSO or volcanoes with strong past eruptions. El Niño-Southern Oscillation regions As already detailed in Sect. 2.1, there exist a variety of indices that measure the "strength" of a particular ENSO state. Among others, four Nino regions (1+2, 3, 4 and 3.4) have been defined to capture SST anomalies in different parts of the tropical Pacific. The regionalization approach described above can be applied to these four regions by taking all nodes located within the respective spatial domains and applying the above definition of k_X. Thereby, we obtain a set of eight new scalar-valued characteristics: k_Nino1+2, d_Nino1+2, k_Nino3, d_Nino3, k_Nino4, d_Nino4, k_Nino3.4 and d_Nino3.4. In order to reduce this amount of information, in what follows, we will not make use of the (anyway less frequently studied) Nino1+2 region, but focus on the Nino3.4 region (which is also the basis of the nowadays most common ONI index) and its two contributors, Nino3 and Nino4. Volcano regions The locations of the three volcanoes responsible for the largest eruptions of the recent decades are shown in Fig. 1. To obtain interpretable information on the (tele-)connectivity induced by these eruptions, we need to integrate the connectivity properties of a sufficiently large number of meaningfully chosen grid cells. As a first attempt, we therefore take the area within a radius of 5° around each volcano as the basis for the regionalization procedure of k_i. This leads to the three observables k_Pinatubo, k_Agung and k_Chichon. For the average link distance, one could again proceed in a similar way. However, the aforementioned choice might not be optimal, since symmetric spatial regions in the near-field do not necessarily exhibit the strongest persistent temperature effects after an eruption.
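A possible sketch of the regionalization described above: build a boolean node mask, either for a latitude/longitude box (Nino regions) or for a fixed angular radius around a point (volcano regions), and average a node-wise field over it. The Nino3.4 bounds repeat those quoted earlier; the treatment of box edges, the dateline handling and the Pinatubo coordinates are assumptions introduced only for illustration.

```python
# Sketch: regionalized degree k_X (or d_X) over box- and radius-shaped node subsets.
import numpy as np

def region_mask(lat_deg, lon_deg, lat_min, lat_max, lon_min, lon_max):
    """Boolean mask of nodes inside a lat/lon box (longitudes in 0-360 convention)."""
    lon = np.mod(lon_deg, 360.0)
    lo, hi = np.mod(lon_min, 360.0), np.mod(lon_max, 360.0)
    in_lon = (lon >= lo) & (lon <= hi) if lo <= hi else (lon >= lo) | (lon <= hi)
    return (lat_deg >= lat_min) & (lat_deg <= lat_max) & in_lon

def radius_mask(lat_deg, lon_deg, centre_lat, centre_lon, radius_deg=5.0):
    """Boolean mask of nodes whose central angle to the centre is at most radius_deg."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    clat, clon = np.radians(centre_lat), np.radians(centre_lon)
    cosang = np.sin(lat) * np.sin(clat) + np.cos(lat) * np.cos(clat) * np.cos(lon - clon)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= radius_deg

def regionalized(field, mask):
    """Mean of a node-wise field (degree or average link distance) over the subset X."""
    return field[mask].mean()

# Examples (Mount Pinatubo is at roughly 15.1 N, 120.4 E; Nino3.4 is 5N-5S, 170W-120W):
# k_nino34   = regionalized(k, region_mask(lat, lon, -5.0, 5.0, 190.0, 240.0))
# k_pinatubo = regionalized(k, radius_mask(lat, lon, 15.1, 120.4))
```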
Instead, the specific local meteorological conditions (especially wind fields) during the eruption period largely control the three-dimensional patterns of atmospheric aerosol concentrations and, hence, the position of the strongest mid-term cooling to be expected. Accordingly, the induced teleconnectivity can be more evident within regions that have been shifted with respect to the locations of the volcanoes. To account for this, we also calculate regionalized degrees for accordingly shifted regions (see Sect. 4.2 for details), denoted as k_Pinatubo, k_Agung and k_Chichon for the respective shifted regions. Here, the specific regions have been selected according to an examination of the resulting degree fields of the SAT networks for time windows succeeding the individual eruptions and the corresponding wind fields, seeking the timing and position of the strongest anomalies in the degree field that could be attributed to each eruption (see below). Note that although a volcanic eruption may start relatively abruptly, its larger-scale atmospheric effects commonly become effective only with a considerable delay of several months or more (Robock, 2000; McCormick et al., 1995). Results In the following, we present the results of our functional network analysis of global SAT patterns with a focus on the associated imprints of ENSO. Subsequently, we turn to analyzing and discussing the excess connectivity induced by strong volcanic eruptions. El Niño-Southern Oscillation Let us start by investigating the global effects of ENSO on the spatio-temporal co-variability structure of global SAT. From a complex network perspective, this problem has already been addressed in a variety of previous studies (e.g. Yamasaki et al., 2008; Gozolchiani et al., 2008; Radebach et al., 2013; Wiedermann et al., 2016; Fan et al., 2017, and various others), making use of different approaches for constructing network structures from global climate data. However, none of these works has considered the complementarity between topological and spatial network properties in great detail, nor utilized the concepts of modularity and global average link distance that constitute key aspects of this paper and provide important new insights as demonstrated in the following. Global network properties The network transitivity T has been shown by Wiedermann et al. (2016) to systematically discriminate between the EP and CP flavours of both El Niño and La Niña. While this reference used an area-weighted version of T and included information on the total pairwise correlation strength instead of just binary adjacency information, we follow here the approach of Radebach et al. (2013) in using remapping onto an icosahedral grid. Figure 2a shows the corresponding results obtained using our slightly modified data set, which are qualitatively almost indistinguishable from those of the two aforementioned studies, as expected. In order to further quantify the strength of teleconnectivity in the global SAT field, the network modularity Q provides a prospective candidate measure that has not yet been exploited for this purpose in previous studies. Recall that a high modularity indicates a fragmented network, whereas low values would point to a relatively homogeneous connectivity structure of the network as a whole. Hence, a marked decrease in modularity could indicate an increase in the degree of organization of the global SAT network, i.e., a tendency towards more balanced co-variability in global temperatures.
Figure 2. Time evolution of (a) the network transitivity T, (b) the modularity Q and (c) the global average link distance ⟨d⟩ over the study period.
Figure 2b shows that most time intervals that are characterized by elevated values of network transitivity actually exhibit a marked reduction in modularity. Consistent with previous findings of Radebach et al. (2013), most of these time windows in fact coincide with either some El Niño or La Niña phase, indicating again the global impact of these episodes in terms of equilibrating spatial co-variability in the Earth's SAT field. This can be considered as an expected signature of emerging teleconnectivity. Note that, taken alone, this process would not necessarily imply a stronger synchronization (as studied by, e.g., Maraun and Kurths (2005)) between climate variability in distinct regions, which would be reflected by higher absolute correlation values. Specifically, in this work, we use a fixed link density of 0.5% in all window-specific climate networks and thus cannot make any statements about the overall strength of correlations. However, following previous results by Radebach et al. (2013), we may actually expect that the correlation threshold q_|c|,0.995 used for establishing network connectivity in this work exhibits maxima whenever T shows a peak, thereby supporting the hypothesis of El Niño and La Niña episodes synchronizing global SAT variability by establishing teleconnections. Regarding the latter observation, it is remarkable that previous works by other authors rather reported a reduction of connectivity associated with a breakdown of synchronization due to the large-scale climate disruption triggered by El Niño events. In fact, this observation has recently been used to develop network-based forecasting strategies for El Niño (Ludescher et al., 2013, 2014). However, the apparent contradiction between the latter results and our observations can be resolved when taking the different approaches of network construction used in the respective works into account. Beyond their overall large-scale similarity, the temporal variability profiles of transitivity and modularity also exhibit some important differences. In particular, the strong 1982/83 El Niño is represented as a single long episode of reduced modularity values while being split into two rather distinct peaks in transitivity (see Fig. 2a and 2b). Given the known seasonal profile of El Niño peaking around Christmas, it is remarkable that the ONI index remained high during a quite long period of time, indicating a single extended event even despite its temporary decay captured by T, but not quite by Q. This underlines that both measures actually capture different aspects of network organization and provide complementary information. Another notable observation is related to the abrupt shift from El Niño to La Niña conditions in summer 1998, leading to a very fast reorganization of the global SAT field. The latter transition is reflected by some negative anomaly of T in summer 1998 in comparison with the "normal" background values of this measure, which presents a unique feature in the time evolution of network transitivity over the last decades that is not accompanied by any corresponding anomaly in Q.
Taken together, modularity and transitivity evolve similarly at larger time scales, but provide complementary viewpoints. High transitivity commonly coincides with the temporary appearance of densely connected structures in the functional climate network, which are typically well localized in space (Radebach et al., 2013). In turn, modularity captures the global connectivity pattern rather than primarily local features. Specifically, a low modularity value actually highlights more global connections in the climate network. While network transitivity and modularity present two key topological network characteristics, functional climate networks are systems embedded in geographical space. Thus, the spatial placement of nodes and links (which is disregarded by topological characteristics) can play a pivotal role in network structure formation (Radebach et al., 2013). In order to address this aspect, we present the temporal evolution of the global average link distance ⟨d⟩ in Fig. 2c. Notably, this measure exhibits more irregular variability, with a less clear distinction between "background level" and "anomalies" associated with different types of climate disruptions than the two previously studied topological characteristics. Yet, the general behaviour of ⟨d⟩ resembles that of network transitivity in the sense that ENSO-related peaks often co-occur in both measures. This indicates that strong El Niño and La Niña episodes do not exclusively trigger short-range (localized) connectivity (high T), but also global teleconnectivity (high ⟨d⟩), which is in line with contemporary knowledge on the large-scale impacts of both types of ENSO phases. Notably, this result is in agreement with previous qualitative results of Radebach et al. (2013) on the link distance distribution of global SAT networks. From the results discussed above, we tentatively conclude that in order to distinguish globally influential ENSO events from episodes of minor (or more regional) relevance, a combination of modularity and average link distance can be useful, taking a holistic view in studying the differential imprints of different types of ENSO phases. We will return to this strategy when discussing the effects of volcanic eruptions on network organization at a global scale. Spatial patterns of network connectivity The above analysis of global network properties has largely confirmed some known effects of certain ENSO phases on the spatial co-variability structure of the global SAT field. Drawing upon the insight that topological and spatial network measures can provide different perspectives on the corresponding network patterns, we now turn to investigating the geographical characteristics of the generated functional climate networks. Specifically, following recent observations that climate network properties distinguish between the EP and CP flavours of both El Niño and La Niña (Wiedermann et al., 2016), we are interested in the question of how the associated (tele-)connectivity structures are manifested in the respective spatial fields of degree and average link distance. For this purpose, Fig. 3 shows composite plots of the spatial patterns exhibited by both network properties during the different types of ENSO phases, thereby averaging the local network properties over all time windows that are classified as showing either of these situations (see Tab. 2). The left panels of Fig. 3 display the respective mean degree fields for the different types of ENSO periods.
As expected, we observe a particularly strong deviation from a homogeneous pattern during EP El Niños (Fig. 3a), while the degrees in the eastern-to-central tropical Pacific are only slightly larger than in the rest of the network during time windows without El Niño or La Niña conditions (Fig. 3i). This general behaviour is expected from previous studies (Wiedermann et al., 2016). Still, the observed degree patterns alone do not allow us to distinguish between a local and a global phenomenon. For this purpose, the right panels of Fig. 3 show the corresponding mean average link distance fields for each type of situation. Elevated values of this measure in the typical ENSO region are present for all four possible types of episodes, indicating that both flavours of El Niño and La Niña actually generate additional connections in the tropical Pacific that span relatively large distances. Analyzing the composite maps of the average link distance in more detail, it is important to note that beyond the ENSO region itself, additional parts of the globe exhibit elevated values. This indicates the presence of localized teleconnections that possibly link climate variability in the latter regions with ENSO. Specifically, EP El Niños (Fig. 3b) exhibit such teleconnections with Indonesia and Western Africa, which are also recovered for EP La Niñas (Fig. 3f). For CP El Niños (Fig. 3d), the d_i field highlights a weak connection with Western Africa, but none with Indonesia. Similar but still weaker teleconnections can be observed for CP La Niñas (Fig. 3h). Among the aforementioned patterns, the apparent teleconnection with Indonesia present during EP events but not during their CP counterparts is particularly interesting, as it is localized in the westernmost tropical Pacific. Thus, it connects the eastern and western Pacific while not leading to marked long-distance connections in the central Pacific close to the dateline. One appealing explanation of this finding could be that the corresponding link is mediated via the Walker circulation (Ashok and Yamagata, 2009) and, thus, via airflow at higher altitudes rather than near-surface atmospheric circulation. However, it has to be noted that our analysis is based on cross-correlations only. The values can be severely affected by distinct temporal persistence properties of SAT in the eastern and western tropical Pacific, as pointed out by recent studies making use of modern causal inference methods (Balasis et al., 2013; Runge et al., 2014). Accounting for this effect in terms of replacing the correlation values by associated significance levels in the network generation step (Paluš et al., 2011) could provide a useful yet computationally demanding avenue for future research on this topic. From an impact perspective, the teleconnection suggested by our results is compatible with the documented increased likelihood of droughts in Indonesia during El Niño events (Diaz et al., 2001). The apparent teleconnection with Western Africa spans a rather large spatial distance (about one third of the globe). In this context, Joly and Voldoire (2009) noted that "a significant part of the West African monsoon (WAM) interannual variability can be explained by the remote influence of El Niño-Southern Oscillation (ENSO)." This previously reported teleconnection could be responsible for the elevated average link distance over Western Africa especially during EP El Niños.
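The composites discussed above amount to a simple conditional average of the per-window fields over the windows classified as a given ENSO type, as sketched below. The per-window label array and its string values are placeholders assumed for illustration (e.g., following a classification such as Tab. 2).

```python
# Sketch: composite (mean) field of a node-wise measure over selected ENSO windows.
import numpy as np

def composite(fields, labels, which):
    """fields: (n_windows, N) array of a node-wise network measure; labels: one string per window."""
    sel = np.asarray(labels) == which
    return fields[sel].mean(axis=0)

# ep_elnino_degree_map = composite(degree_fields, labels, "EP El Nino")
```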
In general, strong correlations are more likely to occur within the tropics than between the tropics and the extratropics, which is mainly due to the cellular structure of the meridional atmospheric circulation that effectively decouples tropics and extratropics. In this regard, the omnipresent slightly elevated average link distance values in the polar regions are most likely data artifacts not corrected by our remapping procedure rather than indications of actual teleconnections. Regionalized network characteristics Global and local climate network properties as discussed above still provide only incomplete information on the effects of climate variability in different parts of the ENSO region on global SAT. To obtain further insights into this aspect, we now turn to analyzing the regionalized field measures introduced in Sect. 3.3 and study the specific connectivity associated with the Nino3.4, Nino3 and Nino4 regions in terms of degree and average link distance. The corresponding results are summarized in Fig. 4. We observe that the relative magnitude of variations of regionalized degree and average link distance is even larger than that of the global network properties transitivity, modularity and global average link distance discussed above. All measures exhibit episodes of very small values as opposed to episodes with much larger values, the latter often coinciding with El Niño and La Niña phases. Since the corresponding regions have been previously chosen for defining ENSO-specific indices, this result was to be expected. Most importantly, degree- and average link distance-based characteristics exhibit strong positive correlations. Notably, for climatic events with a predominantly local structure, we would expect a strong increase of k_i but only a weaker increase of d_i in the region under study. Hence, our corresponding observations underline that ENSO-related climate impacts are not confined to the vicinity of the ENSO region, but are controlled by large-scale teleconnections. Since the different ENSO regions show partial overlap (cf. Fig. 1), the results obtained for the individual regions exhibit a high degree of similarity. However, regarding specific El Niño or La Niña episodes, comparing the corresponding signatures for the Nino3 and Nino4 regions still allows attributing these events to the more East Pacific or Central Pacific types. For example, the strong 1997/98 El Niño is reflected by very high values of the regionalized degree for the Nino3 and Nino3.4 regions, but relatively weak signatures in the more western Nino4 region, which is consistent with its classification as an EP type event. Examining the time evolution of all six regionalized network measures in some detail, it is notable that between 1978 and 1982, there has been considerable variability in all measures, pointing towards an episodic presence of teleconnections even though none of the time windows was classified as an El Niño or La Niña episode according to the ONI. Moreover, we find that before the year 2000, clear peaks alternating with periods of low values can always be observed in all properties. In turn, during roughly the last 15 years, we rather find strong variability without any low background level, with peaks occurring almost annually, with the exception of 2013 and 2014.
This change in the overall temporal variability pattern of our regionalized network measures might point to some fundamental changes in the spatiotemporal organization of global SAT, either due to some not yet identified mode of natural variability or as a result of external interference. An attribution of this observation is, however, beyond the scope of the present work. Volcanic eruptions Besides ENSO variability, strong volcanic eruptions have been identified as causes of marked disruptions in climate network properties in earlier studies (Radebach et al., 2013). In this context, the application of the complementary viewpoints used in this work for further characterizing the impacts of such eruptions promises interesting additional insights. Regarding the global network properties, let us turn back to Fig. 2. As already emphasized in our discussion of the corresponding imprints of different ENSO phases, anomalies in transitivity and modularity need to be interpreted differently in terms of global versus more regional changes in climate network connectivity. While EP El Niños and La Niñas (but not their CP counterparts) consistently show peaks in transitivity coinciding with breakdowns in modularity, a similar signature has been found in the aftermath of the Mount Pinatubo eruption, suggesting that this event has affected the climate system globally. However, when comparing these topological network characteristics with the spatial network property of global average link distance ⟨d⟩, we find a marked difference. Specifically, for ENSO-related climate disruptions, both ⟨d⟩ and T show the same direction of changes (i.e., peaks) with only a few exceptions. In turn, we observe an opposite behaviour of both measures in 1993, which corresponds to the time windows where the cooling effects following the Mount Pinatubo eruption should have taken their maximum (McCormick et al., 1995). Hence, unlike for ENSO-related disruptions, the peak in transitivity and simultaneous drop in ⟨d⟩ indicate that the effects of the volcanic eruptions have been rather regionally confined. The latter is consistent with the hypothesis of elevated correlations in the region that has been most directly affected by the associated cooling trend following the eruption. Based on this observation, we suggest that using the global average link distance in conjunction with network transitivity and modularity enables us to discern disruptive events with global effects (strong ENSO phases) from those exhibiting more regional impacts (volcanic eruptions). In general, one notable difference in comparison with ENSO-related impacts is that large-scale effects of volcanic eruptions on global SAT teleconnectivity can be observed only after a sufficiently large amount of aerosols has entered the stratosphere (Robock, 2000). Accompanying the resulting time shift between trigger event and response, we may also need to consider a spatial shift of the most affected region as compared to the location of the volcano. In the following, we apply the regionalization procedure described in Sect. 3.3.2 to study the impacts of the Mount Pinatubo, Mount Agung and El Chichon eruptions. In order to avoid interference with the effects of ENSO events, the ENSO-big region depicted in Fig. 1 is excluded from the corresponding computations. The results obtained from this analysis are summarized in Fig. 5.
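The exclusion step just mentioned can be sketched as follows, reusing the box-mask helper from the earlier snippet. The latitude/longitude bounds of the "ENSO-big" box used here are placeholders only, since the corresponding entries of Tab. 1 are not reproduced in this text.

```python
# Sketch: drop the "ENSO-big" nodes before building the networks used for the volcano analysis.
import numpy as np

ENSO_BIG = dict(lat_min=-30.0, lat_max=30.0, lon_min=120.0, lon_max=290.0)  # placeholder bounds

keep = ~region_mask(lat, lon, **ENSO_BIG)        # boolean mask of nodes outside the box
anom_novolcano = anom[:, keep]                   # anomaly array without the ENSO-big nodes
A_volcano = window_adjacency(anom_novolcano)     # network used when studying eruptions
```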
The largest of the three considered eruptions (Mount Pinatubo) had a global cooling effect and has left clearly visible signatures in all considered global network measures, as discussed above. Some months after the eruption, a large region of elevated network connectivity has been established, which covers essentially all of the western tropical Pacific (Fig. 5c). The temporal evolution of the average degree in the region around Mount Pinatubo displays an abrupt rise about half a year after the eruption, then a constantly high value for about one year (the common residence time of volcanic aerosols in the stratosphere) before dropping back to its previous level (Fig. 5a). The region with the highest degrees is shifted northward with respect to the location of the volcano (Fig. 5c). When computing the average degree for this region, we observe an even stronger rise of the regionalized degree than for the region surrounding the volcano (Fig. 5b). The Mount Agung eruption exhibits similar, but weaker, patterns in the respective region (Fig. 5f). However, the region with the highest degree is shifted south-westward. The average degree in the region surrounding Mount Agung only shows weak changes after the eruption (Fig. 5f), as opposed to a somewhat sharper increase in the shifted region, with the peak effect occurring significantly faster after the beginning of the eruption than in the case of the Mount Pinatubo eruption (Fig. 5e). However, it should be noted that we can already observe the beginning of some upward trend in the regionalized degree before the actual eruption, pointing to a possible interference with normal natural variability. Unlike for the two other volcanic eruptions, the degree field in the period succeeding the El Chichon eruption showed hardly any marked changes (Fig. 5i). Consequently, we also do not observe any marked signature in the temporal variability profile of the regionalized degree in the surroundings of the volcano (Fig. 5g). Instead of a peak shortly after the eruption, we actually find a clear drop of the average degree. However, given that El Chichon is located relatively close to the extended ENSO region, we cannot rule out that this could be an effect of the strong El Niño event occurring shortly after the eruption and possibly even being partially triggered by the latter (Khodri et al., 2017). In general, previous studies indicate that the El Chichon eruption caused a much weaker summer cooling than the Mount Agung eruption (Man et al., 2014), which could also explain the absence of a corresponding signature in our analysis. Conclusions We have used functional climate networks constructed from spatial correlations of daily global surface air temperature (SAT) anomalies to analyze the global impact and teleconnections of past El Niño and La Niña events as well as volcanic eruptions. By making use of the global network property of modularity, we have found that at least the East Pacific flavours of such events lead to a global reconfiguration of SAT variations. Considering the global average link distance as a complementary spatial network characteristic, we have identified distinct qualitative differences between the imprints of these ENSO periods and the Mount Pinatubo eruption in global SAT patterns.
Figure 5. Time series of regional mean degree and associated degree field (excluding the ENSO-big region indicated by white color) for the three main volcanic eruptions during the study period: (a-c) Mount Pinatubo, (d-f) Mount Agung, (g-i) El Chichon. In the degree maps shown in panels (c), (f) and (i), blue dots mark grid points within a radius of 5° around each volcano, which have been used to define the regionalized degrees shown in panels (a), (d) and (g), respectively. Red dots indicate spatially shifted regions of the same size where the largest changes to the degree field have been observed. These regions serve as the basis for computing the regionalized degrees shown in panels (b), (e) and (h), respectively. Purple vertical lines indicate the timing of the respective eruptions, whereas green vertical lines indicate the midpoints of the time windows exhibiting the strongest signature in the regionalized network properties. The time series have been restricted to ±10 years around the date of the respective eruption. Background colours indicate the corresponding ENSO strength as in Fig. 2.
Using composites of the degree and average link distance fields, we have identified hallmarks of distinct ENSO teleconnections in the climate network structure, especially those linking the eastern tropical Pacific with Indonesia and West Africa during East Pacific El Niños, both of which have already been reported in previous studies (Diaz et al., 2001; Joly and Voldoire, 2009). By making use of a regionalization procedure applied to these two fields of local network properties, we have introduced a simple yet effective tool to unveil the differential roles of different regions in the tropical Pacific in establishing teleconnections during different El Niño and La Niña events. Finally, we have analyzed the global and local connectivity properties of SAT-based climate networks in the aftermath of the strongest recent volcanic eruptions of Mount Pinatubo, Mount Agung and El Chichon. In particular, while the Mount Pinatubo eruption has been confirmed to exhibit marked impacts on global SAT, its dominating effect was rather regional (i.e., it did not trigger long-range teleconnections detectable by our approach). While most of the results presented in this work rely on a qualitative analysis of temporal changes in the climate network properties, additional statistical quantification of their relationship with existing indicators of ENSO variability and volcanic eruptions' impacts might further strengthen our findings. Regarding ENSO, many previous studies have attempted to utilize the spatial patterns of SST anomalies to define corresponding index variables. However, the corresponding classifications of El Niño and La Niña phases reached only partial consensus, which was in fact the motivation of the work of Wiedermann et al. (2016) presenting climate network transitivity as a useful and consistent index. Going one step further, one might easily quantify, for example, the correlation between transitivity and other (global or regionalized local) network characteristics. However, in our opinion, the particular value of the present work lies in identifying specific properties that are not exhibited by the former (as well as by other indices not based on networks). In this context, there is no established benchmark that could be used for further testing the significance of our results.
In turn, regarding the effects of volcanic eruptions, the respective regionalized degrees for the spatially shifted "major impact regions" of both Mount Pinatubo and Mount Agung exhibited their overall maximum values among all time windows studied in this work in the aftermath of the associated eruptions. This indicates a very high significance of our corresponding results. Note, however, that we did not succeed in finding any comparably strong impact signature in the climate network properties after the eruption of El Chichon, nor after other strong volcanic eruptions of the past roughly 70 years (not shown). We relate the latter finding to the generally lower magnitude of the respective perturbations (in terms of a smaller amount of climate-active volcanic aerosols injected into the stratosphere). Moreover, some of the other major eruptions (e.g., the Mount St. Helens eruption in 1980) occurred in the extratropics rather than the tropics. Together with the different seasonality of these events, this could imply different effects on regional and global temperature patterns, similar to what has recently been shown for global monsoon precipitation (Liu et al., 2016). In summary, our study confirms that ENSO does not only have a strong local effect on SAT in terms of coherent SAT trends in the tropical Pacific associated with a spatially confined increase of network connectivity (Radebach et al., 2013), but also dynamically reconfigures climate variability globally by triggering teleconnections especially with other tropical regions. In this regard, one possible mechanism could involve the modulation of monsoons by strong El Niño and/or La Niña periods, which could be further modulated by volcanic eruptions (Maraun and Kurths, 2005). Confirming this hypothesis in the context of climate network studies would, however, require much more elaborate approaches than those used in the present work, and is therefore left as a subject of future research. Appendix A: Comparison of different modularity estimation algorithms In Fig. A1, we show the results of five algorithms to estimate the community structure of our functional climate networks in terms of the resulting modularity values: fast greedy (Clauset et al., 2004), infomap (Rosvall and Bergstrom, 2008), label propagation (Raghavan et al., 2007), leading eigenvector (Newman, 2006a) and WalkTrap (Pons and Latapy, 2006). Further details motivating the choice of WalkTrap as a reference algorithm in the body of this paper are provided in the figure caption. Data availability. All data used in this work are publicly available from NCEP/NCAR. In addition to the SAT data, we have made use of the monthly Ocean Niño Index (ONI) values as provided by NCEP (2017). Code availability. All code used in this work has been written in Python and published under the GPLv3 license as a GitHub repository (Kittel, 2017). Detailed information for the reproduction of the results of this paper can be found there. While the published code was originally designed to produce these specific results, we kept it rather general with further extensions in mind. Thereby, it can be used as a starting point for future evolving network research, as it provides some basic structures that are needed for evolving network analysis, for example an interface for HDF5 (a high-performance file format for data storage) and automatic parallelization using MPI. Author contributions. TK, CC, NL, FR and RVD designed the analysis. TK, CC and NL conducted the analysis.
TK, CC and RVD prepared the manuscript. TP, FR, JK and RVD supervised the analysis and revised the manuscript and the interpretation of the obtained results.

Competing interests. The authors declare no conflict of interest.

Figure A1. Comparison of estimated modularity values for the functional climate networks obtained for running windows as described in the main text. We use five different algorithms for detecting the underlying community structure. Since modularity estimation amounts to a numerical maximization problem, higher values indicate better results. Visual comparison reveals that the leading eigenvector and WalkTrap algorithms outperform the others regarding this criterion. Since the leading eigenvector algorithm suffers from intermittent modularity breakdowns, possibly indicating numerical instabilities, we use the WalkTrap method in this paper.
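For readers who wish to reproduce such a modularity comparison on their own networks, a sketch using python-igraph is given below; the toy graph is only a placeholder for one functional climate network per running window, and the settings of the original analysis (Kittel, 2017) may differ.

```python
import igraph as ig

# placeholder network; in the original analysis this would be one
# functional climate network per running window
g = ig.Graph.Erdos_Renyi(n=500, p=0.02)

methods = {
    "fast greedy": lambda g: g.community_fastgreedy().as_clustering(),
    "infomap": lambda g: g.community_infomap(),
    "label propagation": lambda g: g.community_label_propagation(),
    "leading eigenvector": lambda g: g.community_leading_eigenvector(),
    "walktrap": lambda g: g.community_walktrap().as_clustering(),
}

for name, detect in methods.items():
    clustering = detect(g)
    # modularity of the resulting partition; higher values indicate a more
    # pronounced community structure
    print(f"{name:>20s}: Q = {clustering.modularity:.3f}")
```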
Spatially Resolved Fourier Transform Spectroscopy in the Extreme Ultraviolet Coherent extreme ultraviolet (XUV) radiation produced by table-top high-harmonic generation (HHG) sources provides a wealth of possibilities in research areas ranging from attosecond physics to high resolution coherent imaging. However, it remains challenging to fully exploit the coherence of such sources for interferometry and Fourier transform spectroscopy (FTS). This is due to the need for a measurement system that is stable at the level of a wavelength fraction, yet allows a controlled scanning of time delays. Here we demonstrate XUV interferometry and FTS in the 17-55 nm wavelength range using an ultrastable common-path interferometer suitable for high-intensity laser pulses that drive the HHG process. This approach enables the generation of fully coherent XUV pulse pairs with sub-attosecond timing variation, tunable time delay and a clean Gaussian spatial mode profile. We demonstrate the capabilities of our XUV interferometer by performing spatially resolved FTS on a thin film composed of titanium and silicon nitride. A well-known feature of high-harmonic generation (HHG) is broadband spectra in the XUV and soft X-ray regions [1][2][3]. This radiation is typically emitted in a train of attosecond pulses with excellent spatial and temporal coherence, as shown in various interferometric and spectroscopic measurements [4][5][6][7][8][9][10][11][12]. As a result, interferometry with high harmonics has found important applications in e.g. Molecular Orbital Tomography [13], in wavefront reconstruction [14] and electric field characterization [15] of high harmonics. Recently, interferometry with high harmonics provided added value to coherent diffractive imaging (CDI) [16,17] using the full high-harmonic bandwidth and photon flux. However, in the extreme ultraviolet (XUV) spectral range, interferometry and Fourier transform spectroscopy (FTS) are challenging due to the high stability requirements of the interferometer itself. Two main types of HHG interferometers have been devised. In one scheme, the near-infrared fundamental driving pulse is split into two phase-locked pulses with an adjustable time delay, and this pulse pair is subsequently used for HHG [5,[7][8][9]. Although this method has been used successfully, it is typically limited by the stability of the optical interferometer. The other scheme is based on wavefront division, whereby one HHG beam is divided into two phase-locked sources by a piezo-mounted split mirror. This configuration allows more stable interferometry [10,[18][19][20][21], but results in two beams with different spatial profiles and strong diffraction effects due to the hard edge of the split mirror. Wavefront division interferometry is also less flexible when one would like to change the intensity ratio between the two beams. In this letter we present XUV interferometry using a novel ultrastable common-path interferometer with a timing stability better than 0.8 attoseconds (as) between two phase-locked high-harmonic sources. We characterize high-harmonic spectra from argon and neon, and perform Fourier transform spectroscopy on a thin titanium/silicon nitride bilayer. Our approach combines the flexibility of near infrared (NIR) interferometers with the stability of common-path techniques. Because no XUV optics are involved, the only bandwidth limitation is the phase-matching bandwidth of the HHG process, which can span more than an octave [2]. In our experiments we use a Ti:Sapphire-seeded noncollinear optical parametric chirped-pulse amplifier (NOPCPA) [22]. The pump laser for this system delivers 80 mJ, 532 nm pulses with a duration of 64 ps at 300 Hz [23]. The output of the NOPCPA is compressed to pulses of approximately 20 fs duration and an energy of 5 mJ. These pulses are then directed into our common-path interferometer, which is schematically displayed in Fig. 1(a). The phase-locked pulse pair is produced in birefringent wedges, where the optical axis is oriented at 45° with respect to the input polarization [24,25]. This pair of wedges splits the input pulse into two parts of equal intensity but with orthogonal polarization, and introduces a delay of several picoseconds between the two electric field components. The second pair of wedges has its optical axis perpendicular to the first pair, thus providing an opposite delay compared to the first pair of wedges. The exact remaining delay can be controlled by moving one of the wedges perpendicular to the beam. The final delay depends linearly on the wedge displacement and can be tuned with sub-attosecond resolution. Behind the second pair of wedges we use an ultrabroadband thin-film polarizer to project the pulses onto the same polarization axis. The resulting pulse pair is vertically polarized and contains up to 1 mJ per pulse. The intensity ratio between the beams can be controlled by tuning the input polarization state with a half-wave plate. Behind the interferometer, the pulses are focused into a gas jet for HHG, as shown in Fig. 1(b). By tilting the last wedge of the interferometer, we ensure that the focal spots of the two pulses are separated by 280 µm in the focal plane, which is several times the focused beam diameter (60 µm 1/e² diameter). This separation ensures that the two pulses generate high harmonics independently, as any partial overlap could lead to delay-dependent interference effects between the pulses, which in turn would affect ionization and phase matching. Even for this large separation, however, the remaining field strength between the beams still leads to small modulations of the interferogram at the fundamental frequency.

Figure 1: a: Schematic overview of the common-path, birefringent wedge-based interferometer. The polarization diagrams depict the polarization (in red) at various positions in the interferometer. The fast axes of the birefringent wedges are indicated using blue arrows. A manual translation stage is used to control the total optical path length through the first wedge pair, while a piezo-driven stage controls the position of the second pair. b: Schematic overview of the setup for high-harmonic generation. A lens (f = 25 cm) focuses the input pulses in the gas jet. A thin tube (inner diameter 1.4 mm) is used to guide the gas from a pulsed nozzle to the interaction region 8 mm behind the nozzle. A 1 mm aperture (not shown) blocks the near-infrared light just before the aluminum film (Al), while transmitting the high harmonics. c, d: Spatial interference patterns for high harmonics generated in argon and neon, respectively.

Figure 2: a: Inset: spatial Fourier transform of the interference pattern of HHG in neon (enlarged version in Fig. 5). Phase (red) and amplitude (black) of the 33rd harmonic (red circle in the inset) as a function of piezo stage position. b: Consecutive step sizes retrieved from the measured phase. c: Step size distribution averaged over all harmonics has a standard deviation of 0.25 nm or 0.8 as.
Behind the HHG chamber we use an aluminum filter in combination with a 1 mm diameter aperture to separate the XUV from the driving NIR radiation. The XUV beams overlap and interfere in the far field, where we use an XUV-sensitive CCD camera (Andor iKon-L) to record the interference patterns. Examples of the measured interference patterns are given in Figs. 1(c,d). The beams interfere at an apex angle of 0.4 mrad, leading to multiple zones of straight fringes with a high visibility. The spatial mode profile of the individual HHG beams has a smooth Gaussian shape. Some slight diffraction features can be observed at the edges caused by the aperture in the beam path. For more details about the interferometer and HHG see the Supplementary Information. A spatial Fourier transform of an interference pattern at a fixed delay directly yields an HHG spectrum [26], as shown in the inset in Fig. 2(a), which is possible because of the near-diffraction-limited beam profile of the individual HHG beams. Although this spectrum is limited in resolution due to the small angle, the individual harmonics are still clearly resolved. This spatial transform is useful as a singleshot diagnostic of the high harmonics spectrum. A further advantage is that these individual harmonic peaks contain information on the delay between the pulses. The change in delay between two images can be calibrated from these measurements by evaluating the phase delay of the individual harmonic peaks as a function of time delay. The extracted phase for a single harmonic vs. stage position is shown in Fig. 2(a), and confirms the scan linearity of our interferometer. In addition, the intensity of the particularly selected harmonic extracted from the interferogram is also plotted. By calculating the phase delay per stage step and dividing by the central angular frequency of the harmonic, we obtain the time delay per step ( Fig. 2(b)), as well as a measurement of the interferometer stability. Analyzing the data for the high-amplitude range of the scan between 60 µm and 95 µm, we find an upper limit to the timing stability of 0.8 attoseconds (standard deviation) or, equivalently, 0.25 nm optical path length stability (Fig. 2(c)), with a measurement accuracy limited by the precision of the phase determination in the spatial Fourier transform. It is worth noting that this upper limit includes the effects of possible differential phase shifts between the two separated HHG zones. Small intensity variations could in principle lead to phase shifts between the harmonic beams if the driving pulses are not identical [5], but the present measurement shows that this effect does not limit the applicability of HHG-based FTS for the current spectral range. With the achieved stability, simulations show that FTS is feasible even at wavelengths well below 10 nm [16]. By scanning the time delay between the pulses, an accurate measurement of the HHG spectrum can be obtained using FTS. We recorded such Fourier scans of high harmonics generated in argon and neon. In FTS, the step size should be less than half of the shortest wavelength in the source spectrum, and the obtained spectral resolution is determined by the length of the scan. We typically record up to a thousand images with a step size of 15 as. 
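These sampling requirements can be checked with a short back-of-the-envelope calculation. Expressed in time delay, the Nyquist-type criterion is half the optical period of the shortest wavelength, and the spectral resolution follows from the standard FTS estimate Δν ≈ 1/T_scan; both relations are used here only as assumptions consistent with the numbers quoted in the text.

```python
# Sampling and resolution estimates for XUV Fourier transform spectroscopy
# (standard FTS relations, used as a consistency check of the quoted numbers).
c = 2.998e8               # speed of light (m/s)
lambda_min = 17e-9        # shortest wavelength in the HHG spectrum (m)

# optical period of the shortest wavelength; the delay step must stay below
# half of this value (the experiment uses 15 as)
period_min = lambda_min / c
print(f"maximum step size: {0.5 * period_min * 1e18:.1f} as")

# spectral resolution from the total scan length
nu_max = c / lambda_min
for scan_length in (12.5e-15, 50e-15):          # seconds
    delta_nu = 1.0 / scan_length                # Hz
    print(f"scan {scan_length * 1e15:.1f} fs -> resolution {delta_nu / 1e12:.0f} THz, "
          f"resolving power at 17 nm ~ {nu_max / delta_nu:.0f}")
```

With a 12.5 fs scan this gives roughly 80 THz and a resolving power of about 1 in 200 at the shortest wavelengths, and about 1 in 900 for a 50 fs scan, in line with the values quoted below.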
To remove the influence of intensity fluctuations in the recorded interference patterns (caused by laser power or beam pointing variations) and subsequently improve the signal-to-noise ratio, we normalized the spatial interference patterns to a selected local area in the beam. Finally, a spectrum for every pixel is acquired by Fourier transforming the measured data along the time axis. Fig. 3 shows temporal interferograms for single pixels and the corresponding HHG spectra. As the interference pattern will only show those spectral components that are present in both beams, both driving pulses should produce identical HHG spectra, which has been confirmed using a grazing-incidence grating XUV spectrometer. The interferograms in Figs. 3(a,b) and Figs. 3(d,e) contain two clear timescales, corresponding to the autocorrelation widths of the individual attosecond pulses and the coherence time of the attosecond pulse train, respectively. For the autocorrelation widths, we obtain (396 ± 10) as (Ar) and (115 ± 6) as (Ne), which reflects the roughly 3.5 times broader spectrum of HHG from Ne compared to Ar. On the other hand, the coherence times are (11.9 ± 0.3) fs (Ar) and (6.0 ± 0.6) fs (Ne), indicating that high harmonics in Ne are produced by the most intense temporal part of the laser pulse. The measured autocorrelation width for Ne is slightly increased by the limited transmission spectrum of the Al filter. In contrast to a grating spectrometer, the frequency resolution in FTS is constant over the full spectrum, which amounts to 80 THz for the example shown in Fig. 3(c). Expressed as a fraction of the wavelength, this means a resolution of 1 in 200 for the highest harmonic orders versus 1 in 100 for the longer wavelengths. In our current interferometer we can scan up to 50 fs in time delay, which corresponds to a potential resolution of 1 in 900 for the highest harmonic orders. This scan range can easily be extended by increasing the travel range of the piezo stage and the size of the wedges. This flexibility in spectral range and resolution is a clear advantage of FTS as a spectroscopic technique. In addition to the spectral and spatial characterization of the HHG beam itself, our method can also be applied to perform spectroscopy on spatially complex samples. We explore this option by measuring the transmission spectrum of a 20 nm thin titanium film grown by electron beam evaporation on a 15 nm thin silicon nitride membrane. The membrane contains a 100 µm diameter aperture near one side (Fig. 4(a)). With spatially resolved FTS we can simultaneously measure the XUV spectrum transmitted through the bilayer and through the aperture, a measurement that would be challenging to perform with grating-based spectrometers. We used the spectrum transmitted through the aperture as a reference for a direct determination of the relative absorption spectrum of the bilayer. The titanium sample was positioned in the XUV beam ensuring that both XUV pulses were overlapping on the sample. A typical transmission spectrum is given in Fig. 4(a). A single Fourier scan yields spectra for the transmission of both the aperture and the titanium thin film (Fig. 4(b)). The spectrum transmitted by the bilayer shows a clear dip around 25 nm matching the corresponding absorption band of titanium. Comparing the strength of the harmonics in both spectra yields the spectral transmission of the combined titanium and silicon nitride layer, as shown in Fig. 4(b).
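A schematic of this per-pixel processing chain (normalisation to a reference area of the beam, Fourier transformation along the delay axis, and the ratio of sample and aperture spectra) might look as follows; the array shapes, pixel positions and reference region are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

# stack of recorded interference patterns, shape (n_delays, ny, nx),
# taken at equidistant delay steps dt (here 15 as); placeholder data
rng = np.random.default_rng(1)
frames = rng.random((800, 64, 64))
dt = 15e-18                                   # delay step (s)

# normalise each frame to the mean intensity in a reference area of the beam
# to suppress laser-power and pointing fluctuations
ref = frames[:, 5:15, 5:15].mean(axis=(1, 2))
frames = frames / ref[:, None, None]

# Fourier transform along the delay axis gives a spectrum for every pixel
spectra = np.abs(np.fft.rfft(frames, axis=0))
freqs = np.fft.rfftfreq(frames.shape[0], d=dt)    # optical frequency axis (Hz)

# relative transmission: spectrum behind the bilayer divided by the spectrum
# transmitted through the reference aperture (pixel positions assumed)
sample_spec = spectra[:, 40, 40]
aperture_spec = spectra[:, 20, 20]
transmission = sample_spec / (aperture_spec + 1e-12)
print(f"frequency bin width: {freqs[1] / 1e12:.0f} THz")
```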
The measured transmission matches with the expected transmission of an 18 nm titanium layer on top of a 16 nm layer of silicon nitride [27]. Potentially, there could be a thin titanium oxide layer on top of the titanium film, but given the small difference in XUV absorption this distinction cannot be made. In summary, we have demonstrated XUV interferometry and Fourier transform spectroscopy without the need for XUV optics. Using a birefringencebased common-path interferometer, we achieve subattosecond timing stability and a high spectral resolution. With two HHG pulses from neon gas, we measure more than octave-spanning spectra down to 17 nm wavelength and find that the HHG process adds less than 0.8 as relative timing jitter under our experimental conditions. We demonstrate that our FTS method can be used to measure the absorption spectrum of a spatially inhomogeneous thin film sample. For future experiments it is particularly promising to combine this method with CDI on nanostructures composed of multiple materials. Funding and Acknowledgements The project has received funding from the European Research Council (ERC) (ERC-StG 637476) and the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). Supplementary information follows at the end of this manuscript. Birefringent wedge-based interferometer In the interferometer, a piezo linear stage (Physik Instrumente GmbH, model number P-625.1CD) is used to displace one of the birefringent wedges. The stage has a travel range of 500 µm, a resolution of 1.4 nm and a repeatability of approximately 5 nm. The final delay introduced by the wedge-based interferometer depends linearly on the wedge displacement ∆x and is given by ∆t = (∆n∆x/c) tan ϕ, where ∆n = n e − n o is the difference in refractive index for the extraordinary and ordinary polarization states, c is the speed of light in vacuum and ϕ = 15 • is the apex angle of the wedges. We use α-BBO crystals in the interferometer because of their strong birefringence (∆n = 0.11 at 800 nm) and low nonlinear susceptibility. Behind the second pair of wedges we use an ultrabroadband thin-film polarizer to project the pulses onto the same polarization axis. The apex angle of the birefringent wedges was chosen to be 15 degrees, leading to a full scan range of 50 fs. Based on the specifications of the piezo stage and the refractive indices of α-BBO, the resolution and repeatability of the interferometer are 0.14 as and 0.5 as, respectively. The α-BBO wedges are mounted using KM100CL rectangular mounts (Thorlabs) at a height of 8 centimeters above the baseplate. This baseplate is bolted directly to the optical table. Care was taken to keep the optical components on low and stable posts to minimize potential vibrations. To minimize vibrations caused by the vacuum system influencing high-harmonic generation (HHG), we employ low-vibration turbomolecular pumps (Pfeiffer HiPace 700), while the scroll pump (Edwards XDS10) used for backing the turbos is connected using flexible bellows wrapped in vibration-damping foam. High-harmonic generation geometry Behind the interferometer, a lens (f = 25 cm) focuses the pulses into a gas jet for high-harmonic generation. The gas is supplied by a pulsed piezo valve (developed by M.H.M. Janssen, Vrije Universiteit Amsterdam) and guided through a thin tube towards the interaction region with near-infrared pulses. 
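Before turning to the HHG geometry in more detail, it is worth checking that the numbers quoted for the wedge interferometer are mutually consistent: plugging the piezo specifications into the delay relation Δt = (Δn Δx/c) tan ϕ reproduces the stated resolution, repeatability and scan range. A short sketch (values taken from the text, constants rounded):

```python
import numpy as np

# delay per wedge displacement: dt = (dn * dx / c) * tan(phi)
dn = 0.11                 # birefringence of alpha-BBO at 800 nm
phi = np.radians(15.0)    # apex angle of the wedges
c = 2.998e8               # speed of light (m/s)

def delay(dx_m):
    """Time delay introduced by moving one wedge by dx_m (metres)."""
    return dn * dx_m / c * np.tan(phi)

print(f"per 1.4 nm piezo step : {delay(1.4e-9) * 1e18:.2f} as")   # ~0.14 as resolution
print(f"per 5 nm repeatability: {delay(5e-9) * 1e18:.2f} as")     # ~0.5 as
print(f"full 500 um travel    : {delay(500e-6) * 1e15:.0f} fs")   # ~50 fs scan range
```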
To ensure that both near-infrared pulses generate XUV independently, we separate the foci by 280 µm. In the chosen geometry this means that one pulse is focused slightly closer to the gas nozzle than the other pulse. We focus the pulse pair into the gas jet at a fairly large distance of about 8 mm with respect to the nozzle. Simulations of supersonic gas expansion show that the pressure difference between the two focal spots at this nozzle distance is sufficiently small to ensure similar phase matching conditions between the two beams. In addition, for gas confinement we use a thin stainless steel tube with an inner diameter of 1.4 mm to guide the gas from the nozzle to the interaction region. The near-infrared laser pulses are focused through a 0.5 mm aperture drilled perpendicular to the tube axis. To confirm that the phase matching conditions are similar for both pulses, we used a grazing incidence XUV grating spectrometer to measure the HHG spectrum of the individual pulses. The XUV spectrometer, equipped with a 600 lines/mm grating, is designed for a spectral range between 5 nm and 40 nm and was operated at a resolution of λ/∆λ = 250 at 20 nm wavelength. For the single high-harmonic beams, we observed the spectra to be identical. Furthermore, the HHG spectrum observed for both simultaneously present beams did not change, confirming that the pulses did not influence each other significantly. While the chosen geometry worked well and was experimentally convenient to realize, a geometry where the two pulses are displaced orthogonally with respect to the gas jet direction can potentially lead to slightly better phase matching conditions. Such a geometry, however, does require precise knowledge of the transverse pressure profile in the gas jet, and a sufficiently wide jet compared to the focal spot separation.

Figure 5: a: Typical spatial interference pattern as recorded for a fixed time delay using high harmonics generated in neon gas. b: Amplitude of the two-dimensional Fourier transform of the example interference pattern in (a).

Single-shot Fourier Transform interferometry

The generated XUV beams overlap with an apex angle of approximately 0.4 milliradians. Therefore, for each harmonic a fringe pattern in close analogy to Young's double-slit experiment is formed. The period of these fringe patterns is proportional to the wavelength of the respective harmonic, while the phase of the fringes is determined by the delay between the two pulses. The fringe patterns for all harmonics then add up coherently to form the interference pattern as measured on the camera. In Fig. 5(a), the plane spanned by the k-vectors of the two beams (the interference plane) is rotated with respect to the horizontal plane, which results in a slightly tilted interference pattern with respect to the vertical axis of the camera image. The raw camera images can be decomposed into the contributions of the individual harmonics by a two-dimensional Fourier transform of the raw image. For a fixed angle between the beams, every harmonic wavelength corresponds to a specific spatial frequency in the Fourier transform image. Therefore, the individual harmonics appear spatially separated in the Fourier transform data. The center of the image corresponds to zero spatial frequency, and increasing spatial frequencies are positioned radially outward. This yields a set of peaks corresponding to the individual harmonics lying on a line perpendicular to the original fringe direction, as shown in Fig. 5(b).
The phase of the individual peaks in this spectrum directly yields the phase of the corresponding monochromatic fringe pattern. Due to the symmetry of the Fourier transform for real data, each harmonic gives rise to two peaks, at both positive and negative spatial frequency. Comparing the phase between two measurements yields a value that is proportional to the change in delay between these measurements. If the wavelength of the harmonic is known, this can be used to extract the exact change in delay.
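As a rough illustration of this read-out, the sketch below builds a synthetic fringe pattern, takes its two-dimensional Fourier transform, and reads amplitude and phase at the known peak position of one "harmonic"; the image size, fringe period and phase are made-up values, not measured data.

```python
import numpy as np

def harmonic_amp_phase(image, peak_index):
    """Amplitude and phase of one harmonic's fringe pattern, read off at a
    known peak position (row, col) of the centred 2D Fourier transform."""
    ft = np.fft.fftshift(np.fft.fft2(image))
    value = ft[peak_index]
    return np.abs(value), np.angle(value)

# synthetic fringe pattern standing in for a recorded interferogram:
# fringe period 16 pixels along x, fringe phase 0.3 rad
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
image = 1.0 + 0.5 * np.cos(2 * np.pi * x / 16 + 0.3)

# the corresponding peak sits 16 bins away from the centre along x
amp, phase = harmonic_amp_phase(image, (ny // 2, nx // 2 + 16))
print(f"phase = {phase:.2f} rad")   # ~0.3; tracking this value vs. delay yields the step size
```

Repeating this read-out for consecutive frames and dividing the phase increments by the harmonic's angular frequency gives the delay change per step, which is how the stability numbers quoted in the main text are obtained.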
Large basins of attraction for control-based continuation of unstable periodic states Numerical continuation tools are nowadays standard for analysing nonlinear dynamical systems by numerical means. These powerful methods are unfortunately not available in real experiments without access to an accurate mathematical model. Implementing such a concept in real world experiments, using control and data processing to track unstable states and their bifurcations, requires robust control techniques with large basins and good global properties. Here we propose design principles for control techniques for periodic states which lead to large basins and which are robust, without the need to have access to a detailed mathematical model. Our analytic considerations for the control design will be based on weakly nonlinear analysis of periodically driven oscillator systems. We then demonstrate by numerical means that in strong

Introduction and experimental context

Dynamical systems theory, in particular the investigation of instabilities and of chaotic motion in nonlinear systems, is one of the key themes in theoretical and experimental sciences of the last decades [1,2,3]. Numerous tools have been developed to study the complex motion in such systems. From a theoretical perspective, continuation of stable as well as unstable solutions and of bifurcations plays a prominent role [4], since global structures in phase space and a skeleton of unstable states for the dynamics can be discovered, which gives insight into details of the dynamics [5]. Equation-free analysis has been proposed as a practical tool to derive effective equations of motion from microscopic models by numerical means [6]. These effective models can then be studied by analytic and numerical bifurcation analysis to understand the low-dimensional nonlinear structures and pattern formation in complex real world systems.
The idea of a model free approach has taken this concept a step further by proposing an approach where one avoids the intermediate derivation of effective equations of motion, making the idea directly applicable to experiments.In this context the required continuation of stable or unstable dynamical states relies on non-invasive control methods which are directly applicable to real world experiments.Non-invasive control, sometimes called orbit control in the engineering context, has the crucial property that the control forces ultimately tend to zero, so that the stabilised state is a genuine unstable orbit of the original system without control (see as well [7], where this idea has been popularised within the physics community).In addition to numerical implementations of an equation-free approach to obtain bifurcation diagrams at a macroscopic scale, see for instance [8], a successful implementation of this concept has been already demonstrated in mechanical and electro-mechanical hybrid experiments [9,10,11], in electrochemical setups [12], or even in experimental studies of pedestrian flows [13].Key to these experimental implementations of continuation techniques is the availability of suitable control methods which can deal with quite diverse experimental conditions.Control problems have of course a long standing tradition in engineering.They took a large boost during and after world war two, where linear control theory was formalised in a systematic manner.Subsequently these ideas have been extended to nonlinear systems, see e.g.[14,15].Putting the emphasis on non-invasive methods the relevance of control techniques for the purpose of system analysis has been rediscovered in the context of chaotic dynamics [7].Unlike in engineering, control is considered here merely as a kind of spectroscopic tool to identify structures in the phase space of the system. To illustrate some of the challenges one faces when one aims at implementing control based experimental continuation of unstable states we refer to atomic force microscopy as a paradigm. 
A comprehensive theoretical study of control-based continuation in a model of atomic force microscopy can be found in [16].In experimental terms atomic force microscopy is a quite versatile method to inspect surfaces and attached structures or objects thereon, down to the nanoscale [17,18].During the last decades scanning platforms have been substantially augmented, so that frame sizes of 80µm down to 1nm are possible without any drawback in lateral and vertical linearity.During topography acquisition under ambient conditions the relative amplitude drop can serve as setpoint variable while the control variable is the piezo actuator height.To a good degree, thereby the tip-sample separation is kept constant during lateral scanning and a topography is acquired as a discrete data set.Typically, physical lateral resolution is at about 1nm while vertical resolution is at about 1 Å.It is quite common to operate atomic force microscopes in the dynamic mode where the sensor element, a bending micro-cantilever with a tip, is driven into oscillation via a dither or shaker piezo.The nonlinear features in the interaction with the surface turn atomic force microscopy into a complex nonlinear dynamical systems which shows bistability, a variety of bifurcations, and chaotic motion [19,20,21,22].In dynamic force microscopy stable low and high amplitude branches with an unstable branch in between do coexist.Jumping between the two stable branches during acquisition of topography is a common distortion, particularly with molecular species, requiring manual intervention (see figure 1 for experimental images).Especially beyond the resonance frequency of the cantilever a rather broad range of tip sample separations is prone to such imaging instabilities.By applying ideas from time-delayed feedback control to atomic force microscopy noise reduction in the imaging process has been reported [23]. We have mentioned atomic force microscopy just to motivate our plain theoretical considerations.Control challenges which may occur in such an experimental context, such as the lack of a proper mathematical model, fast time scales like those encountered in nanosystems which prevent extensive online data processing, parameter drifts and non-stationary behaviour which preclude a preliminary data based modelling, or the dynamical impact of noise are the topic of our interest.Above all we want to design a control scheme which is robust and comes with a large basin of attraction, and which finally can be set up if just plain measurements of the system are available.In a previous theoretical study [24] we have outlined a scheme to deal in principle with these issues by linear control schemes applied to stroboscopic maps.However, such linear schemes have severe limitations when it comes to global properties of the control in nonlinear setups.In our work we focus on the design of non-invasive control schemes with large basins of attraction.We will demonstrate their success by numerical simulations.Actual implementations in experiments will be addressed elsewhere. 
The design of control schemes with good global properties is of course a standard theme in engineering.For instance, the problem of globally stable control design has been solved by exact state-space linearisation for single-input systems, see [14].Having said that, the implementation of such ideas requires some knowledge about the underlying dynamics.In very basic terms the issue of globally stable control has been revisited in the context of control of chaos [25] and has been popularised beyond the remit of engineering problems.To keep our presentation selfcontained section 2 will give a brief sketch of the basic ideas and of the related challenges in a theoretical setting.Design of control schemes with good global properties requires access to some properties of the dynamics.Since we have ultimately applications for atomic force microscopy in mind we focus here on general nonlinear driven oscillators and the corresponding nonlinear resonance behaviour.We will outline in section 3 how weakly nonlinear perturbation expansions will give us analytic access to the dynamics, and in particular to the stroboscopic map of the equations of motion.Based on these analytic estimates we will show in section 4 how to design a globally stable control scheme to stabilise periodic orbits in a non-invasive way.In particular, our design utilises the phase of the driving field as a key component of a globally stable control scheme.While our approach has been based on weakly nonlinear analysis and on the analytic expression of the stroboscopic map, we will show by numerical means that our control scheme also works well beyond the perturbative regime and can thus be applied to generic nonlinear oscillator systems.We demonstrate in section 5 that the control scheme can be used for a data based tracking of unstable orbits and for the generation of a complete bifurcation scenario.While the design of the control scheme has been based on a perturbative treatment of model equations, we will show in section 6 that all the elements of the control scheme can be obtained from data, in particular from scanning a bistable nonlinear resonance curve, as long as higher order harmonic components do not dominate the dynamics of the system.Above all, the control scheme can be implemented without any a priori access to a mathematical model. Finally, we briefly discuss in the conclusion limitations and merits of the proposed approach, in particular in the context of atomic force microscopy and related experimental setups. Globally stable control in a nutshell As we are aiming at controlling periodic states of a dynamical system it seems promising to focus on Poincare maps or stroboscopic maps since periodic states will become fixed points of the time discrete dynamical systems.To illustrate the basic idea how to design a globally stable control scheme consider a time-discrete dynamical system given by a one-dimensional map f µ where the right hand side depends on a parameter µ which will serve as control input.For simplicity and for the purpose of illustration we assume here that the map depends on the parameter µ in an additive way.Such an assumption is by no means essential for the subsequent considerations.We aim at controlling fixed points of the dynamical system, eq. 
( 1), where the fixed point manifold, that means the fixed point x * in dependence on the parameter µ, is determined by In order to stabilise such a fixed point we make the parameter µ a dynamical time-dependent quantity.In its simplest instalment the time-dependence of µ is just given by a static relation with the state variable, say where h specifies the control law, i.e., the dependence of the parameter µ n on the state of the system x n .Then the closed loop dynamics reads The actual fixed point to be stabilised is determined by eq. ( 2) together with In geometric terms eqs.( 2) and ( 5) amount to the intersection of two manifolds.The static offset µ R which has been included in the control law, eq. ( 3), serves as an external parameter by which the fixed point to be controlled, x * , and the corresponding actual parameter value µ * can be selected. We aim to chose the control feedback h(x n ) in such a way so that the fixed point x * becomes a globally stable fixed point of the closed loop dynamics, eq. ( 4).The obvious choice is of course h(x) = −g(x) since the fixed point of the closed loop dynamics becomes superstable and convergence happens in one iteration step.The offset µ R of the control loop becomes in fact the fixed point value to be stabilised and the corresponding parameter value is given by eq. ( 5). Global convergence can be obtained as well under less stringent conditions.For instance it is sufficient that the closed loop dynamics, eq. ( 4), gives rise to a contraction map.Hence a rough estimate of the full internal dynamics may be sufficient to design a successful control scheme. As a simple illustration consider the logistic map on the interval [0, 1] which reads To stabilise the fixed point we employ the control scheme µ = µ n with the choice Then the closed loop dynamics yields a contraction on the interval [0, 1] and the dynamics converges globally to a unique fixed point x * .The actual value of the fixed point is determined by the control gain µ R , and the corresponding actual parameter value is given by µ * = µ R h(x * ), see eq. ( 7). Figure 2 shows time traces of successful control of the unstable fixed point for some typical parameter values. 
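A minimal numerical sketch of this illustration is given below. Since the explicit map and feedback expressions are not reproduced above, it uses the standard logistic map f_µ(x) = µ x (1 − x) together with one possible state-dependent feedback law, chosen purely for illustration; it reproduces the qualitative behaviour described in the text: the dynamical parameter µ_n settles to a limit value µ_∞, and the stabilised point is a genuine (unstable) fixed point of the uncontrolled map at that parameter value.

```python
def f(x, mu):
    """Logistic map on [0, 1]."""
    return mu * x * (1.0 - x)

# Non-invasive control: the parameter becomes a dynamical quantity mu_n that
# depends on the current state.  The feedback law below is an illustrative
# choice (not necessarily the one used in the paper); mu_R selects the fixed
# point to be stabilised.
mu_R = 0.7
x = 0.9123                          # arbitrary initial condition
for n in range(20):
    mu_n = mu_R / (x * (1.0 - x))   # assumed feedback law h(x)
    x = f(x, mu_n)                  # closed-loop iteration

mu_inf = mu_R / (x * (1.0 - x))
# check of non-invasiveness: x is a fixed point of the *uncontrolled* map at
# the limit parameter value mu_inf (and it is unstable there)
print(x, f(x, mu_inf), mu_inf)
```

With mu_R = 0.7 the loop settles at x = 0.7 and µ_∞ ≈ 3.33, for which x = 0.7 is indeed the unstable fixed point of the plain logistic map.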
It is worth to mention, that by construction, the control scheme outlined above is non-invasive, even though a control force does not seem to tend to zero.By making the system parameter µ a dynamical quantity µ n , which is adjusted by an instantaneous feedback law such as eq.( 3) or eq.( 7), successful control is signalled by the dynamical parameter tending to a limit value µ ∞ (see figure 2).In such a case the resulting dynamics for x, in our case a fixed point (see figure 2), coincides by construction with an orbit of the system without control at fixed parameter value µ ∞ .Hence, we have a non-invasive control scheme which stabilises a proper unstable orbit of the original equations of motion.In fact, the fixed point which is stabilised in figure 2 (red symbols), coincides with the unstable fixed point of the logistic map for the limit parameter value µ ∞ (figure 2, cyan symbols).We also note that the reference value µ R in the feedback law tunes the limit value µ ∞ and the control target, see eq.( 5), but is by no means per se the target value of the control scheme.To design a globally stable control law one needs the full knowledge or at least a sufficiently accurate estimate of the underlying equations of motion.That is of course far from surprising as a globally stable solution requires the control law to remove all potentially occurring basin boundaries of the stabilised state.Such boundaries are normally caused by unstable saddles and their stable manifolds which, as said, have to be removed from the equations of motion. The reasoning summarised in this section is far from novel.Such considerations are at the heart of any sensible control design and they are normally covered in any basic course in control engineering.The main idea is also well established for, say, almost a century and simplified versions have also popped up in the physics context, see e.g.[25].Nevertheless we felt it useful to recall these basic considerations as we will use such reasoning to design globally stable control of periodic states in oscillator systems in the next sections. 3 Stroboscopic map for weakly nonlinear oscillators We will mainly deal with control of unstable periodic solutions in two-dimensional driven oscillator systems, which are described by the set of differential equations Here γ denotes the viscous damping constant, U (x) the anharmonic part of the potential, and h (s/c) the amplitudes of a harmonic periodic driving force.As pointed out in the previous section we will require some degree of access to the full dynamics, hence we adopt a system with a small parameter ε which can be employed to perform an analytic perturbation expansion. The purpose of the model, eq.( 9), is twofold.For ε = 1 the model constitutes a general nonlinear driven oscillator which will be used to test our control design (see sections 5 and 6 ), even to the level where we base control only on a recorded time series of the model.On the other hand we will use the regime of small ε in eq.( 9) to develop, inspired by analytic perturbation expansion, a globally stable control design in section 4. In particular, we will show that access to the amplitude and the phase of the driving field is sufficient to set up the control scheme. While this design will work by construction in the perturbative regime we will also demonstrate that good global properties persist for typical oscillators. 
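As a hedged illustration of the stroboscopic-map numerics discussed next, the sketch below integrates a generic damped, harmonically driven Duffing oscillator over single drive periods and performs a quasi-stationary up- and down-sweep of the driving amplitude; the equation of motion and parameter values are generic stand-ins, not the scaled form of eqs. (9)-(11).

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.2     # driving frequency (illustrative values, not eq. (11))
gamma = 0.1     # damping constant
beta = 1.0      # cubic stiffness of the Duffing potential

def rhs(t, y, hc):
    x, v = y
    return [v, -gamma * v - x - beta * x**3 + hc * np.cos(omega * t)]

def strobe(state, hc):
    """Advance the oscillator by one period of the drive (stroboscopic map)."""
    T = 2 * np.pi / omega
    sol = solve_ivp(rhs, (0.0, T), state, args=(hc,), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

def swept_amplitude(h_values, state, relax=200):
    r2 = []
    for hc in h_values:
        for _ in range(relax):                  # let transients decay
            state = strobe(state, hc)
        x, v = state
        r2.append(x**2 + v**2 / omega**2)       # squared stroboscopic amplitude
    return np.array(r2), state

h_up = np.linspace(0.02, 0.25, 40)
r2_up, state = swept_amplitude(h_up, [0.0, 0.0])
r2_down, _ = swept_amplitude(h_up[::-1], state)
# hysteresis between the up- and down-sweep marks the bistable region
print(np.column_stack([h_up, r2_up, r2_down[::-1]])[:5])
```

The up- and down-sweep results differ within a window of driving amplitudes, which is exactly the bistability between low- and high-amplitude branches that the control scheme is designed to bridge.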
As far as dynamical phenomena of eq.( 9) are concerned we can simply resort to numerical computations of stroboscopic maps, that means computing and study the properties of the map (x n , v n ) → (x n+1 , v n+1 ) by numerical means.While in our analytical studies we will keep a general potential we use for numerical purposes a simple Duffing oscillator with potential given by and we chose in numerical simulations a standard set of parameter values A dominant dynamical signature of such oscillator systems is a nonlinear resonance which occurs even beyond the perturbative regime of small ε values, where a bistability between a large and a small amplitude branch appears.That can be vividly illustrated by a numerical computation of time traces of the stroboscopic map which tend toward fixed points, and where the bistability occurs when the amplitude of the driving field is changed in a quasi-stationary manner, see figure 3.This bistability will be at our centre of interest as we aim to stabilise the unstable branch which separates the two stable states in the bistable region, with a view towards performing One of the two amplitudes h (c) or h (s) in eq. ( 9) seem to be redundant because Here θ denotes the phase of the driving field relative to the cross section of the stroboscopic map, i.e., relative to the times n2π/ω of the observation.The phase θ can be easily eliminated by choosing an appropriate cross section for the stroboscopic map, i.e., by shifting the time of observation by a constant amount.However, we will see soon that having access to the phase of the driving field will be a key element to construct a globally stable control scheme.In fact using the phase of the driving field for the purpose of control is by no means novel, see e.g.[26]. Such feedback has been proposed in experimental contexts, for instance, for noise reduction at the microscale [27,28].However, neither non-invasive control, nor control with good global properties has been at the centre of interest in these studies. As we have seen in the previous section a successful design of a suitable control feedback requires some basic understanding of the dynamics of the underlying system.That means some analytic access to the stroboscopic map is beneficial.While a closed analytic expression for the stroboscopic map is never available in non-trivial situations, we can get some insight if we constrain to the perturbative regime of small ε values.If x(t, x n , v n , ε) and v(t, x n , v n , ε) denote the time dependent solution of eq. ( 9) with initial condition (x n , v n ) the exact stroboscopic map is given by ( straightforward series expansion of the solution of eq. ( 9) in terms of ε we are able to derive an analytic approximation of the stroboscopic map.At lowest order O(ε 0 ), eq. ( 9) reduces to a harmonic oscillator and the solution with initial condition (x n , v n ) reads At first order O(ε) we obtain the linear inhomogeneous system ẋ(1 with initial condition x (1) (0) = 0, v (1) (0) = 0.The solution can be easily computed and we finally obtain for the solution after a period 2π/ω, using the expression eq. ( 13) x (1) To evaluate the remaining integrals we introduce the amplitude r n and the phase φ n of the solution eq. ( 13) by Then we have A simple substitution shows that the last integral is an odd function of r n , that means the integral can be written as r n times an even function.Therefore we can define where the quantity w can be viewed as an effective suitably averaged force.Then eq. 
( 17) reads if we take the abbreviations ( 16) and ( 18) into account.The real and imaginary parts of this expression yield the remaining integrals contained in eq. ( 15).Eqs. ( 13) and ( 15) result in the stroboscopic map at first order in the expansion parameter This result of the perturbation expansion is valid for any type of potential U (x), as long as the integral in eq.( 18) does not vanish and defines a meaningful effective force w.In particular, the result, eq.( 20), does not rely on any symmetry properties of the potential1 .For the particular case of the Duffing oscillator, eq. ( 10), eq. ( 18) readily gives the explicit expression It is rather straightforward to compute the fixed points of the map, eq. ( 20) in closed analytic form.In fact, for the fixed point (x * , v * ) eq. ( 20) results in the linear system of equations Solving this system for x * and v * we get and using the definition of the stationary amplitude, r 2 * = x 2 * + v 2 * /ω 2 (see eq. ( 16)) we finally obtain an implicit equation for r * To demonstrate the accuracy of the perturbation expansion we compare bifurcation diagrams obtained directly from the numerical integration of the equations of motion with those computed from the first order perturbation result, eq. ( 20) or eq.( 24). Figure 4 shows the stationary amplitude in dependence of the driving amplitude for quasi-stationary parameter upsweeps and downsweeps.The results obtained from the analytic first order expression are surprisingly accurate when compared with numerical simulations for small values of ε.Actually deviations turn out to be so small that at the scale used in figure 4 no difference between the simulation of the differential equation, the iteration of the analytic map eqs.( 20) and ( 21), and the analytic expression eq. ( 24) is discernible.For large values of ε the first order truncation in eq. ( 20) fails when iterations of this map are considered, as the sequence of iterates tends to diverge.Nevertheless, the bifurcation diagram of stationary states for large values of ε, see figure 3, is still in qualitative and to some extent even in quantitative agreement with the data obtained analytically for small ε, cf.figures 3 and 4. While time scales and transients for small and large values of ε vastly differ there seems to be little change in the stationary states, i.e., in the location of fixed points.This coincidence is surprising but not totally unexpected as our perturbation scheme is essentially equivalent to the averaging principle or the principle of harmonic balance.While those approaches are formally first order perturbation schemes they sometimes perform well even beyond the perturbative regime as these approaches can also be viewed as a non-systematic mean field type expansion. Design of globally stable control Our main goal is to construct a control feedback so that stabilisation of an unstable state occurs for a large set of initial conditions, if possible even globally in the entire phase space.Following the basic reasoning outlined in section 2 we would need some information about the underlying stroboscopic map.At least in a perturbative regime of small values of ε such information is available as demonstrated in section 3. Hence we will base our control design on the expression eq. ( 20) for the stroboscopic map.Thanks to two driving field amplitudes, i.e. 
thanks to the fact that we have taken explicitly the phase of the driving field into account, a field amplitude occurs as an additive part in each of the components of the stroboscopic map, and we can use these amplitudes for control purposes.Following the reasoning outlined in section 2 it looks tempting to implement a control feedback which removes the nonlinear part, so that the remaining dissipative linear contribution ensures global convergence towards a stabilised fixed point.Such a reasoning leads us to the design where r 2 n = x 2 n + v 2 n /ω 2 , see eq. ( 16).The reasoning which lead us to the design, eq. ( 25), follows the idea of feedback linearisation which is well established in the engineering context, see e g. [30].However, we want to emphasise that the control scheme can be implemented easily by experimentally accessible parameters such as the amplitude and the phase of the driving field. Evaluating the fixed point condition under control (see eq. ( 20) with h (c/s) replaced by h so that the two offset values h ∞ which correspond to the stabilised fixed point (x * , v * ), as already illustrated in figure 2 for our toy model.Having access to the phase of the driving field, i.e., having thereby access to both terms of the driving field has turned out to be crucial for our design. While the setup defined in eq. ( 25) will by definition work for the stroboscopic map in first order of ε we still need to confirm whether the scheme has good convergence properties when applied to the full equations of motion.For that purpose let us first consider numerical simulations for small values of ε where first order perturbation theory has turned out to be quite accurate even at a quantitative level, see section 3. Implementing the control means that at the beginning of each period of the drive we readjust the amplitudes of the driving field using eq.( 25), i.e., using the current values of x n = x(n2π/ω) and v n = v(n2π/ω).We could use in principle any values for the offsets h R .However we are aiming at stabilising unstable states in the bistable regime and for driving fields which have no non-trivial phase (i.e.driving fields with ultimate value h (s) ∞ = 0).In practice one could tune such offsets when stabilisation has been achieved, but there is also a way to determine suitable offsets a priori, which we will address in the next section.For the moment we just make up "suitable" values out of thin air. Figure 5 shows time traces obtained for the driven Duffing oscillator with the control scheme along the lines of eq. ( 25).The time traces of the phase space coordinates prove successful stabilisation with a limiting value of r 2 ∞ = 0.448 . .., while the time traces of the control forces h Hence, even if the basin for successful control is finite, its size is so large that the scheme can be considered as sufficiently robust.It is not so surprising that the control design works quite successfully for small values of ε since the analysis of the previous section has shown that this case is quite well covered by the perturbative analysis.As a test for our approach we address parameter settings beyond the perturbative regime and apply the control defined by eq. 
(25) to the Duffing oscillator, eqs. (9) and (10), for larger values of ε, say ε = 1. Time traces of the oscillator subjected to control with the parameter setup used in figure 3 are shown in figure 6. The performance in the strongly nonlinear regime, ε = 1, is in fact comparable to the perturbative regime, cf. figure 5, even though the transients are now much shorter thanks to the larger dissipation εγ. For large times the amplitude tends towards the limit r²_∞ = 0.411... and the amplitude of the driving field has the limiting value h^(c)_∞ = 0.140..., so that the asymptotic state is in fact on the unstable branch of the strongly nonlinear Duffing oscillator, see figure 3. In addition there is as well a small component h^(s)_∞ = −0.0066..., giving rise to a non-vanishing phase in the drive, see eq. (12), which could be removed by a slight adjustment of the offsets h_R. Overall our control design performs quite well even beyond the perturbative regime. To judge the overall performance we investigate by numerical means the basin of the control with the parameter settings used in figure 6, i.e., h^(c)_R = 0.03515 and h^(s)_R = 0.05713. We are no longer within the perturbative regime and the basin is finite, see figure 7. However, the basin is still quite large and covers all the states which occur in plain parameter sweeps, cf. figure 3. Hence the control design can be considered a success even from an experimental point of view. The boundary of the basin resembles the fractal structure caused by homoclinic tangles. Hence, the basin boundary in our case may be caused by the stable manifold of a saddle.

Figure 7: Basin for stroboscopic control, eq. (25), with control parameter setting h^(c)_R = 0.03515 and h^(s)_R = 0.05713 (see figure 6) applied to the driven Duffing oscillator, eqs. (9) and (10), with parameter values as in eq. (11) and ε = 1.0. The filled circle (cyan) indicates the stabilised orbit.

Control parameter setting and tracking

The control offsets h_R determine the final amplitudes of the driving field and the fixed point to be stabilised. Since the control design has good global properties one could start with any values for the offsets. After successful control a continuation of unstable states could be done, as usual, by small quasi-stationary changes of the offsets. If one employs the knowledge from the weakly nonlinear analysis one can do better and determine a priori estimates for suitable offsets to stabilise an unstable state with an approximate amplitude r²_*. Given a value for r²_*, eq. (24) tells us within the limits of first order perturbation theory that the corresponding amplitudes of the driving field follow when we impose a constraint on the phase of the driving field. Using these values, the actual fixed point coordinates can be estimated via eq. (23). Finally, eq. (26) yields the estimates for suitable control offsets (eq. (29)). We have in fact used eq. (29) to determine offsets for the control in the previous section (with r_* = 0.4). The actual fixed point controlled, see figure 5, then differs slightly from the estimate r²_*, in particular if the parameters of the equations of motion are not within the range of the first order perturbative treatment.
We use eq. (29) for tracking of stable and unstable states without resorting to a quasi-stationary change of the control offsets h^(c)_R and h^(s)_R. Figure 8 shows the result for the Duffing oscillator in a strongly nonlinear regime, ε = 1, with the parameter setup as in figure 3. Even though we are well beyond the validity of the perturbative regime, our control design performs reasonably well, allowing us to track the unstable state within the region of bistability.

Figure 8: Tracking for the driven Duffing oscillator, eqs. (9) and (10), for ε = 1, with parameter settings as in eq. (11), and with the control scheme defined in eqs. (25) and (21). Control offsets for the tracking have been taken from eq. (29) (with r²_* serving as the curve parameter). Left: successful tracking of the stationary state r²_∞ (blue, full symbols) in dependence on the amplitude of the driving field, see eq. (12). For comparison the corresponding data of the oscillator without control are shown as well (open symbols, cyan), see figure 3. Right: resulting amplitudes of the driving field, eq. (25), when control has been successful. Cyan: h

There are visible deviations in figure 8 between the fixed point without control and the fixed point subjected to control, in particular close to the fold instabilities which bound the region of bistability. In addition, the unstable branch shows a seemingly subtle structure. These deviations are caused by the control action resulting in a non-vanishing value for h^(s), see figure 8, that is, in a non-vanishing phase of the driving field, see eq. (12). Hence our observation for the amplitude r²_* under control does not correspond to the setup used in the system without control, see figure 3, where we have h^(s) = 0. If the orbit were a plain harmonic such an effect would not matter. Since the orbit in phase space is not a perfect ellipse, the measured amplitude depends on the actual phase of the driving field. There are a couple of ways to compensate for this effect. On the one hand, we can slowly tune the offsets h_R so that h^(s)_∞ becomes zero, which essentially amounts to an experimental root finding problem. One has to keep in mind, however, that beyond the perturbative regime the rotational symmetry shared by the low order perturbation expansion eq. (20) is not valid any longer. Therefore the stability of the unstable state may get lost during this adjustment. It may in fact happen that within our design successful control for a non-vanishing phase of the driving field does not per se translate into successful control for vanishing phase of the driving field when the offsets are readjusted accordingly. On the other hand, there is no need to perform such an adjustment. The phase of the driving field, eq.
(12), can be compensated for if we consider the stroboscopic map at a different time, i.e., if we formally choose a different cross section. If we record data at times n2π/ω + θ/ω, that means if we compute a renormalised amplitude r²_n based on x(n2π/ω + θ/ω) and v(n2π/ω + θ/ω), then such a value of r²_n is identical to the value one obtains from an ordinary stroboscopic map in a Duffing oscillator where the driving field has vanishing phase θ = 0. That means, while we still base control on the discrete time points n2π/ω, we use a suitable time lag in the data recording to compensate for the non-vanishing final value of h^(s)_∞. By this trivial change we obtain a perfect match of the controlled and the uncontrolled data, see figure 9, where even subtle details of the unstable branch are detected with considerable accuracy. It is again worth stressing that the control, the data processing, and the control-based continuation can be easily implemented in experiments, as only a plain time series is required.

Figure 9: The estimate of the stationary amplitude r²_* has been computed from time series data x(n2π/ω + θ/ω) and v(n2π/ω + θ/ω), with a time lag given by the phase of the driving field, eq. (12). Stationary amplitude as a function of the amplitude of the driving field with control (full symbols, blue). For comparison the corresponding data without control, see figure 3, are shown as well (open symbols, cyan).

Data-driven control design

Our control design, eq. (25), was inspired by the first order perturbation expansion of the stroboscopic map, eq. (20). This design has been quite successful even beyond the perturbative regime, where eq. (20) fails to model the dynamics properly, as exemplified by the results for the driven Duffing oscillator in the parameter setting specified above. Even though there is no a priori guarantee that the approach will work, as the perturbation expansion eq. (20) definitely does not capture the dynamics any longer, the result shown in figure 11 demonstrates that the control works quite robustly even for an initial condition which is not close to the target state. Even though the parameter setups used to produce the data in figures 6 and 11 coincide, the stabilised fixed points differ slightly, as the effective force w(r²), i.e., the actual control feedback, is not the same, so that the final driving amplitudes h^(s/c)_∞ in the two cases differ slightly.

Figure 10: Fit of eqs. (24) and (30) to the data shown in figure 3, i.e., a least squares fit for the nonlinear resonance line of the driven Duffing oscillator, eqs. (9) and (10), with parameters as in eq. (11), h^(s) = 0, and ε = 1. Line: eq. (24) with eq. (30) and w_0 = −0.2338, w_1 = 0.3290. Symbols: data obtained from a parameter upsweep and parameter downsweep (cf. figure 3).

In addition to the time traces we have also computed the basin of the control obtained with the data-based control design. The results shown in figure 12 indicate in fact a considerable improvement as compared to figure 7, where the design was based on the first order perturbation scheme. Hence, the approach outlined in this section, which is based on the data obtained from a simple parameter upsweep and downsweep, provides a promising strategy for a wider class of driven oscillator systems. One may even exploit the dependence of the parameter sweeps on the driving frequency ω in conjunction with the analytic expression eq. (24) to explore the damping mechanism in more detail. Details will be published elsewhere.
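The coefficients w_0 and w_1 quoted in the fit suggest an effective force of the form w(r²) = w_0 + w_1 r² (presumably eq. (30)). As a hedged sketch of how such a data-driven fit could be set up, the snippet below uses the textbook averaging relation between drive amplitude and stationary response amplitude, which may differ in detail from the paper's eq. (24), together with synthetic sweep data in place of measured branches.

```python
import numpy as np
from scipy.optimize import least_squares

omega, omega0, gamma = 1.2, 1.0, 0.1     # assumed oscillator parameters

def drive_amplitude(r2, w0, w1):
    """Drive amplitude belonging to a squared stationary response amplitude r2,
    from the standard averaging relation with an effective force w = w0 + w1*r2."""
    w = w0 + w1 * r2
    return np.sqrt(r2 * ((omega0**2 - omega**2 + w)**2 + (gamma * omega)**2))

# (h, r2) pairs as they would be read off a quasi-stationary up/down sweep;
# synthetic placeholder data generated from "true" values w0 = -0.05, w1 = 0.75
r2_data = np.linspace(0.05, 1.2, 25)
noise = 1 + 0.01 * np.random.default_rng(2).standard_normal(r2_data.size)
h_data = drive_amplitude(r2_data, -0.05, 0.75) * noise

def residuals(p):
    return drive_amplitude(r2_data, *p) - h_data

fit = least_squares(residuals, x0=[0.0, 1.0])
print(fit.x)    # recovered (w0, w1)
```

Once w_0 and w_1 have been estimated from the measured resonance branches, they can be inserted into the control feedback and the offset formulas without any further model knowledge, which is the essence of the data-driven design described above.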
Conclusion

We have succeeded with our main aim to design a globally robust non-invasive control scheme for the stabilisation of periodic orbits, to enable control-based continuation. The control is based on the measurement of a single data point per period of the drive, so that such a scheme is applicable in fast systems like atomic force microscopy, where no extensive data processing can be done during control and where no accurate mathematical model is available for preprocessing.

The design of our control scheme was initially inspired by the weakly nonlinear analysis of oscillator systems. Having access to the phase of the driving field has turned out to be a key to the success of the control, since thereby we have been able to ensure large basins of attraction for the stabilised state. The control feedback contains an effective averaged force, so that the design already gives vital information about the underlying dynamics. Numerical simulations indicate that the control scheme works quite well beyond the perturbative regime, and further systematic numerical studies look promising. Above all, we have shown that the required details of the control design can be obtained from a simple scan of the nonlinear resonance curve, so that in fact no underlying mathematical model is needed from the outset.

We have based our analysis on a one-degree-of-freedom mechanical oscillator model with a fairly general potential. For the theoretical analysis we have used the assumption that higher-order harmonics play a limited role, even though our numerical studies show that such a constraint can be relaxed to some extent. However, it still needs to be investigated how the present approach can be generalised to higher-dimensional driven dynamical systems. Again, weakly nonlinear analysis could provide a hint how to proceed in this context.
We have here considered a model with a simple viscous damping. In real-world experiments, such as atomic force microscopy, the treatment of all losses by a single viscous damping constant γ may be regarded as undercomplex, even in the seemingly simple situation of a stiff, hydrophobic sample in a vacuum environment. While the dynamics of an oscillating microcantilever beam itself behaves largely linearly, the complexity of the system arises from the largely unknown dynamics of the multitude of effects taking part in the nanoscopic junction between probe and sample. These unknowns and nonlinearities pose the real challenges. Currently, it is hard to say what physics or (bio)chemistry happens at the turning point of the tip in the vicinity of the sample, the point of strongest interaction, which shapes contrast and stability of the measurement. While force-distance spectroscopy can help to unravel force contributions from molecule layering, visco-elasticity, plastic deformation, electrical polarisation and attraction, electro-chemical reactions, rupturing molecular bonds, entropic interaction, depletion forces, oxidation, receptor-ligand binding, or others (see, for instance, [31,32]), this is of limited value for judging the situation of topography acquisition in the dynamic mode at cantilever frequencies in the 10 to 500 kHz regime. At a phenomenological level the complex dissipation processes can be modelled by a state-dependent damping, and its impact can be analysed within a weakly nonlinear perturbation expansion as well, resulting in an additional effective damping term which, alongside the effective averaged force, enters the shape of the nonlinear resonance curve. Both contributions, the effective damping and the effective force, can be disentangled by monitoring in addition the frequency dependence of the nonlinear resonance. Hence, by having access to stable as well as unstable branches, we can quite accurately determine potential and damping from measured data and thus contribute, for instance, to the outstanding challenge of understanding the dissipative mechanisms in atomic force microscopy.

A robust control scheme which enables large basins of attraction is one of the keys to implementing model-free continuation of bifurcations in experiments, to make the power of continuation tools in the study of mathematical models available as a 21st-century data-based spectroscopic tool. Our control design meets all these constraints, so that we can track unstable states even when no quasi-stationary parameter sweep can be implemented. Having developed a suitably robust control scheme, we have taken the next step towards finally implementing control-based continuation in real-world complex experiments. Details in that direction will be reported elsewhere.

Figure 1: Dynamic force microscopy of a conductive polymer blend (PEDOT:PSS) thin film deposited by spin coating on a glass substrate (rotational frequency is 5000 revolutions per minute, film thickness ≈ 32 nm). Cantilever type: Nanosensors SSS-NCHR with a force constant in the regime of 10-13 N/m and a nominal tip radius of 2 nm. AFM instrument: Park Systems NX-20 (in ambient air). AM amplitude drop A/A_free ≈ 0.75. Left: topography, middle: phase between excitation and cantilever bending, right: amplitude. The phase exhibits a clear bimodal distribution peaking at −33° and +14° (not shown).
Figure 3: Bifurcation diagram of eqs. (9) and (10) with ε = 1, h^(s) = 0, and other parameters as in eq. (11). Data have been obtained from a numerical simulation of the stroboscopic map, i.e., from time traces evaluated at integer multiples of the period of the driving field. Skipping a transient of 50 iterations, the orbit settles on a stable fixed point (x_*, v_*). The dependence of r^2_* = x^2_* + v^2_*/ω^2 on h^(c) is shown for a quasi-stationary parameter upsweep (cyan, full symbols) and a parameter downsweep (red, open symbols). The black line shows the analytic result obtained from a first-order perturbation theory, see eq. (24).

Figure 4: Bifurcation diagram of eq. (9) for ε = 0.05 (other parameters are as in figure 3). Data are obtained from a numerical simulation of the stroboscopic map (red, open symbols), or using iterates of the analytic map derived by first-order perturbation theory, eqs. (20) and (21) (cyan, full symbols). Skipping a transient of 800 iterations, the orbit settles on a stable fixed point (x_*, v_*). The dependence of r^2_* = x^2_* + v^2_*/ω^2 on h^(c) is shown for a quasi-stationary parameter upsweep (left) and a parameter downsweep (right). The black line shows the analytic expression obtained for the fixed point at first-order perturbation theory, see eq. (24).

The amplitudes h^(c/s)_n settle on values of the driving amplitude h^(c)_∞ = 0.128... and h^(s)_∞ = −0.0002..., which correspond to the unstable branch right in the middle of the bistable region, cf. figure 4. We have also checked how the control performs for different initial conditions. For initial conditions in the range −5 ≤ x(0) ≤ 5, −5 ≤ v(0) ≤ 5 we always find successful stabilisation of the unstable periodic state, in line with what we expect from the first-order perturbation treatment. For larger values of the initial condition solutions occasionally seem to diverge, which is far from surprising given that the equations of motion and the control scheme contain cubic terms.

Figure 8: Control-based continuation of the stationary state in the Duffing oscillator, eqs. (9) and (10), for ε = 1, with parameter settings as in eq. (11), and with the control scheme defined in eqs. (25) and (21). Control offsets for the tracking have been taken from eq. (29) (with r^2_* serving as the curve parameter). Left: successful tracking of the stationary state r^2_∞ (blue, full symbols) in dependence on the amplitude of the driving field, see eq. (12). For comparison the corresponding data of the oscillator without control are shown as well (open symbols, cyan), see figure 3. Right: resulting amplitudes of the driving field, eq. (25), when control has been successful.

Figure 9: Control-based tracking of the stationary state in the Duffing oscillator with parameters and control design as in figure 8. To compensate for the non-vanishing phase of the driving field, see eq. (12), the estimate of the stationary amplitude r^2_* has been computed from time-series data x(n2π/ω + θ/ω) and v(n2π/ω + θ/ω) with a time lag given by the phase of the driving field. Stationary amplitude as a function of the amplitude of the driving field with control (full symbols, blue). For comparison the corresponding data without control, see figure 3, are shown as well (open symbols, cyan).

Figure 10: Least-squares fit of the analytic expression eqs. (24) and (30) to the data shown in figure 3, i.e., least-squares fit for the nonlinear resonance line of the driven Duffing oscillator, eqs. (9) and (10), with parameters as in eq. (11), h^(s) = 0, and ε = 1. Line: eq. (24) with eq. (30) and w_0 = −0.2338, w_1 = 0.3290. Symbols: data obtained from a parameter upsweep and parameter downsweep (cf. figure 3).
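The bifurcation data of figure 3 can be reproduced qualitatively with a quasi-stationary parameter sweep of the stroboscopic map. The sketch below again assumes a generic Duffing form x'' + 2γx' + x + εx^3 = h^(c) cos(ωt) with illustrative parameters (not the values of eq. (11)); the previously settled state is reused as the initial condition of the next parameter step, the transient is discarded, and the upsweep and downsweep traces expose the hysteresis of the bistable window.

# Minimal sketch: quasi-stationary up/down sweep of the stroboscopic map of an
# assumed Duffing oscillator; illustrative parameters, not the paper's eq. (11).
import numpy as np
from scipy.integrate import solve_ivp

gamma, eps, omega = 0.1, 1.0, 1.2
T = 2 * np.pi / omega

def fixed_point(h_c, state, n_transient=50, n_avg=5):
    """Iterate the stroboscopic map and return the settled r^2_* and final state."""
    def rhs(t, s):
        x, v = s
        return [v, -2 * gamma * v - x - eps * x**3 + h_c * np.cos(omega * t)]
    r2 = []
    for n in range(n_transient + n_avg):
        sol = solve_ivp(rhs, (n * T, (n + 1) * T), state, rtol=1e-9, atol=1e-12)
        state = sol.y[:, -1]
        if n >= n_transient:
            r2.append(state[0]**2 + state[1]**2 / omega**2)
    return np.mean(r2), state

h_values = np.linspace(0.01, 0.6, 60)
state, upsweep = [0.0, 0.0], []
for h in h_values:                    # upsweep: reuse the previously settled state
    r2, state = fixed_point(h, state)
    upsweep.append(r2)
downsweep = []
for h in h_values[::-1]:              # downsweep: start from the large-amplitude branch
    r2, state = fixed_point(h, state)
    downsweep.append(r2)
# Plotting upsweep and downsweep[::-1] against h_values exposes the bistable window.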
Analog Mars Rover Service as a Robotic Hardware and Team Building — Magma White is an analog Mars rover platform created by ABM SE and offered to the developers of scientific equipment built for space exploration missions, who want to test their devices at low- and mid-Technology Readiness Levels in demanding conditions of desert, Alpine and polar regions or artificial environments. The rover offers remote access to the payload through the Magma White mission control system. The paper summarizes the background of the analog solution. It covers the universal interfacing setup and issues related to the team and technological partners who supply elements of the payloads. Two analog missions provide a case study: Dachstein 2012, when the "WISDOM" ground penetrating radar for ExoMars was tested onboard Magma White, and Morocco 2013, with the "L.I.F.E." payload and complete remote access from Europe. INTRODUCTION ABM Space Education sp. z o.o. (ABM SE) was established in Poland based on the expertise and human resources of student robotics teams of the Mars Society's University Rover Challenge (URC) in Utah. Mars Society Polska, an NGO supporting the participation of Poland in international space programs, aided the startup. Up till today 11 Polish URC teams have been initiated by these activities, originating from 7 Polish universities. Rover teams included Skarabeusz (2009), Magma (2010), Magma2, Scorpio, Copernicus (2011), Scorpio2 (2012), Hero, Copernicus 2013, Hyperion, SKNL "Legendary" Rover Team, Scorpio3 (2013). ABM SE develops non-competition, utility versions configured as analog Mars rovers, based directly on the best solutions and specialists from the competition designs (including the URC and community), D-RATS, Arizona USA (NASA), NEEMO underwater habitat, Florida (NASA), PolAres, Rio Tinto, Spain / Dachstein Alpine caves / Morocco Erfoud (NGO - Austrian Space Forum). Sites such as the Atacama [21], [6] or Gobi deserts are being used and examined as good Mars analogs. There are also numerous studies focusing on environmental analogies for astrobiology research, including specific extremophile habitats, lakes, craters, and volcanic areas. Further research is being conducted in artificial indoor and outdoor facilities, such as the NASA ARC "Roverscape", JSC Rock Yard, JPL Mars Yard, and CNES Mars Yard [11]. Most of the mentioned programs included tests of robotic platforms, usually analog exploration and supporting robots. Finally, one should also mention competitions focusing on space analog tasks, such as the NASA Lunabotics Mining Competition, Google Lunar X Prize (stimulating analog studies), NASA Centennial Challenges or the Mars Society's University Rover Challenge, which are directly related to the development of planetary exploration rovers.
It should be stated, that programs run by large entities and by NGOs/communities have several differences.Furthermore, the community programs are becoming a very attractive form of analog research.It can be assumed, that these open programs stimulate development of small enterprises willing to enter the space market or to specialize in cost-efficient space solutions.New players, without background of wellestablished, dedicated funding programs can benefit from this opportunity [3].The significance of analog research is reflected in the European 7 th Framework Program [7], proving a demand and potentially a market for this type of activity.The open analog programs feature: low cost of research, volunteering work, efficient media mechanism, involvement of non-space players as sponsors and partners, lack of costly bureaucratic and procurement processes and procedures (as opposed to the research performed by large entities), creative freedom (as opposed to complicated and rather strict content selection procedures at large entities), enforcing of rationalization caused by low research budgets.This last feature proves to be very interesting, as it can directly lead to innovation, competitiveness and allows to perform research in times of financial cuts.These features have been also the foundation of ABM SE and Polish URC rovers.The designs have proved to be not only competitive, but also reliable due to their simplicity and easy operation. Open analog research provides abundance of data, including recording and systematization of even very basic, simple operational and usage observations and experiments.These data are generally available as scientific data, in major part on the Internet, as mission reports, and can be used also by larger entities.It can be assumed, that these data are especially attractive for the industry, since performing of such simple, but often lengthy activities within regular structures of industrial or public employment is economically unjustifiable.So open analog research proves to be a reasonable way of gaining these data and recording potentially critical observations, that might save missions, programs and budgets. In some areas of activities analog research provides very good simulation of actual planetary conditions, while in some areas it is only an approximation.Still a merger of realistic simulation and approximation creates a potential for inclusion of all possible elements of a planetary mission.Each of the studies focuses on different aspects of such a mission, while the others are approximated.For mechanical and other solutions for exploration rovers, analog research proves to be quite effective.It allows to implement proper knowledge for actual planetary robotics in the area of structures, control systems, procedures and communication architecture.The applicable knowledge ranges from simple observations [2] to technical compendia for flight hardware [9], [1].Due to the Earth mass it can be assumed, that structure efficiently working on Earth, will likely work efficiently on Mars, Moon or other bodies less massive than the Earth.A proper algorithm for calculation of mass-to-power ratios for planetary robotics is a tool that puts analog research one significant step further. B. 
ABM SE analog Mars rover hardware and software Magma White is a mobile robotic platform developed by ABM SE as analog Mars rover.[Fig.1].It is a 100 cm long, 90 cm wide, 35 kg rover made of plastics capable of withstanding standard ambient temperatures in most of Earth's environments.The rover has 6 wheels with independent motors, non-steered, with 6 independent beams, secured with lightweight, temperature-resistant flexible bands.The deck allows installation of various accessories, including the main camera mast, main antenna mast and the 3-stage robotic arm, constituting the standard equipment.This equipment can be replaced by special payloads.Main computer system is located in the central trunk, isolated from external conditions.Rover runs on off-the-shelf batteries, allowing operation of more than 6 hours.Communication is realized over standard Internet protocols: WiFi 5 GHz, and 2,4 GHz frequency for diagnostics.Control software and interface are dedicated for Windows PC and Android smartphone.This solution allows easy operation and fast introduction for potential users, as well as easy integration with existing infrastructures in any location.Range of rover operation depends on the type of antenna used.The rover electronics are arranged in a modular fashion.It allows time-effective reconfiguration of the platform for each expedition, payload and experiment.The electronic modules are easily accessible and can be quickly replaced in case of potential failure or according to mission schedule requirement.General functional blocks include: power, computer, motor driver, communication, camera, and additional modules.Voltage generating module works in pulse mode to provide efficient power preservation.Onboard computer controls all execution and input modules.All other modules, such as camera, communication, can be replaced with analogous modules of different type (WiFi, analog radio, BT etc., different types of cameras, sensors, etc.).Power and interface buses with efficient throughput allow flexible adaptation to mission requirements.Currently a version 3 control software is used: after the initial version for a single PC, a new more extended version was prepared, together with an Android smartphone control module, and currently completely new software is being created, in a form of local/remote control center for the rover and the payloads, capable of simulating various space-specific features, such as time delays and communication breakdowns.This general characteristics result from experiences gained during the University Rover Challenge [12] and during later tests.Various types of suspensions have been tested, as well as RF communication for commands and video, with various results.The hull is a structure resulting from the adopted suspension model and it was designed by Wojciech Głażewski, who also designed two URC competition rovers from the Magma series.Such characteristics allow operation of the rover in: rocky and sandy desert terrain in temperatures up to 40°C (tested; higher possible), simulated regolith, with slopes up to 40 degrees, on rock fragments several centimeters large.The rover can reach average speed of 5 km/h in easy desert terrain.Also ice and snow surfaces have been tested.Ambient temperatures below zero Celsius do not cause any problems.The rover performs very well on pure clean ice and crushed ice.It is capable of making a smooth, easy traverse and does not slid while turning or stopping.Slow speeds are recommended.Deep snow is difficult to cross; further tests 
are required. The rover has also been successfully tested in urban, pavement, grassy, and wooded terrains of the Central European type. The purpose of this rover setup is to provide the potential to carry equipment and experiments developed for planetary exploration missions, such as: arms, sampling tools, optical and laser meters, geophysical devices with antennas or electrodes, drills, scoops, penetrators, various cameras, as well as mechanical add-ons and alternative or auxiliary power sources. The flexible architecture, both of the structure and the electronics, allows connection and remote operation of any kind of scientific device up to 20 kg of total weight. The analog's modularity allows continuous development of the design, from simulated and replacement modules and materials, towards architectures and solutions for spaceflight. C. Significance of team building Development of a competent team is as important as the development of the hardware/software, especially for a small company. The creative pool and growing experience are the most important resources of a team, and they are not easily replaceable, especially in a niche type of activity. Space-focused education in Poland is officially present only at the Warsaw University of Technology, and a few universities also teach the necessary skills, but without shaping the graduates' identity as space sector professionals. In terms of establishing a stable team a specific approach had to be adopted. It required several years of preparatory activities and a strategic plan. Initial activities were based on Mars Society Polska, which actively initiated, encouraged and supported student URC teams. From those teams competent management and engineering staff have emerged, and the self-confidence was further leveraged by competition successes. Currently these specialists constitute the core team of ABM SE, and new specialists are being recruited from ever emerging Polish URC rover teams. The company invites the best technical URC solutions to be incorporated in the professional version. The company also recruits non-URC staff directly, emphasizing the opportunity to take part in analog space missions. Most of them are first-job employees, with backgrounds in technical physics, mechanics, electronics and IT. This approach allows the company to shape a dedicated team of space sector professionals who offer original, generic technical solutions. The team is young and creative, willing to use very modern tools with potential for space applications. Also the URC success story, and some additional projects, such as ABM SE's Virtual Mars Rover system in the form of a video game, attract many promising professionals. Game and simulation systems are believed to be a very efficient hardware and team building tool, as confirmed by experience and opinions [8].
Analog space missions prove to be attractive and appealing not only to the direct participants-employees.Since these missions are both attractive and accessible to media, as expeditions performed in medially exotic or space-like and reasonably possible to reach environments, they constitute a great education and outreach tool [18].In case of projects that allow additional funding or support, this tool can be used for direct stimulation of the space sector.It engages non-space parties, especially technological companies, as sponsors and providers of non-space solutions needed for simulation.These parties join because of outreach and advertising potential of the mission.As a result these parties can eventually become interested in direct engagement into higher-profile missions, whether analog or space-oriented.ABM SE case is a good example of such industrial team building.The process started with a distributor of electronic components for the URC rover designs (TME), who later joined the ABM SE's analog mission in Dachstein, together with a distributor of electromechanical components (Archimedes).Both companies joined also the ABM SE's Morocco 2013 mission.With this supply and development environment it is much easier to select proper analog design, and the company itself is able to provide the partners with detailed feedback related to performance of their components in relevant environments. D. Dachstein 2012 Mars analog mission Dachstein 2012 Mars analog Mission was organized by the Austrian Space Forum (OEWF), with participation of 12 international scientific teams [10].It took place between 27 th April and 1 st May 2012, in Dachstein Giant Ice Caves in the Alps, providing a good Mars regolith analog, potential Martian cave analog and area of testing the of equipment on clean ice (horizontal and sloped ice surfaces).Some additional challenges were important, such as protecting the rover against high humidity (up to 100%), low temperatures (from +2°C to -2°C inside the cave and lower temperatures outside the cave), providing proper grip on ice, and especially limiting of the interaction with the natural conditions inside (of) the cave.The cave is a protected habitat.Limiting of human supervision was the first exercise for the team, emphasizing the need to develop a supervision-free and highly sterile solution.In practice the exercise was not very successful, but allowed to notice this problem.ABM SE's team had two goals.One was the first field test of the new Magma White electronics and control architecture, with remote control from the Mission Control Center located in a cable car station below the caves.Development works before the mission and recording of parameters during the mission focused on gathering the data for analysis of intelligent power management [14], [17].Runs on clean ice, on rocky, sloped area, as well as overnight hibernation tests were performed inside the caves.The most complex activity included controlling the rover remotely and performing a joined task with the Antipodes experiment: an analog EVA space suit tester has interacted with the rover on the test site, with parallel communication with a scientific team in Wellington, New Zealand.The task was performed in a satisfactory way, considering its procedural complexity.For the ABM SE team it was the first occasion to participate in a global mission control procedure.The second goal for ABM SE was a joint operation with Wisdom ground penetrating radar (GPR) team [4].The device is being developed for the Exomars 
rover, and Magma White provided an opportunity to achieve sample GPR results from automated runs on an even ice surface. 25 cm movement increments were adopted, and the Wisdom hardware was synchronized with the Magma White driving system, performing the sequence: 25 cm move - measurement - processing. The sequence was preprogrammed, so the operator had only to initiate it and then decide on the end point, when the rover approached a dangerous ice fall. The 25 cm increments were controlled by a dedicated encoder. The adopted algorithm is presented in [Algorithm 1]; a simplified sketch of this loop is given at the end of this subsection. Based on observations, a 4 cm precision margin was not exceeded for any move. The Wisdom module was installed in the front of the rover, after removal of the standard robotic arm [Fig. 2]. The antennas were easily accommodated, maintaining a constant, proper distance from the probed surface. Additionally, the GPR main module and an auxiliary laptop were mounted on the main installation guides on top of the rover deck. The whole joint experiment had been prepared for several months in advance. The Magma White team had a mockup fixing plate prepared according to the Wisdom team's instructions. The plate was installed in ABM SE's workshop, and trial runs with a 9 kg load (equal to the Wisdom hardware at that time) were conducted in Poland to ensure smooth operation in Austria. Wisdom and Magma White were integrated in Dachstein very fast and without the need for structural modifications. The 2.4 GHz communication channel had to be switched off to eliminate GPR interference. The Wisdom team managed to capture satisfactory sample profiles of ice layers with the use of the rover. The joint activity was presented at the 2012 International Workshop for Planetary Missions in Greenbelt, Maryland [5]. The PRoVisG [15] system for generation of 3D surrounding reconstruction was an additional payload. Its markers were installed in selected points of the cave and onboard the moving rover. The idea was to reconstruct movements of the rover in a virtual 3D environment, and later correlate it with the Wisdom measurements.
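A minimal sketch of the preprogrammed move-measure-process loop described above (Algorithm 1). The rover and GPR interfaces shown here (drive_increment, trigger_measurement, operator_abort_requested) are hypothetical placeholder names for illustration only, not the actual Magma White or WISDOM interfaces; the 25 cm increment and the 4 cm tolerance are the values given in the text.

# Illustrative sketch of the Dachstein move-measure-process sequence (Algorithm 1).
# All rover/GPR calls are hypothetical placeholders, not the real Magma White API.
INCREMENT_CM = 25.0     # commanded step length (from the mission description)
TOLERANCE_CM = 4.0      # precision margin reported in the text

def drive_increment(target_cm):
    """Placeholder: command the drive system and return the encoder-measured distance."""
    raise NotImplementedError("replace with the rover's drive interface")

def trigger_measurement(step_index):
    """Placeholder: trigger one GPR sounding and block until processing is finished."""
    raise NotImplementedError("replace with the payload interface")

def operator_abort_requested():
    """Placeholder: the operator decides on the end point (e.g., near an ice fall)."""
    return False

def run_profile(max_steps=100):
    log = []
    for step in range(max_steps):
        if operator_abort_requested():
            break
        travelled = drive_increment(INCREMENT_CM)
        if abs(travelled - INCREMENT_CM) > TOLERANCE_CM:
            log.append((step, travelled, "increment outside tolerance"))
            break
        trigger_measurement(step)
        log.append((step, travelled, "ok"))
    return log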
Morocco 2013 Mars analog mission Mars analog simulated Mission Morocco 2013 was also organized by the OEWF, between 1 st and 28 th of February 2013, in the Moroccan Sahara desert near Erfoud.In December 2012 a two days preparatory dress rehearsal was also performed in Innsbruck, with a dozen of international teams present.Comparing to Dachstein 2012, Morocco 2013 was a more realistic simulation.The terrain simulated wellknown features of Mars surface.For the purposes of Magma White runs it was divided into three categories: easy (flat, hardened and loose sand surface with fine gravels), moderate (loose sand and diversified gravel size, dry bushes present, periodical shallow stream beds, relatively poor Mars analog), hard (rocky hill slopes with inclination up to 50%, diversified rock sizes).Operations were performed during the day, with temperatures reaching +35°C.The rover was stored for the night in a non-heated tent, while the temperatures dropped to -4°C.Additional difficulties included presence of fine dust and (of) strong winds every 5-6 days.The Mission Support Center (MSC) was located in Innsbruck.The desert camp simulated a habitat of a field (landing) crew and remained in constant communication with the MSC, over the Internet.The experiments were either taken to the Sahara directly by the Austrian field crew or first installed by the experiment teams during the first week of February, and then left under the supervision of the Austrian field crew.These experiments were later controlled from the field base, MSC or directly from experiments home sites (such as Toruń, Poland in case of ABM SE).During the simulation OEWF was to supervise all experiments, with the use of the field crew of 8, including 2 EVA spacesuit simulators, and the scheduled communication windows.ABM SE did send a team of 3 to Morocco to install Magma White rover in the base camp and perform the communication test with the Innsbruck MSC and the ABM SE headquarters in Toruń.The team left after 5 days of nonsimulation mode activities.ABM SE's first simulated sessions were performed from Innsbruck MSC, and the remaining sessions from Toruń, through the MSC.All telemetry, video and control data were available in all three locations.The base camp did have a fair Internet connection from a local provider, and at a later stage a broadband satellite link was installed in the desert, improving the communication.Magma White did connect to the base camp network via 5 GHz connection.Two basic mission operation modes were employed: simulation (sim) and non-sim.The sim mode did apply a 10 minutes delay on all communication with the field crew and 10 minutes delay for the crew to communicate with the MSC.It was the basic mode used every day by the OEWF for most of the activities.For non-sim activities a natural signal, video and command delay resulting from the network and satellite band and link quality was present.On the field crew side ABM SE's experiments required basically switching on and off (of) the rover and charging (of) the batteries.L.I.F.E detector [20] was the main payload for Magma White in Morocco 2013 [Fig.3].L.I.F.E.(Laser-induced fluorescent experiment) is a 4 kg laser/optical module in an aluminum housing.Laser light from the tip has to be projected on a probed surface (preferably rock), where microbial life is suspected.The tip had to approach samples quite closely (about 1 cm distance was assumed), and proper flexibility of its movement had to be maintained.Algorithms for aiding of instrument positioning 
were adapted.One algorithm [Algorithm 2] controlled tilting of the instrument platform to an optimum position, where a proximity sensor on the instrument tip recorded the closest tip position without moving (of) the whole rover.Another algorithm [Algorithm 3] combined Algorithm 1 of the rover movement and Algorithm 2 of the instrument tilt control, allowing almost autonomous approach of the sample.This semi-autonomous method with the use of algorithms still requires proper calibration.L.I.F.E.instrument was installed in the front part of the rover (after removing the standard robotic arm), on two servos allowing its tilting from vertical position (-90 degrees in relation to the rover plane) to servo-off traveling position (+20 degrees in relation to the rover plane).This arrangement allows flexible approaching of probed samples.It does not allow movement to the sides, so the whole rover has to be turned.Additionally control laptop, connected directly to the rover's system through an Ethernet cable, was installed on the installation guides on the rover's deck.Working with L.I.F.E.revealed several elements that have to be corrected, including implementation of a solution allowing shading (of) the sample from ambient light, and possible mass reduction, at least for the analog tests, since manipulating of a solid one-piece instrument consumes a lot of rover's power resources.For future payloads a modular approach is suggested, where only the manipulated head is placed on moving parts, and as much of other instrument systems as possible are placed in another fixed module.This approach is, however, difficult to achieve with and optical device, such as L.I.F.E., so limiting of the housing mass could be a good solution here.No Earth vs. Mars gravity mass-to-power calculations were performed for optimization of power consumption by the experiment this time. During the mission the following main tests were performed. • Rover follow, run 1, non-sim, terrain: hard.Exercise with EVA suit tester closely following the rover's path, rover as a safety scout.Local control.Status: success. • L.I.F.E.local test, run 1½ , additional workshop run, nonsim.First approaches of sample rock set in the workshop conditions.Implementation of autonomous instrument tip control.Status: success. • L.I.F.E.payload runs, runs 1, 2 non-sim, run 5 sim, terrain: easy.Approaches of rock samples in the field, on a preselected rock, sampling trials, test of handling the payload by the rover, power consumption tests.Status: moderate success, sampling method requires additional development. • Remote control test from Toruń with local supervision, runs 2, 3, non-sim, terrain: easy to moderate.Establishing of overseas control link, the rover controlled from ABM SE headquarters, supervised locally by ABM SE team in Sahara.Trial runs.Status: success. • Presentation to the Moroccan Minister of Science and Higher Education, run 4, non-sim, terrain: easy.Operated by ABM SE team from Innsbruck.With local non-ABM SE supervision.The Minister present at the field camp in Sahara. • Remote control test from Innsbruck, with local non-ABM SE supervision, run 5, sim, terrain: easy to moderate.ABM SE team (has) moved to Innsbruck and all activities of the rover were supervised by the OEWF field team.Control commands and crew messages were sent by ABM SE team from MSC. Status: success. 
• Remote diagnostics with local non-ABM SE team, run 6, sim and non-sim, workshop run.Performed by ABM SE team from Innsbruck.After losing (of) communication with the rover it was moved to the workshop by the field crew and diagnostic procedure was performed, step by step, with and without the field crew, with reestablishing of some vital rover functions.Status: success, no definite cause was stated, but the procedure was efficient and reestablished rover functions.The problem did not appear again till the end of the mission. • Presentation to ESA delegation to Poland, run 7, non-sim, terrain: easy.Rover operated from Toruń, monitored from Warsaw by ABM SE and ESA delegates. • Remote control test from Toruń, without supervision, run 8, sim/non-sim, terrain: easy to moderate: ABM SE team (has) left MSC.Control commands were sent from Toruń, with MSC monitoring.Field crew participation limited to switching the rover on and off.Status: success. • Navigation and range test without GPS reference, run 8, sim/non-sim, terrain: easy to moderate.Navigation basing on visual terrain features, analysis for future autonomous software processing.Testing of WiFi coverage in the base camp vicinity.Control from Toruń.Status: success. • Power source longevity test, run 8, sim/non sim, terrain: easy to moderate.Free exploration run with L.I.F.E.payload removed.Control from Toruń.Status: 6 hours of operation confirmed, more available but not tested. • Joint run with Hungarian PULI rover, run 9, sim/non sim, terrain: easy.MSC, PULI Mission Control in Hungary and ABM SE headquarters were involved in running of this activity, together with the supervisory field team.Coordination of several control centers was practiced, operation of two rovers, navigating with visual system and taking photos of each other was practiced.WiFi coverage in the vicinity of the base camp was tested, by comparison of WiFi signal strength on each of the two rovers in the same and different locations.Additionally operation in strong wind conditions, with fine dust being blown onto the rovers, and their systems, including optical, was tested.Status: success. • EVA suit monitoring, run 10, sim, terrain: moderate.Test operated from Toruń, in coordination with MSC.Rover monitored EVA suit tester activities during "Delta" experiment.Practicing of coordination of rover actions with suit actions, to provide a mobile supporting, safety and monitoring platform for EVA.Status: success, with remarks related to a requirement of dedicated procedure for joint activity. • Navigation test, run 11, sim/non sim, terrain: easy to moderate.Spot tracking device installed on the rover, as additional GPS system.The aim of the test was to teach the driver and the route planning team to correlate quickly data from camera images with a satellite image and to decide on the optimum, safest moves, observe the results, observe the power level.Control from Toruń.Status: fair, with slight deviation from the planned route and with loss of Spot signal after 23 track points.Rover was driven to the base without further position updates. • New software field trials, all runs.Status: success. • New electronics field trials, all runs.Status: success. • Dust influence trials, all runs.Status: no adverse effects noted.A simple dust deposition monitoring method failed. 
• Prolonged usage trials, all runs.Status: success.Only one failure noted during 11 main runs and additional workshop activities, performed between 1 st -28 th Feb, without direct, hands-on access of ABM SE team to the hardware from 7 th Feb.The failure was not caused by internal rover's problem, but by a problem on the interface between the rover and local network (probable low bandwidth cause).Sea shipping container and car transport from the coast to Erfoud must be considered as additional stressing factors. CONCLUSION The missions described in the paper prove proper development hardware/software and practical team training strategies for creation of a universal planetary exploration analog system, capable of performing tasks in desert, alpine and potentially polar environments.The target platform will constitute a breadboard for higher TRL space and non-space systems.Direct postulates include development of communication system and philosophy, capable of coping with low-bandwidth or communication breakdowns and development of a specific level of autonomy.These postulates call for organizing a High Arctic expedition, where satellite communication limitations are present, requiring a dedicated communication solution, and where components can be tested in even more demanding conditions.The platform and the Arctic mission perspective raise interest among engaged and potential partners.This brings up a postulate to keep developing the basic hardware platform along with highly universal interfaces for third-party payloads.Basing on just a single system consisting of one rover and up to three control centers, the missions engaged a numerous team, with various competences, including technical and scientific ones, in a form of employment and voluntary scientific support.In a wider context, the team included also representatives of industry partners, interested in the offered service.Finally, the missions provide an opportunity to present specific technologies, separately or in a whole package, to the international community, by means of direct presence in the field with a group of international teams, large coverage of international, specialized and popular media and by summarization and publication of the results in reports and scientific literature.It can be stated that contemporary space analog research has the potential of stimulating space industry in the area of planetary robotics, even without being attached to any specific mission or program.It can also attract additional non-space budgets and open door to the space sector for non-space players.The described missions have proved to be a valuable platform for various types of compatible activities.They have been organized by an SME, with a relatively low budget.Also progress from the URC, through the two analog missions, to the planned High Arctic mission has been shown.The main conclusion states, that such a coordinated, progressive effort, involving more and more parties finds justification and should be continued.Financing is the only major limitation for continuation of this program in a similar form.Much better results could be achieved with stabilized funding of the program.Such funding can be achieved by means of a dedicated grant.The conclusion calls for application for such a grant, either to ESA, EC, Polish or other funding schemes. Fig. 2 Fig. 2 Magma White with WISDOM payload and PRoVisG markers during the Dachstein 2012 analog Mars mission Fig. 3 Fig. 
3 Magma White with L.I.F.E. payload during the Morocco 2013 analog Mars mission
Algorithm 1: Controlling of wheel movement with a defined distance to travel (increments).
Algorithm 2: Controlling of the movement of the L.I.F.E. instrument platform.
Algorithm 3: Controlling of rover wheel movement with a defined distance between the instrument tip and a sample (sample approach).
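To make the instrument-positioning logic of Algorithms 2 and 3 concrete, here is a minimal sketch of the combined sample-approach loop: the instrument platform is tilted while a proximity sensor on the tip is monitored, and if the tilt range is exhausted the whole rover advances by a small increment and the tilt search repeats. All interface names (tilt_platform, read_proximity_cm, drive_increment) are hypothetical placeholders for illustration; the roughly 1 cm target distance and the tilt range are the values quoted in the text.

# Illustrative sketch of the L.I.F.E. sample-approach logic (Algorithms 2 and 3).
# Hardware calls are hypothetical placeholders, not the actual Magma White API.
TARGET_DISTANCE_CM = 1.0                    # desired tip-to-sample distance (from the text)
TILT_MIN_DEG, TILT_MAX_DEG = -90.0, 20.0    # tilt range of the instrument platform

def tilt_platform(angle_deg):
    """Placeholder: set the instrument platform tilt via the two servos."""
    raise NotImplementedError

def read_proximity_cm():
    """Placeholder: read the proximity sensor mounted on the instrument tip."""
    raise NotImplementedError

def drive_increment(distance_cm):
    """Placeholder: advance the rover by a small, encoder-controlled increment."""
    raise NotImplementedError

def approach_sample(rover_step_cm=5.0, tilt_step_deg=2.0, max_rover_steps=10):
    """Algorithm 3: alternate a tilt search (Algorithm 2) with small rover advances."""
    for _ in range(max_rover_steps):
        # Algorithm 2: sweep the tilt range and keep the closest achievable position.
        best = None
        angle = TILT_MAX_DEG
        while angle >= TILT_MIN_DEG:
            tilt_platform(angle)
            distance = read_proximity_cm()
            if best is None or distance < best[1]:
                best = (angle, distance)
            angle -= tilt_step_deg
        if best[1] <= TARGET_DISTANCE_CM:
            tilt_platform(best[0])
            return True                     # tip is close enough for a measurement
        drive_increment(rover_step_cm)      # not reachable by tilt alone: move the rover
    return False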
Consistency conditions for fields localization on braneworlds The general procedure for analyzing the localization of matter fields in brane models is to integrate, in the action, their zero-mode solutions over the extra dimensions. If this is finite, the field is said to be localized. However, the zero-mode solutions must also satisfy the Einstein equations. With this in mind, we obtain stringent constraints on a general energy-momentum tensor by analyzing the Einstein equations. These consistency conditions must be satisfied for any braneworld model. We apply them to some fields of the Standard Model. For a free massless scalar field, the zero-mode localization is consistent only if the field does not depend on the extra dimensions. For the spin 1/2 field with Yukawa-like interactions, we find a very specific relation between the Yukawa function and the warp factor. As a consequence, the spinor field localization becomes inconsistent for most of the models studied in the literature. For the free vector field case, we find that the zero modes do not satisfy the consistency conditions. Finally, we consider the mechanisms proposed to localize this field. We find that a few survive, and even for these, the consistency conditions fix the free parameters or the possible class of solutions allowed. Introduction Braneworld scenarios gained prominence after the emergence of the 5D warped models introduced by Randall-Sundrum (RS) [1,2]. Since then, many other models with localized gravity have been proposed. Some of them also in 5D, such as: models with thick branes generated by scalar fields with different potential functions [3,4]; deformed brane models with internal structure [5,6]; thick branes with purely geometric features [7]; proposals in cosmological contexts [8]; and f(R) theories [9]. A comprehensive and more detailed review of the thick braneworld can be found in Ref. [10]. Besides these 5D scenarios, other proposals in higher-dimensional configurations were presented. For example, in 6 dimensions, braneworld models generated by a string-like topological defect with a scalar field [11,12]; vortex defect theories in the context of the abelian Higgs model [13]; or cigar-like thick braneworld models [14,15]. Also, there are versions of these models in a cosmological context [16,17], among others [18][19][20]. There are also proposals in higher-dimensional scenarios [21][22][23][24]. Amid this large variety of work, we can find some with the focus on the general features of braneworlds, as Refs. [25,26]. In these papers the consistency of the gravitational field, Newton's law on the brane, the search for resonant gravitational modes, and other related issues are studied. In this braneworld context, beyond the gravitational field, the issue of the Standard Model (SM) fields localization should also be verified. Some general studies have been performed in the literature. Reference [27] presents a detailed study of the localization of free spin 0, 1/2 and 1 fields in the RS-II delta-like brane model. Among these fields only the free scalar field and the left-handed spinor can be confined on that model. In Refs.
[28][29][30][31] the localization of the above fields for thick branes embedded in Ad S 5 space was analyzed. For these cases, the scalar and the left-handed spinor can be confined, and again the U (1) gauge field is not localized. Another study, performed for thick branes embedded in d S 5 space [32,33], showed the same results for the scalar and the spinor fields; however, unlike the early models, the free vector field can be confined in such models. The same analysis was also performed for other dimensional configurations. In 6D string-like models, for example, Refs. [34,35] show that the free spin 0 and 1 fields can be localized. In Ref. [36], the author shows that, beyond the scalar and the U (1) vector fields, the free spin 1 2 field can also be confined on the brane. In arbitrary higher-dimensions, Refs. [37,38] obtained the same results above for the Standard Model fields. With this, we already have an indication that the confinement of fields, mainly the spinor and the gauge vector fields, is closely related to the geometric features of space. In another direction, aiming to obtain the localization of fields, some mechanisms were proposed. For example, in Refs. [39][40][41][42][43] the localization of a spin 1 2 field is obtained for various braneworld scenarios by proposing a Yukawalike coupling. Reference [44] shows the confinement of a non-abelian Yang-Mills field by introducing non-minimal couplings with gravity. Among others, for these and other fields, and for those and various other braneworld models we may mention [45][46][47][48][49]. In most 5-dimensional braneworlds, the free U (1) gauge field cannot be confined, as mentioned above. However, by adding some suitable interaction terms, that field can be localized. For example, in RS-II like models, the localization can be obtained by adding an interaction term between a 3-form field (topological term) and the vector field [50]; or by proposing a non-covariant mechanism with two mass terms, one in the bulk and another just in the brane [51]. The confinement is obtained by fine-tuning these mass parameters; or by proposing non-minimal interactions between the gravity and the vector field through the Ricci scalar [52,53]. For 5D thick brane models, Refs. [54][55][56][57][58][59] proposed a modified kinetic term of the vector gauge field by adding couplings with a scalar field function. In 6D models, Ref. [60] proposed a delta-like brane generated by a brane intersection and, to confine the vector field, the author proposes interaction terms between this and the Ricci scalar and/or the Ricci tensor. In higher-dimensional models, Ref. [61] carried out a general study of the abelian vector field localization through the couplings with the scalar and the Ricci tensor. Despite these various results indicating that the Standard Model fields can be well-defined on the braneworld scenarios, there is not yet a study on the consistency of the localization procedure. Generally, when we talk about fields localization on braneworlds, it means that we wish to factor out an action S (D) defined on the bulk into a sector containing an effective action S (4) e f f on the 3-brane and an integral K in the coordinates of the extra dimensions. Thus, we say that the theory is well-defined, i.e., the field is localized on the brane, when the integral K is finite. In this manuscript, we aim to analyze the consistency of this localization procedure with the Einstein equations for the SM fields. 
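For orientation, here is a schematic version of this factorization for a free massless scalar zero mode, written for an assumed warped ansatz $ds^2 = e^{2\sigma(y)}\,\hat g_{\mu\nu}(x)\,dx^\mu dx^\nu + \bar g_{jk}(y)\,dy^j dy^k$ (the paper's own metric, its eq. (1), is taken here to have this standard form; the exponent of the warp factor appearing in K depends on that assumption):

\begin{align}
  S^{(D)} &= -\frac{1}{2}\int d^{d}x\, d^{n}y\,\sqrt{-g}\; g^{MN}\partial_M\Phi\,\partial_N\Phi ,
  \qquad \Phi(x,y)=\phi(x)\,\xi(y),\\
  \sqrt{-g} &= e^{d\,\sigma(y)}\sqrt{-\hat g}\,\sqrt{\bar g}, \qquad
  g^{\mu\nu}=e^{-2\sigma(y)}\,\hat g^{\mu\nu},\\
  S^{(D)} &\supset
  \underbrace{\Big[\int d^{n}y\,\sqrt{\bar g}\; e^{(d-2)\sigma(y)}\,\xi^{2}(y)\Big]}_{K}\;
  \underbrace{\Big[-\frac{1}{2}\int d^{d}x\,\sqrt{-\hat g}\;\hat g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi\Big]}_{S_{\rm eff}} .
\end{align}

For a y-independent zero mode the extra-dimensional derivative terms drop out, and localization amounts to the finiteness of the integral K.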
Special attention is paid to the U (1) gauge field, both the free field and the cases where some localization mechanism is used. This work is organized as follows: In Sect. 2, we obtain two general consistency conditions that must be satisfied by any field in order that the localization procedure to be consistent. We apply these conditions for spin 0 and 1 2 fields in Sects. 3 and 4, respectively. In Sect. 5, we carry out the consistency analysis for the vector field, free and with some localization mechanism. Conclusions are left for Sect. 6. Einstein equations-consistency conditions To perform a more general and comprehensive discussion, let us consider a braneworld in (D = d + n) dimensions with metric given by the generic ansatz Here d is the brane dimension, indexed by (μ, ν, . . .), and n is the number of extra dimensions, labeled by ( j, k, . . .). Beyond this, the metric (1) will be considered diagonal with signature (−, +, +, · · · ). As mentioned before, in the study of fields localization on braneworld scenarios like (1), we wish to factor out a matter Lagrangian like into an effective action on the brane and an integral in the extra dimension, i.e., Thus, we say that the theory is well-defined, or the field is localized, if K is finite. In doing this, the metric (1) is considered only as a background previously determined by some process. And also, it is assumed that the matter Lagrangian (2) will not change the background geometry. Here, we will discuss exactly the consistency of this last assumption by studying the Einstein equations. The Einstein-Hilbert action for an arbitrary braneworld model, with a matter Lagrangian like (2), can be written as Here, L b is related to the brane generation mechanism, and it is a function only of extra dimensions. The term L m is the matter Lagrangian related to the Standard Model fields, which the confinement should be studied. By performing the variation of the action (4) with respect to the metric, we get the Einstein equations We should add to Eq. (5) the equations of motion (EOM) related to the fields in the Lagrangians L b , and L m . Fortunately, these EOM are not important to our discussion, and they will not be written here. From the metric (1), we can get the Ricci tensor components, and the components R jμ = 0. From these, the Ricci scalar can be written as whereR =ḡ jkR jk andR =ĝ μνR μν . That way, Eq. (5) can be separated into the following two equations: By using Eqs. (6), (7) and (8), Eq. (9) can be written aŝ where Eq. (11) is solved to obtain the vacuum braneworld metric, the matter Lagrangian L m is considered to be equal to zero, and the Lagrangian L b is a function only of the extra dimensions. In this setup, T b μν (x, y) =ĝ μν (x)e 2σ (y) L b (y), and, with this, we can perform the separation of variables in Eq. (11) as with α a constant that will be interpreted as the cosmological constant on the brane. Thus, the braneworld metric in the vacuum (L m = 0) should satisfy Eq. (12). In the study of field localization, where the starting point is the matter Lagrangian L m , the metric is exactly that obtained in the vacuum, and therefore it satisfies (12). Now, let us assume that Eq. (12) is still valid even after adding Lagrangian L m . With this, Eq. (11) can be written aŝ Thus, we observe that the left-hand side of (13) does not depend on the extra dimensions y j , therefore, for consistency reasons, the energy-momentum tensor of the matter field should satisfy the condition As regards Eq. 
(10), it can also be simplified by using Eqs. (6), (7) and (8). By doing this, we get a condition on the components T m jk given by These consistency conditions, (14) and (15), are completely independent of the brane model, the number of extra dimensions, and also of the matter field considered in L m . That way, such conditions have a general valid, and should be satisfied for any model with the features considered above. Note that these conclusions are closely related to the possibility of the metric not changing by the presence of matter fields. In other words, these consistency conditions mean that backreaction effects from the matter fields on the bulk geometry can be eliminated. Let us apply these results for some known cases. Applications-scalar field Let us start by discussing the scalar field localization. The Lagrangian for a massless scalar field in the braneworld model (1) is given by In studying the localization of this field, we can obtain the equation of motion whereḡ is the determinant ofḡ jk (y), andĝ is the determinant ofĝ μν (x). With this, by proposing Φ(x, y) = φ(x)ξ(y), it is possible to separate the variables for the zero mode as From this, a solution for (19) can be obtained, and the localization can be analyzed. As discussed in Refs. [27][28][29]32,34,36,54,55], there is a constant solution for (19) that can be confined for a wide variety of models. In order to test the consistency conditions (14) and (15) for the zero mode, let us calculate the energy-momentum tensor from (16). In doing this, we get By using the constant solution ξ 0 (y) = c 0 for (19), the components T m μν , obtained from Eq. (20), can be written as Therefore, the consistency condition (14) is immediately satisfied. About condition (15), we can get, from (20), for the zero mode, Thus, by comparing this with (15), we conclude that both consistency conditions are satisfied. Therefore, the free scalar field (zero-mode) localization is consistent with Einstein's equations, and any possible back-reaction effect from the scalar field on the background metric must be caused by the massive modes. Note that nowhere was it necessary to define the braneworld model, or the number of extra dimensions, for the consistency conditions to be met. In this way, these results for the zero mode of the scalar field are valid for a wide variety of models, whether for those with thin or thick branes, and for arbitrary codimension. Applications-spinor field Now let us see briefly the spin 1 2 field localization for an arbitrary codimension 1 model. In this particular configuration, the metric (1) will be written as with μ, ν = 1, 2, . . . , d (even). Beyond this, the consistency conditions are given by We will consider the spinor field coupled to an arbitrary scalar function f (y) through a Yukawa-like interaction term. The Lagrangian for this case will be written as where are the spin connections. The Gamma matrices in curved space Γ M are related to those in a local flat frame by 1 (25), the following equation of motion can be obtained: , the spin connections can be calculated: And with this, Eq. (26) gives us whereD μ = ∂ μ +ω μ (x). Here, to solve the above equation, let us consider the zero-mode solution satisfying −iΓ μD μ Ψ 0 = 0. That way, we get for the massless mode At this point, we will use a Gamma matrix representation such that Γ d+1 is a d × d diagonal matrix (remember that d is even) in the following shape: where And, Eq. 
(29) can be split as The zero-mode solutions for (32) are given by Therefore, by specifying the function f (y), and the warp factors σ 1 and σ 2 , the localization discussion can be performed. As we are interested in verifying the consistency conditions (24), let us calculate the energy-momentum tensor for these zero-mode solutions. From the Lagrangian (25), we get the components Then the first consistency condition in (24) is satisfied if e σ 1 (y) ξ 2 0 (y) is a constant quantity. By using the solutions (33) and (34), we get Here, κ ± are constants, and the prime is for the derivative with respect to the extra dimension. These relations in (36) should be valid for any value of y. Note that already for the free case (λ = 0), consistency cannot be obtained. In fact, we should have σ (y) = 0, and this is not satisfied for any nonfactorizable braneworld model. For models with λ = 0, we conclude that the spinor field localization can be made consistent with Einstein's equations only if f (y) ∝ e −σ 2 (y) σ 1 (y). When we analyze some models in the literature, the mechanism presented in Refs. [27,37], for the RS-II model [2], can be made consistent. However, for thick brane models, the localization mechanisms presented in Refs. [39][40][41][42][43]54] are not consistent; thus those mechanisms should be reviewed. Otherwise, the braneworld metric should be modified to take into account the presence of the spinor field. Applications-vector field Now, we will discuss the consistency of the vector field localization. This subject was already treated early in the literature for codimension 1 delta-like models [62]. The authors of this reference show that the free vector field localization (zero mode) is not consistent with Einstein's equations. Here, let us discuss this issue for the free field in an arbitrary braneworld and also for some localization mechanisms commonly used in the literature. Free vector field localization To start the discussion, we will consider the free field in a brane model with metric given by (1). The Lagrangian for this case can be written as By calculating the equations of motion, we get Here, we can propose A N = A T μ + ∂ μ θ, B k with ∂ μ A T μ = 0, and thus Eq. (38) can be split, for the components A T μ , as The equations of motion for the fields θ and B k will not be important to our discussion, thus they will not be written here. From Eq. (39) and by proposing A T μ (x, y) =Â T μ (x)ξ(y), we can get Now, as we did for the scalar and spinor fields, Eq. (41) can be solved for m 2 = 0 and, with these zero-mode solutions, the vector field localization can be studied. Equation (41) has a constant, ξ 0 (y) = c 0 , and also a non-constant solution for the zero mode. This last one is closely related to the braneworld and its specific form is model dependent. Fortunately, in most cases, the constant zero-mode solution is the only one that can be confined [34,36,37,61]. We are interested in studying the consistency of the confinement, therefore let us obtain the energy-momentum tensor for the Lagrangian (37). By doing this, we get And, for the zero mode, the components T m μν can be written as From this, the consistency condition (14) is satisfied only if e −2σ (y) ξ 2 0 (y) = const. As a first result, we find that the constant zero-mode solution cannot satisfy this requirement, and this is codimension independent. Therefore, for all those models where the confinement is performed with such solution, the localization is not consistent. 
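As a worked illustration of why the constant zero mode fails, consider the Maxwell energy-momentum tensor under the same assumed warped ansatz used above, with $A^T_\mu(x,y)=\hat A^T_\mu(x)\,\xi_0(y)$ and the indices of the hatted field strength raised with $\hat g^{\mu\nu}$ (a schematic sketch, up to the paper's overall normalization of $T^m_{MN}$):

\begin{align}
  T^{m}_{\mu\nu} = F_{\mu P}F_{\nu}{}^{P}-\tfrac{1}{4}\,g_{\mu\nu}F_{PQ}F^{PQ}
  \;\supset\; e^{-2\sigma(y)}\,\xi_0^{2}(y)
  \Big[\hat F_{\mu\alpha}\hat F_{\nu}{}^{\alpha}
  -\tfrac{1}{4}\,\hat g_{\mu\nu}\hat F_{\alpha\beta}\hat F^{\alpha\beta}\Big](x),
\end{align}

so the y-dependence factors out of $T^m_{\mu\nu}$ only when $e^{-2\sigma(y)}\xi_0^{2}(y)$ is constant. A constant zero mode $\xi_0(y)=c_0$ would then force $\sigma(y)$ to be constant, which no warped (non-factorizable) braneworld satisfies, in line with the statement above.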
For 5D models, the free gauge field cannot be confined because the K integral in (3) is not finite [27,54]. In this way, the result obtained from (43) just confirms the non-localization of this field. However, the most interesting results are obtained for codimension 2 and higher-dimensional models. In the literature, there is a large variety of models in these dimensional configurations where the metric (1) takes the particular shape given in (44) [11,12,14,15,37]. Localization for a free vector field in such scenarios was already studied in [34-36,38,63,64]. Generally, for all these references, the constant zero-mode solution is confined given that the K integral in (3) is finite. However, by using our consistency conditions, we find that the localization of the free gauge field in such scenarios is not consistent. Therefore, even with a localized zero mode, it is not possible to ignore back-reaction effects from the U(1) gauge field on the background geometry. Of course, there are still other braneworld models where the metric is not like that in (44), such as those in Refs. [20,21,60]. However, the conclusion for these cases is the same: the constant zero-mode solution is not consistent with Einstein's equations. That way, the vector field localization really seems to need a mechanism to be consistent, and in the next section we will discuss some such mechanisms. Vector field localization through mechanisms Now, we will review some mechanisms used to confine the abelian vector field. Again, the focus is on verifying the consistency of the localization procedure with Einstein's equations. Let us first consider the codimension 1 braneworld models. In this dimensional configuration, the metric (1) can be written as in (45). With this, we can discuss the consistency for a wide variety of models, whether with a thin or with a thick brane, with or without internal structure. (i) Scalar field coupling Let us start by considering a localization mechanism commonly used to confine the spin 1 field on thick branes [54-56,58]. In these models, the gauge field is coupled to some scalar function G(y) through an action like (46). The function G(y) will be defined later for some known cases. By calculating the equation of motion and assuming the gauge A_M = (A_μ, A_{d+1} = 0), we get the separated equations, where we already used the metric (45) and also A_μ(x, y) = Â_μ(x)ξ(y). Beyond this, the effective action on the brane can be written down, where Neumann boundary conditions were used for ξ(y) and the quantity K is given by an integral over the extra dimension. With this, by observing (49), a gauge field (massless mode) on the brane can be obtained by setting m^2 = 0. Moreover, by properly choosing the function G(y), this massless mode can be confined. In Refs. [54,55] this function is chosen in the form G(y) = e^{λπ(y)}, where π(y) is a scalar field, namely the dilaton. From this, the zero-mode localization is obtained for some values of λ. In Ref. [56] the authors get a general form for G(y) that should be valid for an arbitrary thick brane with σ_2(y) = 0. In another direction, Ref. [65] proposes G(y) as a function of the Ricci scalar, and also for this case the localization can be obtained. Let us turn back to the consistency conditions (14) and (15). From the action (50), the energy-momentum tensor can be written down. Thus, by using the above configuration and considering only the zero-mode solution, we get an expression in which ξ_0^2(y) is the zero-mode (m^2 = 0) solution of (48). From Eq.
(52), we find that the consistency condition (14) is satisfied only if G(y) e^{−2σ(y)} ξ_0^2(y) = const. That way, by using the constant solution for the massless mode ξ_0(y), we conclude that G(y) can only assume the specific form G(y) = e^{2σ(y)}. The same conclusion is obtained from the condition (15). That condition on the function G(y) considerably restricts the allowed models for this type of coupling. For example, in Ref. [54] the authors define G(y) = e^{(λ/2)σ(y)}. In this case, we find that the coupling constant λ must be fixed as λ = 4 for the localization to be well-defined. In Ref. [55] G(y) = e^{τ√(3r)σ(y)} is used, and the localization is obtained for τ ≥ −r/3, with 0 < r < 1, or τ > −1/(3r), with r > 1. By using our consistency condition, we find that τ = 2/√(3r). Therefore, for both references localization can be done consistently. There is a very interesting reference, namely [56], where the function G(y) is defined in terms of the superpotential derivative ∂W/∂φ, with s a constant, and W(φ) is the superpotential related to the scalar field φ(y) which generates the braneworld. The authors show that, for a brane model generated by a scalar field with a sine-Gordon potential [3], the function G(y) is given by G(y) = sech^{2s}(2cy) and localization can be obtained. When we compare this with our result, which for the specific model [3] gives G(y) = e^{2σ(y)} = sech^{2b}(2cy), the constant s should be s = b. On the other hand, for the brane model of [54], Ref. [56] obtains G(y) = sech^{4s}(ay). Now, by using our result G(y) = e^{2σ(y)} = sech^{4b}(ay) e^{−b tanh²(ay)}, we see that the functions G(y) do not match, and the superpotential (54) does not allow a consistent localization. A similar result can be obtained for deformed thick brane models [5]. This conclusion indicates that the superpotential prescription (54) does not have general validity as a localization mechanism, i.e., it does not work for any braneworld model. Therefore, except for the case (53), this choice of the function G(y) does not provide a consistent localization for the gauge field (zero mode). Another interesting model is presented in Ref. [65], where G(y) is a function of the Ricci scalar, G(y) = G(R). The authors argue that, if G(R) is a continuous function, the zero-mode localization of the vector field is determined by the behavior of G(R) when y → ∞. They find that this function must be, asymptotically, something like G(R_∞) ∝ |y|^{−p}, with p a positive value. Considering the AdS feature of the space, the authors show that the warp factor must take a specific asymptotic form. Therefore, the consistency condition obtained by us for models like (46), i.e., G(y) = e^{2σ(y)}, can be satisfied for G(R_∞) ∝ |y|^{−p} if p = 2. However, we cannot say that this is valid for another range of the variable y. Moreover, there seems to be a contradiction in the arguments used by the authors themselves. They propose a localization mechanism in an asymptotically AdS space-time; thus the Ricci scalar tends, in that limit, to R(|y| → ∞) ∝ −C_R (a constant). Therefore, G(R) should go to a constant value in that limit, and the localization cannot be reached. In any case, the requirement G(R) = e^{2σ(y)} does not seem to be easy to meet for an arbitrary model. Other interesting points can also be discussed. For example, for models like [54,55], we find that the coupling parameters (λ or τ) are not free; they must be fixed for consistency reasons. In this way, the analysis performed in Refs. [66-68], which studies resonances of the gauge field with an action like (50), should be reevaluated.
There is no freedom in choosing the parameters λ or τ, which were used to plot the graphs in those references. (ii) G-N localization mechanism Now, let us verify the non-covariant mechanism proposed by Ghoroku and Nakamura (G-N) in Ref. [51]. In this paper a metric like (45) is used, with the warp factors given by σ_2(y) = σ(y) = −ln(1 + k|y|). The Lagrangian for the vector field with the G-N mechanism is written as Eq. (57). For this model, although it is not gauge invariant or even covariant, the effective theory on the brane has the desired features: a massless vector field with gauge symmetry. After some steps like those performed in the previous case, we can get an EOM for the effective vector field A^T_μ and, by proposing the separation of variables A^T_μ = Â^T_μ(x)ξ(y), the localization issue can be addressed. By doing this, the zero-mode solution can be obtained with the ansatz ξ_0(y) = e^{aσ(y)}, and it will be localized if a > 0. With this, we can analyze the consistency conditions (14) and (15) for the energy-momentum tensor. From the Lagrangian (57), we get the zero-mode components in Eq. (59). Thus, by using the zero-mode solution, we find that e^{−2σ} ξ_0^2 = e^{2(a−1)σ}, which is constant only when a = 1, so the consistency condition (14) will be satisfied when a = 1. This value of a fixes all parameters in the Lagrangian (57) in the following form: c = −2k and |M| = √3 |k|. By a similar analysis, we show that the consistency condition (15) gives us the same result, and this localization mechanism can be performed consistently. (iii) Non-minimal coupling with gravity Finally, let us discuss the localization mechanism proposed in Refs. [52,53,60]. In these works, the G-N mechanism is used as a motivation to propose a coupling of the vector field with gravity through the Ricci scalar and the Ricci tensor. The action for this case is given by Eq. (60). Here d is the brane dimension, and R and R_MN are the Ricci scalar and tensor, respectively. By proposing again A_M = (A^T_μ + ∂_μθ, B), and after some steps, it is possible to obtain an EOM for the transverse field A^T_μ. To do so, by using the separation of variables A^T_μ(x, y) = Â^T_μ(x)ξ(y), we obtain a zero-mode solution for ξ(y), together with additional conditions in which D = d + 1. The conditions (61) and (62) are required for a zero-mode solution to exist, and the condition (63) should be satisfied for the solution ξ_0 to be confined on the brane. By doing an analysis similar to that in (59), we conclude that the consistency conditions (14) and (15) can be satisfied for a = 1. Therefore, this localization mechanism can provide a consistent confinement of the vector field. By eliminating higher-order terms, the brane components of the energy-momentum tensor for the action (60) can be written for the zero mode, and from them we see that the condition (14) is satisfied when a = 1. When we use a = 1, the localization condition (63) can be written as d − 2 > 0, and it is always satisfied for models with d ≥ 4. By putting a = 1 in Eqs. (61) and (62), the two couplings are fixed. This result shows that vector field localization, by using the mechanism (60), is consistent with the Einstein equations only when both interaction terms are switched on simultaneously. Beyond this, the two parameters λ_1 and λ_2 are completely fixed for consistency reasons. This result allows us to comment briefly on the one presented in Ref. [69], where the authors plot graphs of the potential and the relative probability for various values of a. As we found from the consistency conditions, the parameters are fixed and such a freedom for the parameter a does not exist.
In fact, the authors argue that massive resonant modes can exist if a > 3 which, when compared with our result a = 1, shows that such resonant modes cannot exist. (iv) Localization in codimension 2 or higher models Generally, for most of the models in codimension 2 or higher, the free U(1) gauge field is already naturally confined just by minimal coupling with gravity [34-37,64,70-72]. Thus, there are not many localization mechanisms for this field in those dimensional configurations. However, as we saw in Sect. 5.1, the free field case is already not consistent with Einstein's equations, so some localization mechanism really seems to be necessary. In Ref. [60], the vector field is confined in codimension 2 intersecting delta-like branes by proposing a mechanism like that in Eq. (60). In Ref. [61], this study is performed for a generic model with arbitrary codimension embedded in asymptotically AdS space. For both cases, the results are similar to those obtained earlier in item (iii). In other words, the consistency with Einstein's equations is obtained just when both interaction terms are switched on simultaneously. In fact, this conclusion is codimension independent for this localization mechanism. Let the action for the vector field in a generic model with a localization mechanism like (60) be given by Eq. (66), where d is the brane dimension and n is the codimension. By performing steps like those in item (iii), we can get a zero-mode solution, together with additional conditions in which D = d + n. Again, to verify the consistency with Einstein's equations, we need the energy-momentum tensor. By eliminating higher-order terms, the brane components of the energy-momentum tensor for the action (66) can be written for the zero mode, and we see that the condition (14) is satisfied when a = 1. If we use a = 1, the localization condition (69) can be written as d − 2 > 0, which is always satisfied for models with d ≥ 4. By setting a = 1 in Eqs. (67) and (68), the couplings are again fixed. Therefore, we get the same conclusion as presented in item (iii): the localization mechanism (66) is consistent with Einstein's equations only when the two interaction terms are switched on simultaneously. In this way, any back-reaction effect from the vector field on the background metric can be eliminated, at least for the zero mode. Final remarks In this work, we discussed the consistency of field localization in braneworld models. By studying Einstein's equations in the presence of matter fields, we obtained the constraints (14) and (15) for the energy-momentum tensor, which should be valid for any brane model. Such constraints are a consequence of the assumption used in field localization, namely that a confined matter field does not modify the bulk metric. In this way, the localization procedure used in the literature will be consistent only if such conditions are satisfied. We applied these consistency conditions to some cases, namely the spin 0, 1/2 and 1 Standard Model fields, with and without localization mechanisms. For the scalar field, as discussed in Refs. [27-29,32,34,36,54,55], there is a constant zero-mode solution that can be localized. By using this confined constant solution, we showed in Eqs. (21) and (22) that the energy-momentum tensor for the scalar field satisfies the consistency conditions (14) and (15). Therefore, the scalar field localization (zero mode) is consistent with Einstein's equations.
Afterwards, we applied those conditions to the spin 1/2 field in codimension 1 models, with a Yukawa-like interaction given by L_int = λ f(y) Ψ̄Ψ. Also for this field, there is a variety of models with this kind of coupling [39-43,54]. For each of these cases, a different Yukawa interaction is proposed and the spinor zero-mode localization (actually that of one of the chiralities) is obtained under some proper condition. For the consistency conditions (14) and (15), we obtained the energy-momentum tensor (35). From this, the spinor field localization is consistent with Einstein's equations only if f(y) ∝ e^{−σ_2(y)} σ'_1(y), with σ_1 and σ_2 the warp factors in (23). As discussed in Sect. 4, this relation eliminates the freedom to choose the function f(y). With this, the Yukawa interaction used in Refs. [27,37], for RS-II type braneworlds, is consistent with the Einstein equations by properly choosing the interaction parameter λ. However, for those functions f(y) used in thick brane models like [39-43,54], the localization is not consistent and should be reconsidered. We must stress that the analysis performed for the spinor field considered only Yukawa-like interactions, and only in codimension 1 models. There are still other localization mechanisms and other dimensional configurations where this analysis could be carried out, as for example in [73]. Finally, we verified the consistency conditions for the vector field. As discussed widely in the literature [27,34,36,37,61], the free gauge field (zero mode) cannot be confined in 5D; however, for some higher codimension models it can be localized. In Sect. 5.1, we obtained the energy-momentum tensor (42) for the gauge field and, from this, the consistency conditions were analyzed. As a general result, the conditions (14) and (15) are consistent for the zero-mode vector field only if e^{−2σ(y)} ξ_0^2(y) is a constant. However, as discussed in that section, there is no confined zero-mode solution ξ_0 that satisfies this condition. Such a result is independent of the braneworld model or the number of extra dimensions; thus the localization of this field is not consistent with Einstein's equations, and a mechanism to confine it really seems necessary. In this direction, we analyzed the consistency of some localization mechanisms in Sect. 5.2; for example, the mechanism proposed in Refs. [54-56,65], where the action for the gauge field is given by something like (46). For these kinds of coupling, there is a constant zero-mode solution that can always be localized when G(y) is Gaussian-like. By using the consistency conditions, the localization of ξ_0 = c_0 is consistent with Einstein's equations only if G(y) = e^{2σ(y)} in Eq. (52). In this way, the Gaussian-like feature is confirmed; however, such an expression does not present any free coupling parameter. Moreover, that function G(y) rules out some mechanisms proposed in the literature, like that in Ref. [65], where G(y) = G(R) is a function of the Ricci scalar. Other interesting points can still be discussed here. As there is no free parameter in G(y) = e^{2σ(y)}, the analysis performed in Refs. [66-68] about resonant modes of the gauge field with an action given by (46), for models like [54,55], should be reevaluated, since some of those results are obtained by using a free coupling parameter which, by our analysis, does not exist. We also discussed the non-covariant mechanism proposed in Ref. [51].
For this case, the zero-mode solution for the gauge field sector is given by ξ_0(y) = e^{aσ(y)}, and the consistency conditions (14) and (15) were satisfied when a = 1. With this value of a, all parameters in (57) were fixed, namely c = −2k and |M| = √3 |k|. Ref. [51] also discusses the localization of the scalar component, and the authors conclude that the two sectors cannot be confined simultaneously. Moreover, the theory does not indicate that these sectors should be confined. Perhaps the consistency conditions could be used to settle this, but we did not perform such a study here. Inspired by this mechanism, we analyzed the localization mechanism proposed in Refs. [52,53]. Starting from (60), the consistency with Einstein's equations was obtained only if both interaction terms are present. Just as in the previous cases, the parameters λ_1 and λ_2 in (60) were completely fixed for consistency reasons. Beyond all these codimension 1 cases, higher codimension models were also investigated. Also for this dimensional configuration, by using the localization mechanism (66), the consistency with Einstein's equations is possible only if both interaction terms are switched on simultaneously. In this way, we believe that a comprehensive analysis was performed regarding the consistency of field localization, and such a study can be used as a guide for building new confining mechanisms.
8,990.8
2020-05-01T00:00:00.000
[ "Physics" ]
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments Ubiquitous sensor network deployments, such as the ones found in Smart City and Ambient Intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones with the highest economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. Introduction In recent years, Ambient Intelligence has experienced a significant development, mainly because of the advances in the miniaturization of processors and the proliferation of embedded systems in many different objects and applications (e.g., communications, industrial, automotive, defense and healthcare environments). The International Data Corporation (IDC) Semiconductor research team announced in its April 2011 preliminary report that smart systems would consume more than 12.5 billion processor cores, representing more than $100 billion in revenue, by 2015 [1]. This means that there will be over six times more microprocessor cores in smart systems than in PCs. The ambient intelligence paradigm was supported by these advances. Built mainly upon pervasive and ubiquitous computing, sensor networks and artificial intelligence, ambient intelligence has always had the goal of creating smart environments that aid people in their daily lives, reacting and adapting to their needs in an unnoticed way [2]. The information and knowledge society, as well as the Internet of Things, is surrounding people's lives with technology by the day, especially in urban areas. In 2008, about 50% of the world's population lived in urban areas, whereas it is envisaged that 70% will live in cities by 2050 [3]. In this urban age, cities are seen as places of prosperity and opportunity. Their citizens have high expectations that represent a challenge for the cities, especially from the technological point of view. Cities thus become the best candidate scenarios in which to develop the new technological challenges, giving birth to the concept of Smart Cities. Smart Cities are cities that perform well on six characteristics: economy, people, governance, mobility, environment and living [4]. The goal of Smart Cities is to improve the way in which people interact with each other and with the urban environment. Their management represents a challenge in the field of Ubiquitous Sensor Networks (USNs), computing and communications. Smart Cities can be understood as a macro-scale case of the Ambient Intelligence paradigm, building upon the same principles and with the goal of offering a broad range of services, mainly in the framework of the so-called Internet of Services (IoS). These services range from intelligent buildings to tourism recommendations or tracking and monitoring systems.
Because of population aging, one of the most common applications is healthcare, especially in the field of Ambient Assisted Living systems that provide services to elderly people [5]. The deployment of this kind of application in a Smart City requires the usage of a large number of heterogeneous sensors (in the order of tens of thousands of units). They usually have in common the specifications that apply to Wireless Sensor Networks (WSNs): the need to be battery-powered, low power consumption, limited resources and small size. Research has mainly paid attention to the communication and security challenges, as well as to the energy consumption of the Ubiquitous Sensor Networks. However, the Internet of Services has to deal with yet another computing paradigm: the High Performance Computing (HPC) infrastructures needed to tackle the computing demands in order to offer these services to the citizens. As all the data collected by the different sensor nodes has to be processed, converted and matched with other data so as to generate useful information, the computational needs of these systems are huge. Because of the nature of their workloads (the kind of processing that must be performed), this demand is usually satisfied by data centers. Large data centers are composed of tens of thousands of servers, with tens of petabytes of storage and hundreds of gigabits of bandwidth to the Internet. The electricity bill of data centers (including the electricity needed for cooling and air conditioning in the data center) was projected to exceed 7 billion US dollars in the US alone, while the power density of data centers reached 60 kW/m² by 2010. The Environmental Protection Agency (EPA), in its August 2007 report to the US Congress, affirmed that data centers consumed about 61 billion kilowatt-hours (kWh) in 2006, roughly 1.5 percent of total U.S. electricity consumption, for a total electricity cost of about $4.5 billion [6]. The EPA report also stated that the energy consumption of servers and data centers doubled from 2002 to 2005. This rapid growth in data center electricity use slowed significantly from 2005 to 2010 [7], but data centers still account for 1.3% of all electricity use worldwide, and 2% of all electricity use in the US, amounting to 250 billion kWh per year worldwide. According to a 2008 Gartner report [8], 50% of data centers would soon have insufficient power and cooling capacity to meet the demands of high-density equipment. Nowadays, apart from data center energy consumption and its associated costs, there is an increasing interest in the environmental impact of data centers, in terms of their carbon dioxide (CO2) footprint. Until now, it has been assumed that the computation needed by the applications of the Internet of Services would be performed "in the cloud", without paying much attention to the inherent problems of this assumption. Only recently have works begun to propose architectures that take the computational needs into account [9]. These kinds of approaches often disregard the energy consumption derived from the computation and cooling in the HPC facilities. However, the high economic and environmental impact of the energy consumption in data centers requires aggressive energy optimization policies. The need for such policies has already been identified, but effective policies have not yet been proposed.
Our research work proposes an energy management solution to tackle the computational needs of smart environments and ambient intelligence applications, making use of energy-minimization workload assignment policies. The goal is to minimize the energy consumption (and thus the electricity bill) of the data centers that process the data provided by the sensors, by redistributing part of the computational demand on the HPC servers to the idle resources of a WSN infrastructure. This WSN infrastructure is comprised of (i) a large number of sensor nodes (on the order of thousands) with very low resources; (ii) a smaller number (on the order of hundreds) of base stations with low resources; and (iii) a few gateway nodes (on the order of tens) with medium resources. The proposed solution off-loads computation from the HPC facility to the base station and gateway nodes, taking advantage of their idle computing time. It does so by profiling the different tasks of the workload to be executed, classifying them in the HPC facility and predicting their energy parameters for the other nodes. The solution is thus based on application-awareness and node heterogeneity, and is built upon previous work, which proves that the usage of node heterogeneity at the data center level can yield substantial energy reductions [10]. In this case, however, the concept of heterogeneity is ported from the data center level to the smart service infrastructure. The main idea behind this work is to reduce energy by allocating the lowest-demand tasks to the low-power and low-resource nodes, while sending the highest-demand tasks to the HPC servers. This way, the workload is scheduled on the devices where it performs best. These techniques considerably reduce the amount of energy used by the high-performance infrastructure while increasing performance. This work advances the technology of energy-efficient computing in IoS and Ambient Intelligence applications, and the mechanisms to place data centers on a more scalable and sustainable energy-efficiency curve. This paper is organized as follows: Section 2 gives further information on the motivation and the related work on this topic. Section 3 presents an overview of the proposed solution. The allocation algorithms are presented in Section 4. Results and evaluation are shown in Section 5. Finally, the main conclusions of the paper are drawn in Section 6. Related Work A Smart City can be defined as a city that "uses information and communications technologies to make the critical infrastructure components and services of a city more aware, interactive and efficient" [11]. There is an important and bidirectional relationship between the Internet of Things, Services and People (that is, the future Internet) and the applications for Smart Cities and Ambient Intelligence. All of them need to provide support for heterogeneity, mobility, scalability, security, privacy and trust. Because of these similarities in functionality, they face the same challenges. Amongst others, some of these challenges are: (i) the need to manage heterogeneity in a large number of dispersed sensors and servers; (ii) the huge amount of heterogeneous information to be processed; (iii) the need to have this information available everywhere and at all times; and (iv) the need to use a common communication network.
Several European projects such as WISEBED [12] or SmartSantander [13] propose architectures (see Figure 1) over which all the communications and services of the Smart Cities can be built. They envision network deployments of more than 20,000 sensors. These works, however, propose only a high-level approach that does not tackle the problem of the computational demand generated by such a huge WSN. One of the first problems regarding computational demand can be found in the nodes that are part of Ubiquitous Sensor Networks. These nodes have too few resources to satisfy the needs of the multiple monitoring algorithms required to offer the user-demanded services. Moreover, the architectures of these nodes have been developed for a specific application, while the near future envisions nodes that should be able to collect information of very different natures. Also, the ever-changing environments that WSNs face require the development of applications based on evolutionary algorithms, artificial intelligence and learning. The power budget of WSNs is not sufficient either to handle the computational loads needed by these algorithms while remaining low-power and battery-operated. The amount of information generated by a global deployment of a WSN thus implies the usage of a data center in order to store and process the obtained data. Meanwhile, these data centers will have to provide the required infrastructure to perform all the computations needed by the executing applications. These applications are very different in nature, ranging from medical diagnosis applications to weather forecasts. On the other hand, supercomputing facilities present a huge economic and environmental impact due to their very high power consumption. There are a number of different techniques to reduce the energy cost and power density in data centers at different levels of granularity: chip level, server level, rack level, data center level, etc. Over the last years, this problem has been addressed by the well-known technique of Dynamic Voltage and Frequency Scaling (DVFS) [14], load monitoring [15], the introduction of heuristics to minimize the total power of a data center [16] or dynamic resource provisioning [17]. In spite of all these measures, the energy consumption of data centers keeps growing, mainly because of the dramatic increase in supercomputing facilities. Figure 2 shows the number of servers installed worldwide from 2000 to 2010. As can be seen, the increase is currently reaching 5.75 million new servers per year. This translates into huge amounts of power consumption, mainly devoted to the infrastructure of the data centers and the volume servers (see Figure 3). The total energy usage by data centers is expected to exceed 400 GWh/year in 2015. Bearing this data in mind, it is clear that the current effort is not enough, and that further research is needed to reduce the power consumption of supercomputing facilities. For data centers that have highly variable loads, a very common technique for energy reduction is to move tasks from under-saturated servers to other servers and turn off the unused machines [18]. As the idle power of a server sometimes accounts for more than 50% of its maximum power [19], great savings can come from using only the appropriate number of servers and turning off the unused ones. This is not the case, however, for data centers for smart-environment services. In HPC data centers with very stable workloads, occupancy levels are constantly high.
These data centers are usually dimensioned for the particular workload and the services they are running. Such solutions would be complementary to our work and could be applied once some of the computation of the data center has been off-loaded to the WSN infrastructure (thanks to our allocation algorithms) and, thus, the occupancy levels become lower. Most of the works proposing allocation algorithms have traditionally applied Mixed-Integer Linear Programming (MILP) or Mixed Integer Non-Linear Programming (MINLP) [20], greedy algorithms [21] or Markov Chain algorithms [22] in order to generate the best task allocations. Most of these approaches do not propose a precise objective function and/or an accurate mathematical formulation of the optimization problem. Although some of these solutions behave well in homogeneous data-center-level scenarios, they do not consider the heterogeneity inherent to smart environment applications. In this paper we consider not only the heterogeneity that comes from the usage of different servers inside an HPC facility, but also the usage of heterogeneous elements outside the facility. Moreover, for large deployments such as the ones of Smart Cities, linear minimization algorithms present scalability problems. This work proposes the usage of non-optimal, fast scheduling solutions for the energy optimization problem, through the usage of Satisfiability Modulo Theories (SMT) solvers in a hybrid environment made of data center HPC servers, PC-like servers and embedded base station systems of WSN infrastructures. Recently, some companies like BrightComputing [23] have begun to develop software tools that allow allocating tasks both in HPC facilities and in the cloud. This is clearly a response to the need to allocate HPC workloads in HPC facilities and cloud workloads in cloud servers. The Slurm [24] resource manager will soon provide support for these features too. Application-awareness is thus recognized as a good path towards energy minimization. However, an accurate application-aware and energy-optimizing allocation algorithm has not yet been proposed. In this paper, we will be using a modified version of the Slurm resource manager in order to allocate the workload not only in the HPC facility but also in the WSN infrastructure. To do so, we will formally propose and implement a dynamic optimization algorithm that allows the allocation of the workload at runtime, both inside and outside of the HPC facility. Proposed Solution In this section, the architecture overview of the proposed solution is presented, as well as the requirements that the system has to fulfil. We also explain the different techniques used, and their similarities and dissimilarities when compared to the most common data center allocation scenarios. Heterogeneous Architecture Overview First, in order to develop our solution, we will assume that the aforementioned Smart City architecture is representative. Taking its main elements as a reference, we can state that the services will be deployed over a network with a topology like the one depicted in Figure 4, which is comprised of the following components: (i) sensor nodes; (ii) base stations; (iii) gateways and (iv) HPC servers, organized as follows. A Wireless Sensor Network (WSN) or a Ubiquitous Sensor Network (USN) is composed of a great number of sensor nodes with very limited resources. A small part of these sensor nodes may have a direct connection to the Internet (via 3G, for example).
The majority of them, however, situated in highly dense sensor node areas, will not have a direct connection to the Internet, but will instead transmit their information to a base station. Whereas the sensor node is low-power, battery-operated and has very low resources (typically sensor nodes have tiny microcontrollers running at tens of MHz), the base station is usually a microprocessor-based embedded system with higher computational capabilities (working at frequencies of hundreds of MHz). The base stations are often connected via radio to the nodes (e.g., WiFi, Bluetooth, RF, Zigbee), and via Ethernet to a gateway or directly to the Internet. They are also usually AC powered. These kinds of systems have many more sensor nodes than base stations (on the order of 100-200 nodes per base station). In some ubiquitous distributed systems, the base station could be the last step between the nodes and the HPC infrastructure that provides and centralizes the services. In the real world, however, and for large deployments such as the ones in Smart Cities, the gateways are responsible for the interaction with the real service provider. These gateways are often PC servers, with higher computing resources than the base stations (e.g., a dual-core processor @2 GHz with 2 GB of memory). These PCs may or may not be connected to the base stations, and usually provide graphical configuration interfaces to end-users, process data and provide security and trust (see also Figure 1). It must be noted that, during the normal 24/7 operation of the network, all the sensor nodes, gateways and PC servers will be turned on. The gateways and PC servers might be idle for part of their time; however, they are not turned off. The HPC servers can be turned off or re-used to compute data for other applications. In Table 1 we summarize the approximate parameters of these elements. The computational demand of the services deployed in the network is often processed only in the HPC infrastructure. In this paper we propose to distribute this computation to the base stations and the gateways of the network. These systems have much lower power consumption than the HPC servers and, in their idle times, can be used to process the non-intensive parts of the workload to be executed. Moreover, even though in this paper we will not tackle the cooling costs of the HPC infrastructure, it must be noted that cooling accounts for 30%-50% of the total energy demand of the HPC infrastructure. Decreasing the computational energy of the HPC servers also decreases the cooling cost. The lower-resource nodes do not need cooling equipment and, thus, the savings are even greater. To distribute the computation towards all the available nodes, we will use the workload allocation and resource managing techniques that are explained in the next subsection. Workload Allocation and Resource Manager In order to allocate and manage the computational demands of the services deployed in Smart Cities and ambient intelligence, we will use some scheduling concepts that come from the world of data centers and HPC. The raw information collected by the WSN sensor nodes will have to be parsed and converted, algorithms will have to be applied, and exhaustive processing will be performed to generate useful information. This process comprises the execution of a large number of tasks.
As long as the services provided to users do not change, all these tasks, very heterogeneous in nature, will be repeated through time with little change: the algorithms to be performed each time will be the same, and the only variation will be the data used. Thus, we can assume that the workload exhibited during a period of time (e.g., one day) is representative of the workload that the WSN and data center facility will have in any other period of time (e.g., the next day). This workload can be understood as a collection of job sets randomly distributed in time. Each job set is composed of a random number of parallel tasks without data dependencies; however, the number of different tasks is fixed for all workloads and all the tasks in the job set are labelled. This means that, when a job set arrives, the resource manager knows how many tasks of each type it has. As the tasks repeat through time, it will be possible to profile them (in terms of CPU usage, memory usage, etc.) and measure the energy demand (in kWh) of the tasks on the HPC servers, in order to characterize their computing needs. The data used by each of the tasks will be generated in different sensor nodes across the WSN infrastructure. We understand that, even though the tasks are the same, the data they use will come from different sensor nodes each time. We assume, however, that the differences in the dataset do not impact the energy profiling of the tasks, as what drives the energy consumption is the algorithm used, not the dataset. Moreover, once characterization is performed, we will be able to classify the tasks according to their computational demand. This way, tasks exhibiting low (and even medium) computational demands can be allocated outside the HPC facility. In order to allocate the tasks both inside and outside the HPC facility, we will make use of a Resource Manager (in this particular case Slurm, one of the most commonly used). The traditional functional system found in today's data centers comprises: (i) a task scheduler, which queues the tasks in time, deciding their priority of execution; and (ii) a resource manager, which has knowledge of the available resources of the system and decides where each task is going to be executed. In our case, we assume that the workload entering the system has already been scheduled by a commercial scheduler, and we implement our solution in the resource manager. The complete system is described by Figure 5, and works as follows: • Cluster creation: Before a service starts, a cluster of machines is created. One of the HPC servers in the facility acts as the Resource Manager (RM). The cluster is composed of an arbitrary number of base stations, gateways and HPC servers. Machines are divided into different partitions according to their resources and are assigned a different location identifier depending on the physical location where they are deployed. Each gateway and base station will manage the tasks whose data come from sensors in the same (or nearest) location. • Profiling and classification: When a service is launched, the first job set arriving at the HPC facility is used for profiling and classification purposes. Each different task that composes the job set is sent to a node in the HPC facility only. While executing, the different tasks are profiled to obtain the following parameters: total execution time, memory usage, average CPU load and energy consumption.
They are classified by using a naive k-means algorithm into 3 different classes according to their computational demand: high-demand, mid-demand and low-demand. As the tasks of the job set are labelled, the second and subsequent job sets will be directly classified into one of these three classes by the allocation algorithm. • Ubiquitous Green Allocation techniques: the purpose of the allocation is to reduce the energy consumption of the HPC infrastructure while tackling the computational demand of the services in the smart facility. To do so, the allocation algorithm will first identify and classify the new incoming tasks into one of the already-existing groups, which have already been assigned to one of the three different classes. Next, according to the idle resources available, it will try to place high-demand tasks in the HPC servers, mid-demand tasks in gateways and low-demand tasks in base stations. In order to tackle the data locality issues, tasks will be executed as near as possible to where the data they need are generated. This way, data will not have to travel through the WSN infrastructure to the HPC facility. In Section 4, the Ubiquitous Green Allocation techniques are formally detailed and further explained. These algorithms will be implemented as a new Slurm plug-in and will be executed at runtime each time a new job set arrives. Ubiquitous Green Allocation Algorithms The idea behind this allocation algorithm is to minimize the total energy consumption of the smart infrastructure (by using the processors of the HPC facility and the base stations and gateways of the WSN infrastructure). This goal can be described as follows. Let us denote by P a set of processors and by T a set of tasks that must be executed. Each processor p belongs to one machine m (a machine could be either a server, a gateway or a base station), denoted as p_m, which consumes a certain idle power π_m. Every task t has a duration and consumes a certain amount of energy depending on the target processor, σ_tp and e_tp respectively. τ_max is the time instant at which all the tasks have been executed. The problem consists in finding the most appropriate allocation of tasks t to processors p that minimizes the energy consumption, as expressed in Equation (1). In other words, to globally minimize the energy consumption we have to minimize the sum of (i) the energy variation e_tp that occurs when allocating a particular task t to a particular processor p, and (ii) the energy that machines consume simply for being turned on, that is, the idle power they consume multiplied by the complete execution time. However, for our particular case, the allocation step has to take into account several more variables: (i) the number of idle gateways and base stations available when a new job set arrives; (ii) the resources of these nodes and the performance of each task on each node; (iii) the data locality issues; and (iv) the total amount of time needed to compute the solutions, so as to improve performance. Even though Equation (1) can be solved by means of a linear minimization [10] to obtain energy savings when allocating tasks to heterogeneous nodes, the new restrictions imposed in our case, as well as the need to compute the allocation of a high number of tasks in a high number of nodes, suggest the usage of solutions other than an ILP solver. Because of the nature of linear minimization problems, they present good solutions only for a relatively small number of nodes.
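For reference, writing M for the set of machines and x_tp for a binary variable that is 1 when task t is allocated to processor p, one plausible explicit form of the objective described above is the following; this is our reading of Equation (1) from the prose, and the original constraint set may differ:

```latex
% Energy-minimization objective as described in the text (reconstructed form):
\min_{x}\;\; \sum_{t\in T}\sum_{p\in P} x_{tp}\,e_{tp}
           \;+\; \sum_{m\in M} \pi_{m}\,\tau_{\max}
\qquad\text{s.t.}\quad \sum_{p\in P} x_{tp}=1\;\;\forall\, t\in T,
\qquad x_{tp}\in\{0,1\},
% where \tau_{\max} is the completion time of the whole job set and depends on the
% assignment through the per-task durations \sigma_{tp}; this coupling is what makes
% exact (ILP-style) solutions expensive for large instances.
```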
In a deployment aimed at giving service to thousands of users, this kind of algorithm does not scale, and it is better to implement new algorithms that speed up the allocation process. Because of its speed and versatility for adding new restrictions, in this work we have decided to implement the solution by means of an SMT solver. Even though this kind of solver does not obtain an optimal solution, we will prove that, if iteratively executed for a small amount of time, it provides solutions that considerably reduce the energy consumed by the facility. The proposed solution can be implemented as a two-step iterative algorithm: • To calculate the best type of node for each task: this step comprises the assignment of different types of tasks to different types of processors. Given the 3 classes resulting from the task classification step, we have to match tasks of different CPU demand to resources. The algorithm will take into account the place where the dataset needed to perform the task was generated, as well as the amount of resources near the data generators. That is, depending on the number of processors of each type (base stations, gateways, HPC), the number of tasks of each type, and the place where data were generated, we will get an idea of where each type of task should be executed. The general constraints here are that: (i) a low-resource node can only execute a subset (or the whole set) of low-demand tasks; (ii) a medium-resource node can execute a subset of low-demand and mid-demand tasks; and (iii) a high-resource node can execute all tasks. The SMT solver decides which subset of tasks, from the allowed ones, each node executes. • To assign tasks to nodes: The second step consists of a greedy solution that tries to allocate the maximum number of tasks (without exceeding a maximum time) in the nearest lowest-resource nodes first, which are the best from the energy-efficiency point of view. That is, it will first try to allocate the low-demand tasks in base station nodes. The SMT solver checks whether the allocation is possible and obtains a solution. If the conditions are satisfiable, it proceeds to allocate mid-demand tasks in gateway nodes. If they are not, it allocates fewer tasks in base station nodes and allocates the remaining tasks in medium-resource nodes. In order to decide which tasks should remain in low-resource nodes or be migrated to mid-resource nodes, we again make use of the algorithm in the first step. All the tasks that can neither be executed in low-resource nodes nor in medium-resource nodes are executed in the HPC facility. The pseudo-code for the complete algorithm is given in Figure 6. We choose the maximum time as that of a similar scheduling performed entirely in the HPC infrastructure, that is, the case in which all tasks are allocated in high-resource nodes. That time is reduced by using the SMT solver to place low- and medium-demand tasks in low- and medium-resource nodes. It must be noted that locality is managed in such a way that, if the algorithm is unable to allocate a task of, e.g., locality 1 into either the low- or the medium-resource nodes, it will allocate the task to the HPC facility, not to an element belonging to locality 2. This way, the algorithm minimizes the communication energy between nodes as well. The aforementioned algorithm will have to be executed each time a new job set arrives, in order to allocate the workload.
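As an illustration of the satisfiability check at the heart of the second step, the sketch below asks an SMT solver whether a set of candidate low-demand tasks fits on the available idle base stations within the time budget. It is not the authors' implementation (which uses the Yices solver inside a Slurm plug-in); the z3 library, the helper name and the task runtimes are illustrative assumptions.

```python
# Minimal sketch of the per-class feasibility check used in the greedy second step.
# If the check fails, the caller would retry with fewer tasks on this node class and
# push the rest to the next (more powerful) class, as described in the text.
from z3 import Int, Solver, Sum, If, sat


def fits_on_nodes(task_times, node_count, t_max):
    """task_times: estimated runtime (minutes) of each candidate task on this node class."""
    s = Solver()
    # assign[i] = index of the node that will run task i
    assign = [Int(f"assign_{i}") for i in range(len(task_times))]
    for a in assign:
        s.add(a >= 0, a < node_count)
    # the accumulated runtime on every node must respect the time budget
    for n in range(node_count):
        load = Sum([If(assign[i] == n, task_times[i], 0)
                    for i in range(len(task_times))])
        s.add(load <= t_max)
    return s.check() == sat


# Example: do 8 low-demand tasks fit on 3 idle base stations within a 30-minute budget?
print(fits_on_nodes([5, 7, 4, 9, 6, 3, 8, 5], node_count=3, t_max=30))
```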
Results In this section we present the results obtained when applying the classification and allocation techniques proposed in this paper. These techniques have been tested on three different real machines that have acted as the different elements of the smart environment. Their parameters are shown in Table 2. As we could not try our solution in a real smart infrastructure with thousands of sensor nodes nor in a real HPC facility, we had to use the real energy consumption and timing values of these three elements to simulate most of the behavior of the real system. The steps followed in order to evaluate our methodology are the following: • Generation of a network that contains the three different types of elements. We have worked with one simulated network of 10,000 sensor nodes, 200 base station nodes, 15 gateways and 10 HPC servers (with 8 cores each). The elements are evenly split into 3 different localities, so that the number of elements of each kind is approximately the same in all the localities (e.g., locality 1 will have 3,300 sensor nodes, 30 base stations and 5 gateways). If a task is executed in the HPC facility, its dataset will travel from the sensors that generated it, through the nearest base station and gateway in their locality, to the HPC facility; i.e., data will make 3 hops. If a task is executed in a base station, then data will only travel 1 hop. As can be seen, the unit used to measure locality is the number of hops data has to travel from the source (the sensor node) to the destination (the node where it is used for computation). • Generation of a synthetic heterogeneous workload that emulates the workload of a real service in a smart environment: to do so, we have combined cpu-intensive and non-cpu-intensive tasks into the job sets that compose the workload. We have used all the tasks from the SPEC CPU 2006 benchmark [26] (which are very computationally demanding) and from the Collective Benchmark [27]. This means a total of 60 different types of tasks. A synthetic random workload of 1,500 tasks, randomly split into job sets of 150, 200, 250 or 300 tasks and with random arrival times of 10, 20 or 30 minutes, has been generated. • Profiling of the tasks of the first job set in the Intel Xeon machine: as explained in Section 3.2, we use the first job set of the workload to profile the tasks in the HPC facility. The profiling step gathers information about the following features for each task: average CPU usage, memory used, time needed to complete execution and energy. Figure 7 shows the results of the energy profiling of the tasks. The Y-axis represents the energy variation (in kWh) when allocating a certain task on a certain processor, that is, the values of e_tp for p being an Intel processor. This figure lets us intuitively distinguish the three different types of tasks: the low-demand tasks consume very little energy, the medium-demand tasks consume a little more, while there are other tasks that comparatively consume a lot of energy. However, making this classification only from the energy results, without paying attention to other characteristics such as the CPU usage, would be a naive approximation. Therefore, in the next step a clustering that takes into account all the features is performed.
• Task classification of the first and the subsequent job sets: using all the characteristics obtained during the profiling step, a naive k-means algorithm splits the different tasks into three different classes according to their computational demands. A projection of the resulting clustering onto the energy and time axes is shown in Figure 8. As expected, the low-energy tasks (which also have low CPU demand) are assigned to the low-demand class, and the CPU-intensive tasks are divided into the mid-demand and high-demand classes. According to this clustering, and because all the tasks of the job set are labelled, each task will be automatically assigned to one of the classes. Since in this paper we are trying to assign low-demand tasks to low-resource nodes, mid-demand tasks to mid-resource nodes and high-demand tasks to the HPC facility by means of the allocation algorithm, a good clustering will be one that splits tasks such that their execution time on each processor is coherent; that is, the allocated task properly adapts to the resources it has been assigned to. In order to validate our clustering, we execute the tasks on the processors to which they are classified and we measure their execution time. Results are shown in Figure 9. As can be seen, tasks are executed within proper time limits, and there are no tasks on low-resource nodes that need too much execution time. Because the base stations consume less power than the gateways, and the gateways less than the HPC servers, we can also conclude that the energy graph will have the same shape as the time graph of Figure 9. • Ubiquitous Green Allocation: The allocation algorithm is validated by using the Slurm resource manager to simulate the whole workload allocation and distribution, that is, the 1,500 tasks. Slurm can be run in multiple-slurmd mode to emulate a cluster of machines, setting the task affinity to core. This means that Slurm will bind each task to one and only one core, and a machine will execute as many tasks in parallel as it has available cores. We have programmed a Slurm plug-in that allows us to change the allocation policy. This lets us compare Slurm's default allocation (which consists of a round-robin assignment policy) with the allocation proposed by our SMT solver algorithm. • Energy and time savings calculation: Once the whole workload has been executed, we calculate the total energy consumed by the allocation and compare it to the energy consumed if the workload had been executed only in the HPC facility. In this calculation we take into account both the savings obtained by executing tasks on less energy-consuming nodes, and the energy savings that come from the decrease in execution time and, thus, in the idle energy of the HPC servers. We also provide data on the communication savings, given as the number of hops for the datasets going through the WSN infrastructure. Table 3 shows the results in terms of the energy used to execute tasks, the total energy consumed (energy to execute tasks plus idle energy of the HPC machines simply for being turned on), the execution time of the allocation for different configurations of HPC servers (HPC), base stations (BS) and gateways (GW), and the average number of hops for the datasets (that is, the communication energy). We have considered that BSs are idle and available to compute during 25% of their time, GWs are idle and available during 50% of their time, and HPC servers are dedicated to these computations and thus can be used 100% of the time.
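To make the classification step above concrete, the following is a minimal sketch of a k-means split of profiled task features into three demand classes. The feature values are invented placeholders, and the use of scikit-learn and of feature standardization are assumptions for illustration only.

```python
# Toy version of the task classification step: cluster profiled task types into
# low-, mid- and high-demand classes from their measured features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = task types; columns = avg CPU load, memory (MB), execution time (s), energy (kWh)
profiles = np.array([
    [0.05,   20,   15, 0.0002],   # e.g., a light parsing/conversion task
    [0.10,   40,   30, 0.0005],
    [0.35,  200,  400, 0.0060],   # e.g., a filtering/aggregation task
    [0.40,  250,  500, 0.0080],
    [0.95,  900, 2500, 0.0600],   # e.g., a SPEC-like cpu-intensive task
    [0.98, 1200, 3000, 0.0750],
])

X = StandardScaler().fit_transform(profiles)          # put features on a comparable scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)   # class index (0, 1 or 2) per task type: low-, mid- or high-demand
```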
The first row of the table presents the results for the state-of-the-art allocation: only the resources of the HPC facility are used for computation, and the Slurm resource manager uses its default round-robin allocation algorithm. Note that the average number of hops for a dataset is 3, because the information gathered by the sensors has to travel to the HPC facility through a BS and a GW. Taking this case as the reference, our goal is to minimize energy and maximize performance. The second, third and fourth rows show the results for our SMT solver algorithm with different combinations of the resources used for computation, i.e., different numbers of GWs and BSs used for computation. In these cases, the resource manager uses the output of the SMT solver to perform the allocation. With the proposed techniques and algorithms we achieve from a 10% up to a 40% speed-up in the execution time of the workload, which also translates into savings in the idle energy consumed by the machines, since they can be turned off or simply used to process other services. The total energy used to execute the tasks also decreases by 48% in the best case. It must be noted that these results are highly dependent on the number and combination of the different types of nodes that form the cluster, as well as on the percentage of non-CPU-intensive tasks that make up the workload. If the workload has a large proportion of non-CPU-intensive tasks (approximately 50%, as in this case), very good results are obtained by increasing the number of low- and medium-resource machines. On the other hand, should the workload consist only of CPU-intensive tasks, the benefits would be much smaller. As for the communication infrastructure, even though we have neither calculated nor taken these savings into account, it is clear that exploiting locality reduces dataset traffic through the infrastructure: if a task that needs a particular dataset is executed in a BS or a GW near the source, the data does not have to travel to the HPC facility, and communication costs are reduced. These solutions have been computed using the Yices SMT solver [28]. Regarding the execution time of the solver itself, the maximum time needed to allocate a job set was 4 minutes. As most of the tasks have longer execution times than this, the approach is feasible and can be used as a Slurm plug-in executed at runtime, during the normal execution of the workload. As a drawback, a spare core is needed to calculate the task allocation. However, this has a negligible impact on energy consumption: one core of the HPC facility working at its maximum power consumption for at most 4 minutes each time a new job set arrives means a worst-case energy consumption of less than 1 kWh for the whole workload. Conclusions This paper proposes novel energy management techniques to tackle the computational needs of smart cities and ambient intelligence applications by making use of energy-minimization workload assignment policies. These techniques are inspired by solutions that come from the world of data centers. They use application-awareness and heterogeneity to assign low-demand and mid-demand computational tasks to idle nodes with low and medium resources in the WSN infrastructure, instead of executing them in the HPC infrastructure.
The proposed solution uses SMT solvers to generate energy-efficient assignments that take into account several variables such as maximum execution time and data locality. The results show that this kind of non-optimal assignment can increase the energy savings of the smart infrastructure by up to 40%, mainly because of the savings that come from decreased execution times. The use of this kind of algorithm also makes energy-efficient task assignment feasible in large sensor deployments. Future work will focus on the development of more accurate and efficient SMT solver algorithms that capture more constraints of the problem, as well as on the use of real smart environment applications, such as healthcare services. An accurate comparison between the performance of ILP minimizations and SMT solvers for different sizes of the WSN deployment is also envisioned. In the near future a simulation framework will be developed that integrates all the components (HPC infrastructure, WSNs, communications and the Slurm RM) in order to evaluate more accurately the beneficial impact of data locality on energy savings and performance.
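To make the SMT-based assignment concrete, the sketch below poses a toy version of the problem as an optimization over Boolean assignment variables. It uses the Z3 solver through its Python bindings purely as a stand-in for the Yices solver used in the paper, and all task costs, capacities and the deadline are invented for illustration; the real system also models data locality (hops) and per-core occupancy, which are omitted here.

```python
# A minimal sketch of the energy-minimizing assignment, using the Z3 SMT solver
# as a stand-in for the Yices solver used in the paper. Task costs, capacities
# and the deadline below are invented for illustration only.
from z3 import Optimize, Bool, If, Sum, sat, is_true

tasks = ["t0", "t1", "t2", "t3"]
nodes = ["BS", "GW", "HPC"]
# energy[t][n] (Wh) and time_[t][n] (min) to run task t on node type n.
energy = {"t0": {"BS": 1,  "GW": 2,  "HPC": 4},
          "t1": {"BS": 2,  "GW": 3,  "HPC": 5},
          "t2": {"BS": 50, "GW": 30, "HPC": 20},
          "t3": {"BS": 60, "GW": 35, "HPC": 22}}
time_  = {"t0": {"BS": 20,  "GW": 12,  "HPC": 5},
          "t1": {"BS": 25,  "GW": 15,  "HPC": 6},
          "t2": {"BS": 300, "GW": 120, "HPC": 30},
          "t3": {"BS": 350, "GW": 140, "HPC": 35}}
capacity = {"BS": 1, "GW": 1, "HPC": 2}   # simultaneous tasks per node type
deadline = 150                            # maximum allowed time per task (min)

opt = Optimize()
x = {(t, n): Bool(f"x_{t}_{n}") for t in tasks for n in nodes}

for t in tasks:
    # each task runs on exactly one node type, and only where it meets the deadline
    opt.add(Sum([If(x[t, n], 1, 0) for n in nodes]) == 1)
    for n in nodes:
        if time_[t][n] > deadline:
            opt.add(x[t, n] == False)

for n in nodes:
    # simplified capacity constraint per node type
    opt.add(Sum([If(x[t, n], 1, 0) for t in tasks]) <= capacity[n])

# objective: minimize the total energy of the assignment
opt.minimize(Sum([If(x[t, n], energy[t][n], 0) for t in tasks for n in nodes]))

if opt.check() == sat:
    model = opt.model()
    for t in tasks:
        chosen = [n for n in nodes if is_true(model.evaluate(x[t, n]))]
        print(t, "->", chosen[0])
```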
9,639
2012-08-03T00:00:00.000
[ "Computer Science" ]
Design and Development Of Lecturer Attendance System Using Radio Frequency Identification (RFID) Lecturer attendance record is required by the university to know the presence of lecturers in teaching in class. In general condition, lecturer attendance is recorded on the attendance sheet, or input to web application accessed on a class computer. However, there are some problems in its implementation so that at the end, lecturer presence is carried out using a manual form where the academic staff needs to reenter the lecturer attendance data into the applications. Based on the above, the authors designed and developed a lecturer attendance information system to record lecturers' attendance using radio frequency identification technology by implementing a near field communication card (NFC Card). The device used to record and read presence data during lectures, by tapping an Mi-fare NFC card to an NFC reader / writer device. The flow of this research method begins with a literature study of NFC card, observe the flow of lecture attendance process and data recorded into lecturer attendance sheet, analyzing the database design, the system design which has compatible with NFC reader and writer devices, designed system interface and continue to develop system. The result is system consists of master data, system attendance, verification and reporting module. The results show that NFC card implementation is more practical for lecturers in conducting lecture attendance and NFC card could be tapped out into an NFC device at a maximum distance up to 7 cm with the reading angle relative to NFC reader/writer with range 00 until 300 can read NFC Card. . Near Field Communication Card RFID (radio frequency identification) is a technology that combines the functions of electromagnetic or electrostatic coupling at the radio frequency of the electromagnetic spectrum, to identify an object. RFID is used as a tool to automatically control a chain of activities. RFID is a card (card) that can only be read (read only) or can be read and written (read/write), does not require direct contact or light paths to operate, can function in a variety of environmental conditions, and provides a level of data integrity high, and difficult to forge, so that RFID can provide a high level of security [1]. Radio frequency identification system (RFID) is an automatic technology and aids machines or computers to identify objects, record metadata or control individual target through radio waves. Connecting RFID reader to the terminal of Internet, the readers can identify, track and monitor the objects attached with tags globally, automatically, and in real time, if needed. This is the so-called Internet of Things (IoT). RFID is often seen as a prerequisite for the IoT [2]. The advantage of using an NFC card for lecturer attendance is more practical and easy to use, time in and time out are stored on the card by tapping into the lecturer attendance information system. The device used is the Mi-Fare NFC card along with an NFC reader / writer (Near Field Communication) tool to record and read lecturer data on the card, where during lectures, only by tapping the NFC card to the NFC reader/writer device to record time attendance of lecturers into the attendance information system. The purpose of this study is to determine the stages of designing and building a lecturer attendance system using NFC Card, as an alternative practical solution to record time attendance. 
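As an illustration of the tap-to-record interaction described above, the sketch below reads the unique identifier of a tapped card. It is only a minimal example: the system described in this paper is implemented in Visual C# against the vendor's NFC reader/writer SDK, whereas this sketch assumes the open-source nfcpy library and a USB reader, and it stops at reading the card UID.

```python
# Illustrative sketch only: reads the UID of a tapped Mifare/NFC card with the
# nfcpy library (pip install nfcpy). The card layout and field names of the
# actual attendance system are not modelled here.
from datetime import datetime
import nfc


def on_tap(tag):
    """Called when a card is tapped; returning False makes clf.connect()
    hand the tag object back to the caller immediately."""
    return False


def read_card_uid():
    # Open the first USB NFC reader/writer found.
    with nfc.ContactlessFrontend('usb') as clf:
        tag = clf.connect(rdwr={'on-connect': on_tap})
        return tag.identifier.hex()  # unique ID of the tapped card


if __name__ == '__main__':
    uid = read_card_uid()
    # In the real system this UID would be matched against the lecturer master
    # data and the current lecturing schedule before storing time in/time out.
    print(f"{datetime.now().isoformat()} tap from card {uid}")
```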
The flow of this research method begins with literature studies related to how the NFC card works, how to store and read data into an NFC card. In addition, observing the lecture attendance process flow of lectures in the class. Next stage is data collection to find out the components of attendance data that need to be filled in on the lecturer attendance sheet. The next stage is system analysis to determine the programming language used, compatible and integrated with NFC reader and writer devices, designing an attendance system database. After that next stage is design of the system interface for the lecturer academic attendance system. The results of the analysis and design of the system and database are then used for the development of the attendance system, and system testing is carried out to find out whether the system can run well, and the data recorded is correct. II. RELATED WORK Eka Putra, et al (2019) presented The Design And Development Of an E-Ticketing Information System For Vehicle Parking Using an NFC Card. The ticketing system, which uses parking tickets, has weaknesses such as, tickets can be lost and can also be damaged / torn or tucked / lost and the identity of the vehicle does not match the parking ticket and is easily faked. This information system is designed by applying a card as a vehicle ticket for parking with data input in the form of a vehicle number stored on the RFID card. The system output information is in the form of an RFID card which shows the recorded motor vehicle number and driver's name. The system provides daily parking recapitulation reports and monthly vehicle parking data summaries [3]. Eka Putra, et al (2019) presented Design and Development of Login Security System Using Radio Frequency Identification. System security is important in information systems to prevent unauthorized users from accessing data. Login system applies security using encrypted passwords stored on RFID cards. This research designed login security system storing encrypted password using MD5 encryption into the Mifare Tag RFID card and equipped NFC reader to read data from RFID Card. By storing encrypted password characters on RFID cards, login system security is stronger and cannot be traced by unauthorized parties to log into systems. Some stage of system design is through study of literature, designing process flow, system algorithms, designing encryption methods and system interfaces, writing card module coding, card reading module coding, implementation, and system testing. The system login applied by scanning RFID card on the NFC reader, if the password on RFID matches then the user successfully logs into the system. Based on the testing of RFID Tag readings, the maximum distance from the reading of RFID Tag cards is up to 7 cm with a reading range of 0 0 to 30 0 with a success rate of 100% authentication. By using RFID Tag cards, increase security for logging into the system, because user cannot log in without having a card with the appropriate password [4]. Adhitama, M, et al (2019) presented Analysis of Reading and Writing Book Data Using Mifare RC-522 RFID for Libraries. Modern libraries need a system for identifying books with one another. Currently, the library automation system has implemented a barcode identification system. However, this technology requires direct or visible contact for the system to read. An alternative technology that can be used is RFID. 
The advantage of RFID is that it does not require direct contact (contactless) with a low error rate and data that can be written repeatedly. The research method used is a waterfall. The components used in this system are the RFID Tag / Transponder (Mifare Classic 1K), the RFID Interrogator Module (Mifare RC-522) coupled with the Arduino Uno R3 which is connected to a computer. In the test results, it was found that this system can read data on the tag with a recommended reading distance of under 3.5 cm (has an accuracy rate above 80%) and write data on the tag with a recommended writing distance of under 2.5 cm (has an accuracy rate above 80 %) [5]. S. Santoso, et al (2017) presented The Development of Student Attendance Applications Using Smart Cards For The Development of Smart Campuses. One of the campus activities is learning and teaching, when students enter the class filling out attendance on paper, activities require more time and paper. This application records attendance data to a database. Applications created are able to store attendance data when in the classroom. Using a smartcard that has Radio Frequency Identification (RFID) on the card, the application data is stored in a database system, for easy reporting, applications are built to provide activity reports, and reports can be printed. Based on the measurement of the RFID card and reader, the reading ability of the RFID card is about ± 2 seconds, with an accuracy rate of 98 percent [6]. C. Costa, et al (2013), presented Radio Frequency Identification (RFID) as a technology to improve the management of information flows in the supply chain and food security in the agricultural sector. RFID technology is capable of presenting great opportunities for the agricultural sector and there are several obstacles slowing down its implementation. The survey provides an overview of the opportunities and constraints for widespread adoption of RFID. The aim of the research is to provide an updated analysis of current developments in RFID technology for different product typologies in the food agribusiness industry, addressing at the same time its potential in technology and logistics development regarding different production / distribution sectors [7]. Yee-Loong Chong, et al. (2015), presented predicting RFID adoption in the health care supply chain from a user perspective. The research carried out integrates the integrated theory of acceptance and use of UTAUT technology, namely performance expectations, effort expectations, facilitation conditions, social influence) and individual differences, namely personality (neuroticism, awareness, openness to experience, friendliness and extraversion) and demographic characteristics (i.e. age and gender) to predict RFID adoption in the health care supply chain. Of the 252 doctors and nurses who were studied using 11 variables proposed in an effort to predict the adoption of RFID by doctors and nurses, it was found that individual differences were able to predict RFID adoption better than the variables originating from UTAUT [8]. Mohandes (2017) presented Class Attendance Management System Using Near Field Communication Mobile Devices. This study developed a Class Attendance Management System (CAMS) prototype that has been developed and evaluated using NFC-enabled mobile devices and NFC (or RFID) tags / cards. Faculty can monitor student attendance during the academic period, issue alerts, and request resignation of students due to poor attendance in accordance with institution policies. 
The app on NFC-enabled phones reads student IDs by simply tapping it on the NFC student ID card and gets positive responses from universities [9]. Kaur M, et.al (2011) presented RFID Technology Principles, Advantages, Limitations & Its Applications. This study gives an overview of the current state of radio frequency identification (RFID) technology. Aside from a brief introduction to the principles of the technology, major current and envisaged fields of application, as well as advantages, and limitations of use are discussed. Radio frequency identification (RFID) is a generic term that is used to describe a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. It's grouped under the broad category of automatic identification technologies. RFID is increasingly used with biometric technologies for security [10]. LakshmiSudha, K. et al (2015) Barcode based Student Attendance System. This study presenting student attendance play significant role in order to justify academic outcome of a student and college as overall. Unfortunately, there is no automated attendance record keeping application available in colleges. There is a need for a tool to systematically keep the student's attendance record due to increasing number of college students The project that we are going to make is to help the teachers in our college to avoid maintaining the registry book. This project uses a barcode scanner. B.B.S.A.S uses Barcode scanner to take the attendance of students entering the lab. Each student's ID card will have a barcode at the back side of it. This barcode contains unique data of the student such as roll number, branch and year. Etc. Student will scan their barcode at the end so that the student can't cheat. The display screen will show the attendance of the particular student after scanning his/her barcode. Teachers and administrator will only have access to the system with their respective login ID's and passwords [11]. Mohamed, B.et al (2012) Fingerprint attendance system for classroom needs. Fingerprint attendance system aims to automate the attendance procedure of an educational institution using biometric technology. This will save time wasted on calling out names and it gives a fool-proof method of attendance marking. A hand-held device is used to mark the attendance without the intervention of teacher. The device can be passed and students can mark attendance during the lecture time. Students would be made to place their finger over the sensor so as to mark their presence in the class. It can communicate with a host computer using its USB interface. This device operates from a rechargeable battery. GUI application in host computer helps the teacher to manage the device and attendance [12] Shoewu, O. et al (2012) Development of Attendance Management System using Biometrics. The development of an attendance management system using biometrics is proposed. Managing student attendance during lecture periods has become a difficult challenge. The ability to compute the attendance percentage becomes a major task as manual computation produces errors, and also wastes a lot of time. For the stated reason, an efficient attendance management system using biometrics is designed. This system takes attendance electronically with the help of a finger print device and the records of the attendance are stored in a database. Attendance is marked after student identification. 
For student identification, a biometric (fingerprint) identification based system is used. This process however, eliminates the need for stationary materials and personnel for the keeping of records. Eighty candidates were used to test the system and success rate of 94% was recorded. The manual attendance system average execution time for eighty students was 17.83 seconds while it was 3.79 seconds for the automatic attendance management system using biometrics. The results showed improved performance over manual attendance management system. Attendance is marked after student identification [13]. A. Research Flow The research flow begins with direct observation of the flow process of lecturer attendance process once teaching in classroom. Direct observation analyzes the attendance sheet of the lecturer on the lecturer attendance list to find out what data is entered when the lecturer records attendance on the academic attendance sheet. This is done to suit the type of data recorded in the lecturer attendance system. Next stage of research is to do literature studies related to how NFC cards work, practice making programs / coding on how to save data into an NFC card and how to read data from an NFC card. The next stage is data collection of attendance data on the lecture attendance sheet. Next stage of research is to design general flow system and data flow diagrams, design database and system user interface. The final stage is developing lecturer attendance information system. The general flow of research can be seen in Fig. 3. B. Research Location and Time This research conducted in one of IT campus in Denpasar area, Bali, Indonesia which was carried out in 2020 within a period of 1 year. C. Research Object The research object is lecturer, and time attendance sheet and time attendance of lecturing. Furthermore, when lecture start lecturing, time in is recorded once at the end of lecturing, time out was also recorded. D. Flow of Lecturer Attendance System The general flow of lecturer attendance system can be seen in Fig. 4. In Fig. 4 explains general flow of lecturer attendance system starts from entry lecturer data and record the data into NFC Card. Lecturer data has been available in NFC Card. Then staff entering the lecturing schedule into system. Once lecturer start lecturing then they should do time in by tapping NFC Card into NFC read/write device connected to lecturer attendance system. System match the time in with the lecturing schedule which has been input previously. Then time in stored into lecturer attendance system. Once lecturer finish lecturing then they should do time out by tapping NFC Card into NFC read/write device. In Fig. 5 explains general description of lecturer attendance system starts from entry lecturer data and record the data into NFC Card. Then staff entering the lecturing schedule into system. Once lecturer start lecturing then they should do time in by tapping NFC Card into NFC read/write device connected to lecturer attendance system. System match the time in with the lecturing schedule which has been input previously. Then time in stored into lecturer attendance system. Once lecturer finish lecturing then they should do time out by tapping NFC Card into NFC read/write device. The final step is lecturing system provide attendance reporting recapitulation for academic administrations. E. Manual Time Sheet At Fig. 
6 presented manual timesheet used for time attendance by manually, which is lecturer to put time sheet into manual time attendance once they start lecturing in the class. While finished then this time sheet to be given to academic staff to enter time sheet into attendance applications. The design and development of traceability system uses the Visual C # programming language with MySQL database, and utilizes with Near Field Communication device for tapping the card to records and read attendance data. A. User Interface Design The design of the Lecturer Data user interface can be seen in Fig. 7. This module store lecturer data into system and also into NFC Card. This card brought by lecturer once they give lectures. The design of the Lectures Schedule User Interface can be seen in Fig. 8. This module store lecturer schedule data into system. The system will match this schedule with the time once lecturer tapping NFC Card for giving lectures. The design of the Lectures Attendance User Interface can be seen in Fig. 9. This module store time in and time out of lecturer data into system once card tapped into NFC device. Duration time of lectures also presented in this module as time difference between time out and time in of lectures. The design of the Lectures Attendance Verification Interface can be seen in Fig. 10. This module used to display attendance data once tapping the card into NFC device. This module check the data stored into card by tapping into NFC device. B. Contex Diagram and Data Flow Diagram Design Design of context diagram can be seen in Fig. 11. A context diagram is a top level (also known as level 0) data flow diagram. It only contains one process node that generalizes the function of the entire system in relationship to external entities. Context diagram above is composed of 3 external entities, 1 process, and 4 data stores. The entities are lecturer, courses, lectures schedule. The arrow represents data flow into system or out from system, where each data flow is shown in the Fig. 11. Traceability system as the process of this design, and the data store consist of master lecturer, master courses, schedule and attendance database. Master lecturer is used to save lecturer data, and master courses is used to save courses data. Schedule data source is used to save schedule data and attendance is used to save data lecture attendance during give lectures. Meanwhile data flow diagram (DFD) is a much more complex representation of a context diagram. DFD show a further level of detail not shown in the context diagram. The Data Flow Diagram (DFD) shows the data flow between the processes within a system. Data flow diagram shown in Fig. 12. Context diagram in Fig. 11 is spitted into 2 process to be data flow diagram consist of master data module and lecturer attendance system. Master data is used to manage master data, while lecturer attendance is used to manage all transaction data related to attendance during lecturing process. There are 4 entities, 4 data store and 2 process. The arrows represent data flow from entity to process or data store, shown in Fig. 12. C. Hardware and Software Requirements The hardware requirements to implement this traceability system can be seen in Table I. To store lecturer attendance data into cloud server A. System Result This lecturer attendance system consists of user interface of master data module and lecturer attendance and verification module. 
The main menu of the lecturer attendance system consists of the submenus master lecturer, master lecturing schedule, attendance and verification. The lecturer main menu can be seen in Fig. 13. The lecturer master data module is used to input lecturer data such as the lecturer registration number, lecturer name and study program. The lecturing schedule module, shown in Fig. 15, is used to input the lecturing schedule, which consists of the course code, class, day, time and lecturer name. The system matches this schedule time with the lecturer's attendance when lecturing starts in class. The lecturer attendance module, shown in Fig. 16, is used to record the time in and time out of lecturers giving lectures in the classroom. When the NFC card is tapped on the NFC device, it is matched with the current lecturing schedule and the time in and time out of the attendance are recorded. The module also shows the time duration, which is the difference between time out and time in. The verification module, shown in Fig. 17, is used to verify and display the content of the data stored on the NFC card. By tapping the NFC card on the NFC device, the card content is shown in the module, so the lecturer knows which subject has been lectured and the duration of the lecturing process. Finally, Fig. 18 shows the attendance report recapitulation of the lecturers who have carried out the lecturing process. The report shows the course, registration number of the lecturer, lecturer name, class, day, time in, time out and time duration. B. Discussion Based on the testing results, we found that the NFC card has a maximum distance at which the NFC device can read/write its data. The NFC device could read the NFC card at a maximum distance of 7 cm in an upright position and could not read it in a sideways position. The results of the read/write tests of the NFC card against the NFC device are shown in Table III. From Table IV, it can be observed that the NFC card could be read and written at tapping angles from 0° up to a maximum of 30° relative to the NFC reader/writer. Using the NFC card in the attendance system is therefore more practical for lecturers when recording lecture attendance. VI. CONCLUSION The conclusion of this research is that the system design stages consist of direct observation, literature study, data collection, system analysis and hardware requirements, database and user interface design, system development, and system testing. Testing the system by tapping an NFC card (RFID) on the NFC card reader/writer showed that the maximum reading distance of the NFC tag card is up to 7 cm and that the card can be read at angles from 0° to 30° relative to the NFC reader/writer. Using an NFC card in the attendance system is more practical for lecturers when recording lecture attendance.
5,311.6
2021-02-28T00:00:00.000
[ "Computer Science", "Engineering" ]
PDGF‐C promotes cell proliferation partially via downregulating BOP1 Platelet‐derived growth factor C (PDGF‐C) is a member of the PDGF/VEGF family, which is well known for important functions in the vascular system. It is widely reported that PDGF‐C is able to modulate cell proliferation. However, the molecular mechanism of this modulation is still not very clear. In a screening of factors regulated by the PDGF‐C protein, we identified a factor called block of proliferation 1 (BOP1), which is a pivotal regulator of ribosome biogenesis and cell proliferation. In this study, we investigated the regulation of BOP1 by PDGF‐C and its role in modulating cell proliferation. We found that BOP1 was downregulated at both the mRNA and protein levels in cells treated with PDGF‐C‐containing conditioned medium. Conversely, BOP1 was upregulated in PDGF‐C deficient mice. Furthermore, we confirmed that overexpression of BOP1 inhibited HEK293A cell proliferation, whereas knockdown of BOP1 promoted cell proliferation. The mitogenic effect of PDGF‐C could be attenuated by downregulation of BOP1. Our results demonstrate a clear PDGF‐C–BOP1 signaling axis that modulates cell proliferation. At the cellular level, PDGF-C was recognized as a potent mitogen when it was first identified (Hamada et al., 2000). After it is secreted extracellularly in a paracrine fashion, PDGF-C can bind to PDGF receptor alpha (PDGFRα) (Fredriksson et al., 2005). Although several downstream components of the PDGF-C pathway are currently known, some of which may be related to its mitogenic function, it remains important to understand the downstream mechanisms underlying the mitogenic functions of PDGF-C. To explore the molecular mechanism of the PDGF-C mitogenic functions, we scrutinized our published microarray data of factors modulated by the PDGF-C protein (Tang et al., 2010). Among these candidates, we identified a factor called block of proliferation 1 (BOP1) which may mediate the functions of PDGF-C in modulating cell proliferation. BOP1 is a member of the PES1-BOP1-WDR12 (PeBoW) complex, which plays critical roles in ribosomal RNA processing (Rohrmoser et al., 2007; Strezoska et al., 2000). It was initially reported to be an inhibitor of cell proliferation that interferes with multiple reactions in pre-rRNA processing (Holzel et al., 2005; Strezoska et al., 2002). However, it was subsequently discovered that BOP1 can also promote cell proliferation in some cancer cells (Vellky et al., 2021). This study was designed to test the hypothesis that the mitogenic effect of PDGF-C is mediated through a pathway involving the regulation of BOP1. | Cell culture HEK293A cells from the American Type Culture Collection were cultured in Dulbecco's modified Eagle medium (ExCellBio) supplemented with 10% fetal bovine serum (FBS; ExCellBio) and 1% penicillin and streptomycin (Cellgro) at 37°C in a 5% CO2 + 95% air incubator. Cells were passaged 1 day before they were subjected to the various treatments. | Transfection Plasmids and siRNA were transfected into HEK293A cells with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocols. In brief, plasmids or siRNA were mixed with Lipofectamine 2000 in Opti-MEM medium and incubated for 15 min at room temperature to allow formation of the nucleic acid-Lipofectamine complex.
The complex was then added to cell cultures in complete culture medium as described above, and the cells were incubated for 48 h. In this study, the transfection efficiency was approximately 90%. | Collection of conditioned medium Two days after the HEK293A cells were transfected, the cells were washed in phosphate-buffered saline (PBS) and then maintained in serum-free medium, which was collected 6 h later and used as conditioned medium (CM) after removal of debris by centrifugation. | Western blot Cultured HEK293A cells were lysed in radioimmunoprecipitation assay buffer supplemented with protease and phosphatase inhibitor cocktails (Thermo Fisher Scientific). The lysates were then quantitated with a DC™ Protein Assay Kit (Bio-Rad), separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and transferred to a polyvinylidene difluoride membrane (Bio-Rad). After a brief Ponceau S (p3504; Sigma) staining, the membrane was blocked with 5% bovine serum albumin, followed by incubation with a primary antibody at 4°C overnight and incubation with a horseradish peroxidase (HRP)-conjugated secondary antibody (1:5000) for 1 h at room temperature. Each incubation was followed by three washes with Tris-buffered saline with Tween-20. Two primary antibodies, mouse anti-BOP1 (sc-390672, 1:1000; Santa Cruz) and rabbit anti-β-tubulin (AP0064, 1:1000; Bioworld), were used in this study. Immobilon Western Chemiluminescent HRP substrate (Merck Millipore) was used to detect specific antibody binding. Signals were captured using a G-BOX (Syngene) imaging system. To determine the relative expression of a given protein, ImageJ (NIH) software was used to quantify its expression level, which was then normalized to the β-tubulin housekeeping control. | Cell proliferation assay HEK293A cells were cultured in 96-well plates. Cell viability was determined with the cell counting kit-8 (K1018; APExBio) at 48 h after transfection, a convenient and sensitive method for measuring cell density. The absorbance of each well was measured with a microplate reader at 450 nm and analyzed as described in the manufacturer's protocol. The expression of BOP1 was normalized to the level of β-actin in the same cDNA, and the 2^−ΔΔCt method was used to compare mRNA expression levels. | Statistical analysis The statistical analysis was performed using GraphPad Prism 8.0 software. All data were tested for normality and equal variance. If the data passed these tests, two-group comparisons were evaluated by Student's t-test and one-way or two-way analysis of variance; if not, the Mann-Whitney test was used to compare the two groups. All quantitative data in this study are presented as mean ± standard error of the mean, and p < .05 was considered statistically significant. Each experiment was repeated at least three times. | PDGF-C promotes cell proliferation PDGF-C was recognized as a potent mitogen when it was first identified (Hamada et al., 2000). To study these functions, we established a cell culture system in HEK293A cells. We then applied the serum-free CM to another passage of HEK293A cells and measured their proliferation. As shown in Figure 1c, the cells cultured with the PDGF-C-containing CM showed an approximately one-third higher rate of proliferation than the cells cultured with the control CM. This result is consistent with our hypothesis that secreted PDGF-C promotes cell proliferation in this cell culture system.
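As a concrete illustration of the 2^−ΔΔCt normalization and of the two-group comparison described above, the following sketch computes relative BOP1 expression from hypothetical Ct values and runs a t-test on hypothetical absorbance readings. All numbers are invented for illustration and are not the study's data.

```python
# Illustrative sketch of the 2^(-ddCt) calculation and a simple two-group test.
# All Ct values and absorbances below are invented, not the study's data.
import numpy as np
from scipy import stats

# qPCR Ct values (per replicate) for BOP1 and the beta-actin reference gene.
ct = {
    "control":  {"BOP1": np.array([24.1, 24.3, 24.0]), "ACTB": np.array([17.0, 17.1, 16.9])},
    "pdgfc_cm": {"BOP1": np.array([25.0, 25.2, 24.9]), "ACTB": np.array([17.0, 16.9, 17.1])},
}

# delta Ct = Ct(target) - Ct(reference), per replicate
d_ctrl = ct["control"]["BOP1"] - ct["control"]["ACTB"]
d_treat = ct["pdgfc_cm"]["BOP1"] - ct["pdgfc_cm"]["ACTB"]

# delta-delta Ct relative to the control mean, then fold change
ddct = d_treat - d_ctrl.mean()
fold_change = 2.0 ** (-ddct)
print("BOP1 fold change (PDGF-C CM vs control):", round(float(fold_change.mean()), 2))

# CCK-8 proliferation readout: blank-corrected absorbance at 450 nm
a450_ctrl = np.array([0.62, 0.65, 0.60, 0.63])
a450_cm = np.array([0.83, 0.86, 0.80, 0.84])
t, p = stats.ttest_ind(a450_cm, a450_ctrl)
print(f"relative proliferation: {a450_cm.mean() / a450_ctrl.mean():.2f}, p = {p:.3g}")
```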
| PDGF-C downregulates BOP1 expression but does not alter its subcellular localization Next, we explored the molecular mechanism of PDGF-C mitogenic effects.In a previously study, we performed a microarray study of retina treated with PDGF-C protein (Tang et al., 2010).Data was deposited in the Gene Expression Omnibus under accession No. GSE19207.In this data, we focused on BOP1 as a candidate because it is a regulator of cell proliferation.In the microarray study, Bop1 was downregulated by 1.45 fold in the retina treated with PDGF-C protein (GSE19207). To confirm the possibility that BOP1 is a downstream factor of PDGF-C signaling, we examined the expression of BOP1 in HEK293A cells after cultured with PDGF-C containing CM. BOP1 expression was examined at both mRNA and protein levels.Quantitative realtime PCR (qPCR) study indicated that the expression level of BOP1 mRNA was downregulated in cells cultured with PDGF-C containing CM (Figure 2a).Western blot study confirmed that BOP1 protein was significantly downregulated in cells treated with secreted PDGF-C protein (Figure 2b).We further examined other possible effects of PDGF-C signaling on BOP1.To follow the localization of BOP1 after the application of PDGF-C CM.We found BOP1 protein was normally located in the nucleoli which were not counterstained with DAPI (Figure 2c).In the cells cultured with PDGF-C CM, the localization of BOP1 was not altered. Therefore, these results demonstrate that PDGF-C downregulates BOP1 expression but does not alter its localization. | pdgf-c Deficiency upregulates BOP1 expression To further confirm that BOP1 is a downstream effector of PDGF-C signaling, we examined the expression of BOP1 in pdgf-c deficient mice.Both PDGF-C and BOP1 are predicted (https://www.ncbi.nlm. nih.gov/gene) to be highly expressed in skeletal muscles.Thus, skeletal muscles in the mouse posterior limb were isolated for the study of BOP1 expression.qPCR study showed that the expression level of BOP1 was upregulated in pdgf-c deficient mice at the mRNA level (Figure 3a).Western blot verified that BOP1 protein was also significantly upregulated in pdgf-c deficient mice (Figure 3b).These results indicate that pdgf-c deficiency upregulates BOP1 expression. It confirms that PDGF-C downregulates BOP1 expression. | BOP1 inhibits HEK293A cell proliferation Our results indicated that the expression of BOP1 is suppressed by PDGF-C.Therefore, the effects of BOP1 downregulation on cell proliferation was studied.BOP1 was shown to be a cell proliferation inhibitor for NIH 3T3 cells (Pestov et al., 1998).However, in some cancer cell lines, it could instead promote cell proliferation (He et al., 2022). To examine the effects of BOP1 on cell proliferation in our culture system, we modulated the expression of BOP1 in HEK293A cells and examined their proliferation.Western blot confirmed that BOP1 was successfully regulated at the protein level (Figure 4a).Cell proliferation decreased significantly in the BOP1 overexpression group, while increased significantly in the BOP1 knockdown group (Figure 4b).These results confirm that BOP1 is an inhibitor of cell proliferation in HEK293A cells.effects of PDGF-C CM in promoting cell proliferation under physiological expression level of BOP1 was about 35%.And it dropped to 11% in BOP1 knockdown condition.This result suggests that BOP1 partially modulates the mitogenic effects of PDGF-C, but it does not provide the only pathway. 
| DISCUSSION In this study, we examined the downregulation of BOP1 by PDGF-C, and confirmed that the downregulation of BOP1 protein promoted cell proliferation.These results demonstrate PDGF-C-BOP1 signaling is able to modulate cell proliferation.In addition, the study of PDGF-C CM on BOP1 transfected cells suggests that BOP1 is one of the multiple downstream mitogenic effectors of PDGF-C.Taken together, we demonstrate that PDGF-C promotes cell proliferation partially via downregulating BOP1 expression. We expressed PDGF-C protein in HEK293A cells, collected CM, and examined its effects on another passage of HEK293A cells.There may be differences between CM and the PDGF-C protein as the CM also contains other factors.In our CM system, these additional factors could be introduced by the overexpression of PDGF-C.PDGF-C is an important factor involved in many biological processes.It was discovered more than 20 years ago (Li et al., 2000). The main significance of this study is that we identified a novel pathway that links PDGF-C signaling with chromosomal stability because the downstream factor BOP1 is the member of the PeBoW complex which has close relationship with chromosomal instability (Rohrmoser et al., 2007;Strezoska et al., 2000).BOP1 was initially identified as a proliferation inhibitor in an isolation of growth inhibitory sequences from a cDNA library (Pestov et al., 1998).And the assay of the initial study on BOP1 was performed on NIH 3T3 cells.Its proliferation inhibitory function could be confirmed both in our study on HEK293A cells and the study on human aortic smooth muscle cells (Wu et al., 2019). Another significance of this study is that we identified PDGF-C as an extracellular modulator of BOP1.Because of the limitation in BOP1 study, as we know, there is no report about an extracellular factor to modulate BOP1.The functions of BOP1 in cancer cell proliferation is different from our results.In cancer cells, BOP1 promotes cell proliferation, epithelial-to-mesenchymal transition, tumorigenesis, and cancer development (Chung et al., 2011;He et al., 2022;Vellky et al., 2021).In this aspect, as a modulator of BOP1, PDGF-C and its receptor may be the targets for modulating cancer cells.The significance of PDGF-C in modulating cancer cells via regulating BOP1 should be further investigated. The mechanism of BOP1 in inhibiting proliferation may be based on its role in the PeBoW complex.BOP1 is basically a ribosome biogenesis protein.At the molecular level, BOP1 forms the nucleolar PeBoW complex together with Pes1 and WDR12, which are also essential for cell proliferation (Grimm et al., 2006).Also, BOP1 is related to chromosomal instability (Lips et al., 2008).The basic function of the PeBoW complex require the maturation of 25S and 5.8S ribosomal RNAs and 60S ribosome biogenesis (Pestov et al., 2001;Strezoska et al., 2000).In Saccharomyces cerevisiae, the counterparts of PeBoW complex is composed of Nop7, Ytm1, and Erb1 (Miles et al., 2005;Tang et al., 2008). Two-hybrid assays and GST pulldown experiments showed that Nop7 and Ytm1 interact directly with Erb1, but not with each other (Hölzel et al., 2005;Miles et al., 2005).In this case, we may speculate that overexpressed BOP1 may work as a dominant negative regulator of the PeBoW complex, which could explain the cell proliferation inhibitory effect of BOP1 overexpression on HEK293A cells. 
In addition to our findings, how could PDGF-C recruit BOP1 is not known yet.As a secreted protein, PDGF-C activates PDGFRα, which leads to downstream signaling.Considering that BOP1 is prominently localized to the nucleolus (Strezoska et al., 2000), it is reasonable to speculate that there would be some intermediates to convey the signal from PDGFRα to BOP1.As introduced above, several signaling pathways have been identified so far to be involved in PDGF-C functions.The possible intermediate candidates may give us some hints to explore the process for PDGF-C to recruit BOP1. In summary, this study identified a novel mechanism in which PDGF-C promotes cell proliferation partially via downregulating BOP1. PDGF-C deficient (PDGF-C −/− ) mouse line was created and reported byDing et al. (2002).All animal experiments were performed in accordance with the National Institutes of Health guideline for the Care and Use of Laboratory animals and were approved by the Institutional Animal Care and Use Committee of Zhongshan Ophthalmic Center.Wild-type littermates were used as the control group. Immunofluorescence stainingCells growing on glass coverslips were processed for immunofluorescence 48 h after transfection.Cells were washed with PBS briefly and then fixed in 4% paraformaldehyde for 15 min.After three times of washing with PBS, cells were pretreated in PBS supplemented with 5% Donkey serum and 0.5% Triton X-100 for 1 h.Mouse anti-BOP1 antibody (sc-390672, 1:100; Santa Cruz) in PBS supplemented with 5% Donkey serum and 0.1% Triton X-100 were incubated at 4°C overnight, and an Alexa Fluor-conjugated secondary antibody (1:500; Invitrogen) was applied for 1 h.Then the coverslips were incubated with 0.1% DAPI for 5 min.Three times of washing were applied after each antibody incubation step.Cells were observed under a confocal microscope (LSM710; Carl Zeiss). culture system.HEK293A cells were selected for the study because they expressed very low level of endogenous PDGF-C protein in our pilot study.To ensure that protein is normally expressed and posttranslationally modified, recombinant human full-length PDGF-C (pdgf-c) plasmids were transfected into HEK293A cells, and CM was prepared.PDGF-C protein and the corresponding receptor PDGFRα were detected in the CM and cell lysate, respectively (Figure 1a,b).Thus, CM was chosen for the study of PDGF-C functions. 1 Overexpression and function of PDGF-C in HEK293A cells.HEK293A cells were transfected with plasmids encoding ZsGrn (control) or plasmids inserted with PDGF-C.CMs were collected and applied to another passage of HEK293A cells.(a) Expression of PDGF-C protein in the CM.(b) Expression of PDGFR-α in HEK293A cell lysate.(c) PDGF-C promotes HEK293A cell proliferation.The data are presented as means ± standard error of mean.**p < .01. n = 8. F I G U R E 2 PDGF-C downregulates BOP1 expression but does not alter its subcellular localization.HEK293A cells were cultured with CM as introduced above.(a) The expression of BOP1 determined by qPCR.The fold change relative to the level of control group is displayed.(b) The expression of BOP1 determined by Western blot.Relative expression level is displayed.(c) PDGF-C does not interfere the subcellular distribution of BOP1.HEK293A cells were treated with CM.The subcellular distribution of BOP1 was examined by immunofluorescence staining.The data are presented as means ± standard error of mean.*p < .05. n = 3 for a, and n = 5 for b.Bar = 20 μm. 
Downregulation of BOP1 attenuates the mitogenic function of PDGF-C Our results demonstrate that PDGF-C downregulates the expression of BOP1, and the knockdown of BOP1 indeed promotes cell proliferation.Taken together, these results suggest a PDGF-C-BOP1 signaling to modulate cell proliferation.Then we further explore the significance of BOP1 in modulating the mitogenic functions of PDGF-C.We examined the mitogenic effects of PDGF-C CM on cells in which the expression of BOP1 was downregulated.The significance of BOP1 in modulating the mitogenic functions of PDGF-C can be viewed in two extreme ways.Hypothesis A (Figure 5a) considers BOP1 as the only effector of PDGF-C and in this case the effect of PDGF-C CM would be completely abolished by totally downregulating BOP1 expression.Hypothesis B (Figure 5b) is the other extreme, if PDGF-C promotes cell proliferation via intermediate signaling molecules other than BOP1, downregulation of BOP1 would not change the mitogenic effect of PDGF-C.To test these hypotheses, we compared the mitogenic effects of PDGF-C CM under physiological expression level of BOP1 and downregulated expression of BOP1.As shown in Figure 5c, the F I G U R E 3 pdgf-c Deficiency upregulates BOP1 expression.Skeletal muscles in the posterior limb were isolated from WT and pdgf-c deficient mice for determining the expression of BOP1.(a) The expression of BOP1 determined by qPCR.The fold change relative to the level of WT mice is displayed.(b) The expression of BOP1 determined by Western blot.All bands are cropped from the same membrane to represent the statistic results.The data are presented as means ± standard error of mean.*p < .05. n = 3 for a and n = 6 for b.F I G U R E 4 BOP1 inhibits HEK293A cell proliferation.HEK293A cell were transfected with plasmids encoding BOP1 or siBOP1.Cell proliferation was examined in full medium supplemented with serum.(a) BOP1 expression determined by Western blot.(b) Cell proliferation after the regulation of BOP1 expression.The data are presented as means ± standard error of mean.*p < .05;**p < .01;***p < .001.n = 3 for a and n = 6 for b. Therefore, CM could better represent the overall effects of PDGF-C on cell proliferation.Furthermore, PDGF-C expressed in HEK293A cells could be fully posttranslationally modified, which is missing in bacteria expressed proteins.Nonetheless, we cannot rule out the possibility that CM contains unknown factors in addition to PDGF-C.But all our results support our hypothesis that PDGF-C modulates cell proliferation partially by a pathway involving BOP1. F I G U R E 5 The mitogenic effect of PDGF-C is partially regulated by BOP1.The role of BOP1 in the mitogenic effect of PDGF-C is hypothesized in two extreme ways.(a) Hypothesis A: The mitogenic effect of PDGF-C is regulated only by BOP1.(b) Hypothesis B: The mitogenic effect of PDGF-C is regulated by other factors than BOP1.(c) Downregulation of BOP1 attenuates the mitogenic effect of PDGF-C.HEK293A cells were transfected with vehicle or siBOP1.After 2 days, PDGF-C CM or the control CM were applied to those cells.The relative cell proliferation rate is displayed.The data are presented as means ± standard error of mean.*p < .05. n = 4
4,292.4
2023-08-24T00:00:00.000
[ "Biology", "Medicine" ]
The Reasons Suggesting a Close Link between Thermodynamics and Relativity Since the advent of relativity, it is widely accepted that the law of conservation of energy must include the energy created by the disintegration of matter, or converted into matter. The aim of the present paper is to insert this concept into the basic equations of thermodynamics. Introduction The first evocation of a link between thermodynamics and relativity seems to have been advanced by R.C. Tolman in 1928 [1]. In view of a photo now presented in Wikipedia [2], it can be imagined that he had the opportunity to discuss this question directly with A. Einstein. In recent years, the same general topic has been treated by several authors, some of whom are mentioned in the References section [3]-[8]. Reading their papers often requires a solid background in theoretical physics. In contrast, the explanations presented below can be easily understood by scientists having a basic knowledge of classical thermodynamics. We will begin the discussion with the process of gas expansion into vacuum and the observation that its classical interpretation raises a problem. Consider an isolated system divided by a piston into two parts: one part contains a gas, noted 1, while the other part has been evacuated, the vacuum being noted 2. If the piston, previously locked (initial state), is released, we know that it will move until the gas fills the whole system (final state). In the usual interpretation of the process, the adopted reasoning can be summarized as follows. The system being isolated, its internal energy, noted U, is considered constant, implying the relation dU_syst = 0 (1). Inside the system, if we apply to the gas the work equation dW = −P_e dV (2), we are led to write dW_gas = −P_2 dV_1 (3). Having P_2 = 0 (since P_2 is the pressure of the vacuum), the obtained result is dW_gas = 0 (4). As concerns the change in heat, designated dQ, the usual proposal is dQ_gas = 0 (5), because we do not conceive that the gas can exchange heat with the vacuum. If we suppose that, inside the system, the possible exchanges of energy are limited to work and heat, the equation corresponding to the change in internal energy takes the form dU = dQ + dW (6). One of the postulates implicitly admitted in thermodynamics is that the change in internal energy of a system passing from an initial state A to a final state B is independent of the level of irreversibility of the process. Knowing that the conditions of reversibility are characterized by the relations dQ = T_i dS (7), where T_i is the internal temperature and dS the change in entropy, and dW = −P_i dV (8), where P_i is the internal pressure and dV the change in volume, this is a way of saying that, even in conditions of high irreversibility, the sum dQ + dW is supposed to have the same value as the sum T_i dS − P_i dV, itself often written T dS − P dV. The consequence is that, for a given process, the term dU in Equation (6) is interpreted as being always equal to the term dU in the equation dU = T_i dS − P_i dV (9), itself often written dU = T dS − P dV (10). Referring to the gas evoked above, and taking into account the information dU_gas = 0 (11), Equation (10) gives T dS − P dV = 0 (12), so that the change in entropy of the gas is given by the relation dS_gas = P dV / T (13). Remembering that T is in kelvin (absolute temperature), all the terms of Equation (13) are positive, with the consequence that the term dS_gas is positive too.
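As a small numerical check of this classical account, the sketch below evaluates the entropy change of an ideal gas whose volume doubles at constant temperature, under the textbook assumption that the expansion into vacuum leaves U unchanged. The amount of gas and the volumes are arbitrary illustrative values.

```python
# Entropy change of an ideal gas in a free (Joule) expansion, assuming the
# classical interpretation dU = 0 and dQ = 0. Values are illustrative.
import math

R = 8.314          # J / (mol K), ideal gas constant
n = 1.0            # mol of gas in part 1
T = 298.0          # K, unchanged for an ideal gas if dU = 0
V1, V2 = 1.0, 2.0  # initial and final volumes (only the ratio matters)

delta_U = 0.0                             # classical assumption for the isolated system
delta_S_gas = n * R * math.log(V2 / V1)   # integrated form of dS = P dV / T

print(f"dU = {delta_U} J, dS_gas = {delta_S_gas:.2f} J/K (> 0)")
```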
At this stage of the discussion, the important point to keep in mind is that the change in entropy of the gas is positive (Equation (13)) while its internal energy is considered constant (Equation (11)), as is the internal energy of the whole system (Equation (1)). The same can be said for the vacuum, knowing that it represents the difference between the whole system and the gas. Although this interpretation is classically admitted, it raises a problem that can be introduced as follows. Let us imagine that, instead of being a vacuum, the second part of the system contains a gas whose initial pressure is P_2. Applying Equation (2) to both parts, we get dW_1 = −P_2 dV_1 and dW_2 = −P_1 dV_2 (14). Having dV_2 = −dV_1, the second equation can be written dW_2 = P_1 dV_1, and the result obtained for the system is dW_syst = (P_1 − P_2) dV_1 (15). The term dV_1 being positive when P_1 > P_2, the term dW_syst is positive too. The conclusion of this observation is that when an isolated system is composed of two parts having a mutual exchange of work, its mechanical energy W increases, inducing an increase in its internal energy U. In such conditions, maintaining the relation dU_syst = 0 suggested by Equation (1) requires that, inside the system, another kind of energy, namely its thermal energy Q, decreases. The expected information is therefore an argument showing that the change in heat of the system obeys the relation dQ_syst < 0, and more precisely dQ_syst = −dW_syst. Before seeing whether this is possible, it can be noted from Equation (15) that, for a given value of P_1, the product (P_1 − P_2) dV_1 is all the greater as the value of P_2 is small. The highest value of dW_syst is reached when P_2 = 0, that is, when the second part of the system is a vacuum, the situation corresponding to the case initially considered. Admitting that there is no exchange of heat between the gas and the vacuum (as suggested by the proposal dQ_gas = 0 given by Equation (5)), the internal energy of the system appears to be entirely conditioned by the relation dW_syst > 0 and therefore leads to the conclusion dU_syst > 0. Obviously, we are confronted with a problem, because this result does not agree with the postulate dU_syst = 0 (Equation (1)) classically admitted for an isolated system. As will be seen below, we are confronted with the same kind of problem when we consider the process of a heat exchange between two parts of an isolated system. The Process of Exchange of Heat between Two Parts of an Isolated System Let us consider an isolated system divided into two parts separated by a fixed diathermic wall. In the initial state, we suppose that part 1 contains 1 liter of water at T_1 = 10°C (283 K), while part 2 contains 3 liters of water at T_2 = 50°C (323 K). We know that the natural evolution of this system will consist in an exchange of heat between both parts, until they reach the same final temperature T_f. The classical interpretation of the process can be summarized as follows. The system being isolated, its internal energy is considered constant, a proposal which obeys the relation dU_syst = 0, numbered above as Equation (1).
Remembering from Equation (6) that the general expression of dU_syst is dU_syst = dQ_syst + dW_syst, and that, in the present case, the term dW_syst is negligible or nil (the reason why it is often not evoked in thermodynamic textbooks), the idea is implicitly admitted that dW_syst = 0 (20). In such conditions, combining Equation (1) (dU_syst = 0) and Equation (20) leads to the relation dQ_syst = 0 (21), which itself implies dQ_1 = −dQ_2 (22). The verification of this last proposal can be done by observing that Q_1 = C_1 (T_f − T_1) and Q_2 = C_2 (T_f − T_2). The final temperature can be measured with a thermometer, or calculated using the well-known equation T_f = (C_1 T_1 + C_2 T_2) / (C_1 + C_2), where C_1 and C_2 are the heat capacities of part 1 and part 2. Knowing that the weight of 1 liter of water is 1 kg and admitting that Cp_water, whose value is 4184 J kg⁻¹ K⁻¹, can be considered constant over the temperature interval taken into account, their values are C_1 = 4184 J K⁻¹ and C_2 = 12552 J K⁻¹, and the result obtained for T_f is 313 K (40°C). Then, introducing this value into the general equation Q = C (T_f − T_i), we get Q_1 = +125,520 J and Q_2 = −125,520 J, from which it can be seen that Q_1 = −Q_2, and therefore that Equation (21) (dQ_syst = 0) is effectively verified. As concerns the change in entropy, the starting equation is dS = dQ/T = C dT/T, whose integration gives ΔS = C ln(T_f/T_i) (27). Applying Equation (27) to part 1 and part 2 leads to ΔS_1 = 4184 ln(313/283) ≈ +421.5 J K⁻¹ and ΔS_2 = 12552 ln(313/323) ≈ −394.7 J K⁻¹. Therefore, the change in entropy of the system is ΔS_syst ≈ +26.8 J K⁻¹, showing that the term dS_syst obeys the relation dS_syst > 0. In the same manner as we have seen, in Section 2, that the starting proposal dU_syst = 0 (given by Equation (1)) raises a problem, we will see here that, for the heat exchange, this starting proposal raises a similar problem. The reason is the following. In thermodynamic textbooks, the concept of entropy is often introduced through an opposition between a reversible process and an irreversible process. It consists in saying that, when a system undergoes a reversible process, its change in entropy is defined by the relation dS = dQ/T_i (30), where T_i represents the internal temperature, while, when a system undergoes an irreversible process, its change in entropy is defined by the relation dS = dQ/T_e + dS_i (31), where T_e represents the external temperature and dS_i is an additional term, always positive, designated as the internal component of the entropy. It is well known that, in practice, a process is always irreversible, but with a level of irreversibility that is more or less important. When the level of irreversibility is very low (i.e., when the difference between T_e and T_i is very small), the term dS_i tends towards zero and Equation (31) tends towards Equation (30). In contrast with the term dS_i, the term dS_e, designated as the external component of the entropy, is defined as dS_e = dQ/T_e (32). Using these conventions, which were introduced into physical chemistry by J. W. Gibbs, the condensed expression of Equation (31) takes the form dS = dS_e + dS_i (33). The term dQ having the same value in Equation (31) and Equation (32) (as will be verified hereafter), we can be tempted to think that, for a heat exchange, we necessarily have the relation dQ_irr = dQ_rev. Concerning this point, which is the fundamental point of the present discussion, it can be noted that when a liter of water is heated from 283 K to 313 K (the case examined above), the usual calculation of ΔQ is effectively independent of the level of irreversibility of the process. We get the same value when the water is directly put in contact with a thermostat at 313 K, or heated progressively by contact with several heat sources at different temperatures, or by contact with another mass of water (such as the part 2 just considered) whose own temperature is decreasing from an initial value to the final value 313 K.
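The following short script re-derives the numbers used in this example (final temperature, exchanged heats and entropy changes); it simply encodes the equations quoted above with the stated heat capacities, so it is a numerical check rather than new material.

```python
# Numerical check of the water-mixing example: 1 L at 283 K against 3 L at 323 K,
# with Cp(water) = 4184 J kg^-1 K^-1 assumed constant.
import math

C1 = 1 * 4184.0   # heat capacity of part 1 (J/K)
C2 = 3 * 4184.0   # heat capacity of part 2 (J/K)
T1, T2 = 283.0, 323.0

Tf = (C1 * T1 + C2 * T2) / (C1 + C2)   # final temperature, 313 K
Q1 = C1 * (Tf - T1)                    # heat received by part 1
Q2 = C2 * (Tf - T2)                    # heat released by part 2
dS1 = C1 * math.log(Tf / T1)           # entropy change of part 1
dS2 = C2 * math.log(Tf / T2)           # entropy change of part 2

print(f"Tf = {Tf:.1f} K")
print(f"Q1 = {Q1:+.0f} J, Q2 = {Q2:+.0f} J (sum = {Q1 + Q2:.0f} J)")
print(f"dS1 = {dS1:+.1f} J/K, dS2 = {dS2:+.1f} J/K, dS_syst = {dS1 + dS2:+.1f} J/K")
```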
Despite the fact that this interpretation is widely admitted and seems correct, attention is called to the fact that Equation (31) is not an energy equation but an entropy equation. It can be converted into an energy equation by multiplying each of its terms by T_e, and we get T_e dS = dQ + T_e dS_i, whose meaning is that the energy associated with the change in entropy is the sum of the exchanged heat dQ and of an additional term dQ_add = T_e dS_i. Combining the fact that dS_i is positive with the fact that T_e is an absolute temperature, the additional term designated dQ_add appears to be necessarily positive.

From a comparison between the energy Equation (34) and Equation (35), and taking into account the information already given by the entropy Equation (30) and Equation (31), it appears that the terms dQ_rev and dQ_irr can be directly defined as dQ_rev = T_i dS and dQ_irr = T_e dS. The terms T_i and T_e being positive (absolute temperatures), the sign of dS is always the same as the sign of dQ. The consequence is that, for a given process (taking into account that dS can be factorized, since S is a state function), the difference between dQ_irr and dQ_rev is dQ_irr − dQ_rev = (T_e − T_i) dS = dQ_add. As a consequence, the terms dQ_irr and dQ_rev are always linked together by the relation dQ_irr = dQ_rev + dQ_add, where dQ_add is positive.

If we apply the integrated form of Equation (37) to both parts of the considered system, we must take into account the following peculiarities:
- For part 1: T_e being the external temperature, it is the temperature of part 2, but more exactly its average temperature, which can be noted T*_2. Its value is T*_2 = (323 + 313)/2 = 318 K. The value obtained for the corresponding term ΔQ_add is 8358 J.
- For part 2: proceeding in the same way, with the average temperature of part 1 as the external temperature, the value obtained is 7810 J.
- For the system: adding the values obtained above for the two parts gives 16168 J.

Obviously, such a result is not in adequacy with the conventional conception of thermodynamics, which lies on the idea that, in the case of an isolated system, the term called dQ_add is necessarily nil, since no energy can be added or deducted. Referring to this situation, some complementary observations are given below.

The Link between Thermodynamics and Relativity

It is well known that the difficulties encountered in learning thermodynamics are more conceptual than mathematical. Referring to the process just examined (heat exchange between two parts of an isolated system), the problem seems related to the fact that, if the entropy Equation (31) is totally accepted, its translation under the form of the energy Equation (34) and Equation (35) is generally absent from thermodynamic textbooks or courses. The cause of this situation is probably the apparent impossibility of detecting a physical symptom proving the reality of the energy symbolized by the designations T_e dS_i or dQ_add.

We have seen with Equation (39) that, for a given value of dS, a heat exchange is characterized by the relation dQ_irr > dQ_rev. In a similar way, it can be shown that, for a given value of dV, a work exchange is characterized by the relation dW_irr > dW_rev. Remembering that for a system concerned by both an exchange of heat and an exchange of work, the change in internal energy dU is defined as dU = dQ + dW, it appears that we necessarily have dU_irr > dU_rev. This proposal concerns indifferently a system exchanging energy with its surroundings or an isolated system which is the seat of an internal exchange of energy. Taking into account that an isolated system is defined as having no exchange of energy with its surroundings, and that dU_rev = 0, it seems that the inequality dU_irr > 0 must be systematically substituted for the equality dU_irr = 0 usually admitted.
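The way the paper evaluates the integrated dQ_add terms for each part is only partly visible above, so the following sketch should not be read as the author's exact computation. It merely illustrates the definitions dQ_rev = T_i dS and dQ_irr = T_e dS, using the arithmetic-mean temperature of each part as a crude stand-in for T_i and T_e; with that assumption it reproduces the order of magnitude of the quoted values (a few kJ per part, roughly 16 kJ in total), not the exact figures.

```python
import math

CP = 4184.0
C1, C2 = 1.0 * CP, 3.0 * CP
T1i, T2i, Tf = 283.0, 323.0, 313.0

dS1 = C1 * math.log(Tf / T1i)          # about +421.6 J/K
dS2 = C2 * math.log(Tf / T2i)          # about -394.8 J/K

# Crude stand-ins for the "own" mean temperature of each part:
T1_mean = (T1i + Tf) / 2               # 298 K
T2_mean = (T2i + Tf) / 2               # 318 K

# dQ_add = dQ_irr - dQ_rev = (T_e - T_i) dS for each part,
# each part seeing the other as its exterior.
dQ_add_1 = (T2_mean - T1_mean) * dS1
dQ_add_2 = (T1_mean - T2_mean) * dS2

print(f"dQ_add part 1  ~ {dQ_add_1:+.0f} J")
print(f"dQ_add part 2  ~ {dQ_add_2:+.0f} J")
print(f"dQ_add system  ~ {dQ_add_1 + dQ_add_2:+.0f} J  (compare 16168 J in the text)")
```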
Of course, we easily conceive that for the creators of the thermodynamic theory, it was impossible to imagine another condition than dU = 0 for an isolated system. The situation is different today because, in the meantime, the concept of relativity has been introduced, and particularly the Einstein mass-energy relation E = mc^2. Knowing that in this new perspective a given amount of mass can be transformed into energy and conversely, the idea that an energy can be created (or destroyed) inside an isolated system becomes possible. In the light of this wider understanding, the suggestion has been advanced recently [9] that the energy appearing in an irreversible process is an illustration of the Einstein mass-energy relation. The situation is similar for an exchange of heat because, referring to Equation (39), a strictly reversible exchange is never observed in practice. From the theoretical point of view, the calculation of a reversible exchange in energy is nevertheless possible and is often done in thermodynamics courses, particularly in the case of an ideal gas.

Concerning the nature of the energy created, and taking into account that it does not appear under the form of work or of heat, the hypothesis has been suggested that it can be a potential energy of gravitation [12]. This insight opens a possible relation between the decrease in mass of a planet and the progressive increase of its distance to other celestial bodies. Correlatively, it has been suggested that for living matter, the term dm of Equation (46) and Equation (47) could be positive, contrasting with inert matter where this term is interpreted as being negative [13].

Conclusions

The difficulties encountered in thermodynamics are more conceptual than mathematical. It seems that a possible answer lies in the idea that every irreversible process is an illustration of the Einstein mass-energy relation. Equation (46) and Equation (47) are nothing but an attempt to insert this conception into the basic tools of the thermodynamic theory. Among the other hypotheses explaining the behaviour of gaseous systems is the one suggested by T. Yarman and A. L. Kholmetskii, based on quantum mechanics.
3,721.2
2017-09-15T00:00:00.000
[ "Physics", "Education" ]
Geomechanical Modelling Using Artificial Neural Networks Combined With Geostatistics

The principal minimum horizontal stress plays an important role in the study of reservoir characteristics, the modeling of oil and gas reservoirs, and the drilling, production and stimulation of wells. However, it is currently not possible to measure the minimum horizontal stress along the wellbore as a logged geophysical parameter. The minimum horizontal stress is measured by the leak-off test (LOT) at only a few points in a well. In order to obtain values all along the wellbore, empirical formulas were established to determine the minimum horizontal stress for different fields. These formulas must then be calibrated with LOT data, whose number is usually limited and sometimes unavailable. Moreover, the empirical formulas of one field might not be accurate for another.

Introduction

Geomechanics and geophysics are essential tools for reservoir simulation, for assessing hydrocarbon reserves, as well as for drilling and production. In a geomechanical model, stress information is particularly important, especially the minimum horizontal stress (Shmin), which is a significant parameter used for many kinds of work in petroleum engineering. Theoretically, Shmin can be determined by equations relating it to other geomechanical parameters such as pore pressure and vertical stress [1]. On the other hand, empirical formulas were also established to determine the minimum horizontal stress indirectly through sonic wave velocities measured from logging [2]. In reality, Shmin values determined from these empirical formulas still need to be calibrated with values obtained from the leak-off test (LOT). However, the main drawback of LOT is the limited amount of data, because LOT is normally conducted at a few discrete points along the wellbore, and the number of wells with LOT data is often limited. Consequently, the small amount of LOT data may reduce the accuracy of the calibration. Moreover, as each field has its own geological characteristics, applying the same formulas from one field to another will certainly lead to errors. With the development of Artificial Intelligence, new methods based on artificial-intelligence tools applied to technical fields are increasingly popular. As one of these tools, the artificial neural network (ANN) allows computers to learn from data and perform tasks that would otherwise require human judgement. Dowd and Saraç (1994) presented a brief description of ANN and explained the basics of the feedforward back-propagation network and the use of ANN in geostatistics. The multilayered feedforward back-propagation algorithm was used to predict the desired variogram values. This purely mathematical research concluded that geostatistical simulation via neural networks is a possibility [3]. M. Kanevsky et al. (1996) used ANN and geostatistical methods for environmental mapping. The authors drew a map of the nuclear radiation distribution around a reactor where a nuclear accident happened in 1986. From the radioactive data measured at various points, the geostatistical interpolation method was used to draw a map of the distribution of nuclear radiation over the whole region. In addition, another map was made using artificial intelligence. The authors compared the geostatistical and ANN models using the cross-validation technique as well as validation data sets. The result showed that the method of using ANN was very promising in environmental and Earth sciences [4].
Workflow

Due to the complexity of this study, a step-by-step schematic diagram of the research is presented in Fig. 1. The models using ANN were generally built with the same steps, but the data were different for each case. Firstly, the influence of parameters such as depth, number of wells and number of stratigraphies on the ANN models was studied. Secondly, these results were used to develop a model combining ANN with a geostatistical method, which in this study is Kriging interpolation. The results of this model were also compared with those given by the geostatistical method alone, so that the advantages of the new workflow could be assessed.

Data set

This study collected logging data from 4 wells, denoted X1, X2, X3 and X4. These wells belong to the Hai Thach - Moc Tinh Field on the continental shelf of Southeast Vietnam. Fig. 2 shows a view of these 4 wells in a two-dimensional space. The data set included the geomechanical parameters of the wells; however, within this study's scope, only 4 geomechanical parameters were used: the true vertical depth (TVD), the pore pressure (PP), the vertical stress (Sv) and the minimum horizontal stress (Shmin). As the depth interval of each sequential stratigraphy might not be the same for each well, mud windows were established in Fig. 4 in order to determine the range of data that should be taken, so that the influence of stratigraphy could be studied. Three fundamental parameters govern the value of the mud density while drilling: the maximum pore pressure of the considered phase, the minimum fracturing pressure of the considered phase and the critical stability density of each formation. The Mud Window is the density range between the pore and fracturing pressures (Chalez, 1999) [11]. However, a safe and intact Mud Window is the density range between the collapse pressure (or breakout pressure), at which the wellbore collapses due to the pressure difference between the formation and the well, and the fracturing pressure. Two distinct stratigraphies were observed from the mud windows in Fig. 4. Hence the data were divided into two groups in order to study the influence of stratigraphic uniformity on the results:
• Data set 1: logging data of the wells from 2500 m to 3500 m. Data in this depth range belong to the transition between the 2 stratigraphies. Data set 1 has 2000 data points, taken every 0.5 m.
• Data set 2: logging data of the wells from 3300 m to 4000 m. Data in this depth range belong to the second stratigraphy from the top down. Data set 2 has 1400 data points, also taken every 0.5 m.
The number of data points in the two data sets is sufficient to build the models and train the artificial networks.

Estimation workflow of minimum horizontal stress using artificial neural networks

An ANN is an artificial information-processing system inspired by how neural networks work in the human brain. ANN is one of the techniques of artificial intelligence applied to solve specific problems of classification, pattern recognition and data prediction. An ANN converts input data into output data through calculations performed in its neurons. However, in order for the output values to be accurate and reliable, the network needs to be built and trained so that it accurately describes the nature of the relationship between the output data and the input data. A typical artificial neural network has three main parts: an input layer, hidden layers and an output layer (Fig. 5).
• Input layer: The input data are entered as vectors, and the number of vectors corresponds to the number of neurons of this input layer.
• Hidden layers: The hidden layers connect the input values to the output values. The neurons in the hidden layers are mainly responsible for interpreting the input layer's neurons and then sending the information to the output layer's neurons.
• Output layer: The output data are organized as vectors, and the number of vectors corresponds to the number of neurons of this output layer.

The network is built through a training process which essentially adjusts the weights over the epochs so that the output values are reliable and have a small error compared to the actual values. At the end of the training, the weight values are saved and used to forecast output data when other input data are available. Gradient descent is the most widely used algorithm for optimizing artificial neural networks by updating the weights and biases. The gradient descent update is given as follows: w_new = w_old − η ∂E/∂w, where η is the learning rate and E is the error function. Based on the basic theory of ANN, the following workflow was proposed to develop the ANN models in this study:

Step 1: Data collection and analysis. Data of the three wells X1, X2 and X3 were used to build the artificial neural networks. The parameters TVD, PP and Sv are selected as inputs and the minimum horizontal stress is taken as the output of the networks. However, depending on the purpose of the network, a different data set is used for each model, as presented in more detail in step 2. Once the ANN models are built, the data of the remaining well, X4, are used to estimate its minimum horizontal stress. Each data set is randomly divided into 3 parts:
• Training data: 70% of the data set. These values are used during network training.
• Test data: 15% of the data set. These values are used to test the effectiveness of the network during and after training.
• Validation data: 15% of the data set. These values are used to check whether overfitting has occurred.

Step 2: Building the ANN models. In this study, four artificial neural network models, denoted A, B, C and D, were built.
- Model A: The input data are PP and Sv, and the output data are the minimum horizontal stress; data set 1 is used.
- Model B: The input data are TVD, PP and Sv, and the output data are the minimum horizontal stress; data set 1 is used.
- Model C: The input data for this model are TVD, PP and Sv of the three wells X1, X2 and X3, and the output data are the minimum horizontal stress of these three wells. Model C has two hidden layers with 9 and 8 neurons, respectively. Data set 1 is used for model C. Model C uses input data of three wells, hence one well more than models A and B; this is to assess the effect of the number of wells on the estimated Shmin.
- Model D: The input data for this model are TVD, PP and Sv of the three wells X1, X2 and X3, and the output data are the minimum horizontal stress of these three wells. Model D has two hidden layers with 9 and 10 neurons, respectively. Data set 2 is used for model D, instead of data set 1, to assess the impact of stratigraphic consistency on the estimation. As a reminder, data set 1 and data set 2 were described in Section 2.2.

Fig. 6 shows the structure of the artificial neural networks. The number of neurons in each layer of the networks is summarized in Table 1. Figure 6 - The perceptron network for the ANN models. The transfer function of the first and second hidden layers in the four mentioned models is the hyperbolic tangent sigmoid function, because it squashes inputs into the wider numerical range between -1 and 1 and has asymptotic symmetry. On average, it is more likely to create output values that are close to 0, which is beneficial when forward-propagating to subsequent layers.
The hyperbolic tangent sigmoid function is given as follows: tanh(x) = (e^x − e^-x)/(e^x + e^-x).

Step 3: Training, testing and validation of the ANN models. Network training is basically the process of adjusting the weights and biases. The weights take random values at the start of network training. During the training process, in each epoch, an algorithm is used to adjust the weight values until the desired error of the output values is reached. This process increases the ability of the network to predict reliable results when a different set of input data is used. The results of the network training are reported as the mean square error (MSE) and the regression coefficient (R^2) of the three data subsets: training, test and validation. The formulas used are MSE = (1/n) Σ (y_i − ŷ_i)^2 and R^2 = 1 − Σ (y_i − ŷ_i)^2 / Σ (y_i − ȳ)^2, where y_i are the measured values, ŷ_i the estimated values and ȳ the mean of the measured values.

Step 4: Reliability checking. After step 3, the artificial neural networks must be checked for accuracy and overfitting. The MSE and R^2 values from step 3 are displayed on a performance plot and regression plots. An artificial neural network is considered reliable if it achieves the following results:
- The training, test and cross-validation errors are low. In this study, the maximum acceptable error is about 5% of the mean value of the calculated parameter.
- The network training error is stable, which means the error does not vary too much in the last epochs.
- Overfitting is not significant. Overfitting occurs when the MSE of the training error is much smaller than the MSE of the test error.
After making sure the ANN works well with high reliability, the network can then be used for further studies.

Step 5: Application of the models to the well X4, whose minimum horizontal stress needs to be estimated. Specifically, for each model, the input data are as follows:
• Model A: PP and Sv of the well X4 in data set 1.
• Models B and C: TVD, PP and Sv of the well X4 in data set 1.
• Model D: TVD, PP and Sv of the well X4 in data set 2.

Estimation workflow of minimum horizontal stress using a combination of artificial neural networks and geostatistics

The geostatistical interpolation technique used in the study is Kriging (Oliver and Webster, 2014) [12]. The basic equation of Kriging is Z*(x_0) = Σ λ_i Z(x_i), where Z*(x_0) is the estimate at location x_0, Z(x_i) are the measured values and the weights λ_i are obtained from the variogram so that the estimate is unbiased with minimum variance. All data used in this section are from data set 2. The steps for building the model combining ANN and geostatistics are as follows:
Step 1: An ANN model named E was built using the input data TVD, PP and Sv of the well X1, with the minimum horizontal stress of the same well X1 as the network's target. Fig. 6 and Table 1 show the structure of model E.
Step 2: After model E was trained, tested and validated in step 1, it was used to estimate the minimum horizontal stress of the wells X2 and X3.
Step 3: A minimum horizontal stress interpolation model (Model I) was developed by Kriging with the original data set 2 of the three wells X1, X2 and X3. Then, Shmin of the well X4 was interpolated with model I.
Step 4: A minimum horizontal stress interpolation model (Model II) was developed by Kriging, with Shmin of the well X1 taken from the original data set 2 while Shmin of the wells X2 and X3 was generated by the ANN model E from step 2. Then, Shmin of the well X4 was interpolated with model II.
Step 5: Analysis of the results obtained from model I and model II.

Estimation of minimum horizontal stress using artificial neural networks

After the process of network construction, training and reliability testing, the ANN training results for models A, B, C and D are presented in Tables 2-4 and in Figs. 7-10,
where the performance of the error values during network training of models A, B, C and D is shown, respectively. These models were then used to estimate the minimum horizontal stress for the well X4. The values of Shmin of the well X4 estimated by models A, B, C and D are presented in Fig. 11, Fig. 12, Fig. 13 and Fig. 14, respectively. The results indicate a close correspondence between the predicted and actual data. This observation is confirmed not only visually by Figs. 11-14 but also by the error analysis. Model D has MSE = 1.8952 × 10^-9, which is 42.18 times lower than that of model C. The uniformity of the stratigraphy therefore greatly affects the results, so it is better if the data can be divided according to each stratigraphic layer.

Neural Network and Geostatistics

As mentioned above in Section 2, two models I and II were developed: model I performed pure Kriging interpolation, while model II combined ANN and geostatistics. The resulting 2D minimum horizontal stress interpolation maps and the regression analyses are presented in Fig. 15 and Fig. 16 for model I, and in Fig. 17 and Fig. 18 for model II, respectively. The results of the two models showed no significant difference. Although it used data from only one well, the method combining ANN and Kriging successfully built an interpolation model similar to the one obtained using the Kriging method alone, which used three data wells. The MSE and R^2 analysis of the two models I and II is summarized in Table 7. These results indicate clearly that the combination of ANN and geostatistics works well, with acceptable estimated values of the minimum horizontal stress in comparison with the model that performed geostatistics only. Hence, it is safe to say that building a model combining ANN and geostatistics from the data of only one well can be applied in practice, instead of using at least three wells as required when geostatistical methods are used alone. The number of wells with data for modeling does not significantly affect the forecast results. In fact, having a well with the necessary data is not easy. Instead of drilling multiple wells and making measurements to obtain data, engineers can use the optimal number of wells, build the right models and thereby estimate the required parameters with high accuracy and good performance compared to drilling new wells.

Conclusion

This paper proposed a workflow combining artificial neural networks and geostatistics to model the principal minimum horizontal stress for the four wells of the Hai Thach - Moc Tinh field in the Nam Con Son Basin. In comparison with the geostatistical method, which requires data from at least three wells to develop an accurate interpolation model, the proposed method combining ANNs and geostatistics needed data from only one well. This is a considerable practical advantage, since the number of wells with data is often limited: with the proposed workflow it is still possible to predict the data of an area with high accuracy, without the need to drill additional wells.
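To make the combined ANN-plus-Kriging workflow easier to follow, here is a minimal, self-contained sketch in Python. The data are synthetic placeholders (no real well logs are reproduced), the network only loosely mirrors the tanh architecture described above, and a Gaussian-process regressor is used as a convenient stand-in for ordinary Kriging; none of this is the authors' actual implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic stand-in for one "data well" (TVD, PP, Sv -> Shmin); made-up trends.
n = 300
tvd = np.linspace(3300.0, 4000.0, n)                       # m, depth range of data set 2
pp = 0.010 * tvd + rng.normal(0.0, 0.5, n)                 # MPa, placeholder pore pressure
sv = 0.023 * tvd + rng.normal(0.0, 0.5, n)                 # MPa, placeholder vertical stress
shmin = 0.70 * sv + 0.20 * pp + rng.normal(0.0, 0.3, n)    # MPa, placeholder target
X = np.column_stack([tvd, pp, sv])

# Step 1: "model E" analogue, a two-hidden-layer tanh MLP trained on the single well.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(9, 10), activation="tanh",
                                 max_iter=5000, random_state=0))
ann.fit(X, shmin)

# Step 2: generate Shmin pseudo-logs at "other wells" (perturbed inputs as stand-ins).
X_other = X + rng.normal(0.0, 0.2, X.shape)
shmin_pseudo = ann.predict(X_other)

# Steps 3-4: spatial interpolation of the pseudo-logs; a GP with an RBF kernel plays
# the role of ordinary Kriging here.
coords = tvd.reshape(-1, 1)
krig = GaussianProcessRegressor(kernel=RBF(length_scale=50.0),
                                alpha=1e-2, normalize_y=True)
krig.fit(coords, shmin_pseudo)
shmin_interp = krig.predict(coords)

# Step 5: compare against the synthetic "truth" with MSE and R^2.
mse = float(np.mean((shmin_interp - shmin) ** 2))
r2 = 1.0 - np.sum((shmin_interp - shmin) ** 2) / np.sum((shmin - shmin.mean()) ** 2)
print(f"MSE = {mse:.4f}, R^2 = {r2:.3f}")
```

The design point the sketch illustrates is the same as in the paper: a single well with measured Shmin trains the network, the network manufactures pseudo-logs at other locations, and the spatial interpolator then only needs those pseudo-logs rather than three fully measured wells.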
4,008.2
2020-10-02T00:00:00.000
[ "Geology" ]
Phospholipid Fatty Acids as Physiological Indicators of Paracoccus denitrificans Encapsulated in Silica Sol-Gel Hydrogels The phospholipid fatty acid (PLFA) content was determined in samples of Paracoccus denitrificans encapsulated in silica hydrogel films prepared from prepolymerized tetramethoxysilane (TMOS). Immediately after encapsulation the total PLFA concentration was linearly proportional to the optical density (600 nm) of the input microbial suspension (R2 = 0.99). After 7 days this relationship remained linear, but with significantly decreased slope, indicating a higher extinction of bacteria in suspensions of input concentration 108 cells/mL and higher. trans-Fatty acids, indicators of cytoplasmatic membrane disturbances, were below the detection limit. The cy/pre ratio (i.e., ratio of cyclopropylated fatty acids (cy17:0 + cy19:0) to their metabolic precursors (16:1ω7 + 18:1ω7)), an indicator of the transition of the culture to a stationary growth-phase, decreased depending on co-immobilization of nutrients in the order phosphate buffer > mineral medium > Luria Broth rich medium. The ratio, too, was logarithmically proportional to cell concentration. These results confirm the applicability of total PLFA as an indicator for the determination of living biomass and cy/pre ratio for determination of nutrient limitation of microorganisms encapsulated in sol-gel matrices. This may be of interest for monitoring of sol-gel encapsulated bacteria proposed as optical recognition elements in biosensor construction, as well as other biotechnological applications. Despite optimization, different stresses are imposed on microorganisms both during the encapsulation procedure as well as in the final encapsulated state [1,9]. Determination of physiological state and viability of encapsulated microorganisms is therefore of significant importance; however the research in this field has been somewhat lagging behind the research of encapsulation procedures [1,10]. So far viability and stress of encapsulated microorganisms have been studied using many ways such as colony-forming units, various microscopy techniques, fluorescence, bioluminescence, metabolic and enzyme activities, or gene expression (see review [1] for details). The physiological state of the microorganisms is among others manifested in the composition of its cytoplasmatic membrane via fast turnover of phospholipids and changes to the phospholipid fatty acid (PLFA) profile. PLFA profiling is routinely used for determinative purposes of pure microbial cultures and for rough characterization of soil microbial communities [11][12][13][14]. Recently we have demonstrated the usefulness of PLFA determination for characterization of viability of encapsulated microorganisms namely in polyvinyl alcohol (PVA) during biotechnological processes [15]. This preliminary study was however focused on total PLFA indicator only. In this study we aimed to test the applicability of this approach to microorganisms encapsulated in rigid silica matrix prepared by sol-gel method and also evaluate a few possible stress indicators (ratios of trans/cis PLFA and cy/pre PLFA) in response to nutrient insufficiency. Encapsulation Paracoccus denitrificans, a strain used in previous studies [16][17][18][19], was obtained from LentiKat's a.s. (Prague, Czech Republic). Various types of silica films were prepared differing in the medium used for dilution of bacterial suspension prior to encapsulation and cell concentrations (Table 1). 
Cultivation was carried out on Bacterial Salt Medium (BSM [20]) with 1 g/L of glucose as a sole source of carbon and energy or in Luria Broth (Table 1) to an exponential growth-phase (OD600 = 0.2 to 0.5). Cells were harvested by centrifugation (4000 rpm, 10 min) and resuspended in an appropriate medium to the desired cell concentrations (Table 1). Encapsulation in silica gels was carried out as described previously [8,21,22]. Briefly the tetramethoxysilane (TMOS) was prepolymerized overnight under acidic conditions (molar ratio TMOS:deionized water:HCl 1:5:10 −2 ) at 4 °C. The formed sol (0.5 mL) was neutralized by NaOH (0.05 mol/L, 0.5 mL), mixed with cell suspension (2 mL) and poured on microscopy glass-slides. Slides were pretreated for better attachment of the films-wiped with toluene, acetone, and ethanol, immersed overnight in NaOH (1 mol/L), sonicated (20 min) in deionized water and dried (2 h, 120 °C). After gelification (~1 min) the gel was immersed in phosphate buffer (50 mmol/L, pH 7). The gels were prepared in quadruplicates and stored under refrigerating conditions (4 °C) immersed in fresh sterile phosphate buffer (0.05 mol/L, pH 7). Sampling and Analysis Sampling was carried out on day 1 and day 7 after encapsulation. Four entire gels were withdrawn at once. Approximately half of the gel was used for moisture content determination and the rest was frozen (−40 °C) in Eppendorf tubes (1.5 mL) for further PLFA determination. PLFA were determined as described previously [15]; a detailed procedure is provided in the Supplementary Material. Briefly, the total lipids from the sample of frozen gel were extracted using a single-phase mixture of chloroform, phosphate buffer, and methanol. Total lipids were fractionated to non-polar lipids, glycolipids and polar lipids. The polar lipid fraction was than subjected to mild alkaline methanolysis and the prepared fatty acids methyl esters (FAME) were determined by GC-MS using methyl nonadecanoate as internal standard. trans/cis PLFA indicator was calculated as the ratio (16:1ω7t + 18:1ω7t)/(16:1ω7 + 18:1ω7) [11]. Cy/pre indicator was calculated as the ratio (cy17:0 + cy19:0)/(16:1ω7 + 18:1ω7) [23]. Table 2 presents mutual correlations between input quantities of Paracoccus denitrificans in LB medium (measured as OD600 of the bacterial suspension used for encapsulation) and concentrations of total PLFA and several abundant FAMEs one day after encapsulation (gels A0 to A6). The highest correlation (r = 0.99) was obtained especially between OD600 and total PLFA which enabled linear regression (PLFAtot = 1.0813OD600 + 0.1123, n = 12, R 2 = 0.99). The intercept of this equation is significant (α = 0.05), i.e., even gels with no input bacteria contained small but significant amounts of PLFAs. This phenomenon is likely related to the small amount of phospholipids in yeast extract, an important admixture of the Luria Broth medium used for dilution of the bacterial suspension. Such co-encapsulation of nutrients is important prerequisite for long-term survival of encapsulated microorganisms [1,24]. This also explains non-zero gel concentrations, uncorrelated to input biomass, of methyl linoleate (cis18:2ω6, 9). This fatty acid is generally only produced by eukaryotes [11,25] and indeed it was not detected in a pure culture of the P. denitrificans strain as well as in further samples encapsulated without LB. 
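For illustration, the indicator calculations described above can be written out as a short script. The FAME concentrations in the dictionary are invented placeholder values; only the indicator formulas and the day-1 regression coefficients quoted above come from the text.

```python
# FAME concentrations (placeholder values, arbitrary units).
fame = {
    "16:1w7": 12.0, "18:1w7": 9.0,     # cis monounsaturated precursors
    "16:1w7t": 0.0, "18:1w7t": 0.0,    # trans isomers (below detection limit here)
    "cy17:0": 4.0, "cy19:0": 6.0,      # cyclopropyl fatty acids
    "16:0": 20.0, "18:1w9": 5.0,       # other abundant PLFAs
}

precursors = fame["16:1w7"] + fame["18:1w7"]

# trans/cis indicator: (16:1w7t + 18:1w7t) / (16:1w7 + 18:1w7)
trans_cis = (fame["16:1w7t"] + fame["18:1w7t"]) / precursors

# cy/pre indicator: (cy17:0 + cy19:0) / (16:1w7 + 18:1w7)
cy_pre = (fame["cy17:0"] + fame["cy19:0"]) / precursors

total_plfa = sum(fame.values())        # living-biomass indicator

def plfa_from_od600(od600: float) -> float:
    """Day-1 regression for gels A0-A6: PLFA_tot = 1.0813 * OD600 + 0.1123."""
    return 1.0813 * od600 + 0.1123

print(f"trans/cis = {trans_cis:.3f}, cy/pre = {cy_pre:.3f}, total PLFA = {total_plfa:.1f}")
print(f"expected total PLFA for an OD600 = 0.4 input suspension: {plfa_from_od600(0.4):.3f}")
```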
Total PLFA Were Proportional to Input Biomass None of abundant single fatty acids was significantly correlated to input bacteria (α = 0.05) and the same was true even for their sum (Table 2). Such a result therefore disallows simple estimation of viable biomass concentrations based on a single FAME, leaving total PLFA concentration as a better indicator. Repeated sampling of silica films seven days after encapsulation gave similar results; however the slope of the OD600 versus PLFA relationship decreased significantly (PLFAtot = 0.0815×OD600 + 0.1113, n = 12, R 2 = 0.82). The slope remained significant (α = 0.05), but the decrease indicates substantial extinction of encapsulated bacteria at input cell concentrations of order 10 8 cells/g and higher; a phenomenon already observed in similar silica matrices [8,22,23]. Stress PLFA Indicators Microorganisms need to maintain their cytoplasmatic membrane optimally permeable and fluid. As a result, membrane phospholipid fatty acids have fast turnover rate and reflect changes of the environment as well as in the cell physiology [11]. This led to the development of several PLFA-based stress indicators such as the ratio of trans/cis PLFA, branched/linear PLFA or cy/pre PLFA, widely used especially in soil ecology for studying of soil microbial communities [11,23]. The popular trans/cis PLFA ratio, a stress indicator of ongoing membrane transformations of dominant cis fatty acids to the corresponding trans isomers, could not be evaluated because the concentrations of indicator trans fatty acids 16:ω7t and 18:1ω7t were below the detection limit in all samples. This isomerization occurs in response to membrane perturbations and related stresses [11]. Such a negative result however indicates limited membrane interactions of the used encapsulation procedure and confirms its good biocompatibility. Figure 1. Relationship between biomass of P. denitrificans (expressed as OD600 of the input bacterial suspension) and cy/pre PLFA indicator at day 1 (blue diamonds) and day 7 (red circles) after encapsulation. Used gels A0 to A6. In contrast, the evaluation of cy/pre ratio was successful. The abundance of cy17:0 and cy19:0 was high, as expected for the Paracoccus genus, since fatty acids with a cyclopropyl ring are typical for Gram-negative bacteria [12]. Increased transformation of monounsaturated fatty acids into cyclopropyl ones, known in Gram-negative bacteria, is observed upon transition of the culture into a stationary growth-phase as a response to nutrient insufficiencies and also stresses [11]. The relationship between OD600 of the input biomass and cy/pre ratio is depicted in Figure 1. While at day 1 after encapsulation there is a slightly increasing but very unclear relationship, at day 7 the cy/pre ratio shows a clear logarithmic dependence on bacterial concentration. This indicates a higher nutrition stress of higher bacterial concentrations. In order to evaluate whether the increased value of cy/pre indicator indeed indicates nutrition stress, P. denitrificans was encapsulated with three variants of media used for bacterial suspension dilution and encapsulation, i.e., phosphate buffer (imitating complete nutrient insufficiency, gels C), BSM (imitating lack of C-source but availability of mineral nutrients, gels B), and LB (imitating nutrients excess, gels A2). The values of cy/pre indicator increased in the order LB < BSM < phosphate buffer ( Figure 2). 
Average values for bacteria encapsulated in BSM and phosphate buffer were significantly higher than for LB (t-test, pair comparisons, α = 0.05); however, they were mutually comparable. For P. denitrificans, and likely for other bacteria, this result confirms the applicability of the cy/pre indicator for the detection of nutrient availability, especially of the C-source. Figure 2. Values of the cy/pre indicator as a function of the three different media used for suspension of encapsulated P. denitrificans, determined at day 1 after encapsulation. Color indicates significant differences based on pair comparisons (t-test, α = 0.05). Error bars show 95%-confidence intervals.

Applications in Biosensor Construction

Sol-gel encapsulated microorganisms can find many biotechnological applications such as biocatalysis [26][27][28], production of hydrogen [29] or secondary metabolites [30,31]. Since such matrices, especially at low cell concentrations, are translucent, one of the promising applications is the preparation of recognition elements for optical biosensors with encapsulated bioluminescent and fluorescent bioreporters [8,32,33]. A bioreporter response requires intact, metabolically active microbial cells and usually also a supply of nutrients and oxygen. Failure to fulfill these conditions presents a risk of a false negative response. In such a case, PLFA analyses can provide useful information about the amount of living biomass as well as the nutrition stress (cy/pre ratio). The PLFA profile can be obtained in ~2 days, far slower than biosensor responses (minutes to hours), which disqualifies the method for routine confirmation of negative results. Nevertheless, PLFA data can be very useful in the design phase of the bioreporter, especially during the optimization of encapsulation conditions and the long-term verification of its function and stability.

Generalization

The obtained results demonstrate the applicability of PLFA profiles for the assessment of the biomass and physiology of bacteria encapsulated in rigid inorganic silica matrices. Together with our previous study focused on bacteria encapsulated in polyvinyl alcohol [15], this indicates a wider applicability of this approach for the monitoring of immobilized bacteria. Further verification with a wider spectrum of microorganisms, matrices and encapsulation procedures is, however, required.

Conclusions

This study demonstrated the use of phospholipid fatty acid (PLFA) analysis for estimating the amount of living biomass of Paracoccus denitrificans encapsulated in a silica matrix prepared by the sol-gel route from prepolymerized tetramethoxysilane. In addition, the PLFA profile enabled estimation of the nutrition stress of the encapsulated bacteria via the cy/pre indicator. Taking into account the previous study on polyvinyl alcohol-encapsulated bacteria, the results indicate a wider applicability of PLFA profiling for the assessment of encapsulated microorganisms.
2,683.8
2015-02-01T00:00:00.000
[ "Environmental Science", "Materials Science" ]
Application of strongly focused pulsed electron beam for the reaction wheels balancing

In this work, the possibility of removing material with a strongly focused pulsed electron beam was investigated. The optimal mode for flywheel balancing was found: a power density of 1.6 MW/cm² and a pulse duration of 0.65 s. At these parameters the evaporation rate is equal to 11 g/(s·cm²). It is possible to vary the amount of removed material from 1 to 100 mg, which is sufficient to balance a flywheel. It is found that treatment by an electron beam does not change the material structure.

Introduction

One of the main satellite systems is the orientation system, which stabilizes the position of its axes relative to specified directions. The attitude control system includes reaction wheels, which allow controlling the satellite motion around its center of mass. A wheel rotates at high speed and thereby creates internal moments that allow changing the angular position of the satellite relative to the base reference system without changing the center of mass. Usually three wheels are installed in a satellite, with their axes aligned with the satellite's major axes of inertia. Fig. 1 shows the structure of the reaction wheels. Figure 1. The construction of reaction wheels: 1 - ring of the wheel, 2 - supports, 3 - electric motor, 4 - base on which the rotor is fixed, 5 - protective cover, 6 - spindle. In the reaction wheel manufacturing process there are a number of problems, one of which is its mass unbalance. This problem can lead to the satellite deviating from its orbit. Currently, there are several techniques for flywheel unbalance correction: the mechanical technique and treatment by concentrated energy fluxes (precision work). The first method involves drilling holes in the flywheel and, if necessary, balancing it by placing weights in the form of small bolts inside the holes (Fig. 2). Such a laborious technique is costly in time and money. Therefore, there is a need to develop other flywheel balancing methods, for example treatment by concentrated energy fluxes, which includes laser and electron beam technologies. In this case the main mechanism of material removal is evaporation, because the metal temperature reaches high values under the influence of high-energy particles. This technique is more precise and can be more effective than the mechanical method. Nowadays, laser balancing technology is being developed, but there are some disadvantages such as a high energy reflection coefficient (~80%) and liquid melt ejection. Electron beam application allows avoiding these problems, because the energy reflection coefficient of the electron beam is much lower (~15%) than for the laser beam, and consequently the process efficiency is higher. By varying the electron beam parameters, liquid melt ejection can be avoided. In the present study the electron beam treatment is suggested as the method for flywheel balancing. Application of a strongly focused pulsed electron beam allows local and short-time metal treatment.

Electron beam treatment

Electron-beam evaporation was carried out on the EBU-0.5-6 electron-beam unit with a plasma cathode (Fig. 3). This unit is fitted with a two-lens focusing system which reduces the beam divergence angle. The maximum diameter of the beam can reach 0.8 mm, considering that the distance between the focusing system and the sample can be ~0.5 m. Such a system allows producing a beam with a diameter of less than 300 microns [1].
The investigated samples were made from steel AISI 321. Their thickness was 5 mm and their diameter was 20 mm. During the experiment the acceleration voltage was 28 kV and the working pressure was 10⁻¹ Pa. The distance to the focusing coil was 200 mm, which corresponds to a beam diameter of 0.32 mm [1]. Parameters such as beam current, pulse duration and number of pulses were varied to determine the optimal mode of evaporation.

Methodology

The amount of removed material was measured using the PA413 laboratory analytical weighing scales OHAUS Pioneer. The macrostructure was investigated with the TRIO-1044 microscope. The hardness of the samples was measured by the Vickers method with a load of 50 grams using the KB 30S Pruftechnik GmbH hardness tester.

Results and discussion

Experiments show that the minimum beam current at which evaporation occurred was 35 mA; the maximum was 52 mA. Above this value there are negative effects, such as liquid melt ejection and expenditure of beam energy on thermal conduction, i.e., overheating of the samples (Fig. 4). The evaporation rate at different currents (35, 40, 52 mA) was investigated (Fig. 5). Almost all the energy was used to heat the sample when the accelerating voltage was 28 kV (the secondary radiation yield is very small) and the power density was 10⁶ W/cm². Fig. 5 shows that the material removal rate reaches its maximum at a pulse duration of ~0.15...0.3 s. However, at these pulse durations the evaporation process is non-stationary and the curves are periodic (Fig. 4). This can be explained by auto-oscillations that are present during electron beam treatment. When the pulse duration is 1 s, a large amount of energy reaches the sample and heats it. Vapor forms near the sample surface, which leads to a loss of beam energy, and the evaporation process becomes energy-intensive. From these observations it can be seen that the optimal pulse duration is 0.5...0.65 s. Besides, it can be seen from Fig. 5 that the evaporation rate at 52 mA almost equals the rate at 40 mA; therefore the more energy-efficient beam current is 40 mA. In addition, it was necessary to investigate the macrostructure and mechanical properties of the treated samples, because the presence of defects can lead to damage of the reaction wheel during operation. The hardness investigation shows that electron beam treatment does not affect the mechanical properties of the material (Fig. 6). It was shown that the depth of the beam's thermal effect significantly depends on the pulse duration. When it was 1 s, the depth of the molten zone was 5 mm (the sample thickness), but at 0.65 s the maximum width of the molten zone was 6 mm and the depth was 4.4 mm (Fig. 7). Such dimensions are negligible for flywheel treatment, because the flywheel diameter varies from 30 to 35 cm. It can also be seen that there are no bulk defects (pores and cracks). From these results it may be concluded that treatment by a strongly focused pulsed electron beam does not lead to the appearance of defects.

Conclusions

It was shown that a strongly focused pulsed electron beam can be applied as a method of gyroscope flywheel balancing. The optimal mode for material removal from the steel surface includes the following parameters: beam power density of 1.6·10⁶ W/cm², pulse duration of 0.65 s, evaporation rate of 11 g/(s·cm²). The amount of removed metal may be varied from 1 to 100 mg per cycle.
The investigation of the samples' macrostructure and hardness showed that the electron beam treatment does not change the metal structure significantly.
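As a rough consistency check of the figures reported above (0.32 mm beam diameter, 11 g·s⁻¹·cm⁻² evaporation rate, 0.65 s pulses, 1-100 mg of removed material), the following back-of-the-envelope sketch estimates the mass removed per pulse; it ignores melting, heat conduction and transient effects and is not the authors' calculation.

```python
import math

beam_diameter_cm = 0.032        # 0.32 mm focused spot
evap_rate = 11.0                # g s^-1 cm^-2, optimal mode
pulse_duration = 0.65           # s

spot_area = math.pi * (beam_diameter_cm / 2.0) ** 2            # cm^2
mass_per_pulse_mg = evap_rate * spot_area * pulse_duration * 1000.0

print(f"Spot area          : {spot_area:.2e} cm^2")
print(f"Mass removed/pulse : {mass_per_pulse_mg:.1f} mg")

# Number of pulses needed for a given unbalance mass (1-100 mg range in the text).
for target_mg in (1.0, 10.0, 100.0):
    pulses = math.ceil(target_mg / mass_per_pulse_mg)
    print(f"~{pulses:3d} pulse(s) for {target_mg:g} mg")
```

With these inputs the estimate is a few milligrams per pulse, so the 1-100 mg correction range quoted above corresponds to roughly one to a few tens of pulses.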
1,642.2
2016-11-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Improved approximate Rips filtrations with shifted integer lattices and cubical complexes

Rips complexes are important structures for analyzing topological features of metric spaces. Unfortunately, generating these complexes is expensive because of a combinatorial explosion in the complex size. For n points in R^d, we present a scheme to construct a 2-approximation of the filtration of the Rips complex in the L_∞-norm, which extends to a 2d^{0.25}-approximation in the Euclidean case. The k-skeleton of the resulting approximation has a total size of n·2^{O(d log k + d)}. The scheme is based on the integer lattice and on simplicial complexes based on the barycentric subdivision of the d-cube. We extend our result to use cubical complexes in place of simplicial complexes by introducing cubical maps between complexes. We get the same approximation guarantee as the simplicial case, while reducing the total size of the approximation to only n·2^{O(d)} (cubical) cells. There are two novel techniques that we use in this paper. The first is the use of acyclic carriers for proving our approximation result. In our application, these are maps which relate the Rips complex and the approximation in a relatively simple manner and greatly reduce the complexity of showing the approximation guarantee. The second technique is what we refer to as scale balancing, which is a simple trick to improve the approximation ratio under certain conditions.

Introduction

Context. Persistent homology (Carlsson 2009; Edelsbrunner and Harer 2010; Edelsbrunner et al. 2002) is a technique to analyze data sets using topological invariants. The idea is to build a multi-scale representation of data sets and to track its homological changes across the scales. A standard construction for the important case of point clouds in Euclidean space is the Vietoris-Rips complex (usually abbreviated as simply the Rips complex): for a scale parameter α ≥ 0, it is the collection of all subsets of points with diameter at most α. When α increases from 0 to ∞, the Rips complexes form a filtration, an increasing sequence of nested simplicial complexes whose homological changes can be computed and represented in terms of a barcode.
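To make the definition concrete, here is a small Python sketch that builds the k-skeleton of a Vietoris-Rips complex by brute force: it lists every subset of size at most k+1 whose diameter is at most α. This is not the approximation scheme of the paper, just the naive construction whose size blow-up motivates it; the point set and parameters are arbitrary examples.

```python
from itertools import combinations
import numpy as np

def rips_skeleton(points: np.ndarray, alpha: float, k: int):
    """All simplices of dimension <= k whose diameter is at most alpha (Euclidean)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]                     # 0-simplices
    for dim in range(1, k + 1):
        for sigma in combinations(range(n), dim + 1):
            # A subset joins the complex once all pairwise distances are <= alpha.
            if all(dist[i, j] <= alpha for i, j in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [3.0, 0.0]])
for alpha in (1.1, 3.5):
    print(f"alpha = {alpha}: {len(rips_skeleton(pts, alpha, k=2))} simplices")
```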
The computational drawback of Rips complexes is their sheer size: the k-skeleton of a Rips complex (that is, where only subsets of size at most k + 1 are considered) for n points consists of (n choose k+1) simplices, because every (k+1)-subset joins the complex for a sufficiently large scale parameter. This size bound makes barcode computations for large point clouds infeasible even for low-dimensional homological features. This difficulty motivates the question of what we can say about the barcode of the Rips filtration without explicitly constructing all of its simplices. We address this question using approximation techniques. The space of barcodes forms a metric space: two barcodes are close if similar homological features occur on roughly the same range of scales. More precisely, the bottleneck distance is used as a distance metric between barcodes. The first approximation scheme by Sheehy (2013) constructs a (1 + ε)-approximation of the k-skeleton of the Rips filtration using only n(1/ε)^{O(λk)} simplices for arbitrary finite metric spaces, where λ is the doubling dimension of the metric. Further approximation techniques for Rips complexes (Dey et al. 2014) and the closely related Čech complexes (Botnan and Spreemann 2015; Cavanna et al. 2015; Kerber and Sharathkumar 2013) have been derived subsequently, all with comparable size bounds. More recently, we constructed an approximation scheme (Choudhary et al. 2019) for the Čech filtrations of n points in R^d that had size n(1/ε)^{O(d)} 2^{O(d log d + dk)} for the k-skeleton, improving the size bound from previous work. In Choudhary et al. (2017b), we constructed an approximation scheme for the Rips filtration in Euclidean space that yields a worse approximation factor of only O(d), but uses only n·2^{O(d log k + d)} simplices. In Choudhary et al. (2017b), we also show a lower bound result on the size of approximations: for any ε < 1/log^{1+c} n with some constant c ∈ (0, 1), any ε-approximate filtration has size n^{Ω(log log n)}. There has also been work on using cubical complexes to compute persistent homology, such as in Wagner et al. (2012). Cubical complexes are typically smaller than their simplicial counterparts, simply because they avoid triangulations. However, to our knowledge, there has been no attempt to utilize them in computing approximations of filtrations. Also, while there are efficient methods to compute persistence for simplicial complexes connected with simplicial maps (Dey et al. 2014; Schreiber 2017), we are not aware of such counterparts for cubical complexes.

Our contributions. For the Rips filtration of n points in R^d with distances taken in the L_∞-norm, we present a 2-approximation whose k-skeleton has size at most n·6^{d−1} times an explicit function of k and d involving Stirling numbers of the second kind; in particular, the total size of the k-skeleton is n·2^{O(d log k + d)}. This translates to a 2d^{0.25}-approximation of the Rips filtration in the Euclidean metric and hence improves the asymptotic approximation quality of our previous approach (Choudhary et al. 2017b) with the same size bound. Our scheme gives the best size guarantee over all previous approaches. On a high level, our approach follows a straightforward approximation scheme: given a scaled and appropriately shifted integer grid on R^d, we identify those grid points that are close to the input points and build an approximation complex using these grid points. The challenge lies in how to connect these grid points into a simplicial complex such that close-by grid points are connected, while avoiding too many connections to keep the size small.
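The sketch below illustrates only the first, elementary step of that outline: snapping the input points to a scaled, shifted integer lattice and recording which lattice points are occupied. The shift and the scales shown are simplified placeholders; how the occupied lattice points are then connected (via active faces, barycentric subdivision, or cubical cells) is the subject of the following sections and is not reproduced here.

```python
import numpy as np

def occupied_lattice_points(points: np.ndarray, scale: float, shift: np.ndarray):
    """Lattice points of the grid scale * (Z^d + shift) that are nearest
    (coordinate-wise rounding) to at least one input point."""
    snapped = np.rint(points / scale - shift) + shift
    return {tuple(p) for p in (snapped * scale)}

rng = np.random.default_rng(1)
P = rng.uniform(0.0, 10.0, size=(1000, 3))          # 1000 points in R^3
shift = rng.uniform(0.0, 1.0, size=3)                # random shift of the lattice

for scale in (0.5, 1.0, 2.0):
    grid_pts = occupied_lattice_points(P, scale, shift)
    print(f"scale {scale:>3}: {len(grid_pts):4d} occupied lattice points "
          f"(vs {len(P)} input points)")
```

The point of the experiment is that, as the scale grows, many input points share the same occupied lattice point, so the number of vertices of the approximation at coarse scales is much smaller than n.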
Our approach first selects a set of active faces in the cubical complex defined over the grid, and defines the approximation complex using the barycentric subdivision of this cubical complex. We also describe an output-sensitive algorithm to compute our approximation. By randomizing the aforementioned shifts of the grids, we obtain a worst-case running time of n·2^{O(d)} log Δ + 2^{O(d)} M in expectation, where Δ is the spread of the point set (that is, the ratio of the diameter to the closest distance of two points) and M is the size of the approximation. Additionally, this paper makes the following technical contributions:
- We follow the standard approach of defining a sequence of approximation complexes and establishing an interleaving between the Rips filtration and the approximation. We realize our interleaving using chain maps connecting a Rips complex at scale α to an approximation complex at scale cα, and vice versa, with c ≥ 1 being the approximation factor. Previous approaches (Choudhary et al. 2017b; Dey et al. 2014; Sheehy 2013) used simplicial maps for the interleaving, which induce an elementary form of chain maps and are therefore more restrictive. The explicit construction of such maps can be a non-trivial task. The novelty of our approach is that we avoid this construction by the usage of acyclic carriers (Munkres 1984). In short, carriers are maps that assign subcomplexes to subcomplexes under some mild extra conditions. While they are more flexible, they still certify the existence of suitable chain maps, as we exemplify in Sect. 2. We believe that this technique is of general interest for the construction of approximations of cell complexes.
- We exploit a simple trick that we call scale balancing to improve the quality of approximation schemes. In short, if the aforementioned interleaving maps from and to the Rips filtration do not increase the scale parameter by the same amount, one can simply multiply the scale parameter of the approximation by a constant. Concretely, given maps interleaving the Rips complex R_α and the approximation complex X_α, we can define X'_α := X_{α/√c} and obtain maps which improve the interleaving from c to √c. While it has been observed that the same trick can be used for improving the worst-case distance between Rips and Čech filtrations, our work seems to be the first to make use of it in the context of approximations.
- We extend our approximation scheme to use cubical complexes instead of simplicial complexes, thereby achieving a marked reduction in size complexity. To connect the cubical complexes at different scales, we introduce the notion of cubical maps, which is a simple extension of simplicial maps to the cubical case. While we do not know of an algorithm that can compute persistence for the case of cubical complexes with cubical maps, we believe that this is a first step towards advocating the use of cubical complexes as approximating structures.
Our technique can be combined with dimension reduction techniques in the same way as in Choudhary et al. (2017b) (see Theorems 19, 21, and 22 therein), with improved logarithmic factors. We state the main results in the paper, while omitting the technical details.

Updates from the conference version. An earlier version of this paper appeared at the 25th European Symposium on Algorithms (Choudhary et al. 2017a). In that version, we achieved a 3√2-approximation of the L_∞ Rips filtration and, correspondingly, a 3√2·d^{0.25}-approximation of the L_2 case.
In this version, we improve the weak interleaving of Choudhary et al. (2017a) to a strong interleaving to get improved approximation factors. We expand upon the details of scale balancing, among other proofs that were missing from the conference version. We add the case of cubical complexes in this version. There is a subtle yet important distinction between the approximation complexes used in the conference version and the current result. In the conference version, our simplicial complex was built using only active faces, while the current version uses both active and secondary faces (please see Sect. 4 for definitions). This makes it easier to relate the simplicial and the cubical complexes in the current version. On the other hand the complexes are different, hence the associated proofs have been adapted accordingly. Outline. We start by explaining the relevant topological concepts in Sect. 2. We give details of the integer grids that we use in Sect. 3. In Sect. 4 we present our approximation scheme that uses the barycentric subdivision, and present the computational aspects in Sect. 5. The extension to cubical complexes is presented in Sect. 6. We discuss practical aspects of our scheme and conclude in Sect. 7. Some details of the strong interleaving from Sect. 4 are deferred to Appendix A. Preliminaries We briefly review the essential topological concepts needed. More details are available in standard references (see Bubenik et al. 2015;Chazal et al. 2009;Edelsbrunner and Harer 2010;Hatcher 2002;Munkres 1984). Simplicial complexes. A simplicial complex K on a finite set of elements S is a collection of subsets {σ ⊆ S} called simplices such that each subset τ ⊂ σ is also in K . The dimension of a simplex σ ∈ K is k := |σ | − 1, in which case σ is called a k-simplex. A simplex τ is a sub-simplex of σ if τ ⊆ σ . We remark that, commonly a sub-simplex is called a "face" of a simplex, but we reserve the word "face" for a different structure. For the same reason, we do not introduce the common notation of of "vertices" and "edges" of simplicial complexes, but rather refer to 0-and 1-simplices throughout. The k-skeleton of K consists of all simplices of K whose dimension is at most k. For instance, the 1-skeleton of K is a graph defined by its 0-simplices and 1-simplices. Given a point set P ⊂ R d and a real number α ≥ 0, the (Vietoris-)Rips complex on P at scale α consists of all simplices σ = ( p 0 , · · · , p k ) ⊆ P such that diam(σ ) ≤ α, where diam denotes the diameter. In this work, we write R α for the Rips complex at scale 2α with the Euclidean metric, and R ∞ α when using the metric of the L ∞ -norm. In either way, a Rips complex is an example of a flag complex, which means that whenever a set { p 0 , · · · , p k } ⊆ P has the property that every 1-simplex { p i , p j } is in the complex, then the k-simplex { p 0 , · · · , p k } is also in the complex. A related complex is theČech complex of P at scale α, which consists of simplices of P for which the radius of the minimum enclosing ball is at most α. We do not studyČech complexes in this paper, but we mention them briefly while showing a connection with the Rips complex later in this section. Let L be a simplicial complex. Letφ be a map which assigns a vertex of L to each vertex of K . A simplicial map is a map ϕ : K → L induced by a vertex mapφ, such that for every simplex { p 0 , · · · , p k } in K , the set {φ( p 0 ), · · · ,φ( p k )} is a simplex of L. 
For K a subcomplex of K , the inclusion map inc : K → K is an example of a simplicial map. A simplicial map is completely determined by its action on the 0-simplices of K . Chain complexes. A chain complex C * = (C p , ∂ p ) with p ∈ Z is a collection of abelian groups C p and homomorphisms ∂ p : C p → C p−1 such that ∂ p−1 • ∂ p = 0. A simplicial complex K gives rise to a chain complex C * (K ) for a fixed base field F: define C p for p ≥ 0 as the set of formal linear combinations of p-simplices in K over F, and C −1 := F. The boundary of a k-simplex with k ≥ 1 is the (signed) sum of its sub-simplices of co-dimension one 3 ; the boundary of a 0-simplex is simply set to 1. The homomorphisms ∂ p are then defined as the linear extensions of this boundary operator. Note that C * (K ) is sometimes called augmented chain complex of K , where the augmentation refers to the addition of the non-trivial group C −1 . A chain map φ : C * → D * between chain complexes C * = (C p , ∂ p ) and D * = (D p , ∂ p ) is a collection of group homomorphisms φ p : For simplicial complexes K and L, we call a chain map φ : C * (K ) → C * (L) augmentation-preserving if φ −1 is the identity. A simplicial map ϕ : K → L between simplicial complexes induces an augmentation-preserving chain mapφ : C * (K ) → C * (L) between the corresponding chain complexes. This construction is functorial, meaning that for ϕ the identity function on a simplicial complex K ,φ is the identity function on C * (K ), and for composable simplicial maps ϕ, ϕ , we have that ϕ • ϕ =φ •φ . Homology. The p-th homology group H p (C * ) of a chain complex is defined as ker ∂ p /im ∂ p+1 . The p-th homology group of a simplicial complex K , H p (K ), is the p-th homology group of its induced chain complex C * (K ). Note that this definition is commonly referred to as reduced homology, but we ignore this distinction and consider reduced homology throughout. H p (C * ) is an F-vector space because we have chosen our base ring F as a field. Intuitively, when the chain complex is generated from a simplicial complex, the dimension of the p-th homology group counts the number of p-dimensional holes in the complex. We write H (C * ) for the direct sum of all H p (C * ) for p ≥ 0. A chain map φ : C * → D * induces a linear map φ * : H (C * ) → H (D * ) between the homology groups. Again, this construction is functorial, meaning that it maps identity maps to identity maps, and it is compatible with compositions. Acyclic carriers. We call a simplicial complex K acyclic, if K is connected and all homology groups H p (K ) are trivial. For simplicial complexes K and L, an acyclic carrier is a map that assigns to each simplex σ in K , a non-empty acyclic subcomplex (σ ) ⊆ L, and whenever τ is a sub-simplex of σ , then (τ ) ⊆ (σ ). We say that a chain c ∈ C p (K ) is carried by a subcomplex K , if c takes value 0 except for p-simplices in K . A chain map φ : C * (K ) → C * (L) is carried by , if for each simplex σ ∈ K , φ(σ ) is carried by (σ ). We state the acyclic carrier theorem (Munkres 1984, Thm 13.3), adapted to our notation: Theorem 1 Let : K → L be an acyclic carrier. Then, -There exists an augmentation-preserving chain map φ : C * (K ) → C * (L) carried by . -If two augmentation-preserving chain maps φ 1 , φ 2 : C * (K ) → C * (L) are both carried by , then φ * 1 = φ * 2 . 
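The chain-complex and homology definitions above can be made concrete with a few lines of code. The following is a minimal, illustrative sketch (not part of the paper's algorithm): it builds the boundary matrix of a tiny simplicial complex over the field GF(2) and reads off Betti numbers as rank differences. Note that it computes ordinary (non-reduced) homology, whereas the text works with reduced homology, which lowers the 0-th Betti number of a non-empty complex by one; all names in the snippet are illustrative.

```python
import itertools
import numpy as np

def boundary_matrix(k_simplices, km1_simplices):
    """Boundary matrix over GF(2): entry (i, j) = 1 iff the i-th (k-1)-simplex
    is a facet (co-dimension-one sub-simplex) of the j-th k-simplex."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)), dtype=np.uint8)
    for j, s in enumerate(k_simplices):
        for facet in itertools.combinations(s, len(s) - 1):
            D[index[facet], j] = 1
    return D

def rank_gf2(M):
    """Rank over GF(2) via Gaussian elimination (XOR row reduction)."""
    M = M.copy()
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

# Hollow triangle: three 0-simplices, three 1-simplices, no 2-simplex.
V = [(0,), (1,), (2,)]
E = [(0, 1), (0, 2), (1, 2)]
d1 = boundary_matrix(E, V)        # boundary operator C_1 -> C_0
# Non-reduced Betti numbers: dim ker(d_p) - rank(d_{p+1}).
betti0 = len(V) - rank_gf2(d1)    # d_0 = 0, so dim ker(d_0) = |V|
betti1 = len(E) - rank_gf2(d1)    # no 2-simplices, so rank(d_2) = 0
print(betti0, betti1)             # one connected component, one loop: 1 1
```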
4 We remark that "augmentation-preserving" is crucial in the statement: without it, the trivial chain map (that maps everything to 0) turns the first statement trivial and easily leads to a counter-example for the second claim. Filtrations and towers. Let I ⊆ R be a set of real values which we refer to as scales. A filtration is a collection of simplicial complexes (K α ) α∈I such that K α ⊆ K α for all α ≤ α ∈ I . For instance, (R α ) α≥0 is a filtration which we call the Rips filtration. A (simplicial) tower is a sequence (K α ) α∈J of simplicial complexes with J being a discrete set (for instance J = {2 k | k ∈ Z}), together with simplicial maps ϕ α : K α → K α between complexes at consecutive scales. For instance, the Rips filtration can be turned into a tower by restricting to a discrete range of scales, and using the inclusion maps as ϕ. The approximation constructed in this paper will be another example of a tower. We say that a simplex σ is included in the tower at scale α , if σ is not in the image of the map ϕ α : K α → K α , where α is the scale preceding α in the tower. The size of a tower is the number of simplices included over all scales. If a tower arises from a filtration, its size is simply the size of the largest complex in the filtration (or infinite, if no such complex exists). However, this is not true in general for simplicial towers, because simplices can collapse in the tower and the size of the complex at a given scale may not take into account the collapsed simplices which were included at earlier scales in the tower. Barcodes and Interleavings. A collection of vector spaces (V α ) α∈I connected with linear maps λ α 1 ,α 2 : V α 1 → V α 2 is called a persistence module, if λ α,α is the identity on V α and λ α 2 ,α 3 • λ α 1 ,α 2 = λ α 1 ,α 3 for all α 1 ≤ α 2 ≤ α 3 ∈ I for the index set I . We generate persistence modules using the previous concepts. Given a simplicial tower (K α ) α∈I , we generate a sequence of chain complexes (C * (K α )) α∈I . By functoriality, the simplicial maps ϕ of the tower give rise to chain maps ϕ between these chain complexes. Using functoriality of homology, we obtain a sequence (H (K α )) α∈I of vector spaces with linear maps ϕ * , forming a persistence module. The same construction applies to filtrations as a special case. Persistence modules admit a decomposition into a collection of intervals of the form [α, β] (with α, β ∈ I ), called the barcode, subject to certain tameness conditions. The barcode of a persistence module characterizes the module uniquely up to isomorphism. If the persistence module is generated by a simplicial complex, an interval [α, β] in the barcode corresponds to a homological feature (a "hole") that comes into existence at complex K α and persists until it disappears at K β . Two persistence modules (V α ) α∈I and (W α ) α∈I with linear maps φ ·,· and ψ ·,· are said to be weakly (multiplicatively) c-interleaved with c ≥ 1, if there exist linear maps γ α : V α → W cα and δ α : W α → V cα , called interleaving maps, such that the diagram (1) commutes, that is, ψ = γ •δ and φ = δ •γ for all {. . . , α/c 2 , α/c, α, cα, . . . } ∈ I (we have skipped the subscripts of the maps for readability). In such a case, the barcodes of the two modules are 3c-approximations of each other in the sense of Chazal et al. (2009). We say that two towers are c-approximations of each other if their persistence modules are c-approximations. 
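To illustrate the notions of a filtration and a tower, the following sketch builds the L∞ Rips complexes of a small random point set at a discrete set of scales {2^s}. Since these complexes are nested, the connecting maps of the resulting tower are simply inclusions. The scaling convention follows the text (R_α contains the simplices of diameter at most 2α); the brute-force enumeration and all names are illustrative only and would not scale to large inputs.

```python
import itertools
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rips_skeleton(points, alpha, max_dim=2, metric="chebyshev"):
    """Simplices of the L-infinity Rips complex at scale alpha (diameter <= 2*alpha,
    the paper's scaling convention), up to dimension max_dim. Brute force."""
    D = squareform(pdist(points, metric=metric))
    n = len(points)
    simplices = [(i,) for i in range(n)]
    for k in range(1, max_dim + 1):
        for sigma in itertools.combinations(range(n), k + 1):
            if max(D[i, j] for i, j in itertools.combinations(sigma, 2)) <= 2 * alpha:
                simplices.append(sigma)
    return simplices

# A tower obtained from the Rips filtration by restricting to scales 2^s;
# the connecting simplicial maps are the inclusions of nested complexes.
rng = np.random.default_rng(0)
P = rng.random((8, 3))
tower = {2.0 ** s: rips_skeleton(P, 2.0 ** s) for s in range(-4, 1)}
for a in sorted(tower):
    print(f"scale {a:6.4f}: {len(tower[a])} simplices")
```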
Under the more stringent conditions of strong interleaving, the approximation ratio can be improved. Two persistence modules (V α ) α≥0 and (W α ) α≥0 with respective linear maps φ ·,· and ψ ·,· are said to be (multiplicatively) strongly c-interleaved if there exist a pair of families of linear maps γ α : V α → W cα and δ α : W α → V cα for c > 0, such that Diagram (2) commutes for all 0 ≤ α ≤ α (the subscripts of the maps are excluded for readability). In such a case, the persistence barcodes of the two modules are said to be c-approximations of each other in the sense of Chazal et al. (2009). Finally, we mention a special case that relates equivalent persistence modules (Carlsson and Zomorodian 2005;Goodman et al. 2017). Two persistence modules V = (V α ) α∈I and W = (W α ) α∈I that are connected through linear maps φ, ψ respectively are isomorphic if there exists an isomorphism f α : V α → W α for each α ∈ I for which the following diagram commutes for all α ≤ β ∈ I : Isomorphic persistence modules have identical persistence barcodes. Scale balancing. Let V = (V α ) α∈I and W = (W α ) α∈I be two persistence modules with linear maps f v , f w , respectively. Let there be linear maps φ : V α/ε 1 → W α and ψ : W α → V αε 2 for 1 ≤ ε 1 , ε 2 such that all α, α/ε 1 , αε 2 ∈ I . Suppose that the following diagram commutes, for all α ∈ I . We define a new vector space V cα := V α , where c = ε 1 ε 2 and cα ∈ I . This gives rise to a new persistence module, V = (V cα ) α∈I . The maps φ and ψ can then be interpreted as φ : Then, Diagram (4) can be re-interpreted as . V and V have the same barcode up to a scaling factor. This scaling trick also works when V and W are strongly interleaved. If we have the following commutative diagrams: (where we have skipped the maps for readability): then V and W are max(ε 1 , ε 2 )-approximations of each other. By defining V as before, the following diagrams commute for d = cε 2 = √ ε 1 ε 2 , so we can improve a max(ε 1 , ε 2 )-approximation to an √ ε 1 ε 2 -approximation. We end the section by discussing a basic but important relation betweenČech and Rips filtrations. It is well-known that for any α ≥ 0, C α ⊆ R α ⊆ C √ 2α (Edelsbrunner and Harer 2010). This gives a strong interleaving between the towers (C α ) α≥0 and (R α ) α≥0 with ε 1 = 1 and ε 2 = √ 2. Applying the scale balancing technique, we get that Lemma 1 The scaledČech persistence module (H (C 4 √ 2α )) α≥0 and the Rips persistence module (H (R α )) α≥0 are 4 √ 2-approximations of each other. Shifted integer lattices In this section, we take a look at simple modifications of the integer lattice. We denote by I := {α s := λ2 s | s ∈ Z} with λ > 0, a discrete set of scales. For each scale in I , we define grids which are scaled and translated (shifted) versions of the integer lattice. Definition 1 (Scaled and shifted grids) For each scale α s ∈ I , we define the scaled and shifted grid G α s inductively as: -For s = 0, G α s is simply the scaled integer grid λZ d , where each basis vector has been scaled by λ. where the signs of the components of the last vector are chosen independently and uniformly at random (and the choice is independent for each s). where the last vector is chosen as in the case of s ≥ 0. Equations (8) and (9) are consistent at s = 0. A simple example of the above construction is the sequence of grids with G α s := α s Z d for even s, and G α s := α s Z d + α s−1 2 (1, · · · , 1) for odd s. Next, we motivate the shifting of the grids. 
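Before turning to that motivation, here is a small numerical sketch of the deterministic example of Definition 1 stated above (G_{α_s} = α_s Z^d for even s, shifted by (α_{s−1}/2)(1, ..., 1) for odd s). The snap helper, which rounds a point to its nearest grid vertex, is an illustrative assumption; the check verifies the property that motivates the shifts, namely that every vertex of the finer grid lies strictly inside the cubical Voronoi cell of a unique vertex of the next coarser grid.

```python
import numpy as np

lam, d = 1.0, 2

def grid_offset(s):
    """Offset of the simple deterministic example from the text:
    G_{alpha_s} = alpha_s * Z^d for even s, shifted by (alpha_{s-1}/2)(1,...,1) for odd s."""
    alpha_prev = lam * 2.0 ** (s - 1)
    return 0.0 if s % 2 == 0 else alpha_prev / 2.0

def snap(x, s):
    """Nearest vertex of G_{alpha_s} to x (coordinate-wise rounding; illustrative helper)."""
    alpha, off = lam * 2.0 ** s, grid_offset(s)
    return np.round((x - off) / alpha) * alpha + off

# Every vertex of G_{alpha_s} lies strictly inside the cubical Voronoi cell
# of a unique vertex of G_{alpha_{s+1}} (distance strictly below alpha_{s+1}/2).
for s in range(0, 5):
    alpha, alpha_next = lam * 2.0 ** s, lam * 2.0 ** (s + 1)
    ks = np.arange(-4, 5)
    verts = grid_offset(s) + alpha * np.stack(np.meshgrid(*[ks] * d), -1).reshape(-1, d)
    dist = np.abs(verts - snap(verts, s + 1)).max(axis=1)
    assert dist.max() < alpha_next / 2
print("each fine-grid vertex snaps to a unique coarser vertex, strictly inside its cell")
```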
Let Vor G s (x) denote the Voronoi cell of any point x ∈ G s with respect to the point set G s . It is clear that the Voronoi cell is a cube of side length α s centered at x. The shifting of the grids ensures that each x ∈ G α s lies in the Voronoi region of a unique y ∈ G α s+1 . Using an elementary calculation, we show a stronger statement: Proof Without loss of generality, we can assume that α s = 2 and x is the origin, using an appropriate translation and scaling. Also, we assume for the sake of simplicity that G α s+1 = 2G α s + (1, · · · , 1); the proof is analogous for any other translation vector. In that case, it is clear that y The claim follows. For an example look to Fig. 1. Cubical complex of Z d The integer grid Z d naturally defines a cubical complex, where each element is an axis-aligned, k-dimensional cube with 0 ≤ k ≤ d. To define it formally, let denote the set of all integer translates of faces of the unit cube [0, 1] d , considered as a convex polytope in R d . We call the elements of faces of Z d . Each face has a dimension k; the 0-faces, or vertices are exactly the points in Z d . The facets of a k-face E are the (k − 1)-faces contained in E. We call a pair of facets of E opposite facets, if they are disjoint. Naturally, these concepts carry over to scaled and shifted versions of Z d , so we define α s as the cubical complex defined by G α s . We define a map g α s : α s → α s+1 as follows: for vertices of α s , we assign to }; the next lemma shows that this is a well-defined map. In this paper, we sometimes call g α s a cubical map, since it is a counterpart of simplicial maps for cubical complexes. Lemma 3 Let f be k-face of α s with vertices -if e 1 , e 2 are any two opposite facets of e, then there exists a pair of opposite facets f 1 , f 2 of f such that g α s ( f 1 ) = e 1 and g α s ( f 2 ) = e 2 . Proof First claim: We prove the first claim by induction on the dimension of faces of G α s . Base case: for vertices, the claim is trivial using Lemma 2. Induction case: let the claim hold true for all (k − 1)-faces of G α s . We show that the claim holds true for all k-faces of G α s . Let f be a k-face of G α s . Let f 1 and f 2 be opposite facets of f , along the m-th coordinate. Let us denote the vertices of f 1 by ( p 1 , · · · , p 2 k−1 ) and those of f 2 by ( p 2 k−1 +1 , · · · , p 2 k ) taken in the same order, that is, p j and p 2 k−1 + j differ in only the m-th coordinate for all 1 ≤ j ≤ 2 k−1 . By definition, all vertices of f 1 share the m-th coordinate, and we denote coordinate of these vertices by z. Then, the m-th coordinate of all vertices of f 2 equals z + α s . Then g α s ( p j ) and g α s ( p 2 k−1 + j ) have the same coordinates, except possibly the m-th coordinate. By induction hypothesis, e 1 = g α s ( f 1 ) and e 2 = g α s ( f 2 ) are two faces of G s+1 . This implies that e 2 is a translate of e 1 along the m-th coordinate. There are two cases: if e 1 and e 2 share the m-th coordinate, then e 1 = e 2 and therefore g α s ( f ) = e 1 = e 2 = e, so the claim follows. On the other hand, if e 1 and e 2 do not share the m-th coordinate, then they are two faces of α s+1 which differ in only one coordinate by α s+1 . So they are opposite facets of a co-dimension one face e of G α s+1 . Using induction, the claim follows. Second claim: We prove the claim by induction over the dimension of e 1 . Base case: e 1 is a vertex. The vertices of f in Voronoi region of e 1 form f 1 . 
Since f is an axis parallel face and the Voronoi region is also axis-parallel, it is immediate that f 1 is a face of f . Assume that the claim is true up to dimension i. For e 1 a face of dimension i + 1, consider opposite facets e a and e b of e. By the induction claim, there exist faces would be common to both e a and e b , a contradiction. If e a is a translate of e b along the m-th coordinate, then f a is also a translate of f b along the same coordinate. Therefore f a and f b are opposite faces of a face f 1 and g α s ( f 1 ) = e 1 . Third claim: Without loss of generality, assume that x 1 is the direction in which e 2 is a translate of e 1 . Using the second claim, let h denote the maximal face of f such that g α s (h) = e 1 . Clearly, h = f , since that would imply g α s ( f ) = e 1 = e, which is a contradiction. Suppose h has dimension less than k − 1. Let h be the facet of f that contains h and has the same x 1 coordinates for all vertices. Then g α s (h ) = e 1 , which contradicts the maximality of h. Therefore, the only possibility is that h is a facet f 1 of f such that g α s ( f 1 ) = e 1 . Let f 2 be the opposite facet of f 1 . From the proof of the first claim, it is easy to see that g α s ( f 2 ) = e 2 . The claim follows. Barycentric subdivision We discuss a special triangulation of The barycentric subdivision of α s , denoted by sd α s , is the (infinite) simplicial complex whose simplices are the flags of α s (Munkres 1984). In particular, the 0-simplices of sd α s are the faces of α s . An equivalent geometric description of sd α s can be obtained by defining the 0-simplices as the barycenters of the faces in sd α s , and introducing a k-simplex between (k + 1) barycenters if the corresponding faces form a flag. For a simple example, see Figs. 2 and 3. It is easy to see that sd α s is a flag complex. Given a face f in α s , we write sd( f ) for the subcomplex of sd α s consisting of all flags that are formed only by faces contained in f . Approximation scheme with simplicial complexes We define our approximation complex for a finite set of points in R d . Recall from Definition 1 that we can define a collection of scaled and shifted integer grids G α s over a collection of scales I := {α s = 2 s | s ∈ Z} in R d . To make the exposition simple, we define our complex in a slightly generalized form. Barycentric spans Fix some s ∈ Z and let V denote any non-empty subset of G α s . Vertex span. We say that a face f ∈ α s is spanned by V , if the set of vertices -is non-empty, and -not contained in any facet of f . Trivially, the vertices of α s which are spanned by V are precisely the points in V . Any face of α s which is not a vertex must contain at least two vertices of V in order to be spanned. We point out that the set of spanned faces of α s is not closed under taking sub-faces. For instance, if V consists of two antipodal points of a d-cube, the only faces spanned by V are the d-cube and the two vertices; all other faces of the d-cube contain at most one vertex and hence are not spanned. It is simple to test whether any given k-face f ∈ α s is spanned by the set of Furthermore, for any non-empty subset W ⊆ V , the faces of α s that are spanned by W are also spanned by V . Consequently, the barycentric span of W is a subcomplex of the barycentric span of V . Approximation complex We denote by P ⊂ R d a finite set of points. 
We define two maps: a α s : P → G α s : for each point p ∈ P, we let a α s ( p) denote the grid point in G α s that is closest to p, that is, p ∈ Vor G αs (a α s ( p)). We assume for simplicity that this closest point is unique, which can be ensured using well-known methods (Edelsbrunner and Mücke 1990). We define the active vertices of G α s as that is, the set of grid points that have at least one point of P in their Voronoi cells. b α s : V α s → P: the map b α s takes an active vertex of G α s to its closest point in P. By taking an arbitrary total order on P to resolve multiple assignments, we ensure that this assignment is unique. Recall that the map g α s : α s → α s+1 takes grid points of G α s to grid points of G α s+1 . Using Lemma 2, it follows at once that: Recall that R ∞ α denotes the Rips complex at scale α for the L ∞ -norm. The next statement is a direct application of the the triangle inequality; let diam ∞ () denote the diameter in the L ∞ -norm. Lemma 5 Let Q ⊆ P be a non-empty subset such that diam ∞ (Q) ≤ α s . Then, the set of grid points a α s (Q) is contained in a face of α s . Proof We prove the claim by contradiction. Suppose that the set of active vertices a α s (Q) is not contained in a face of α s . Then, there exists at least one pair of points {x, y} ∈ Q such that a α s (x), a α s (y) are not in a common face of α s . By the definition of the grid G α s , the grid points a α s (x), a α s (y) therefore have L ∞ -distance at least 2α s . Moreover, x has L ∞ -distance less than α s /2 from a α s (x), and the same is true for y and a α s (y). By the triangle inequality, the L ∞ -distance of x and y is more than α s , which is a contradiction to the fact that diam ∞ (Q) ≤ α s . We now define our approximation tower. For any scale α s , we define X α s as the barycentric span of the active vertices V α s ⊂ G α s . See Figs. 4, 5 and 6 for a simple illustration. To simplify notation, we denote -the faces of α s spanned by V α s as active faces, and -the faces of active faces that are not spanned by V α s as secondary faces. To complete the description of the approximation tower, we need to define simplicial maps of the formg α s : X α s → X α s+1 , which connect the simplicial complexes at consecutive scales. We show that such maps are induced by g α s . Lemma 6 Let f be any active face of α s . Then, g α s ( f ) is an active face of α s+1 . Proof Using Lemma 3, e := g α s ( f ) is a face of α s . If e is a vertex, then it is active, because f contains at least one active vertex v, and g α s (v) = e in this case. If e is not a vertex, we assume for a contradiction that it is not active. Then, it contains a facet e 1 that contains all active vertices in e. Let e 2 denote the opposite facet of e 1 in e. By Lemma 3, f contains opposite facets f 1 , f 2 such that g α s ( f 1 ) = e 1 and g α s ( f 2 ) = e 2 . Since f is active, both f 1 and f 2 contain active vertices; in particular, f 2 contains an active vertex v. But then the active vertex g α s (v) must lie in e 2 , contradicting the fact that e 1 contains all active vertices of e. The generated approximation complex, whose vertices consist of those of the cubical complex and the blue vertices (small dots), which are the barycenters of active and secondary faces As a result, g is well defined for each face e ∈ α s , since there exists some active face e ∈ α s with e ⊆ e , and g(e) ⊆ g(e ). By definition, a simplex σ ∈ X α s is a flag ( f 0 ⊆ · · · ⊆ f k ) of faces in α s . 
We set g̃_{α_s}(σ) := (g_{α_s}(f_0) ⊆ · · · ⊆ g_{α_s}(f_k)), which is a flag of faces at scale α_{s+1} by Lemma 6, and hence is a simplex in X_{α_{s+1}}. It follows that g̃_{α_s} : X_{α_s} → X_{α_{s+1}} is a simplicial map. This completes the description of the simplicial tower (X_{α_s})_{s∈Z}. Interleaving with the Rips module. First, we show that our tower is a constant-factor approximation of the L∞-Rips filtration of P. We then show the relation between our approximation tower and the Euclidean Rips filtration of P. We start by defining two acyclic carriers. First, we set λ = 1 and abbreviate α := α_s = 2^s to simplify notation. The barycentric span of any subset of U is a subcomplex of the barycentric span of U, so C^α_1 is a carrier. Therefore, C^α_1 is an acyclic carrier. -C^α_2 : X_α → R^∞_α: let σ be any flag of X_α and let E be the smallest active face at scale α that contains σ (we break ties by making use of an arbitrary global order on P). We collect all the points of P that map to vertices of E under the map a_α and set C^α_2(σ) as the simplex on this set of points. By an application of the triangle inequality, we see that the L∞-diameter of C^α_2(σ) is at most 2α, so C^α_2(σ) ∈ R^∞_α and is acyclic. It is also clear that C^α_2(τ) ⊆ C^α_2(σ) for each τ ⊆ σ, so C^α_2 is an acyclic carrier. Using the acyclic carrier theorem (Theorem 1), there exist augmentation-preserving chain maps between the chain complexes, which are carried by C^α_1 and C^α_2 respectively, for each α ∈ I. We obtain the following diagram of augmentation-preserving chain maps: where inc corresponds to the chain map for inclusion maps, and g̃ denotes the chain map induced by the corresponding simplicial map (we removed indices of the maps for readability). The chain complexes give rise to a diagram of the corresponding homology groups, connected by the induced linear maps c*_1, c*_2, inc*, g̃*: Lemma 7 For all α ∈ I, the linear maps in the lower triangle of Diagram (11) commute, that is, g̃* = c*_1 • c*_2. Proof We look at the corresponding triangle in Diagram (10). We show that the (augmentation-preserving) chain maps g̃ and c_1 • c_2 are both carried by an acyclic carrier D : X_α → X_{2α}. The claim then follows from the acyclic carrier theorem. Let σ ∈ X_α be any flag and let E denote the minimal active face containing σ. Let {q_1, ..., q_k} be the active vertices of E. Let {p_1, ..., p_m} be the set of points of P that map to {q_1, ..., q_k} under the map a_α. Since the L∞-diameter of {p_1, ..., p_m} is at most 2α, using Lemma 5 we see that {a_{2α}(p_1), ..., a_{2α}(p_m)} lies in a face of the cubical complex at scale 2α. We set D(σ) as the barycentric span of {a_{2α}(p_1), ..., a_{2α}(p_m)}. It follows that D is an acyclic carrier. Lemma 8 For all α ∈ I, the linear maps in the upper triangle of Diagram (11) commute. Proof The proof technique is analogous to the proof of Lemma 7. We define an acyclic carrier D : R^∞_α → R^∞_{2α} which carries inc and c_2 • c_1, both of which are augmentation-preserving. Let σ = (p_0, · · · , p_k) ∈ R^∞_α be any simplex. The set of active vertices lie in a face f of G_{2α}, using Lemma 5. We can assume that f is active, as otherwise, we argue about a facet of f that contains U. We set D(σ) as the simplex on the subset of points in P whose closest grid point in G_{2α} is any vertex of f. Using the triangle inequality we see that D(σ) ∈ R^∞_{2α}, so D is an acyclic carrier. The vertices of σ are a subset of D(σ), so D carries the map inc. Showing that D carries c_2 • c_1 requires further explanation.
Using Lemmas 7 and 8, we see that the two persistence modules H (X α s ) s∈Z and H (R ∞ α ) α≥0 are weakly 2-interleaved. With elementary modifications in the definition of X andg, we can get a tower of the form (X α ) α≥0 . Furthermore, with minor changes in the interleaving arguments, we show that the corresponding persistence module is strongly 4-interleaved with the L ∞ -Rips module. Using scale balancing, this result improves to a strong 2-interleaving (see Lemma 16). Since the techniques used in the proof are very similar to the concepts used in this section, for the sake of brevity we defer all further details to Appendix A. Using the strong stability theorem for persistence modules and taking scale balancing into account, we immediately get that: Theorem 2 The scaled persistence module H (X 2α ) α≥0 and the L ∞ -Rips persistence module H (R ∞ α ) α≥0 are 2-approximations of each other. For any pair of points p, p ∈ R d , it holds that This in turn shows that the L 2 -and the L ∞ -Rips filtrations are strongly √ d-interleaved. Using the scale balancing technique for strongly interleaved persistence modules, we get: Using Theorem 2, Lemma 9 and the fact that interleavings satisfy the triangle inequality (Bubenik and Scott 2014, Theorem 3.3), we see that the module (H (X 2α )) α≥0 is strongly 2d 0.25 -interleaved with the scaled Rips persistence module (H (R α/d 0.25 )) α≥0 . We can remove the scaling in the Rips filtration simply by multiplying the scales on both sides with d 0.25 and obtain our final approximation result: Computational complexity In this section, we discuss the computational aspects of constructing the approximation tower. In Sect. 5.1 we discuss the size complexity of the tower. An algorithm to compute the tower efficiently is presented in Sect. 5.2. Range of relevant scales. Set n := |P| and let C P(P) denote the closest pair distance of P. At scale α 0 := C P(P) 3d and lower, no two active vertices lie in the same face of the grid, so the approximation complex consists of n isolated 0-simplices. At scale α m := diam(P) and higher, points of P map to active vertices of a common face (by Lemma 5), so the generated complex is acyclic. We inspect the range of scales [α 0 , α m ] to construct the tower, since the barcode is explicitly known for scales outside this range. For this, we set λ = α 0 in the definition of the scales. The total number of scales is log 2 α m /α 0 = log 2 diam(P)3d C P(P) = log 2 + log 2 3d = O(log + log d), where = diam(P) C P(P) is the spread of the point set. Size of the tower The size of a tower is the number of simplices that do not have a preimage, that is, the number of simplex inclusions in the tower. We start by counting the number of active faces used in the tower. Lemma 10 The number of active faces without pre-image in the tower is at most n3 d . Proof At scale α 0 , there are n inclusions of 0-simplices in the tower, due to n active vertices. Using Lemma 2, g is surjective on the active vertices of (for any scale). Hence, no further active vertices are added to the tower. It remains to count the maximal active faces of dimension ≥ 1 without preimage. We will use a charging argument, charging the existence of such an active face to one of the points in P. We show that each point of P is charged at most 3 d − 1 times, which proves the claim. For that, we first fix an arbitrary total order ≺ on P. 
Each active vertex on any scale has a non-empty subset of P in its Voronoi region; we call the maximal such point with respect to the order ≺ the representative of the active vertex. For each active face f of dimension at least one, we define the signature of f as the set of representatives of the active vertices of f . If for any set of active vertices u 1 , . . . , u k we have that v = g(u 1 ) = · · · = g(u k ), then the representative of v is one of the representatives of u 1 , . . . , u k , using Lemma 2. Therefore, the signatures of the active faces that are images of f under g are subsets of the signature of f . This implies that each maximal active face that is included has a unique maximal signature. We bound the number of maximal signatures to get a bound on the number of maximal active face inclusions. We charge the addition of each maximal signature to the lowest ordered point according to ≺. Each signature contains representatives of active vertices from a face of α . Since each active vertex v has 3 d − 1 neighboring vertices in the grid that lie in a common face, the representative p of v can be charged 3 d − 1 times. There is a canonical isomorphism between the neighboring vertices of v at each scale. Then, for p to be charged more times, the image of v and some neighboring vertex u must be identical under g at some scale. But then, the representative of g(v) = g(u) is not p anymore, since p was the lowest ranked point in its neighborhood, hence the representative changes when the Voronoi regions are combined. So, p could not have been charged in such a case. Therefore, each point p ∈ P is indeed charged at most 3 d − 1 times. There are n active faces of dimension 0 and at most n(3 d − 1) active faces of higher dimension. The upper bound is n + n(3 d − 1) = n3 d , as claimed. Theorem 4 The k-skeleton of the tower has size at most where a b denotes the Stirling number of the second kind. Proof Each k-simplex that is included in the tower at any given scale α is a part of the barycentric subdivision of an active face that is also included at α. Therefore, we can account for the inclusion of this simplex by including the barycentric subdivision of its parent active face. From Lemma 10 at most n3 d active faces are included in the tower over all dimensions. We bound the number of k-simplices in the barycentric subdivision of a d-cube. Multiplying with n3 d gives the required bound. Let c be any d-cube of α . To count the number of flags of length (m +1) contained in c that start with some vertex and end with c, we use similar ideas as in Edelsbrunner and Kerber (2012): first, we fix any vertex v of c and count the flags of the form v ⊆ · · · ⊆ c. Every -face in c incident to v corresponds to a subset of coordinate indices, in the sense that the coordinates not chosen are fixed to the coordinates of v for the face. With this correspondence, a flag from v to c of length (m +1) corresponds to an ordered m-partition of {1, · · · , d}. The number of such partitions is known as m! times the quantity d m , which is the Stirling number of second kind (Rennie and Dobson 1969), and is upper bounded by 2 O(d log m) . Since c has 2 d vertices, the total number of flags v ⊆ · · · ⊆ c of length (m + 1) with any vertex v is hence 2 d m! d m . We now count the number of flags of length k + 1. Each such flag is (k + 1)-subset of some flag of length m = k + 3 that start with a vertex and end with c. There such flags and each of them has k+3 k+1 = (k + 3)(k + 2)/2 subsets of size (k + 1). 
The number of (k + 1)-flags is upper bounded by Computing the tower From Sect. 3, we know that G α s+1 is built from G α s by making use of an arbitrary translation vector (±1, · · · , ±1) ∈ Z d . In our algorithm, we pick the components of this translation vector uniformly at random from {+1, −1}, and independently for each scale. The choice behind choosing this vector randomly becomes more clear in the next lemma. From the definition, the cubical maps g α s : α s → α s+1 can be composed for multiple scales. For a fixed α s , we denote by g ( j) : α s → α s+ j the j-fold composition of g, that is, for j ≥ 1. Lemma 11 For any k-face f ∈ α s with 1 ≤ k ≤ d, let Y denote the minimal integer j such that g ( j) ( f ) is a vertex, for a given choice of the randomly chosen translation vectors. Then, the expected value of Y satisfies which implies that no face of α s survives more than 3 log d scales in expectation. Proof Without loss of generality, assume that the grid under consideration is Z d and f is the k-face spanned by the vertices {{0, 1}, · · · , {0, 1} k , 0, · · · , 0}, so that the origin is a vertex of f . The proof for the general case is analogous. We say that the x 1 -coordinate collapses in the first case and survives in the second. Both events occur with the same probability 1/2. Because the shift is chosen uniformly at random for each scale, the probability that x 1 did not collapse after j iterations is 1/2 j . f spans k coordinate directions, so it must collapse along each such direction to contract to a vertex. Once a coordinate collapses, it stays collapsed at all higher scales. As the random shift is independent for each coordinate direction, the probability of a collapse is the same along all coordinate directions that f spans. Using the union bound, the probability that g j ( f ) has not collapsed to a vertex is at most k/2 j . With Y as in the statement of the lemma, it follows that Hence, As a consequence of the lemma, the expected "lifetime" of k-simplices in our tower with k > 0 is rather short: given a flag e 0 ⊆ · · · ⊆ e , the face e will be mapped to a vertex after O(log d) steps, and so will be all its sub-faces, turning the flag into a vertex. It follows that summing up the total number of k-simplices with k > 0 over X α for all α ≥ 0 yields an upper bound of n2 O(d log k+d) as well. Algorithm description Recall that a simplicial map can be written as a composition of simplex inclusions and contractions of vertices (Dey et al. 2014;Kerber and Schreiber 2017). That means, given the complex X α s , to describe the complex at the next scale α s+1 , it suffices to specify -which pairs of vertices in X α s map to the same image underg, and -which simplices in X α s+1 are included at scale X α s+1 . The input is a set of n points P ⊂ R d . The output is a list of events, where each event is of one of the three following types: -A scale event defines a real value α and signals that all upcoming events happen at scale α (until the next scale event). -An inclusion event introduces a new simplex, specified by the list of vertices on its boundary (we assume that every vertex is identified by a unique integer). -A contraction event is a pair of vertices (i, j) from the previous scale, and signifies that i and j are identified as the same from that scale. In a first step, we estimate the range of scales that we are interested in. We compute a 2-approximation of diam(P) by taking any point p ∈ P and calculating max q∈P p − q . 
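A minimal sketch of this preprocessing step follows (illustrative only): it computes the 2-approximation of diam(P) described above and derives the number of scales O(log Δ + log d) from it. The closest-pair distance CP(P), whose randomized computation is cited next, is replaced here by a brute-force stand-in.

```python
import numpy as np
from scipy.spatial.distance import pdist

def scale_range(P):
    """Estimate the range of relevant scales [alpha_0, alpha_m] as in the text:
    alpha_0 = CP(P) / (3d) and alpha_m = diam(P), with diam(P) replaced by the
    2-approximation max_q ||p - q|| for an arbitrary base point p."""
    n, d = P.shape
    p = P[0]                                           # arbitrary base point
    diam_approx = np.linalg.norm(P - p, axis=1).max()  # within a factor 2 of diam(P)
    cp = pdist(P).min()   # closest-pair distance; brute-force stand-in for the
                          # randomized n*2^O(d) algorithm cited in the text
    alpha0 = cp / (3 * d)
    num_scales = int(np.ceil(np.log2(diam_approx / alpha0)))  # ~ log(spread) + log(3d)
    return alpha0, diam_approx, num_scales

rng = np.random.default_rng(2)
P = rng.random((100, 4))
a0, am, m = scale_range(P)
print(f"alpha_0 ~ {a0:.4g}, alpha_m ~ {am:.4g}, about {m} scales to process")
```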
Then we compute C P(P) using a randomized algorithm in n2 O(d) expected time (Khuller and Matias 1995). Next, we proceed scale-by-scale and construct the list of events accordingly. On the lowest scale, we simply compute the active vertices by point location for P in a cubical grid, and enlist n inclusion events (this is the only step where the input points are considered in the algorithm). For the data structure, we use an auxiliary container S and maintain the invariant that whenever a new scale is considered, S consists of all simplices of the previous scale, sorted by dimension. In S, for each vertex, we store an id and a coordinate representation of the active face to which it corresponds. Every -simplex with > 0 is stored just as a list of integers, denoting its boundary vertices. We initialize S with the n active vertices at the lowest scale. Let α < α be any two consecutive scales with , the respective cubical complexes and X , X the approximation complexes, withg : X → X being the simplicial map connecting them. Suppose we have already constructed all events at scale α. -First, we enlist the scale event for α . -Then, we enlist the contraction events. For that, we iterate through the vertices of X and compute their value under g, using point location in a cubical grid. We store the results in a list S (which contains the simplices of X ). If for a vertex j, g( j) is found to be equal to g(i) for a previously considered vertex i, we choose the minimal such i and enlist a contraction event for (i, j). -We turn to the inclusion events: -We start with the case of vertices. Every vertex of X is either an active face or a secondary face of . Each active face must contain an active vertex, which is also a vertex of X . We iterate through the elements in S . For each active vertex v encountered, we go over all faces of the cubical complex that contain v as a vertex, and check whether they are active. For every active face E encountered that is not in S yet, we add it to S and enlist an inclusion event of a new 0-simplex. Additionally, we go over each face of E, add it to S and enlist a vertex inclusion event, thereby enumerating the secondary faces that are in E. At termination, all vertices of X have been detected. -Next, we iterate over the simplices of S of dimension ≥ 1, and compute their image underg using the pre-computed vertex map; we store the result in S . -To find the simplices of dimension ≥ 1 included at X , we exploit our previous insight that they contain at least one vertex that is included at the same scale (see the proof of Theorem 4). Hence, we iterate over the vertices included in X and find the included simplices inductively in dimension. Let v be the current vertex under consideration; assume that we have found all ( p − 1)-simplices in X that contain v. Each such ( p − 1)-simplex σ is a flag of length p in . We iterate over all faces e that extend σ to a flag of length p + 1. If e is active, we have found a p-simplex in X incident to v. If this simplex is not in S yet, we add it and enlist an inclusion event for it. We also enqueue the simplex in our inductive procedure, to look for ( p + 1)-simplices in the next round. At the end of the procedure, we have detected all simplices in X without preimage, and S contains all simplices of X . We set S ← S and proceed to the next scale. This ends the description of the algorithm. O(d log k+d) and the space is bounded by n2 O(d log k+d) . 
Proof In the analysis, we ignore the costs of point locations in grids, checking whether a face is active, and searches in data structures S, since all these steps have negligible costs when appropriate data structures are chosen. Computing the image of a vertex of X costs O(2 d ) time. Moreover, there are at most n2 O(d) vertices altogether in the tower in expectation (using Lemma 10), so this bound in particular holds on each scale. Hence, the contraction events on a fixed scale can be computed in n2 O(d) time. Finding new active vertices requires iterating over the cofaces of a vertex in a cubical complex. There are 3 d such cofaces for each vertex. This has to be done for a subset of the vertices in X , so the running time is also n2 O(d) . Further, for each new active face, we go over its 2 O(d) faces to enlist the secondary faces, so this step also consumes n2 O(d) time. Since there are O(log + log d) scales considered, these steps require n2 O(d) log over all scales. Computing the image ofg for a fixed scale costs at most O(2 d |X |). M is the size of the tower, that is, the simplices without preimage, and I is the set of scales considered. The expected bound for α∈I |X α | = O(log d M), because every simplex has an expected lifetime of at most 3 log d by Lemma 11. Hence, the cost of these steps is bounded by 2 O(d) M. In the last step of the algorithm, we find the simplices of X included at α . We consider a subset of simplices of X , and for each, we iterate over a collection of faces in the cubical complex of size at most 2 O(d) . Hence, this step is also bounded by 2 O(d) |X | per scale, and hence bounded 2 O(d) M as well. For the space complexity, the auxiliary data structure S gets as large as X , which is clearly bounded by M. For the output complexity, the number of contraction events is at most the number of inclusion events, because every contraction removes a vertex that has been included before. The number of inclusion events is the size of the tower. The number of scale events as described is O(log + log d). However, it is simple to get rid of this factor by only including scale events in the case that at least one inclusion or contraction takes place at that scale. The space complexity bound follows. Dimension reduction When the ambient dimension d is large, our approximation scheme can be combined with dimension reduction techniques to reduce the final complexity, very similar to the application in Choudhary et al. (2017b). For a set of n points P ⊂ R d , we apply the dimension reduction schemes of Johnson-Lindenstrauss (JL) (Johnson et al. 1986), Matoušek (MT) (Matoušek 1990), and Bourgain's embedding (BG) (Bourgain 1985). We then compute the approximation on the lower-dimensional point set. We only state the main results in Table 1, leaving out the proofs since they are very similar to those from Choudhary et al. (2017b). Approximation scheme with cubical complexes We extend our approximation scheme to use cubical complexes in place of simplicial complexes. We start by detailing a few aspects of cubical complexes. Cubical complexes We now briefly describe the concept of cubical complexes, essentially expanding upon the contents of Sect. 3.1. For a detailed overview of cubical homology, we refer to Kaczynski et al. (2004). Definition We define cubical complexes over the grids G α s . For any fixed α s , the grids G α s defines a natural collection of cubes. 
An elementary cube γ is a product of intervals γ = I 1 × I 2 × · · · × I d , where each interval is of the form I j = (x j , x j + m j ), such that the vertex (x 1 , · · · , x m ) ∈ G α s and each m j is either 0 or α s . That means, an (elementary) cube is simply a face of a d-cube of the grid. An interval I j is said to be degenerate if m j = 0. The dimension of γ is the number of non-degenerate intervals that defines it. We define the boundary of any interval as the two degenerate intervals that form its endpoints and denote this by ∂( Taking the boundary of any fixed subset of the intervals defining γ consecutively gives a sum of faces of γ . A cubical complex of G α s is a finite collection of cubes of G α s . We define chain complexes for the cubical case in the same way as in simplicial complexes. The chain complexes are connected by boundary homomorphisms, where the boundary of a cube is defined as: where (I 1 × · · · × ∂(I j ) × · · · × I d ) denotes the sum It can be quickly verified that for each cube γ , ∂ • ∂(γ ) = 0 since each term appears twice in the expression and the addition is over Z 2 . Cubical maps and induced homology Let T α s and T α t denote the cubical complexes defined by the grids G α s and G α t , respectively, for s ≤ t. We use the vertex map g : G α s → G α t to define a map between the cubical complexes. Note that if (a, b) are vertices of a cube of T α s that differ in one coordinate, then (g(a), g(b)) are vertices of a cube of T α t that differ in at most one coordinate. A cubical map is a map f : T α s → T α t defined using g, such that for each spans a cube of T α t . The cubical map can also be restricted to sub-complexes of T α s and T α t , provided that the image f (γ ) is well-defined. Each cubical map also defines a corresponding continuous map between the underlying spaces of the respective complexes. Let x ∈ |γ | be a point in γ . Then, the coordinates of x can be uniquely written as The cubical map f gives rise to a chain map f # : C p (T α s ) → C p (T α t ) between the p-th chain groups of the complexes, for each p ∈ [0, · · · , d]. For each cube γ , so this gives a homomorphism between the chain groups. Moving to the homology level, we get the respective homology groups H (T α s ) and H (T α t ) and the chain map from above induces a linear map between them. The concept of reduced homology and augmentation maps is also applicable to the cubical chain complexes. For a sequence of cubical complexes connected with cubical maps, this generates a persistence module. Cubical filtrations and towers are defined in a similar manner to the simplicial case. A cubical filtration is a collection of cubical complexes (T α ) α∈I such that T α ⊆ T α for all α ≤ α ∈ I . A (cubical) tower is a sequence (T α ) α∈J of cubical complexes with J being an index set together with cubical maps between complexes at consecutive scales. A cubical tower can be written as a sequence of inclusions and contractions, where an inclusion refers to the addition of a cube and a contraction refers to collapsing a cube along a coordinate direction to either of the endpoints of the interval. Description We choose the simplest possible cubical complex to define our approximation cubical tower: for each scale α s , we define the cubical complex U α s as the set of active faces and secondary faces spanned by V α s . Hence the cubical complex is closed under taking faces and is well-defined. See Fig. 5 for a simple example. Recall from Sect. 
4 that for each s ∈ Z, U α s and U α s+1 are related by a cubical map g α s , which gives rise to the cubical tower U α s s∈Z . We extend this to a tower (U α ) α≥0 by using techniques from Appendix A. In Sect. 4 we saw that the tower (X α ) α≥0 gives an approximation to the Rips filtration. The relation between the simplicial and cubical towers is trivial: X α s is simply a triangulation of |U α s |. Hence X α s and U α s have the same homology (Munkres 1984). Moreover, the simplicial map is derived from an application of the cubical map. In particular, the continuous versions of both maps are the same. For any 0 ≤ α ≤ β, let f 1 : H * (U α ) → H * (U β ) denote the homomorphism induced by the cubical map, f 2 : H * (X α ) → H * (X β ) denote the homomorphism induced by the simplicial map, and f 0 : H * (|X α | = |U α |) → H * (|X β | = |U β |) denote the homomorphism induced by the common continuous map. It is well-established that f 1 = f 0 (Kaczynski et al. 2004, Chapter. 6) and f 2 = f 0 (Munkres 1984, Chapter. 2 To compute the cubical tower, we simply re-use the algorithm for the simplicial case, with small changes: -In the simplicial case, we used a container S to hold the simplices from the previous scale. We alter S to store the cubes from the previous scale. For each interval, we store an id and its coordinates. Each cube is stored as the set of ids of the intervals that define it. -At each scale, we enumerate the image of the cubical map by computing the image of each interval, and then use this pre-computed map to compute the image of (≥ 1)-dimensional cubes. -For the inclusions, we find all the active and secondary faces but do not compute the simplices. The inclusions in the cubical tower correspond exactly to the inclusions of active and secondary faces in the simplicial tower, so this enumerates all inclusions correctly. Practicality We now touch upon the practical aspects of our constructions. An implementation of our approximation scheme would be a tool that computes the (approximate) persistence barcode for any input data set. For any scheme to be useful in practice, it should be able to compute sufficiently close approximations using a reasonable amount of resources. Our cubical tower consists of cubical complexes connected via cubical maps. To our knowledge, there are no algorithms to compute barcodes in this setting where the cubical maps are more than just trivial inclusions. As such, although our cubical scheme has exponentially lower theoretical guarantees compared to the simplicial tower, we can not hope to test it in practice unless the appropriate primitives are available. It could be an interesting research direction to develop this primitive and in particular investigate whether the techniques used in computing persistence barcodes for a simplicial tower allow a generalization to the cubical case. It makes more sense to inspect the simplicial tower. We saw in Theorem 4 that the size of the tower is n6 d−1 (2k + 4)(k + 3)! d k + 2 . Unfortunately, this bound is already too large so that the storage requirement of the Algorithm (Theorem 7) explodes exponentially. Let us assume a conservative bound of 1 Byte of memory requirement per simplex. For a point set in d = 8 dimensions and k = 4, the complexity bound is already at least 4000 Terabytes, before factoring in n. For a point set in d = 10 dimensions and k = 5, this explodes to 10 20 Terabytes. 
While these are upper bounds, in practice the complexity would still need to be many orders of magnitude smaller to be feasible, which is unlikely. Even with conservative estimates our storage requirement is impractical. Therefore we are not very hopeful that implementing the scheme in its current state will provide any useful insight for high-dimensional approximations. Making it implementation-worthy demands more optimizations and tools at the algorithmic level. This is worth an algorithm engineering project in its own right. We plan to pursue this line of research in the future. Since our focus in this paper was geared towards theoretical aspects of approximations, we exclude experimental results in the current work. We hope that a more careful implementation-focused approach may prove more practical. On the other hand, the upper bound for the cubical case is simply n6^d. Even for d = 10, the storage requirement would be less than 100 Megabytes before factoring in n. This is far more attractive than the simplicial case. As such, it may make more sense to invest time and effort in developing tools to compute barcodes in the cubical setup. Summary We presented an approximation scheme for the Rips filtration with better approximation ratio, size, and computational complexity than previous approaches for the case of high-dimensional point clouds. In particular, we are able to achieve a marked reduction in the size of the approximation by using cubical complexes in place of simplicial complexes. This is in contrast to all other previous approaches, which used simplicial complexes as approximating structures. An important technique that we used in our scheme is the application of acyclic carriers to prove interleaving results. An alternative would be to explicitly construct chain maps between the Rips and the approximation towers; unfortunately, this makes the interleaving analysis significantly more complex. While the proof of the interleaving in Sect. 4.3 is still technically challenging, it is greatly simplified by the usage of acyclic carriers. There is also no benefit in knowing the interleaving maps explicitly, because they are only required for the analysis of the interleaving, and not for the actual computation of the approximation tower. We believe that this technique is of general interest for the construction of approximations of cell complexes. Our simplicial tower is connected by simplicial maps; there are (implemented) algorithms to compute the barcode of such towers (Dey et al. 2014; Kerber and Schreiber 2017). It is also quite easy to adapt our tower construction to a streaming setting (Kerber and Schreiber 2017), where the output list of events is passed to an output stream instead of being stored in memory. Appendix A We extend the tower to all scales α ≥ 0 as follows: -for all α ∈ [α_s, α_{s+1}), we set X_α = X_{α_s}, for any α_s ∈ I. That means, the complex stays the same in the interval between any two scales of I, so we define g̃ as the identity within this interval. These give rise to the tower (X_α)_{α≥0}, which is connected by the simplicial maps g̃. This modification helps in improving the interleaving with the Rips persistence module. First, we extend the acyclic carriers C_1 and C_2 from before to the new case: -C^α_1 : R^∞_α → X_{4α}, α > 0: we define C_1 as before, simply changing the scales in the definition. It is straightforward to see that C_1 is still a well-defined acyclic carrier. -C^α_2 : X_α → R^∞_α, α ≥ 0: this stays the same as before. It is simple to check that C_2 is still a well-defined acyclic carrier.
These give rise to augmentation-preserving chain maps between the chain complexes: using the acyclic carrier theorem as before (Theorem 1). Lemma 12 The diagram commutes on the homology level, for all 0 ≤ α ≤ α . Proof Consider the acyclic carrier C 1 • inc : R ∞ α → X 4α . It is simple to verify that this carrier carries both c 1 • inc andg • c 1 , so the induced diagram on the homology groups commutes, from Theorem 1. Lemma 13 The diagram commutes on the homology level, for all 0 ≤ α ≤ α . Proof We construct an acyclic carrier D : X α → R ∞ α which carries inc•c 2 and c 2 •g, thereby proving the claim (Theorem 1). Consider any simplex σ ∈ X α and let E ∈ α be the minimal active face of containing σ . We set D(σ ) as the simplex on the set of input points of P, which lie in the Voronoi regions of the vertices of g(E). By the triangle inequality, D(σ ) is a simplex of R ∞ α , so that D is a well-defined acyclic carrier. It is straightforward to verify that D carries both c 2 •g and inc • c 2 . Lemma 14 The diagram commutes on the homology level, for all 0 ≤ α ≤ α . Proof The diagram is essentially the same as the lower triangle of Diagram 10, with a change in the scales. As a result, the proof of Lemma 7 also applies for our claim directly. Lemma 15 The diagram commutes on the homology level, for all 0 ≤ α ≤ α . Proof The diagram can be re-interpreted as: The modified diagram is essentially the same as the upper triangle of Diagram 10, with a change in the scales and a replacement of c 1 withg • c 1 , that is equivalent to the chain map at the scale α . Hence, the proof of Lemma 8 also applies for our claim directly.
Expanding the Study of the Cytotoxicity of Incomptines A and B against Leukemia Cells Heliangolide-type sesquiterpene lactones (HTSLs) are phytocompounds with several pharmacological activities including cytotoxic and antitumor activity. Both bioactivities are related to an α-methylene-γ-lactone moiety and an ester group on carbon C-8 in the sesquiterpene lactone (SL) structure. Two HTSLs, incomptines A (IA) and B (IB) isolated from Decachaeta incompta, were evaluated for their cytotoxic activity on three leukemia cell lines: HL-60, K-562, and REH cells. Both compounds were subjected to a molecular docking study using target proteins associated with cancer such as topoisomerase IIα, topoisomerase IIβ, dihydrofolate reductase, methylenetetrahydrofolate dehydrogenase, and Bcl-2-related protein A1. Results show that IA and IB exhibit cytotoxic activity against all cell lines used. The CC50 values of IA were 2–4-fold lower than those of etoposide and methotrexate, two anticancer drugs used as positive controls. The cytotoxic activity of IB was close to that of etoposide and methotrexate. The molecular docking analysis showed that IA and IB have important interactions with all targets used. These findings suggest that IA and IB may serve as scaffolds for the development of new treatments for different types of leukemia. Introduction Medicinal plants are a renewable source of phytopharmaceutical agents with important potential to treat several diseases such as cancer [1][2][3][4]. In this sense, research into vegetal extracts of medicinal plants has led to the discovery of anticancer drugs including parthenolide, etoposide, taxol, and vincristine [5][6][7][8]. Plants contain several bioactive secondary metabolites including alkaloids, flavonoids, phenols, sterols, quinones, and terpenoids. Among the terpenoids, the HTSLs are a large group of secondary metabolites with a 15-carbon skeleton and a low molecular weight; they contain oxygenated groups such as alcohols, ketones, aldehydes, epoxides, acids, and/or an α-methylene-γ-lactone. Sesquiterpene lactones (SLs) are secondary metabolites that accumulate in the leaves, roots, and stem of plants belonging to diverse Asteraceae genera including Artemisia, Arnica, Tanacetum, and Decachaeta. These compounds have several biological properties such as antimicrobial, anti-inflammatory, cytotoxic, and anti-tumor [8,9]. In general, the biological properties of SLs as cytotoxic and antitumor agents have been associated with the presence of an α-methylene-γ-lactone in their structure; this moiety has the capacity to act as a Michael acceptor and react with sulfhydryl residues of proteins. In addition, SLs exert their anticancer properties by altering the cellular redox balance, depleting reduced glutathione, preventing NF-κB activation, and increasing intracellular reactive oxygen species levels. They can also inhibit glycolytic enzymes and downregulate Bcl-2 antiapoptotic proteins [8,9]. Decachaeta incompta (DC) R. M. King and H. Robinson (Figure 1) belongs to the Asteraceae family in the Asterales order; it is an erect subshrub that grows up to 3 m tall. The plant has glands in the leaves, reddish or yellowish corollas and styles, straight stems, and alternate and ovate leaves. It has white florets that are gradually more densely reddish or yellowish glandular distally. The plant is native to Guatemala and to Mexican states such as Oaxaca, Michoacan, Jalisco, Puebla, and Veracruz [10].
The aerial parts of this species are used in Oaxacan traditional medicine to treat diarrhea. Bioassay-guided fractionation of the dichloromethane extract of the aerial parts of D. incompta led to the isolation of four heliangolide-type SLs named incomptines A-D. Regarding their biological properties, they have been reported to possess antiamoebic, antigiardial, antibacterial, antipropulsive, trypanocidal, phytotoxic, spermatic, cytotoxic, and antitumor effects. In this sense, we hypothesized that the antiprotozoal and phytotoxic activities of these SLs are associated with the presence of an acetate moiety at C-8 of the germacrane framework. In contrast, the antipropulsive and antibacterial properties may be associated with the presence of a hydroxy group at the same position of this backbone [11,12]. Leukemia is a cancer of the blood-forming tissues characterized by an increase in uncontrolled growth of immature white blood cells or leukocytes in the blood, spleen, and bone marrow. There are many classes of leukemia; it is classified according to the course of the disease and the dominant type of white blood cell involved. Examples include acute myeloid leukemia, acute lymphocytic leukemia, chronic myeloid leukemia, and chronic lymphocytic leukemia. Symptomatic patients present anemia, fever, bleeding, bone pain, tenderness, fatigue, weakness, excessive sweating, headaches, nausea, and swelling of the lymph nodes. In 2018, around 430,000 cases of leukemia were reported worldwide, which accounted for 300,000 deaths. The anticancer drugs commonly used, alone or combined, for treating leukemias include cyclophosphamide, fludarabine, prednisone, chlorambucil, etoposide, methotrexate, and doxorubicin. All these drugs present significant side effects, in some cases multidrug resistance, and in the case of etoposide or methotrexate, secondary leukemia may develop. In an effort to improve cancer therapy, it was proposed to employ secondary metabolites isolated from Asteraceae plants for developing new treatments against malignancies, including non-solid tumors such as leukemias [8,[13][14][15][16]. This study forms part of our research on the cytotoxic properties of incomptines A (IA) and B (IB). In this work, the cytotoxic activity of IA and IB was evaluated against three leukemia cell lines (HL-60, K-562, and REH). In addition, a molecular docking study was carried out to understand the potential mechanism of action on five molecular targets involved in the survival and proliferation of cancer cells: topoisomerase IIα (TIIα), topoisomerase IIβ (TIIβ), dihydrofolate reductase (DHFR), methylenetetrahydrofolate dehydrogenase (MTHFD), and Bcl-2-related protein A1 (BCL-2). Isolation and Cytotoxic Activity of Incomptine A (IA) and Incomptine B (IB) In the present study, the aerial parts of D. incompta, collected in the State of Oaxaca, Mexico, were extracted exhaustively with dichloromethane. An amount of 2.12 g of brown extract was yielded, which had a pasty consistency (DCE, 7.85%). The DCE from D. 
incompta was purified by column chromatography to give two germacrane-type sesquiterpene lactones named incomptine A (IA) and incomptine B (IB) (Figure 2); both compounds were identified through comparison of retention times in HPLC-DAD (Figures 3 and 4) and NMR spectra. The cytotoxic activity of incomptines A (IA) and B (IB) was tested on three leukemia cell lines (HL-60, K-562, and REH) and one lymphoma cell line (U-937) by MTT assay. The lymphoma cell line (U-937) was used as a positive control considering that incomptine A has been demonstrated to exhibit strong cytotoxic activity on this cell line. To compare the cytotoxic effects of the HTSLs analyzed in this work, incomptines A (IA) and B (IB), we included etoposide (ET) and methotrexate (MTX) as positive controls, i.e., two antileukemia drugs. The cytotoxicity assays revealed that both heliangolide-type sesquiterpene lactones have cytotoxic activity against all assayed leukemia and lymphoma cell lines (Table 1). The most potent compound was incomptine A (IA), with CC50 values of 0.3, 0.6, 0.3, and 0.4 µM on the U-937, HL-60, K-562, and REH cell lines, respectively. In all cases, its cytotoxic activity was greater than that of ET (1.2, 1.4, 0.7, and 1.1 µM, respectively) and MTX (1.5, 0.65, 3.4, and 2.7 µM, respectively). Incomptine B (IB) was less potent, displaying CC50 values of 1.9, 1.0, 1.9, and 2.1 µM, respectively. Nevertheless, its cytotoxic activity was close to that of etoposide and methotrexate. All compounds tested exhibited dose-dependent cytotoxic effects (Figures 5-8) on all cell lines used. The results revealed that all leukemia and lymphoma cell lines used were sensitive to the cytotoxic effects of the tested HTSLs. 
Although the data are limited, a structure-cytotoxicity relationship was established from the CC50 values in the tested leukemia cells and revealed that the presence of an acetate moiety at C-8 favors the cytotoxicity, as occurs for the antiprotozoal and phytotoxic activities [11,12].
Table 1. Cytotoxic activities of incomptines A (IA) and B (IB) isolated from the dichloromethane extract of the aerial parts of Decachaeta incompta.
Sample   Leukemia and Lymphoma Cell Lines (CC50, µM) a
         U-937    HL-60    K-562    REH
IA       0.3      0.6      0.3      0.4
IB       1.9      1.0      1.9      2.1
ET       1.2      1.4      0.7      1.1
MTX      1.5      0.65     3.4      2.7
a U-937 (histiocytic lymphoma), HL-60 (acute promyelocytic leukemia), K-562 (chronic myeloid leukemia), and REH (acute lymphocytic leukemia); CC50 was defined as the treatment concentration at which a 50% reduction in cellular proliferation was observed. Data were analyzed using GraphPad Prism (n = 3), p < 0.05. CC50 values were calculated graphically using the curve-fitting algorithm of the computer software Prism 5.03 (GraphPad, La Jolla, CA, USA) and expressed as means ± S.E.M. from three independent experiments, each performed in triplicate.
Molecular Docking Studies of Incomptine A (IA) and Incomptine B (IB) on Five Selected Pharmacological Receptors Associated with Cancer Considering previous in vitro results and the targets reported for the two anti-leukemia drugs used as positive controls, we decided to study five potential pharmacological receptors associated with cancer: topoisomerase IIα (TIIα), topoisomerase IIβ (TIIβ), dihydrofolate reductase (DHFR), methylenetetrahydrofolate dehydrogenase (MTHFD), and Bcl-2-related protein A1 (BCL-2) [17][18][19]. The interaction residues of incomptines A (IA) and B (IB), methotrexate (MTX), and etoposide (ET) with the five molecular targets (Figures 9-11; Table 2) showed that incomptine B (IB) had greater affinity for the proteins TIIα, DHFR, and MTHFD, with ∆G values of −7.4 kcal/mol, −8.1 kcal/mol, and −7.9 kcal/mol, respectively. In contrast, incomptine A (IA) exhibited the best affinity for the protein TIIβ, with a ∆G value of −6.5 kcal/mol. Both incomptines showed the same affinity for the BCL2 protein, with a ∆G value of −7.3 kcal/mol. However, incomptines A (IA) and B (IB) exhibited less affinity than MTX and ET (Figure 11). The specific interactions of incomptine A (IA) with all targets are displayed in Figure 9. Initially, the interaction of IA with human topoisomerase IIα is formed by two hydrogen bonds. First, the amine group of deoxyadenosine (DA12) bonds as a donor with the carbonyl group of the acetoxy group; the second bond is between the amine group of deoxycytidine (DC11) as a donor and the oxygen atom of the lactone (Figure 9A,B). Subsequently, the interaction with human topoisomerase IIβ is formed by only one hydrogen bond, between the amine group of deoxyguanosine (DG13) as a donor and the same carbonyl group of the acetoxy group (Figure 9C,D). 
The binding of IA with dihydrofolate reductase (DHFR) is formed by two hydrogen bonds (H bonds); one of these bonds is between the oxygen atom of the epoxide group and Ala9, and the second, a non-conventional interaction (carbon-hydrogen bond), is between the carbonyl group of the acetoxy moiety and the Gly116 residue (Figure 9E,F). Additionally, IA establishes three hydrogen bonds with methylenetetrahydrofolate dehydrogenase (MTHFD); two of these bonds are between the carbonyl group and the oxygen atom of the lactone and Gln100, and the last one is between Leu101 as a donor and the oxygen atom of the carbonyl group as the acceptor (Figure 9G,H). Finally, the binding of IA with the B-cell lymphoma 2 protein (BCL2) is through two hydrogen bonds in which the amide groups of the Trp141 and Gly142 residues act as donors and the oxygen atom of the hydroxyl group acts as the acceptor (Figure 9I,J). The interactions of incomptine B (IB) with all targets are displayed in Figure 10. However, the acetoxy group is not present in IB, and its interaction with human topoisomerase IIα (TIIα) is formed by two hydrogen bonds. The amine group of Gly488 bonds as an acceptor with the hydroxyl group; the second bond is between the amine group of deoxyguanosine (DG13) as a donor and the oxygen atom (carbonyl group) of the lactone (Figure 10A,B). Subsequently, the interaction of IB with human topoisomerase IIβ (TIIβ) is formed by only one hydrogen bond, between the amine group of deoxyadenosine (DA12) as a donor and the carbonyl group of the lactone (Figure 10C,D). The interaction of IB with dihydrofolate reductase (DHFR) involves two hydrogen bonds; one of these bonds is between the oxygen atom of the carbonyl group of the lactone and Ala9, and the second hydrogen bond is between the hydroxyl group and the Val115 residue (Figure 10E,F). In addition, IB establishes four hydrogen bonds with methylenetetrahydrofolate dehydrogenase (MTHFD); two of these bonds are between the carbonyl group of the lactone and Gln100 and Lys56, and the last two involve Leu101 as a donor and an acceptor, with the hydroxyl group as the donor (Figure 10G,H). Finally, IB did not form polar interactions with any residues of the B-cell lymphoma 2 protein (BCL2, Figure 10I,J). Discussion Leukemia is a cancer that is present worldwide and affects people of all ages, although predominantly in childhood. In 2018, nearly 430,000 people were diagnosed with some type of leukemia, which accounted for 300,000 deaths around the world [13,14]. In this context, there has been renewed interest during the last years in phytochemicals as potential chemopreventive and chemotherapeutic agents against cancer. These secondary metabolites include flavonoids, alkaloids, acetogenins, and terpenoids. In the latter group, sesquiterpene lactones (SLs) have received considerable attention during the last 17 years [8,9,[20][21][22]. Here, we have reported the cytotoxic activity of the two major heliangolide-type sesquiterpene lactones from Decachaeta incompta and their docking analysis on five molecular targets relevant for cancer treatment. Incomptine A (IA) exhibited the best cytotoxic activity; its effects were better than those of etoposide (ET) and methotrexate (MTX), two antitumor agents currently used for treating cancer, which were used as positive controls. Incomptine A showed a dose-dependent cytotoxic effect (Figure 5) on all cell lines used. These results suggest that IA could be considered a promising antitumor compound. 
However, although the data are limited, the structure-effect correlation revealed that the cytotoxic activity on the U-937, HL-60, K-562, and REH human cell lines seems to be related to the presence of an 8-acetyl group in the heliangolide framework. Incomptine B (IB), which has a free hydroxyl at C-8, exhibited less cytotoxic activity. This is in agreement with the observed antiprotozoal and phytotoxic activity [11,12]. Notwithstanding that IB was less potent than IA, its cytotoxic activity was close to that of etoposide and methotrexate, suggesting it is also a good candidate for the development of new anticancer drugs. Further studies are needed in order to elucidate the mechanism of action of these compounds and to evaluate their bioavailabilities [17,23,24]. To our knowledge, this is the first report of the comparative cytotoxic properties of incomptines A (IA) and B (IB) on the human leukemia cell lines HL-60, K-562, and REH. It is important to point out that the biological properties of SLs are associated with the alkylation of nucleophiles through their α-methylene-γ-lactone moiety. This moiety can react readily to form adducts with nucleophiles such as sulfhydryl groups or free thiols by Michael-type addition [23]. In this sense, cysteine residues in proteins, as well as free intracellular GSH, are the main targets of SLs. These interactions cause reduction or inhibition of enzyme activity or disruption of GSH metabolism and the vitally important intracellular redox balance. In addition, extensive research suggests that SLs exert their antitumor effects by reacting with proteins and enzymes or interfering with some key biological processes, including the sarco/endoplasmic reticulum calcium ATPase pump, proteases, transferrin receptors, nuclear factor-kappa B, and E3 ubiquitin-protein ligase Mdm2, as well as the p53 gene, angiogenesis, and metastasis in cancer cells [8,9]. In this context, parthenolide (PTL), a sesquiterpene lactone isolated from Tanacetum parthenium, has the α-methylene-γ-lactone moiety; it has shown potent anticancer activity and is currently being tested in cancer clinical trials. PTL is the first small molecule found to be selective against cancer stem cells. The in vitro and in vivo anticancer properties of PTL can be associated with its potent inhibition of nuclear factor kappa B (NF-kB), which is aberrantly and stably activated in various cancers. Other SLs that possess an α-methylene-γ-lactone moiety and have demonstrated significant anticancer potential include eupatolide, deoxyelephantopin, and dehydrocostus lactone [25][26][27]. In relation to the docking studies, incomptines A and B showed similar interaction energy values (∆G), although these were slightly weaker than those of the control drugs. However, it is important to note that the acetoxy group of incomptine A acts as a hydrogen-bond acceptor, and at this position it interacts with all the targets, indicating increased recognition by the proteins involved in cancer. These results correlate with the in vitro activity studies. In addition, the molecular docking analysis suggests that the cytotoxic activity of IA and IB against all leukemia cell lines used in this study may be associated with effects on the five targets used: human topoisomerase IIα, human topoisomerase IIβ, human dihydrofolate reductase, human methylenetetrahydrofolate dehydrogenase, and human B-cell lymphoma 2 protein. 
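For readers who want to reproduce a comparable docking run, the sketch below shows how a protocol like the one summarized above (AutoDock Vina, 20 binding modes, exhaustiveness of 16; see the Methods below) could be scripted with the AutoDock Vina Python bindings. This is a minimal sketch, not the study's actual setup: the receptor and ligand file names and the grid center are placeholders invented for illustration, since the prepared structures and the exact grid coordinates are not reproduced here.

```python
from vina import Vina  # AutoDock Vina 1.2.x Python bindings

# Placeholder inputs: prepared PDBQT files for one target and one ligand (hypothetical names).
RECEPTOR = "topoisomerase_IIa_5GWK.pdbqt"
LIGAND = "incomptine_A.pdbqt"
CENTER = [10.0, 12.0, -5.0]      # placeholder grid center (Å); not the study's coordinates
BOX_SIZE = [20.0, 20.0, 20.0]    # placeholder search box dimensions (Å)

v = Vina(sf_name="vina")
v.set_receptor(RECEPTOR)
v.set_ligand_from_file(LIGAND)
v.compute_vina_maps(center=CENTER, box_size=BOX_SIZE)

# Dock with the settings reported in the methods: exhaustiveness 16, 20 binding modes.
v.dock(exhaustiveness=16, n_poses=20)
v.write_poses("incomptine_A_docked.pdbqt", n_poses=20, overwrite=True)
print(v.energies(n_poses=1))  # best predicted binding energy (kcal/mol) for the top pose
```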
Preparation of the Aerial Parts Extract Air-dried plant material (25 g) was ground and extracted by percolation at room temperature with dichloromethane (350 mL). After filtration, the solvent was evaporated under vacuum to yield 2 g of a brown extract with a pasty consistency. HPLC-DAD Analysis Dichloromethane extract (100 mg) or incomptines A (IA, 2 mg) and B (IB, 2 mg) were dissolved in methanol (10 mL or 2 mL, respectively), and 20 µL of the sample was injected into the HPLC. HPLC separations were performed on a Waters 2795 Alliance equipped with a Waters 996 photodiode array detector collecting data at 240 nm; a Waters Spherisorb S5 ODS2 column (4.5 × 250 mm, 5 µm) was used with a logarithmic gradient from 96% aqueous 2% acetic acid and 4% CH 3 CN to 50% aqueous 2% acetic acid and 96% CH 3 CN over a period of 60 min at a flow rate of 1 mL min −1 . All the solvents were HPLC grade. Leukemia Cell Lines The human pro-monocyte myeloid leukemia U-937 (ATCC: CRL 1593.2, Middlesex, UK), the human acute myeloid leukemia HL-60 (ATCC: CRL 3306, Middlesex, UK), the human chronic myeloid leukemia K-562 (ATCC: CRL 3344, Middlesex, UK), and the human acute lymphocytic leukemia REH (ATCC: CRL 8286, Middlesex, UK) cell lines were provided by UIM en Genética Humana del Hospital de Pediatría de CMN S XXI, IMSS. Cell cultures were tested for mycoplasma contamination using the MycoAlert mycoplasma detection kit (Lonza Walkersville, Inc., Walkersville, MD, USA). The cell lines were cultured in RPMI 1640 medium containing 10% (v/v) fetal bovine serum and maintained at 0.5−1.0 × 10 6 cells mL −1 in a humidified atmosphere containing 5% CO 2 at 37 °C. Cell viability was determined by the trypan blue exclusion test. Cells were resuspended in fresh medium 24 h before treatments to ensure exponential growth. Cytotoxic Activity The cytotoxic activity of incomptines A (IA) and B (IB) against human leukemia cell lines was assessed using the colorimetric 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyltetrazolium bromide (MTT) test. Exponentially growing cells of the U-937, HL-60, K-562, and REH cell lines were seeded in 96-well plates at a density of 5.0 × 10 3 cells per well in 100 µL and were treated with five serial concentrations between 0.1 µM and 5.0 µM of HTSLs or etoposide or methotrexate for 24 h under 5% CO 2 and 95% O 2 at 37 °C. The compounds were dissolved in DMSO; the final concentration of DMSO used was 0.1% (v/v) for each sample. Cells (U-937, HL-60, K-562, or REH) treated with 0.1% DMSO served as the control group. After incubation for the specified times, MTT reagent (10 µL, 5 mg dissolved in 1 mL of PBS) was added to each well and incubated for 4 h. The plates were centrifuged (10 min at 350 × g), and the purple formazan crystals formed from the yellow tetrazolium salt metabolized by viable cells were dissolved in 150 µL of DMSO. Absorbance was quantified at 570 nm using an ELISA plate reader. Results were expressed as a percentage of viability, with 100% representing control cells treated with 0.1% DMSO alone. Then, the CC 50 was determined. This was defined as the treatment concentration at which a 50% reduction in cellular proliferation was observed. This was calculated graphically using the curve-fitting algorithm of the computer software Prism 5.03 (GraphPad, La Jolla, CA, USA). Values were calculated as means ± S.E.M from three independent experiments, each performed in triplicate. 
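The CC50 values described above were obtained with the curve-fitting algorithm in GraphPad Prism. As a rough illustration of the same calculation in open-source tooling, the sketch below fits a four-parameter logistic dose-response model to hypothetical viability data and reads off the CC50; the concentrations and responses are invented for the example and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data (µM, % viability); not values from the study.
conc = np.array([0.1, 0.5, 1.0, 2.5, 5.0])
viability = np.array([95.0, 78.0, 55.0, 30.0, 12.0])

def four_pl(x, bottom, top, cc50, hill):
    """Four-parameter logistic (sigmoidal) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (x / cc50) ** hill)

# Initial guesses: 0-100% plateaus, CC50 near 1 µM, Hill slope of 1.
params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 1.0, 1.0])
bottom, top, cc50, hill = params
print(f"Estimated CC50 ≈ {cc50:.2f} µM (Hill slope {hill:.2f})")
```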
Molecular Docking of Incomptine A (IA), Incomptine B (IB), Etoposide (ET), and Methotrexate (MTX) The structures of incomptine A (IA), incomptine B (IB), etoposide (ET), and methotrexate (MTX) were created and prepared using MOE software [28]. The three-dimensional structures of several proteins involved in cancer were retrieved from the Protein Data Bank (https://www.rcsb.org, accessed on 16 February 2022) with the following access codes: human topoisomerase IIα (PDB id: 5GWK, Resolution: 3.15 Å), human topoisomerase IIβ (PDB id: 3QX3, Resolution: 2.16 Å), human dihydrofolate reductase (PDB id: 3EIG, Resolution: 1.70 Å), human methylenetetrahydrofolate dehydrogenase (PDB id: 6ECQ, Resolution: 2.70 Å), and human B-cell lymphoma 2 protein (PDB id: 4LVT, Resolution: 2.05 Å). Molecular targets and ligands (incomptine A, incomptine B, etoposide, and methotrexate) were submitted to MOE software. All water molecules and co-crystal ligands were removed from the crystallographic structures. Then, hydrogen atoms were added, non-polar hydrogen atoms were merged, and Gasteiger charges were assigned to all molecules (ligands and proteins). Next, the torsions of the compounds were allowed to rotate during the docking study. The molecular docking experiments were carried out using AutoDock Vina with 20 modes and an exhaustiveness value of 16 [29,30]. The active site of each target was covered with a grid of the proper size. The grid was centered at the following coordinates: Molecular Docking Validation The molecular docking protocol was validated through a re-docking of the co-crystal ligands into the binding site of each pharmacological target. The conditions to reproduce the binding mode of the co-crystallographic ligands were established after re-docking; we found that the root-mean-square deviations (RMSDs) between the co-crystal ligands and the re-docked structures were less than 2.0 Å for all targets. These conditions allowed us to obtain good predictions for the compounds of interest. Statistical Analysis The inhibition percentage was plotted against concentration; the best straight line was determined by regression analysis, and the CC 50 was calculated. All data were expressed as mean ± standard deviation of nine measurements. Statistical analysis of data was performed using one-way ANOVA. A maximum probability value of p < 0.05 was considered statistically significant, and pairwise differences were analyzed by Bonferroni post hoc test. Analyses were performed using GraphPad Prism Version 5.03 (GraphPad Software Inc., La Jolla, CA, USA). Conclusions The heliangolide-type sesquiterpene lactones, incomptines A (IA) and B (IB), isolated from the dichloromethane extract of the aerial parts of Decachaeta incompta, showed significant cytotoxic activity against four human leukemia and lymphoma cell lines: U-937, HL-60, K-562, and REH cells. In addition, molecular docking studies suggest that the cytotoxic effects may be explained by the affinity of these compounds for the five proteins used as targets and associated with different cancer processes, including topoisomerase IIα (TIIα), topoisomerase IIβ (TIIβ), dihydrofolate reductase (DHFR), methylenetetrahydrofolate dehydrogenase (MTHFD), and Bcl-2-related protein A1 (BCL-2). Although more research must be carried out to discover how both HTSLs cause cell death, these compounds have pharmacological potential as anticancer agents, specifically against leukemia. Both HTSLs, IA and IB, should be considered for additional preclinical and in vivo studies. Informed Consent Statement: Not applicable. 
Data Availability Statement: The data presented in this study, or additional data, are available on request from the corresponding author.
5,825.4
2022-03-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Decreased K5 receptor expression in the retina, a potential pathogenic mechanism for diabetic retinopathy. Purpose Plasminogen kringle 5 (K5) is a potent angiogenic inhibitor and specifically binds to the voltage-dependent anion channel believed to function as the K5 receptor (K5R). To investigate the role of K5R in diabetic retinopathy, the present study measured the expression levels of K5R in the retina of diabetic retinopathy models. In cultured retinal Müller cells, K5 inhibited vascular endothelial growth factor (VEGF) expression as shown with enzyme-linked immunosorbent assay and western blot analysis, suggesting that K5 has a direct effect on Müller cells. Methods To identify K5R in retinal Müller cells, ligand binding and competition assays as well as real-time reverse transcriptional polymerase chain reaction were performed in Müller cells. 125I-K5 showed saturable binding to cultured Müller cells. The binding can be competed off by an excess amount of unlabeled K5 but not by angiostatin, demonstrating the specificity of the K5 binding to Müller cells. Consistent with the binding assay, reverse transcriptional polymerase chain reaction using voltage-dependent anion channel–specific primers detected the K5R mRNA in the Müller cells. Results Interestingly, K5R mRNA expression in Müller cells was downregulated by diabetic conditions including hypoxia and high glucose medium. Incubation with K5 ligand prevented hypoxia-induced downregulation of K5R. Furthermore, K5R expression was also downregulated in the retina of the oxygen-induced retinopathy model, a model of ischemia-induced retinal neovascularization. In a type 1 diabetic rat model, K5R expression in the retina was significantly suppressed in rats that had diabetes for 5 and 8 weeks. Conclusions These results suggest that K5R is expressed in retinal Müller cells, which may mediate the inhibitory effect of K5 on VEGF expression. In diabetes conditions, K5R expression levels are decreased in the retina, which could contribute to the VEGF overexpression in diabetic retinopathy. These findings suggest that the decreased levels of K5R may also play a pathogenic role in diabetic retinopathy. Proliferative diabetic retinopathy (PDR) is characterized by retinal neovascularization (NV), a major cause of blindness [1]. Several endogenous angiogenic inhibitors as well as proangiogenic factors exist in ocular tissues such as the retina and vitreous [2,3]. Endogenous angiogenic inhibitors and proangiogenic factors are in a delicate balance that regulates angiogenesis. Under certain pathological conditions such as PDR, endogenous angiogenic inhibitors have been found to decrease while the proangiogenic factors increase in the retina and vitreous, leading to a disturbed balance in angiogenic control and consequently retinal NV [4][5][6]. Plasminogen kringle 5 (K5), a proteolytic fragment of plasminogen, is a potent angiogenic inhibitor [7]. Previous studies have shown that K5 inhibits endothelial cell (EC) proliferation and migration and induces EC apoptosis [7,8]. Intravitreal injection of K5 inhibited retinal NV and reduced retinal vascular leakage in diabetic retinopathy (DR) models [9]. Furthermore, K5 and its deletion fragment also displayed inhibitory effects on tumor growth via blocking angiogenesis [10,11]. 
Adeno-associated virus-mediated delivery of K5 was found to inhibit growth of ovarian cancer and tumor NV [12]. A recent study reported that nanoparticle-mediated K5 gene delivery had sustained inhibitory effects on retinal vascular leakage in diabetic rats and ischemia-induced retinal NV [13]. Therefore, K5 is believed to be a promising therapeutic peptide for treating DR as well as solid tumor [13]. Our previous studies showed that K5 attenuates hypoxiainduced vascular endothelial growth factor (VEGF) overexpression in vascular cells [14]. However, the molecular mechanism for K5′s antiangiogenic effects has not been elucidated. Researchers reported previously that voltage-dependent anion channels (VDACs) are expressed on the cell surface of EC [15]. Furthermore, K5 has been shown to bind to VDAC with high specificity and affinity in EC, correlating with K5′s antiproliferative activities in cultured EC. The binding of K5 to EC can be blocked by antibodies specific for VDAC. Based on these findings, VDAC has been proposed as the K5 receptor (K5R), mediating its antiangiogenic effects [15]. However, the role of K5R in angiogenic disorders and PDR has not been established. The cellular localization of K5R in the retina has not been defined. Retinal Müller cells are the major producer of proangiogenic factors such as VEGF in DR and play a key role in retinal NV [3,16]. Since K5 inhibits VEGF expression under hypoxia, we hypothesize that K5 directly inhibits angiogenic factor production from retinal Müller cells in diabetic conditions. The present study identified K5R in retinal Müller cells and determined the expression levels of K5R under diabetic conditions. Materials and cell line: Expression and purification of recombinant K5 were performed as described previously [14]. The K5/pET22 construct was introduced into E. coli strain BL-21/DE3 (Novagen, Madison, WI). This vector provides a signal peptide that enables the recombinant protein to enter the periplasmic space. The expression and purification followed the protocol recommended by Novagen with some modifications. Briefly, expression was induced by the addition of isopropylthio-β-galactoside (IPTG) and performed for 10 h at 25 °C. Periplasmic proteins were released by digestion with lysozyme and separated from cells by centrifugation. K5 was purified by passing through the His-Bind column (Novagen). The purity and identity of recombinant K5 was examined by SDS-PAGE and western blot analysis using an antibody specific to His-tag (Oncogene Research Products, Cambridge, MA).The rat Müller cell line, rMC-1, a kind gift from Dr. Vijay Sarthy at Northwestern University, was cultured in low glucose Dulbecco's Modified Eagle's Medium (DMEM; Gibco BRL, Gaithersburg, MD) containing 10% fetal bovine serum (FBS; Invitrogen, Carlsbad, CA) [17]. The cultured cells were exposed to media containing 1% FBS for 4 h before proteins were added. Enzyme-linked immunosorbent assay and western blot analysis of vascular endothelial growth factor: Müller cells were seeded in 100-mm dishes and cultured in a CO2 incubator to reach 60%-70% confluence. The cells were washed three times with PBS (pH 7.4, 8g NaCl, 0.2g KCl, 3.628 g Na2HPO4•12H2O, 0.24 g KH2PO4 metered volume to 1,000 ml with distilled water), and the growth medium replaced with a serum-free DMEM and exposed to 1% oxygen. K5 was added to the medium in various concentrations (50, 100, 200, 400, and 800 nM) and incubated with the cells for 24 h. The cells cultured under normoxia were used for control. 
The cells were harvested, and the protein concentration was measured with the BioRad protein assay (Bio-Rad, Hercules, CA). Equal amounts of total cellular protein (35 μg) from each group were used for western blot analysis using an anti-VEGF antibody with the ECL Detection System (Amersham International plc, Piscataway, NJ). The same membrane was stripped and reblotted with an antibody specific to β-actin. VEGF secreted into the culture medium was measured with an enzyme-linked immunosorbent assay (ELISA) kit (R&D Systems, Inc., Minneapolis, MN) specific for VEGF. Binding of 125 I-labeled K5 to Müller cells: K5 was labeled with 125 I using the Chloromine T 125 I Labeling Kit (ICN Pharmaceuticals, Inc. Costa Mesa, CA) following a protocol recommended by the manufacturer. For the binding assay, Müller cells were seeded in 12-well plates and cultured until confluence. Cells were washed, and the culture medium was replaced with binding buffer (PBS containing 3 mM CaCl2, 1 mM MgCl2, and 5 mg/ml BSA). 125 I-K5 was added to various concentrations (0, 6.25, 12.5, 25, 50, 75, 100, 150, 200, and 250 nM) and incubated with the cells for 1 h with gentle shaking at 4 °C. The medium was removed, and cells washed three times with PBS. The cells were then lysed by adding 0.35 ml 10% sodium dodecyl sulfate. The cell lysates were collected, and the 125 I-K5 bound to Müller cells was quantified with a gamma counter (Perkin Elmer, Watham, MA). Competition with 125 I-K5 for binding to Müller cells by unlabeled K5 or angiostatin: The cells were incubated with 50 nM 125 I-K5 in the presence of increasing concentrations (0, 50, 250, 1,250, and 6,250 nM) of unlabeled K5 or angiostatin as described above. After washing, the bound 125 I-K5 was quantified. Streptozotocin-induced diabetes: Brown Norway (BN) rats were purchased from Harlan (Indianapolis, IN). Care, use, and treatment of all animals in this study were in strict agreement with the Association for Research in Vision and Ophthalmology Statement for the Use of Animals in Ophthalmic and Vision Research. Eight-week-old BN rats were given a single intravenous injection of STZ (50 mg/kg in 10 mmol/l of citrate buffer, pH 4.5) after an overnight fasting. Control rats received injections of citrate buffer alone. Serum glucose levels were checked 24 h after STZ injection and every 2 days thereafter, and only the animals with glucose levels higher than 350 mg/dl were considered diabetic [18,19]. Oxygen-induced retinopathy: Induction of retinal neovascularization was performed as described by Smith et al. [20], with minor modifications. Briefly, newborn BN rats at P7 were exposed to hyperoxia (75% O2) for 5 days (P7-12) and then returned to normoxia (room air) to induce retinal neovascularization. Control rats were maintained in constant room air. Real-time reverse transcription-polymerase chain reaction: Real-time reverse transcription-polymerase chain reaction (RT-PCR) was performed as described previously [21]. The primers used for human VDAC1 (5′-AAC ACT CGC TTT GGA ATA AC-3′ and 5′-AGT CCT AAA CCA AGC TTG TG-3′) amplified a 180-bp single-band product. The 18S rRNA was amplified using primers (5′-TGC TGC AGT TAA AAA GCT CGT-3′, and 5′-GGC CTG CTT TGA ACA CTC TAA-3′) to normalize the K5R mRNA levels. Statistical analysis: The Student t test was used in all statistical analyses. A p value of less than 0.05 was considered statistically significant. 
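The saturation binding experiment described above (increasing 125I-K5 concentrations, with bound counts measured by gamma counter) is commonly analysed by fitting a one-site binding model, B = Bmax·[L]/(Kd + [L]). The following sketch illustrates such a fit on hypothetical count data; the numbers are invented for the example and do not come from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical saturation binding data: ligand concentration (nM) vs. bound signal (cpm).
ligand_nM = np.array([6.25, 12.5, 25, 50, 75, 100, 150, 200, 250], dtype=float)
bound_cpm = np.array([850, 1500, 2300, 3100, 3500, 3750, 4000, 4150, 4200], dtype=float)

def one_site(L, bmax, kd):
    """One-site specific binding: B = Bmax * L / (Kd + L)."""
    return bmax * L / (kd + L)

# Initial guesses: Bmax near the plateau, Kd of a few tens of nM.
(bmax, kd), _ = curve_fit(one_site, ligand_nM, bound_cpm, p0=[4500.0, 30.0])
print(f"Bmax ≈ {bmax:.0f} cpm, Kd ≈ {kd:.1f} nM")
```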
K5 attenuated the hypoxia-induced vascular endothelial growth factor overexpression in cultured Müller cells: Overexpression of VEGF in the retina induced by hypoxia plays a key role in retinal NV [6]. Since retinal Müller cells are the major source of VEGF in DR, we determined whether K5 has a direct inhibitory effect on VEGF expression in Müller cells. The rat Müller cell line, rMC-1, was exposed to 1% oxygen to induce hypoxia. As shown with enzyme-linked immunosorbent assay (ELISA) specific for VEGF, the secreted VEGF in the conditioned medium was significantly induced by hypoxia, and this induction was attenuated by K5 in a concentration-dependent manner (Figure 1A). Western blot analysis showed that K5 also blocked the increase in cellular VEGF levels induced by hypoxia in Müller cells (Figure 1B). Figure 1. K5 inhibited the hypoxia-induced vascular endothelial growth factor overexpression in cultured Müller cells. rMC-1 cells were exposed to normoxia (20% oxygen) and hypoxia (1% oxygen) with different concentrations of K5 for 24 h. A: VEGF secreted to the conditioned medium was measured using ELISA specific for VEGF, normalized by total protein concentration and expressed as picograms of VEGF per milligram of total protein (mean±SD, n=4). Values significantly lower than control are indicated (*p<0.05). B: VEGF levels in cell lysates were measured with western blot analysis using 50 µg cellular proteins. The membrane was stripped and then reblotted with an anti-β-actin antibody. Detection of K5R expression in retinal Müller cells: To test the hypothesis that the effect of K5 in Müller cells is mediated by a receptor for K5, we used a K5 binding assay. The cultured rMC-1 cells were incubated with increasing concentrations of 125 I-K5. After unbound 125 I-K5 was removed by washing, the cells were lysed, and the 125 I-K5 bound to the cells was quantified using a gamma counter, which showed a concentration-dependent and saturable binding of 125 I-K5 to Müller cells (Figure 2A). We calculated the Kd as 31 nM, comparable to that reported by Gonzalez-Gronow et al. [15]. To further demonstrate the specificity of the K5 binding, the Müller cells were incubated with 50 nM 125 I-K5 in the presence of excess amounts of unlabeled K5 or unlabeled angiostatin, kringles 1-4 of plasminogen [22]. The results showed that binding of 125 I-K5 on the Müller cells was competed off by excess amounts of unlabeled K5 but not by angiostatin (Figure 2B,C). The unlabeled K5 inhibited the binding of 125 I-K5 in a concentration-dependent manner, demonstrating the specificity of the binding of 125 I-K5 to Müller cells (Figure 2B). These results suggest that K5R is expressed in retinal Müller cells. K5R expression was downregulated by hypoxia and high glucose in Müller cells: Previous studies have shown that the expression of VEGF is upregulated in retinal Müller cells [16]. Since K5 inhibits VEGF expression under hypoxia in Müller cells, we determined whether K5R expression is altered under diabetic conditions, which may contribute to the overexpression of VEGF. We compared expression levels of the K5R mRNA in the cells exposed to high glucose medium (25 mM glucose) with those in low glucose medium (5 mM glucose + 20 mM mannitol control). Real-time RT-PCR using specific primers for K5R showed that K5R mRNA levels were significantly decreased by high glucose exposure, compared to that in the low glucose medium (Figure 3). Similarly, the cells were exposed to 1% oxygen to induce hypoxia, and the K5R mRNA was quantified and compared to that in the normoxia control. Real-time RT-PCR showed that K5R mRNA levels were also significantly decreased by hypoxia (Figure 3). Figure 3. K5R expression was down-regulated by hypoxia and high glucose. Müller cells were exposed to hypoxia (1% oxygen) or normoxia (20% oxygen) at 37 °C for 12 h. The cells were cultured in high glucose (25 mM glucose) and low glucose (5 mM glucose + 20 mM mannitol) media for 24 h. Total RNA was isolated from the cells and used for real-time RT-PCR of K5R, normalized by the 18s RNA level and expressed as % of respective control (mean±SD, n=3). Values significantly lower than control are indicated (*p<0.05). Hypoxia and high glucose significantly downregulated the expression of the K5R mRNA in cultured Müller cells. K5 attenuated the downregulation of K5R expression: To determine the impact of K5 on the expression of K5R, Müller cells were exposed to hypoxia in the presence of purified recombinant K5. As shown with real-time RT-PCR, hypoxia significantly decreased K5R mRNA levels, while adding the K5 peptide attenuated the hypoxia-induced downregulation of K5R (Figure 4). This observation suggests that preventing the decrease in K5R by K5 may represent a new mechanism of its action. K5R expression was downregulated in the retina of the oxygen-induced retinopathy rat model: To investigate whether K5R is also downregulated in the retina of DR models, we employed rats with oxygen-induced retinopathy (OIR), a commonly accepted model for PDR [20,23]. At age of P14 and P16, the OIR rats showed significantly decreased K5R mRNA levels in the retina, compared to those of age-matched rats at constant normoxia (Figure 5). This finding suggests that the K5R decline in this model may be another pathogenic mechanism for retinal NV in OIR rats. K5R expression is suppressed in the retina of diabetic rats: K5R expression was also measured in the retina of STZ-induced diabetic rats, a type 1 diabetes model. Real-time RT-PCR showed that K5R mRNA levels in the retina declined in rats that had diabetes for 5 and 8 weeks, compared to that in the non-diabetic control. The decrease of the K5R mRNA levels appeared to be dependent on the duration of diabetes (Figure 6), suggesting that the decrease in K5R may also contribute to retinal vascular leakage in this model. DISCUSSION Previous studies have shown that endogenous angiogenic inhibitors are downregulated in the vitreous from patients with DR and in the retina of DR models [4,5]. The decrease of angiogenic inhibitors and overexpression of proangiogenic factors disturb the balance of angiogenesis regulation, leading to DR [23,24]. The present study showed for the first time that the receptor for the angiogenic inhibitor K5 is also downregulated in the retina of DR models, suggesting that decreased expression of receptors for angiogenic inhibitors may weaken the antiangiogenic action and, thus, represents a new pathogenic mechanism for DR. Several groups have independently demonstrated that K5 is a potent angiogenic inhibitor, as it inhibits EC proliferation and migration [7,9]. Toward its mechanism of action, Gonzalez-Gronow et al. [15] identified VDAC1 as the receptor for K5, which is expressed on the surface of EC. The expression of K5R on EC can explain the direct inhibitory effect of K5 on EC proliferation and migration [15]. Our previous studies have shown that K5 inhibits VEGF overexpression in the retina of DR models [14]. 
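The K5R mRNA levels reported above were quantified by real-time RT-PCR, normalized to 18S rRNA, and expressed as a percentage of the control. One common way to perform such normalization from raw Ct values is the 2^-ΔΔCt method; the sketch below illustrates that calculation with made-up Ct values and is not intended to imply that this exact pipeline was the one used in the study.

```python
import numpy as np

# Hypothetical Ct values (triplicates); not data from the study.
ct_k5r_control = np.array([24.1, 24.3, 24.0])
ct_18s_control = np.array([12.0, 12.1, 11.9])
ct_k5r_treated = np.array([25.6, 25.8, 25.5])   # e.g., hypoxia or high glucose condition
ct_18s_treated = np.array([12.1, 12.0, 12.2])

# 2^-ΔΔCt: normalize K5R to 18S within each condition, then to the control condition.
d_ct_control = ct_k5r_control.mean() - ct_18s_control.mean()
d_ct_treated = ct_k5r_treated.mean() - ct_18s_treated.mean()
dd_ct = d_ct_treated - d_ct_control
relative_expression = 2.0 ** (-dd_ct)

print(f"K5R expression in treated cells ≈ {relative_expression * 100:.0f}% of control")
```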
Retinal Müller cells play a key role in DR, as they are the major source of inflammatory and angiogenic factors such as VEGF [16,25]. Our results showed that K5 directly inhibits VEGF expression in cultured Müller cells. The mechanism for the direct effect of K5 on VEGF expression in Müller cells is unclear. To investigate whether the receptor mediates the K5 effects on Müller cells, we performed a 125 I-K5 binding assay. The assay demonstrated that K5 has saturable and specific binding to cultured Müller cells. The binding of K5 is reversible and can be competed off by excess amounts of unlabeled K5. RT-PCR using K5R-specific primers detected K5R mRNA in cultured Müller cells. These results suggest that K5R is expressed in Müller cells. Interestingly, the K5 binding to K5R is not competed off by angiostatin, kringles 1-4 of plasminogen, suggesting that K5R is specific for K5 and not the receptor for plasminogen or for kringles 1-4. Expression of K5R in Müller cells could explain the direct effect of K5 on Müller cells in downregulation of VEGF expression. Figure 4. K5 induced K5R expression in Müller cells. Müller cells were exposed to 1% oxygen in the presence of 200 nM K5 at 37 °C for 12 h. Total RNA was isolated from the treated cells. The K5R mRNA was quantified with real-time RT-PCR and normalized by 18s RNA levels (mean±SD, n=3). The copies of K5R mRNA in hypoxia group were significantly lower than that in normoxia group (*p<0.05). K5 prevented the hypoxia-induced downregulation of K5R mRNA expression in cultured Müller cells. To study whether K5R expression is altered in retinal Müller cells under diabetic conditions, we exposed the cells to hypoxia and high glucose medium, as hypoxia and high glucose are the major causative factors for retinal inflammation and NV in DR, as both have been shown to induce expression of proangiogenic and proinflammatory factors [20]. Our results showed that hypoxia and high glucose concentration downregulate K5R expression. The weakened effect of K5 due to the decrease in its receptor in diabetic conditions may contribute to the overexpression of VEGF in Müller cells under diabetic conditions. In this study, rMC-1 was used as an in vitro model to study the expression of K5R. rMC-1 is a widely used Müller cell line, since this line is derived from rat Müller cells and expresses Müller cell markers [26,27]. Figure 5. K5R expression was down-regulated in the retina of the oxygen-induced retinopathy rat model. Rats were exposed to 75% oxygen from P7 to P12 of age and then returned to room air at P12. Total RNA was isolated from the retina of the OIR rats at P14 and P16. mRNA levels of K5R were quantified with real-time RT-PCR and normalized with 18s RNA levels and expressed as % of that in age-matched normal rats (mean±SD, n=3). Values significantly lower than control are indicated (*p<0.05). K5R mRNA levels were significantly decreased in the retina of the OIR models, compared to that in normal rats. Using the cloned cell line excludes potential confusion from contamination of other cell types in primary Müller cell culture. However, the changes in K5R observed in this cell line remain to be confirmed in Müller cells in the retina in the future. STZ-induced diabetes is a commonly used model for nonproliferative DR, as this model develops retinal inflammation and vascular leakage but not NV [18]. The OIR model develops preretinal NV induced by ischemia [20]. 
Although the OIR model is not a diabetic model, this model is commonly used as a model for PDR, as the pathological changes and pathogenic mechanism are similar to PDR [20]. Our in vivo results showed that K5R levels in the retina decreased in both models. Since K5 has anti-inflammatory, antipermeability, and antiangiogenic activities, the decreased K5R may contribute to retinal vascular leakage and NV in these models. The decreased K5R may represent a new mechanism leading to the weakened antiangiogenic actions of the endogenous angiogenic inhibitor and disturbing the balance between proangiogenic and antiangiogenic systems. Therefore, upregulation of K5R may become a new, promising therapeutic strategy for DR. Figure 6. K5R expression is suppressed in the retina of diabetic rats. Diabetes was induced in adult rats with an injection of STZ and monitored by blood glucose levels. The rats with blood glucose levels higher than 350 mg/dl were used as diabetic rats. Retinas were isolated from diabetic rats at 2, 5, and 8 weeks after the STZ injection. Retinal mRNA levels of K5R were quantified with real-time RT-PCR and normalized by the 18s RNA level. The normalized K5R mRNA levels were expressed as % of that in non-diabetic control (mean±SD, n=3). Values significantly lower than control are indicated (*p<0.05). Diabetic rats showed significantly reduced K5R mRNA levels in the retina, compared to non-diabetic rats.
4,930.8
2012-02-04T00:00:00.000
[ "Biology", "Medicine" ]
The Use of a Technology Acceptance Model (TAM) to Predict Patients’ Usage of a Personal Health Record System: The Role of Security, Privacy, and Usability Personal health records (PHR) systems are designed to ensure that individuals have access and control over their health information and to support them in being active participants rather than passive ones in their healthcare process. Yet, PHR systems have not yet been widely adopted or used by consumers despite their benefits. For these advantages to be realized, adoption of the system is necessary. In this study, we examined how self-determination of health management influences individuals’ intention to implement a PHR system, i.e., their ability to actively manage their health. Using an extended technology acceptance model (TAM), the researchers developed and empirically tested a model explaining public adoption of PHRs. In total, 389 Saudi Arabian respondents were surveyed in a quantitative cross-sectional design. The hypotheses were analysed using structural equation modelling–partial least squares (SEM-PLS4). Results indicate that PHR system usage was influenced by three major factors: perceived ease of use (PEOU), perceived usefulness (PU), and security towards intention to use. PHR PEOU and PHR intention to use were also found to be moderated by privacy, whereas usability positively moderated PHR PEOU and PHR intention to use and negatively moderated PHR PU and PHR intention to use. For the first time, this study examined the use of personal health records in Saudi Arabia, including the extension of the TAM model as well as development of a context-driven model that examines the relationship between privacy, security, usability, and the use of PHRs. Furthermore, this study fills a gap in the literature regarding the moderating effects of privacy influence on PEOU and intention to use. Further, the moderating effects of usability on the relationship between PEOU, PU, and intention to use. Study findings are expected to assist government agencies, health policymakers, and health organizations around the world, including Saudi Arabia, in understanding the adoption of personal health records. Concerning the above, individuals should have access to their health information and be able to control it through the use of personal health records (PHR) so that they can actively participate in the management of their healthcare and eliminate the role of the passive patient [36,38,40,50,51]. However, users must continuously invest effort into keeping up-to-date accounts to ensure that the system can effectively support them. This effort will reduce the likelihood of inaccurate, incomplete, and outdated records in the system, as these may lead to erroneous decisions [31]. An example of an emerging class of information system is the patient health record system, which offers access to and monitoring of useful information that is accompanied by the requirement for ongoing maintenance (for example, regular updates of a patient's health records), thereby supporting an individual's active role within the context for which the information system was designed [32,36,38,[43][44][45]. Healthcare system users should have the ability to be more proactive in managing the information systems, must reflect suitable personal traits, and take support from the factors in the environment to promote their active role [8]. This facilitates a sufficient motivation level towards system use regardless of continuous maintenance [52]. 
However, despite industry predictions about increasing consumer interest and government commitment to PHR technologies, their adoption has yet to peak and continues to fall short of expectations. The expression "PHR paradox" has been used to explain the disconnection between active interest and low usage rates of PHRs [53]. Several reasons have been proposed for the lag in adoption in the literature, which often run contrary to intuition; often, the results are mixed [38,54,55]. As a result, authors have urged more studies in the area of consumer PHR adoption [34,37,38,56,57]. More specifically, little information about health technologies in Saudi Arabia is available due to the lack of research [58]. Several studies have overlooked the perspective of healthcare consumers (users) concerning implementing and using an integrated PHR system at the national level [39]. Therefore, this study extends TAM by examining the factors influencing healthcare consumers' usage of personal health records. Literature Review PHR Adoption The literature on PHR has indicated that adoption barriers may be linked to technology factors such as security concerns, system usability, and ineffective healthcare provider system integration [37,41,[59][60][61]. Several personal factors have also been articulated as barriers to adopting these technologies, such as lack of technology awareness, competency, chronic medical conditions, and unrealistic expectations [41,57,[61][62][63][64][65]. Although several of these factors have been empirically validated, there is often a lack of consistency in the results between studies [41,55,61,[66][67][68][69]. The reviewed relevant studies show that chronically ill or disabled patients and their caregivers and older people's caregivers have a higher likelihood of adopting and using PHR technologies [70][71][72][73]. This user group often views PHR technologies as useful in communicating with the correct personnel to obtain personalised care [37,71,74]. According to a recent study, several factors contribute to PHR adoption, namely computer anxiety, concerns about privacy and security, and perceptions of usefulness, among others [54,55,68,[75][76][77][78]. Meanwhile, studies focusing on several adoption factors, including health literacy among consumers, user self-efficacy, and usability perceptions, have shown mixed or contradictory results when evaluating adoption [54,55,79]. Regarding major areas that need more investigation, the review showed that PHR adoption has yet to be thoroughly examined. In this regard, the aim of this study was to examine factors such as privacy, security, and usability (exogenous predictors of TAM) to offer insight into the utilization and adoption of personal health records. The current study contributes significantly to the technology-acceptance literature in two distinct ways. First, it is the first to examine the use of PHR in Saudi Arabia, including the extension of the TAM model; second, it creates a context-driven model that focuses on the associations among privacy, security, usability, and personal health record utilization. Additionally, this study fills a gap in the literature regarding the moderating effect of privacy on the relationship between perceived ease of use and intention to use. Further, the moderating effects of usability on the relationship between perceived ease of use, perceived usefulness, and intention to use were investigated. 
The proposed model enriches information and knowledge regarding the acceptance of PHRs in developing nations, and by doing so, it helps satisfy calls for contextual theorising in the information systems field. The next section of the article presents a description of the proposed model and the constructs and hypotheses that are relevant to it. Theoretical Foundation This study adopted TAM as the underpinning theory owing to its influential and effective nature in shedding light on technology usage behaviour [16,18,[80][81][82][83][84][85][86]. TAM posits that technology use behaviour, referred to as the behaviour inclination towards accepting technology, can be measured through a user's attitude towards using technology [87]. Two main attitude predictors towards usage have been proposed: perceived usefulness and easiness [87,88]. The first refers to the belief of an individual that using technology can promote performance of task; the second defines the perception of an individual that technology use is free from effort [87,88]. Additionally, perceived easiness indirectly influences perceived usefulness attitudes [16,18]. Studies have found that TAM can effectively explain differences in technology use behaviour in different contexts and situations, including the health context, for eHealth records (EHRs) [89], telehealth [90], mobile health technologies [16,18], cloud-based services [91], medical devices and telemonitoring tools [92,93], and assistive technology [94]. However, despite the comprehensive inspection and validation of models in terms of health information systems among health professional staff, such examinations do not address consumers' acceptance of health information systems [16,18,[95][96][97], and based on the provided evidence, such acceptance may vary from that of professionals with self-efficacy and experience, as a result of which challenges may be faced during system use [12,16,95,98]. Hence, searching for ways to enhance PHR acceptance among consumers is pertinent. Additionally, TAM has the same weaknesses as other technology acceptance models, the first of which is that it depends on other factors to determine the attitude of individuals. In general, TAM has been widely employed to investigate internal motivations rather than external ones, as its focus is on the outcomes of IT use. The use process has been largely overlooked, highlighting the need to include external factors in the model. Consequently, a TAM extension with new variables may be able to explain PHR adoption. This study included privacy, security, and usability to extend TAM. Both privacy and security have been researched in literature, with increasing evidence validating their influence [99][100][101][102][103][104][105]. Based on a systematic review of PHR privacy policies, users are not provided with detailed descriptions of the security issues and adherence to standards and regulations when it comes to a PHR system [104,106]. This may be exemplified by the significant advantages of PHR use and systems privacy risks, with emphasis confined to general privacy and trust issues [99,101,104]. Both security and privacy are major challenges in protecting health information systems, and even though the system's success depends on various factors such as organisational, technical, and political issues, the authentication and cryptographic management (privacy and trust issues) for prevention of hacker attacks and unauthorised use is of major importance [99,104]. 
In addition, prior results show that a new system's usability, design, and user experience contribute to its acceptance; in this regard, the usability of a system can be defined as the amount of effort that must be expended to use it. In general, usability is the degree to which users can effectively and efficiently use a product and the extent to which they are satisfied that they will achieve specific goals through employing it. Prior studies show that usability is key to the acceptance and use of PHRs [32,41,66] and that PHR ease of use should be facilitated through the user interface and patient support [38,107,108]. Thus, PHR stakeholders, including designers and developers, should focus on usability aspects. Perceived Usefulness (PU) The perceived usefulness of a technology is the degree to which an individual believes that the technology will improve his or her job performance, including efficiency and effectiveness [88]. Based on TAM studies [88,109], perceived usefulness is one of the top determinants of technology adoption [110,111]. It is therefore expected that the perceived usefulness of PHR systems will play a key role in deciding whether they are adopted. Past studies have confirmed the key role of perceived usefulness in predicting adoption [55,68,79,112]. In this regard, the first hypothesis is reported: Hypothesis 1 (H1). The intent to use PHR is positively influenced by PU. Perceived Ease of Use (PEOU) The degree to which an individual believes that using a specific technology will be effort-free is known as PEOU [88]. In this study, PEOU is defined as a user's belief that PHR use is free from mental and physical effort. Studies of the PEOU-intention to use PHR relationship have generally confirmed the relationship [113,114]. Further, a significant influence of PEOU on PU and on intention towards using PHR has been reported [27,113]. In this regard, the second and third hypotheses are reported: Hypothesis 2 (H2). The PU of PHR is positively affected by PEOU. Hypothesis 3 (H3). The intent to utilize PHR is positively affected by PEOU. Intention to Use Acceptance of a new technology is primarily determined by the intention to use it, defined as an individual's desire to engage in a particular behaviour [115]. In the case of PHRs, intention is a plan to use the system, and according to Hsieh et al. [114], intention towards PHR usage is significantly related to actual use. In this regard, the fourth hypothesis is reported: Hypothesis 4 (H4). PHR usage is positively influenced by the intention to use it. Privacy and Security An essential research topic in technology acceptance is the role of privacy and security and the related empirical findings [99-102]. More specifically, information privacy is the ability of an individual to manage their personal information in interactions and exchanges with others [116,117]. Healthcare providers generally manage users' personal data and share it with other personnel; owing to this sharing, privacy is of the utmost concern [118]. Electronic communication has now become commonplace, raising issues of the privacy, confidentiality, standardisation, and accuracy of PHRs [119,120]. A related study by Kaelber et al. [121] indicated that the top concern among patients regarding electronic healthcare applications of every type is security and privacy, which holds true for PHR.
In another study, Featherman and Fuller [122] stated that privacy concerns are a central focus of potential e-services adopters. Moreover, a systematic review of PHR privacy policies found that most such policies fail to provide users with detailed descriptions of security issues and of adherence to standards and regulations [106]. When the perceived benefits of PHR are discussed, the emphasis tends to fall on general privacy and trust issues rather than on the potential system-related privacy risks [101]. Bishop et al. [123] found that 67% of people were concerned about the privacy of their personal medical records, indicating the importance of privacy from the patient's viewpoint [124]. Privacy concerns negatively influence the adoption of eHealthcare systems according to Angst and Agarwal [118], while Li et al. [101] revealed that privacy alone cannot completely explain the intention to adopt. Nevertheless, other studies, such as that by Whetstone and Goldsmith [125], found that healthcare innovativeness, privacy concerns, and perceived usefulness were the top predictors of adoption intent. According to Sabnis and Charles [126], security is a determining factor in the decision to adopt web-based PHR, along with confidentiality and privacy. If people are convinced that their personal information is shared privately and stored in a way that unauthorised parties cannot access [127], their concerns will be assuaged. However, the more individuals adopt web-based PHR, the higher the risk of information breaches; therefore, privacy and security are main concerns in protecting health systems. Successful systems depend on various factors (organisational, technical, and even political), but authentication and cryptographic management are of top importance for preventing unauthorised use and attacks by hackers [99]. In this regard, the fifth hypothesis is reported: Hypothesis 5 (H5). Security has a positive influence on the intention to use a PHR. In connection with the above, patients will be more inclined to use PHR when it is easy to use and when PHR providers assure them that the system is credible and capable of minimising privacy risk, which would lead to higher intention towards PHR usage. Hence, the sixth hypothesis is reported: Hypothesis 6 (H6). Privacy moderates the relationship between PEOU and the intention to use a PHR. Usability The usability concept may be defined as the effort needed to use a computer system. According to Nielsen [128], usability is associated with the ease with which a user can learn to manage a system, the ease of learning the fundamental system functions, the efficiency with which the site has been developed, the level of error avoidance, and the general user satisfaction with system management. On the whole, usability reflects how well users can use a particular system [105]; thus, high system usability is associated with lower difficulty in managing its functions [88]. Usability has always been considered a major predictor of intentions towards system usage [129]. The following statements summarise website usability:
• Easy understanding of the system structure, functions, interface, and content by the user;
• Simple use of the initial stages of the website;
• Speedy searching for required information;
• Ease of browsing in terms of the time and work required to obtain the expected results;
• The user's ability to control and navigate the system at any time.
Regarding health information systems, usability issues have become significant in system rejection or acceptance, as evidenced by computerised patient records, whose uptake depends on the system's usability [130-134]. Evidence indicates that usability issues directly influence patient outcomes, including opportunity costs, while indirect effects include the coping strategies required to deal with software problems, limitations, and complexity, breakdowns in communication, oversight bias, and the impact of software usability on patient safety [131-135]. In this day and age, consumers have an extensive array of personal health information-management tools at their convenience, and the ability of PHRs to satisfy their needs depends to some degree on how product designers focus on users' needs and involve users in design, testing, and system re-design. The usability of a PHR refers to the perceived ease of managing the site or of accessing and keeping track of health information online, and it is deemed a major factor in PHR development. Patients' willingness to accept a PHR depends on the user-friendliness of the system and the ease of learning how to use and browse it. Meanwhile, a complicated system will only lead to human error and dissatisfaction among users, and eventually rejection rather than acceptance will be the outcome [136]. Additionally, patients will be convinced that PHR usage is easy when they can easily learn to manage the system and memorise its fundamental functions. This would lead to higher intention towards PHR usage; in other words, PEOU and intention to use PHR will be more strongly correlated at higher levels of PHR usability. In this regard, the seventh hypothesis is reported: Hypothesis 7 (H7). The relationship between PEOU and intention to use personal health records is moderated by usability. Additionally, if patients are convinced that PHR use will enhance their health status and the quality of health services through its efficient functions and design, they will readily accept it, with higher intention towards its usage. The higher the usability of PHR, the stronger the relationship between PU and intention towards PHR usage. In this regard, the eighth hypothesis is reported: Hypothesis 8 (H8). The relationship between PU and intention to use personal health records is moderated by usability. Research Context The launch of the Kingdom of Saudi Arabia's national eHealth strategy in 2011 is in line with Vision 2030, the roadmap for the country's economic growth and development [137]. The strategy is covered under the National Transformation Program, among whose eight themes is enhancing the quality and efficiency of healthcare services through a patient-centred healthcare culture and enhanced patient involvement using technology [138]. With the introduction of eHealth into Saudi Arabian healthcare, researchers have initiated studies into its different aspects [39,51,139-147]. Most studies have examined the factors influencing intention towards PHR use at the pre-adoption stage, while a few have focused on the influencing factors at the usage stage. For instance, Al-Sahan [143] examined the perceived hindrances and challenges to PHR adoption in the Ministry of National Guard Health Affairs (MNGHA) from two perspectives: technical and social. Based on responses from 424 patients, 96.7% had a positive perception of PHR adoption, showing the patients' avid interest in PHR usage, and the majority (73.3%) expressed no concerns about confidentiality when accessing their healthcare information online. Similarly, Saudi Arabian patients' perspectives and expectations were examined by Alhur [51] in his study of PHR, which found participants to be highly interested in the system compared with other studies in developed nations. Most were inclined towards PHR use, perceiving it as valuable to their health, although some expressed security concerns regarding online records. Overall, the patients were generally optimistic about PHR enhancing their privacy online. Both studies were descriptive and did not employ TAM, UTAUT, or other IS theories. Hence, the objective of the present study was to examine privacy, security, and usability (exogenous predictors added to TAM) and their influence on PHR use. The research seeks to contribute to the technology acceptance literature in two major ways: first, it is among the few studies that have used TAM to examine PHR use in Saudi Arabia, and second, it develops a context-driven model to investigate the relationships between privacy, security, and usability and PHR use.
The hope is that the proposed model provides insight into PHR acceptance in the context of a developing nation, responding to the need for contextual theorisation in IS studies. Sample and Data Collection This quantitative study used a cross-sectional design to examine the proposed model. A questionnaire survey was the main data collection instrument, and copies were distributed to King Abdul-Aziz University faculty members, employees, and students who hold a personal health record (Shifaa platform). The survey was developed in English and then translated into Arabic, as Arabic is the mother tongue of the Saudi people. After translation, an online questionnaire was distributed through a survey link sent to selected respondents via a university email distribution group. Social media platforms were also used to share the survey link within university communities. Data collection lasted 2 months, from 30 June to 30 August 2022. Following Krejcie and Morgan's [148] table, 384 respondents were determined to be an appropriate sample size. The study retrieved 389 survey responses; upon scrutiny, all were found to be complete and usable for analysis. As mentioned, the original English survey was translated into Arabic; adopting validated instruments from past studies saves time and effort compared with developing a survey from scratch. A 5-point Likert scale was employed to measure the survey items, and the scales were adopted from past literature (see Appendix A for the measurement items). The demographic analysis showed that the respondents were mostly male (221; 56.9%), the largest age group was 17-25 years (216 respondents; 55.5%), and the most common education level was a bachelor's degree (49.1%). Moreover, 257 respondents (66% of the total) had less than a year of experience using the system. Table 1 tabulates the demographic characteristics of the respondents, covering age, education level, gender, and experience using the system. Results This study employed partial least squares structural equation modelling (PLS-SEM) to test the proposed framework, enabling the measurement and structural models to be examined simultaneously [149-151]. As well as being effective for complex models with hierarchical structures, PLS is highly effective for models with multiple relationships, indicators, and constructs [152-157]. In addition, because it relies on few rigid assumptions about the normal distribution of the data, PLS can handle problems that may arise from small sample sizes and measurement error [84,152,158-160]. PLS software (version 4.0.8.4) was employed to test the proposed model, with the first step being testing the reliability and validity of the measurement model [159,161,162]. AVE, indicator reliability, internal consistency, and discriminant validity, as previously proposed by other authors, were used to assess the measurement model, including convergent validity [8,16,84,163-166]. Table 2 contains the composite reliability (CR) values, item loadings, Cronbach's alpha (CA), and construct AVE values.
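As a brief, hedged illustration of the reliability and validity indices reported in Table 2, the sketch below computes composite reliability and AVE from standardised indicator loadings using their standard formulas. The loading values and the construct name are invented for illustration and are not taken from the study; the 0.70 and 0.50 cut-offs are the conventional thresholds cited in the text, and NumPy is assumed to be available.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of (1 - loading^2))
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + (1.0 - l ** 2).sum())

def ave(loadings):
    # Average variance extracted: mean of the squared standardised loadings
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

pu_loadings = [0.82, 0.78, 0.85, 0.74]  # hypothetical loadings for one construct
print(round(composite_reliability(pu_loadings), 3))  # 0.875 > 0.70, internally consistent
print(round(ave(pu_loadings), 3))                    # 0.638 > 0.50, convergent validity
print(round(float(np.sqrt(ave(pu_loadings))), 3))    # compare with inter-construct correlations (Fornell-Larcker)

PLS software reports these quantities automatically; the point of the sketch is only to make the thresholds quoted in the text concrete.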
Table 2 shows that all CA values exceeded 0.60, which is acceptable according to Pallant [167] and Nunnally and Bernstein [168], and that CR values exceeded 0.70 for all constructs, confirming internal consistency and the appropriateness of the constructs according to Hair et al. [166,169,170]. The results also show that the item loadings of the constructs exceeded 0.40, which is sufficient for them to be considered acceptable [170]. In terms of convergent validity, all AVE values exceeded the threshold value of 0.50 [170]. Discriminant validity was assessed using the square roots of the construct AVE values: because these were all higher than the corresponding inter-construct correlations, the discriminant validity of the constructs was confirmed [170] (see Table 3). The structural model and hypotheses were analysed using a main-effect model, whereas the moderation effects were analysed using an interaction model [163,170]. The PLS path algorithm was applied to generate the path coefficients and evaluate the significance of the effects in the study models. Following past recommendations [159,169,170], 5000 bootstrap resamples were applied (refer to Figure 1), and the significance of the path coefficients was determined for direct effects (refer to Table 4) and moderating effects (refer to Table 5). Consistent with the study's hypotheses, the results show a significant and positive impact of perceived usefulness on intention to use PHR (β = 0.161, t = 2.595, p < 0.005), supporting H1. Perceived ease of use had a significant and positive effect on the perceived usefulness of PHR (β = 0.702, t = 21.406, p < 0.001), supporting H2, and perceived ease of use also significantly influenced intention to use PHR (β = 0.578, t = 10.527, p < 0.001); thus, H3 was also supported. Intention to use PHR positively and significantly influenced actual PHR use (β = 0.642, t = 18.831, p < 0.001), supporting H4. Lastly, security positively and significantly influenced the intention to use a PHR (β = 0.109, t = 2.877), thus supporting H5. The moderating hypotheses (H6, H7, and H8) concerning privacy and usability were examined using an interaction model. The study created three latent interaction constructs to capture the interactions between the TAM factors (PU and PEOU) and the moderating constructs (privacy and usability) and their influence on intention to use PHR, the criterion variable. A bootstrapping procedure with 5000 resamples was used, and Table 5 contains the detailed results. Based on the positive path coefficient of the interaction term between privacy and perceived ease of use (β = 0.075, t = 2.008, p < 0.020), privacy positively moderated the PEOU-intention to use PHR relationship, supporting H6. As shown in Figure 2 and Table 6, privacy strengthens the positive association between perceived ease of use and intention to use PHR.
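The moderation tests above rest on interaction terms between a predictor and a moderator. As a simplified, hedged illustration only (the study estimates latent interaction constructs in PLS-SEM rather than an ordinary regression), the sketch below generates synthetic standardised scores and tests whether a product term moderates the PEOU-intention relationship; the variable names, effect sizes, and data are invented, and NumPy and statsmodels are assumed to be available.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 389  # same order of magnitude as the reported sample; the data are synthetic

peou = rng.standard_normal(n)        # standardised predictor
usability = rng.standard_normal(n)   # standardised moderator
intention = 0.58 * peou + 0.10 * usability + 0.09 * peou * usability + rng.standard_normal(n)

X = sm.add_constant(np.column_stack([peou, usability, peou * usability]))
fit = sm.OLS(intention, X).fit()
print(fit.params)    # the coefficient on the product term estimates the moderation effect
print(fit.pvalues)   # a significant product term indicates moderation

A positive product-term coefficient corresponds to the strengthening effect reported for privacy and usability on the PEOU-intention path, while a negative one corresponds to the dampening effect reported below for usability on the PU-intention path.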
Moving on to the moderating effect of usability, the moderation path coefficient (β = 0.095, t = 1.903, p < 0.029) shows that usability moderates the association between perceived ease of use and intention to use PHR, confirming H7; the result is presented in Figure 3 and Table 7. This moderating effect supports the positive association between perceived ease of use and intention to use PHR. The negative path coefficient of the interaction term between usability and perceived usefulness (β = −0.102, t = 1.747, p < 0.04) indicates a negative moderating effect of usability on the association between perceived usefulness and intention to use PHR, and as such, H8 is also supported; the results are shown in Figure 4 and Table 8. They indicate that the usability of PHRs weakens the positive association between perceived usefulness and intention to use PHR. Finally, the PLS structural model was evaluated using the coefficient of determination (R2). According to Sarstedt et al. [159], the rule of thumb for R2 values is that 0.67 is deemed substantial, 0.33 moderate, and 0.19 weak. The findings revealed that the TAM extension was successful in predicting PHR use; with the added variables, the model's predictive power increased, explaining 0.492 of the variance in patients' PHR usage and 0.548 of the variance in intention to use. The model also explained 0.492 of the variance in patients' perceived usefulness of PHR. Discussion This research validates the accuracy of TAM in predicting the use of PHR among patients by supplementing its assumptions with additional variables, thus strengthening the model's predictive ability. The results supported the significant association between perceived usefulness and consumers' intention to use PHR (p < 0.05), in line with past studies on eHealth adoption, including Alsyouf et al. [16,18], as well as on PHR adoption, including Noblin et al. [171], Abdekhoda et al. [103], and Liu [27].
These studies supported PU's role in driving users' behavioural intention towards PHR use. In other words, if patients believe that PHR can provide benefits, they will use it to enhance healthcare services. Healthcare quality is enhanced through this technology by reducing waiting times, and through health profile management users can maintain and use their health profiles more consistently. In addition, a significant association was found between perceived ease of use and perceived usefulness (p < 0.001), as in the prior literature on eHealth adoption, such as Alsyouf et al. [16,18], and on PHR adoption, such as Abdekhoda et al. [103], Liu [27], and Noblin et al. [171]. Stated clearly, people who find PHRs easy to use are more likely to use them frequently, which supports their perception of the system's value and importance. Moreover, the study findings supported a significant association between PEOU and intention to use PHR (p < 0.001), as proposed in H3 and as revealed by past literature, including Alsyouf et al. [16,18] in the eHealth context and Abdekhoda et al. [103] and Elsafty et al. [172] in the PHR adoption context. Based on this result, perceived ease of use of PHR among patients could result in increased intention towards using the system and ultimately in actual use. This result can be attributed to the importance of PEOU in PHR adoption among patients. According to the literature, consumers' acceptance of health informatics applications differs from that of health professionals [16,95,98]. As a result of the challenges they have experienced in using such systems, consumers may have low self-efficacy and a negative perception of system usability; therefore, it is necessary to assist patients in accepting PHRs. Moving on to the fourth hypothesis, which proposed that intention to use PHR influences actual PHR use, a positive influence was found (p < 0.001), in line with past findings [8,11,103]. In other words, users' behavioural intention indicates their acceptance and actual use of the technology: their intention towards PHR use is a predictor of their actual use, similar to findings reported in the eHealth context. Security was found to have a significant association with intention to use PHR (p < 0.04), supporting H5, and this significant relationship was also found by Saigi-Rubio et al. [173]. If people are convinced that their personal information is shared safely, beyond manipulation by unauthorised individuals [127], their adoption of PHR will increase. In the sixth hypothesis (H6), privacy was proposed to moderate the association between perceived ease of use and intention to use PHR, and the hypothesis was supported, indicating that privacy heightens the influence of perceived ease of use on intention to use PHR. This may be attributed to patients' belief that using PHRs becomes easier when providers of the system can minimise privacy risks and their effects, thus contributing to higher intention to use the system. Hypothesis 7 proposed a moderating effect of usability on the perceived ease of use-intention to use PHR relationship, and the findings supported it. Usability together with PEOU determines the level of PHR use among patients, in that patients who find PHR use easier when they can learn system management and memorise the basic functions will intend to use it. The higher the PHR usability, the stronger the PEOU-intention to use PHR relationship.
Finally, in the eighth hypothesis (H8), usability was posited to moderate the PU-intention to use PHR relationship. Usability was found to influence this relationship negatively. This result may be attributed to the notion that if patients believe that the PHR system falls short of meeting their needs owing to deactivated or improperly working services, this will be perceived as weak system functionality; eventually, usability will weaken the PU-intention to use PHR relationship. Conclusions In the literature, TAM has often been adopted to examine various types of eHealth application in different contexts. In this study, the focus is placed on PHR use, assuming that if users have a positive intention towards PHR usage in light of its usefulness, ease of use, usability, privacy, and security, they will increase their use and acceptance of it. TAM was adopted as the underpinning model to examine the study variables and to understand why eHealth applications in general, and PHRs in particular, have not been extensively adopted in Saudi Arabia. Three exogenous variables, namely privacy, security, and usability, were added to TAM to capture the existing values, past experiences, and needs of potential users. The SEM analysis showed that the model could predict both intention to use PHR and actual use. Perceived usefulness, perceived ease of use, and security were found to have direct influences on intention to use PHR, while privacy was relevant through its moderating effect on the PEOU-intention to use PHR relationship. Usability also positively moderated the PEOU-intention to use PHR relationship; however, usability had a negative moderating effect on the PU-intention to use PHR relationship. Limitations and Future Research This study has several limitations. The first is the nature of the study: the relationships observed in a cross-sectional survey may differ across divisions, locations, contexts, and countries, and their meaning may change over time. Future studies may therefore adopt a longitudinal design instead. The second limitation is that the data were collected through email distribution to members of a single university, albeit one of the largest Saudi universities, which limits the generalizability of the findings. While the study's target population (students, employees, and faculty members) limits generalizability, it does provide insight into how PHRs are used by a very large segment of society, one that often drives IT adoption in a community. Furthermore, PHRs are used not only by people who are ill but also by healthy individuals. This study is expected to pave the way for future studies that include other segments of society, allowing a deeper understanding of the adoption of information systems, specifically PHRs. To this end, future studies may consider different settings and employ large samples representing the same context. Moreover, future studies may adopt data collection methods other than the survey questionnaire to enable comparative studies or assessments of pre-adoption and post-adoption behaviours, which are valuable for health applications.
A qualitative approach would also enable the acquisition and observation of lived experiences, which are crucial to positional analysis, through narrative analysis or detailed explanation of the phenomenon. Future studies may also extend TAM with other external variables not examined here, such as technology self-efficacy, quality factors (service quality, system quality, and information quality), and satisfaction with the technology, all of which are important factors. Additionally, age, gender, and other demographic characteristics may be addressed. Notably, this study adopted TAM alone, without integrating it with other theories and models; future studies may integrate them and re-examine the study findings to enrich the literature and, ultimately, practice. Accordingly, this study recommends that the Population-Intervention-Environment-Transfer Model of Transferability (PIET-T) be integrated with TAM in order to develop a wider understanding of user acceptance of official systems as well as other key elements of the transferability concept. Informed Consent Statement: A consent form was added to the introduction of the questionnaire, including information about the researchers and research institution, the research purpose, the time needed to complete the questionnaire, contact information, and a statement that participation is voluntary, as given below: I have read and understand the provided information and have had the opportunity to ask questions. I understand that my participation is voluntary and that I am free to withdraw at any time, without giving a reason and without cost. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest. Appendix A Table A1. Variables with measurement items.
Security items [174,175]:
SE1 I think the PHR system (Shifaa platform) has mechanisms to ensure the safe transmission of its users' information.
SE2 I think the PHR system (Shifaa platform) shows great concern for the security of any transactions.
SE3 I think the PHR system (Shifaa platform) has sufficient technical capacity to ensure that no other organisation will supplant its identity on the Internet.
SE4 I am sure of the identity of the PHR system (Shifaa platform) when I establish contact via the Internet.
SE5 When I send data to the PHR system (Shifaa platform), I am sure that they will not be intercepted by unauthorised third parties.
SE6 I think the PHR system (Shifaa platform) has sufficient technical capacity to ensure that the data I send will not be intercepted by hackers.
SE7 When I send data to the PHR system (Shifaa platform), I am sure they cannot be modified by a third party.
SE8 I think the PHR system (Shifaa platform) has sufficient technical capacity to ensure that the data I send cannot be modified by a third party.
Privacy items [175,176]:
PRIV1 I think the PHR system (Shifaa platform) shows concern for the privacy of its users.
PRIV2 I feel safe when I send personal information to the PHR system (Shifaa platform).
PRIV3 I think the PHR system (Shifaa platform) abides by personal data protection laws.
PRIV4 I think the PHR system (Shifaa platform) only collects user personal data that are necessary for its activity.
PRIV5 I think the PHR system (Shifaa platform) respects the user's rights when obtaining personal information.
PRIV6 I think that the PHR system (Shifaa platform) will not provide my personal information to other companies without my consent.
Usability items:
USAB2 The PHR system (Shifaa platform) is simple to use, even when using it for the first time.
USAB3 It is easy to find the information I need in the PHR system (Shifaa platform).
USAB4 The structure and contents of the PHR system (Shifaa platform) are easy to understand.
USAB5 It is easy to move within the PHR system (Shifaa platform).
USAB6 The organisation of the contents of the PHR system (Shifaa platform) makes it easy for me to know where I am when navigating it.
USAB7 When I am navigating the PHR system (Shifaa platform), I feel that I am in control of what I can do.
Usage items:
Usage1 I have an account on the PHR system (Shifaa platform) to manage my health.
Usage3 I use the PHR system (Shifaa platform) frequently to manage my health.
PHR intention items [16,18,27]:
Intention to use 1 I intend to continue using the PHR system (Shifaa platform) to manage my health in the future.
Intention to use 2 My willingness to use the PHR system (Shifaa platform) is high.
Intention to use 3 I plan to continue to use the PHR system (Shifaa platform) in the future.
Integrating Activity-Guided Strategy and Fingerprint Analysis to Target Potent Cytotoxic Brefeldin A from a Fungal Library of the Medicinal Mangrove Acanthus ilicifolius Mangrove-associated fungi are rich sources of novel and bioactive compounds. A total of 102 fungal strains were isolated from the medicinal mangrove Acanthus ilicifolius collected from the South China Sea. Eighty-four independent culturable isolates were identified using a combination of morphological characteristics and internal transcribed spacer (ITS) sequence analyses, of which thirty-seven strains were selected for phylogenetic analysis. The identified fungi belonged to 22 genera within seven taxonomic orders of one phylum, of which four genera, Verticillium, Neocosmospora, Valsa, and Pyrenochaeta, were isolated from mangroves for the first time. The cytotoxic activity of organic extracts from 55 identified fungi was evaluated against human lung cancer cells (A-549), human cervical carcinoma cells (HeLa), human hepatoma cells (HepG2), and human acute lymphoblastic leukemia cells (Jurkat). The crude extracts of 31 fungi (56.4%) displayed strong cytotoxicity at a concentration of 50 μg/mL. Furthermore, the fungus Penicillium sp. (HS-N-27) still showed strong cytotoxic activity at a concentration of 25 µg/mL. By integrating a cytotoxic activity-guided strategy with fingerprint analysis, a well-known natural Golgi disruptor and Arf-GEF inhibitor, brefeldin A, was isolated from the target active strain HS-N-27. It displayed potent activity against the A549, HeLa and HepG2 cell lines with IC50 values of 101.2, 171.9 and 239.1 nM, respectively. Therefore, combining an activity-guided strategy with fingerprint analysis will serve as a systematic strategy for the rapid discovery of active compounds. Introduction Cancer is among the leading causes of death worldwide, and the annual mortality rate is expected to reach 16.4 million by 2040 [1,2]. The potential of the marine environment to produce candidate compounds (structures) as leads to drugs, or actual drugs, has been actively discussed for the last 50 or so years [3-5]. Several such compounds have led to drugs, especially in the area of cancer, such as trabectedin and eribulin, which were discovered under the cytotoxic activity-guided approach [3-6]. Brefeldin A (BFA), a well-known natural Golgi disruptor and Arf-GEF inhibitor, was first isolated from Penicillium decumbens in 1958 [7,8] and has subsequently been identified only from the marine-derived genus Penicillium [9]. Previous studies reported that BFA shows strong anticancer activity in a variety of cancers, including colorectal, prostate, lung, and breast cancers [10,11]. BFA is considered a promising lead molecule for developing anticancer drugs. Mangrove forests are complex ecosystems growing in tropical and subtropical intertidal estuarine zones and nourish a diverse group of microorganisms [12,13]. Microorganisms associated with mangrove environments are a major source of antimicrobial agents and also produce a wide range of important medicinal compounds, including enzymes, antitumor agents, insecticides, vitamins, immunosuppressants, and immune modulators [13-17]. Within the mangrove microbial community, mangrove-associated fungi constitute the second-largest ecological group of marine fungi [13,14].
Up to December 2020, at least 1387 new structures had been isolated and identified from a diverse range of mangrove-derived fungi (325 strains) belonging to about 69 genera. Furthermore, about 40.7% (530) of the 1300 new compounds displayed a wide range of pharmacological activities, and the antitumor (mainly cytotoxic) activity is noteworthy, accounting for 34% (196 compounds) of the active compounds. Therefore, mangrove-associated fungi are a rich source of structurally unique and diverse bioactive secondary metabolites [13]. Acanthus ilicifolius is widely distributed from India to southern China, tropical Australia and the Western Pacific islands, throughout Southeast Asia [18]. Various classes of bioactive compounds, including alkaloids, benzoxazinoids, lignans, flavonoids, triterpenoids and steroids, have been obtained from A. ilicifolius [18-20]. In addition, up to December 2020, a total of 22 strains belonging to 9 genera had been reported, which produced 95 new secondary metabolites. The endophytic fungi derived from A. ilicifolius are thus among the most favoured for study [13], yet little attention has been paid to the fungal communities associated with A. ilicifolius as a whole. Investigating new bioactive natural products from marine fungi is a major and constant research focus in our laboratory [21-24]. Natural product researchers also face the challenge of targeting the discovery of bioactive compounds within a microbial resource library. The present work integrates an activity-guided strategy and fingerprint analysis to target potent cytotoxic compounds from a fungal library of the medicinal mangrove A. ilicifolius (Figure 1). The cultivable fungi associated with the medicinal mangrove A. ilicifolius from the South China Sea were first systematically evaluated for their diversity. Then, using a cytotoxic activity-guided strategy, the target active strains were quickly identified. Combined with fingerprint analysis, a potent cytotoxic compound, brefeldin A, was isolated from the target active strains. The combination of an activity-guided strategy and fingerprint analysis could improve the efficiency of discovering active compounds in crude extracts from a complex and diverse fungal library. Cultivable Fungi's Phylogeny and Diversity A total of 102 fungal isolates were obtained from Acanthus ilicifolius using PDA medium with four salt gradients of 3%, 5%, 7% and 10%. Duplicated strains were removed using a detailed morphological approach. Consequently, eighty-four independent strains were selected for sequencing and identification based on ITS sequences. According to the sequences deposited in NCBI, the 84 strains belonged to the phylum Ascomycota, comprising seven taxonomic orders (Hypocreales, Xylariales, Diaporthales, Eurotiales, Pleosporales, Capnodiales and Botryosphaeriales) and 22 genera: Trichoderma, Hypocrea, Acremonium, Verticillium, Fusarium, Neocosmospora, Pestalotiopsis, Diaporthe, Phomopsis, Valsa, Colletotrichum, Penicillium, Eupenicillium, Aspergillus, Talaromyces, Pyrenochaeta, Pleosporales, Curvularia, Alternaria, Cladosporium, Phyllosticta, and Lasiodiplodia (Table 1). These identified fungi and their best matches in the NCBI database are summarized in Table S1. Most of the isolates matched their closest relatives with 98 to 100% similarity, except for HS-G-02 (97%) and HS-G-06 (95%), which indicated that they were likely new species.
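The genus-level assignments above rest on pairwise sequence identity between each ITS read and its best database hit, with hits below roughly 97% flagged as possible new species. The short sketch below shows the underlying calculation on two invented, already-aligned sequences; it is illustrative only and does not reproduce the study's BLAST workflow.

def percent_identity(query, hit):
    # Fraction of aligned positions where the two sequences carry the same base,
    # ignoring positions where the query has a gap.
    if len(query) != len(hit):
        raise ValueError("sequences must be aligned to the same length first")
    matches = sum(a == b and a != "-" for a, b in zip(query, hit))
    return 100.0 * matches / len(query)

query = "ACCTGCGGA-GGATCATTA"   # invented ITS fragment; '-' marks an alignment gap
hit   = "ACCTGCGGAAGGATCGTTA"   # invented best database hit
print(f"{percent_identity(query, hit):.1f}% identity")  # prints 89.5% identity
# An isolate whose best hit falls below ~97% identity would, as for HS-G-02 and
# HS-G-06 above, be treated as a candidate new species.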
Both fungi, HS-G-06 and HS-G-02, further enrich the known diversity of mangrove fungi. Further analysis of the isolates showed that Eurotiales was the dominant order among the identified fungi, followed by Hypocreales. The fungal community was dominated by Penicillium, comprising 21 isolates, followed by Fusarium, Aspergillus, and Eupenicillium with 15, 14, and 10 isolates, respectively. Some genera, such as Trichoderma, Phomopsis and Cladosporium, comprised six, five and five isolates, respectively. Most of the remaining genera occurred as singletons or doubletons. In addition, the species of fungi isolated from different parts of A. ilicifolius were quite different (Figure 2). The results showed that some genera were isolated from only one part. For example, Phomopsis and Acremonium were isolated only from the stem. Colletotrichum, Curvularia, and Alternaria were isolated only from the leaf. Valsa, Hypocrea, and Neocosmospora were isolated only from the soil. Diaporthe, Talaromyces, and Pyrenochaeta were isolated only from the leaf. Further phylogenetic analysis was carried out on 37 strains. These 37 independent individuals were selected as representative strains because they belong to different fungal species, as determined after aligning the sequences with the BioEdit software (Figure S1). The phylogenetic tree of fungi in the order Hypocreales based on ITS gene sequences is presented in Figure S2. Furthermore, the fingerprints of secondary metabolites of fungi from different species and genera were analysed (Figure S4). The Cytotoxicity of Cultivable Fungal Extracts The organic extracts of 55 identified fungi were evaluated for cytotoxic activity against the human lung cancer cell line (A-549), human cervical carcinoma cells (HeLa), human hepatoma cells (HepG2) and the Jurkat tumour cell line at a concentration of 50 µg/mL (Figure 3a). To identify active strains for further research as potential cytotoxic strains, the relative inhibition rate for the A-549, HeLa and HepG2 cell lines had to be greater than 70%, and the relative inhibition rate for the Jurkat cell line greater than 60%. The results showed that these fungi exhibited different inhibition rates against different cell lines. The numbers of fungi showing activity against the A-549, HeLa, HepG2, and Jurkat tumour cell lines were 17, 17, 19 and 24, respectively (Figure S3). The crude extracts of 31 fungi displayed cytotoxicity against the tested cell lines, of which 21 fungi showed selective inhibitory activity on different tested cell lines; for example, Fusarium sp. showed selective inhibitory activity on HeLa cells. Most fungi showed strong selective inhibitory activity on Jurkat cells. Interestingly, the remaining 10 fungi, belonging to the two orders Eurotiales and Hypocreales, displayed broad-spectrum, strong cytotoxic activity, for example Penicillium sp. The crude extracts were further tested at a reduced concentration. The results showed that only the two active strains of Penicillium sp. (HS-N-27 and HS-N-29) still showed strong inhibitory activity against all the tested cell lines at a concentration of 25 µg/mL.
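The selection rule just described (inhibition above 70% for A-549, HeLa and HepG2 and above 60% for Jurkat at 50 µg/mL) amounts to a simple threshold filter over the screening data. The following sketch applies that rule to a small table of invented inhibition rates; the strain names and values are hypothetical and are not the study's measurements.

CUTOFFS = {"A-549": 70, "HeLa": 70, "HepG2": 70, "Jurkat": 60}  # % inhibition at 50 ug/mL

# Invented screening results for two hypothetical strains.
screen = {
    "strain-A": {"A-549": 88, "HeLa": 91, "HepG2": 85, "Jurkat": 79},
    "strain-B": {"A-549": 42, "HeLa": 75, "HepG2": 31, "Jurkat": 66},
}

def active_lines(rates):
    # Cell lines against which a strain exceeds its activity threshold.
    return [line for line, cut in CUTOFFS.items() if rates[line] > cut]

for strain, rates in screen.items():
    hits = active_lines(rates)
    if len(hits) == len(CUTOFFS):
        label = "broad-spectrum active"
    elif hits:
        label = "selectively active"
    else:
        label = "inactive"
    print(strain, hits, label)
# strain-A -> broad-spectrum active; strain-B -> selectively active (HeLa, Jurkat)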
Cytotoxic metabolites have previously been isolated from the endophytic fungus Penicillium chermesinum, leading to the discovery of a cysteine-targeted Michael acceptor as a pharmacophore for fragment-based drug discovery, bioconjugation and click reactions [25]. The heteroatom-containing new compounds 2-hydroxyl-3-pyrenocine-thio propanoic acid and 5,5-dichloro-1-(3,5-dimethoxyphenyl)-1,4-dihydroxypentan-2-one, isolated from the deep-sea fungus Penicillium citreonigrum XT20-134, showed potent cytotoxicity against the human hepatoma tumour cell line Bel7402 [26]. Isolation and Identification of Compounds 1-7 As the two active strains of Penicillium sp. (HS-N-27 and HS-N-29) showed strong cytotoxic activity against all the tested cell lines at a concentration of 25 µg/mL, both Penicillium sp. fungi were selected as the target strains. Combining the cytotoxic activity-guided strategy with fingerprint analysis, compound 1 was obtained from the fermentation broth of the two active strains HS-N-27 and HS-N-29. By comparison of NMR data with the reported literature, the structure was identified as brefeldin A (Figure 5), a 13-membered macrolactone with a cyclopentane substituent [7]. BFA is a well-known natural Golgi disruptor and Arf-GEF inhibitor [8]. Combining morphological characteristics and fingerprint analysis of their metabolites (Figure S5), the two fungi HS-N-27 and HS-N-29 were identified as different individuals of the same Penicillium sp. species. The neighbor-joining phylogenetic tree of the target active strain Penicillium sp. (HS-N-27) together with Hypocreales fungi from A. ilicifolius, based on ITS sequences, is shown in Figure 4.
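The nanomolar IC50 values quoted for brefeldin A are the kind of figures normally obtained by fitting a sigmoidal dose-response model to cell viability measured over a dilution series. The sketch below fits a four-parameter logistic curve with SciPy to invented viability data; the numbers are illustrative only and do not reproduce the study's assays, and NumPy and SciPy are assumed to be available.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response: viability falls from 'top' to 'bottom'
    # around the half-maximal inhibitory concentration 'ic50'.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)   # nM, invented dilution series
viability = np.array([95, 88, 55, 28, 12, 6], dtype=float)     # % of untreated control, invented

popt, _ = curve_fit(four_pl, conc, viability, p0=[5, 100, 100, 1], bounds=(0, np.inf))
print(f"fitted IC50 ~ {popt[2]:.0f} nM")

The IC50 is simply the fitted midpoint parameter of this curve; reporting it per cell line gives values directly comparable to those cited for A549, HeLa and HepG2 above.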
Discussion Mangrove-associated fungi are rich in diversity and can produce impressive quantities of metabolites with promising biological activities that may be useful to humans as novel physiological agents [13-17]. The phylogenetic diversity of culturable fungi derived from the mangrove species Rhizophora stylosa and R. mucronata collected from the South China Sea has been reported [32]. The endophytic fungi derived from A. ilicifolius are among the most favoured for study; up to December 2020, only 22 strains associated with A. ilicifolius, belonging to 9 genera, had been reported [13], and investigations of the phylogenetic diversity of A. ilicifolius-associated fungi remain relatively rare. In this study, 84 of the 102 isolates were successfully classified at the genus level based on ITS sequences and their relatives in the NCBI database (Table S1). The identified fungi belonged to 22 genera, of which four genera, Verticillium, Neocosmospora, Valsa, and Pyrenochaeta, were isolated from mangroves for the first time (Table S1). Two strains, HS-G-02 (97%) and HS-G-06 (95%), with low similarity to database sequences are likely new species, further enriching the diversity of mangrove fungi. These new strains may produce a variety of commercially interesting and potentially useful products. The above results indicate that a high diversity of fungi can be recovered from A. ilicifolius in the South China Sea. Further analysis of the isolates showed that Eurotiales was the dominant order, accounting for 45.1% of the identified fungi, followed by Hypocreales. Penicillium accounted for 20.6% of the fungal community, followed by Fusarium and Aspergillus. It has been reported that Penicillium (283, 20%), Aspergillus (246, 18%), and Pestalotiopsis (88, 6%) are the dominant producers of the new natural products (1384) isolated from mangrove-associated fungi, together comprising more than 45% of the total molecules [13]. The fungi obtained from A. ilicifolius could therefore provide abundant microbial resources for the discovery of new compounds. Natural product researchers face the challenge of maximizing the discovery of new or potent compounds from a microbial resource library. Combining an activity-guided strategy with fingerprint analysis as a discovery tool can serve as a systematic strategy for the quick discovery of active compounds. The crude extracts of 56.4% of the fungi displayed strong cytotoxicity. Interestingly, the remaining 10 fungi, belonging to the two orders Eurotiales and Hypocreales, displayed broad-spectrum, strong cytotoxic activity. Furthermore, by integrating the cytotoxic activity-guided strategy and fingerprint analysis, the strongly cytotoxic compound brefeldin A was isolated from the target active strain HS-N-27. Brefeldin A is a well-known natural Golgi disruptor and Arf-GEF inhibitor and shows strong anticancer activity in a variety of cancers [8-11]. BFA is considered a promising lead molecule for developing anticancer drugs. As the metabolite profile of the fungus Penicillium sp. (HS-N-27) is relatively simple and BFA is easily separated and purified, this strain provides a source of the compound for studies of the medicinal properties of BFA. A series of BFA derivatives with antileukemia activity has been reported, covering their semi-synthesis, cytotoxic evaluation, and structure-activity relationships [9]. This method, combining an activity-guided strategy with fingerprint analysis, could improve the efficiency of discovering active compounds. Sampling Site and Plant Material The medicinal mangrove A. ilicifolius, which was authenticated by Prof. Isolation of Cultivable Fungi To obtain the fungi associated with the medicinal mangrove A. ilicifolius from different parts of the plant, surface sterilization of each part of A. ilicifolius was carried out following the isolation procedure described by Qin et al. with some modifications [33]. The root, stem and leaf samples of A. ilicifolius were washed with sterile artificial seawater three times to remove the microorganisms and sediment attached to the surface. Appropriate samples were taken, using scissors or a scalpel to cut all parts, including root, stem and leaf, with attention to the integrity of sampling. The samples were then soaked in 75% alcohol for 30 s, and the water on the samples was removed with sterile filter paper. Each sample was cut into 1 cm³ pieces for fungal isolation. Tissue sectioning and tissue homogenization were both used to isolate fungi. Tissue sectioning method: the 1 cm³ tissue pieces were inoculated onto PDA medium (200.0 g of potato extract and 20.0 g of glucose in 1 L of seawater, at four salinities of 3%, 5%, 7% and 10%) in a sterile environment. In order to improve the utilization of each plate and to separate more microorganisms, the medium plate was generally divided into three areas, and 2-3 pieces of tissue were placed in each area of the PDA medium at each of the four salt gradients. Tissue homogenization method: the tissue was ground in 2 mL of sterile artificial seawater with a mortar in a sterile environment, and the resulting homogenate was diluted with sterile artificial seawater at three dilutions (1:10, 1:100, and 1:1000). 100 µL of each dilution was plated in quadruplicate onto the corresponding medium for fungal cultivation. The inoculated plates were cultured at 25 °C for 2 days. The fungi were re-plated onto new PDA plates several times until the morphology of the fungi could be distinguished.
The obtained fungal strains were deposited at the Key Laboratory of Marine Drugs, the Ministry of Education of China, School of Medicine and Pharmacy, Ocean University of China, Qingdao, China. Genomic DNA Extraction, PCR Amplification, Sequencing and Phylogenetic Analysis Genomic DNA extraction was conducted using a Fungal DNA kit (E.Z.N.A., Omega, Norcross, GA, USA) according to the manufacturer's protocol. The internal transcribed spacer (ITS1-5.8S-ITS2) regions of the fungi were amplified with the universal ITS primers ITS1F (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) using the polymerase chain reaction (PCR) [34]. The PCR was performed with the following cycling conditions: initial denaturation at 94 °C for 5 min; 30 cycles of denaturation at 94 °C for 40 s, annealing at 52 °C for 40 s, and extension at 72 °C for 1 min; and a final extension at 72 °C for 10 min. The amplified products were submitted for sequencing (Invitrogen, Shanghai, China), and a BLASTN search was used to find the closest matching sequences in GenBank using the Basic Local Alignment Search Tool (BLAST). The fungal ITS sequences obtained from A. ilicifolius were compared with related sequences in the National Center for Biotechnology Information (NCBI) database. Each sequence was aligned to sequences available in the NCBI database to determine its identity, which in turn determined the species and genera of the fungi. All fungal ITS sequences were aligned using the BioEdit software with the default parameters. The phylogenetic tree was generated using the neighbor-joining (NJ) algorithm in the MEGA 7 software (version 7.0), combined with bootstrap analysis using 1000 replicates and incorporating the fungal sequences showing the highest homology to the amplified sequences. General Experimental Procedures An Agilent DD2 NMR spectrometer (JEOL, Tokyo, Japan) operating at 500 MHz and 125 MHz was used for 1H and 13C NMR spectra, respectively. Vacuum column chromatography silica gel (200-300 mesh, Qingdao Haiyang Chemical Group Co., Qingdao, China), silica gel plates for thin-layer chromatography (G60, F-254, Yantai Zifu Chemical Group Co., Yantai, China), and a reverse-phase octadecylsilyl silica gel column were used for the separation of compounds. UPLC-MS spectra were measured on a Waters UPLC system (Waters Ltd., Milford, MA, USA) using a C18 column (ACQUITY UPLC BEH C18, 2.1 × 50 mm, 1.7 µm; 0.5 mL/min), and an ACQUITY QDA ESI-MS scan from 150 to 1000 Da was used for the analysis of fungal extracts and the ESI-MS spectra of the compounds. Semipreparative HPLC was performed on a Hitachi L-2000 system (Hitachi Ltd., Tokyo, Japan) using a C18 column (Kromasil 250 × 10 mm, 5 µm, 2.0 mL/min). Fungal Fermentation and Chemical Extraction The 55 fungal isolates were each fermented in a 500 mL conical flask containing 250 mL of PDA liquid medium. The fungi were shaken at 28 °C and 120 rpm for 7 days. Each experiment was conducted in three parallel cultures. The fermentation broth was extracted three times with an equal volume of EtOAc, and the combined EtOAc solutions were evaporated under reduced pressure to give the dried extracts. Conclusions This is the first systematic report on the phylogenetic diversity of fungi from the mangrove A. ilicifolius. Four genera, Verticillium, Neocosmospora, Valsa, and Pyrenochaeta, which were isolated from mangroves for the first time, further enrich the known diversity of mangrove fungi.
Thirty-one fungal strains displayed strong cytotoxicity against different cell lines, providing an important microbial resource for the discovery of cytotoxic compounds. Furthermore, by integrating a cytotoxic activity-guided strategy with fingerprint analysis, a potent cytotoxic compound was quickly isolated from the target active strains. This method, combining an activity-guided strategy with fingerprint analysis, could improve the efficiency of discovering active compounds. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/md20070432/s1. Table S1: Phylogenetic affiliations of cultivable fungi associated with A. ilicifolius. Figure S1: Phylogenetic tree of partial ITS gene sequences of mangrove-derived fungal strains; reference sequences were downloaded from NCBI with the accession numbers indicated in parentheses. Figure S2: Phylogenetic tree of fungi in the order Hypocreales based on ITS gene sequence homology. Figure S3: The cytotoxicity relative inhibition rate data of the organic extracts in cancer cell lines. Figure S4: The fingerprint analysis of the organic extracts of fungi from different species and genera. Figure S5.
Formation of aluminium clusters in helium nanodroplets

Abstract
The addition of aluminium atoms to helium nanodroplets has been explored using electron impact mass spectrometry. A series of aluminium cluster ions, Al_n^+, were observed as the major products, which contrasts with a recent study where such cluster ions were not detected in any significant quantities (S.A. Krasnokutski, F. Huisken, J. Phys. Chem. A 115 (2011) 7120). The earlier finding was interpreted as evidence that Al atoms are separated by one or more layers of helium and therefore form a three-dimensional 'foam' inside helium droplets. The current observations are not consistent with this suggestion and instead indicate that when multiple Al atoms are added to helium droplets they aggregate to form Al_n clusters inside the helium droplets.

Introduction
The study of metal atoms picked up by helium nanodroplets has provided many intriguing observations. It is now well established that alkali metal atoms reside in a dimple on the surface of helium nanodroplets rather than settling inside. 1-3 The driving force for adopting a surface location stems from the diffuse valence electron distribution in the alkalis, which generates an exceptionally weak attractive interaction with helium. Consequently, the displacement of sufficient helium to create a cavity which can host the alkali atom is energetically unfavourable relative to the creation of a dimple on the droplet surface. Small alkali clusters also reside on the surface of helium droplets and have been found to exist in high spin electronic states, since the low spin states release sufficient energy as the atoms coalesce to expel the cluster from the droplet surface. [4][5][6] The situation for the alkaline earth metals is a little more complicated. All of the available evidence points towards a surface location for Ca, Sr and Ba atoms. [7][8][9][10][11][12] However, Mg has, until very recently, proved a more challenging case to decipher. Evidence has been presented both for and against an interior location, 11,13,14 but this now looks to be resolved by recent calculations which suggest that Mg atoms do solvate inside helium droplets but are highly delocalized within the droplet because of the weak Mg-He interaction. 15 Another consequence of the weak Mg-He interaction is the formation of a foam-like structure for multiple Mg atoms in helium droplets. Here the Mg-He interaction is so weak that it becomes energetically unfavourable for two Mg atoms to come into direct contact. Instead, one or more solvation layers of helium form around each Mg atom, and as two Mg atoms approach, the energy required to displace this helium creates a barrier which cannot be overcome at the temperature of a helium droplet, 0.38 K. Consequently, the Mg_n system is thought to form a metastable three-dimensional foam-like structure, and recent experimental and theoretical studies seem to confirm this behaviour. 14,16,17 Given the foam-like behaviour for magnesium, it is interesting to consider whether other metals might show similar behaviour. Recently, Krasnokutski and Huisken have suggested aluminium as a possible candidate. 18 Evidence for this was collected from mass spectrometry. Several arguments were presented in favour of a foam-like structure when multiple Al atoms are present in a liquid helium droplet, including the absence of any significant quantities of Al_n^+ cluster ions.
Here we report a similar mass spectrometric study of Al in helium droplets, but we obtain very different findings. In particular, we see an abundant array of aluminium cluster ions, which suggests that Al atoms aggregate to form Al_n clusters in helium nanodroplets rather than foam-like structures.

Experimental
The apparatus used is described in detail elsewhere. 19 Briefly, helium nanodroplets were formed by expansion of highly pure gaseous helium into a vacuum through a small aperture. The stagnation pressure of the helium and the temperature of the nozzle can be adjusted to control the mean size of the droplets. A collimated beam of droplets is formed by passage through a skimmer and the droplets then entered a pickup region, where Al atoms were added. This was achieved using a resistively heated oven comprised of a cylindrical alumina cell with tantalum heating wire on the outside, in which solid aluminium was placed. As the helium droplets passed through this pickup zone, Al atoms were able to collide with and be captured by the droplets. The kinetic energy of each added atom is removed by evaporative loss of helium atoms, which quickly cools the ensemble back to 0.38 K. The doped droplets then passed through a second skimmer and entered the ionization zone of a quadrupole mass spectrometer, where they were subjected to electron ionization at an energy of 60 eV.

Results and Discussion
Figure 1 shows a mass spectrum obtained at a nozzle temperature of 12 K and a helium pressure of 15 bar, which is expected to give helium droplets with an average of 12,000 helium atoms. 20 The oven temperature for this particular experiment was estimated to be 1100 K, as measured by a thermocouple in contact with the oven. The spectrum shows an array of cluster ions, including a strong series of peaks from He_n^+ cluster ions in the lower mass part of the spectrum. At higher masses the most prominent series of ions comes from Al_n^+, which can be seen with n up to 15 in this particular spectrum. The observation of Al_n^+ cluster ions is in marked contrast to the absence of these species in the work reported by Krasnokutski and Huisken. 18 The Al_n^+ ions show a general decline in abundance with n but there are exceptions, the most notable of which corresponds to n = 7. This peak is significantly more intense than its neighbours, Al_6^+ and Al_8^+, which suggests that Al_7^+ has enhanced stability, i.e. it is a 'magic' number cluster. The stability of this particular cluster has been reported in previous studies and can be explained by the jellium model, since Al_7^+ has 20 valence electrons which form a closed shell in the jellium model. 21 The next most prominent peaks seen in Figures 1 and 2 … Also seen in the current work are the helium-solvated ions AlHe_n^+ and Al_2He_n^+, as shown in the expanded view in Figure 2. The AlHe_n^+ ions form the more prominent series and can be seen up to at least n = 17. The observation of AlHe_n^+ 'snowball' ions was reported previously by Krasnokutski and Huisken 18 and is evidence in favour of an interior location for Al atoms in the droplets (other evidence comes from spectroscopy 22 ), since a surface location is likely to expel a bare rather than a heavily solvated ion. However, Krasnokutski and Huisken were unable to observe snowball ions associated with Al_2^+ and larger Al_n^+ ions, which contrasts with the findings reported here.
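As a quick check of the 'magic number' argument above: an Al_n^+ cation carries 3n − 1 valence electrons, so Al_7^+ has 20, coinciding with a jellium shell closing. The short sketch below just automates that counting; the list of closed-shell electron counts is the commonly quoted set of jellium magic numbers and is an assumption here, not a value taken from this paper.

```python
# Valence-electron counting behind the Al_7^+ "magic number" argument.
# The jellium shell closings listed (2, 8, 20, 40, 58, 92) are the commonly
# quoted values and are an assumption, not data from the paper.
JELLIUM_CLOSINGS = {2, 8, 20, 40, 58, 92}

def valence_electrons(n_al: int, charge: int = 1) -> int:
    """Each Al atom contributes 3 valence electrons; a cation loses `charge`."""
    return 3 * n_al - charge

for n in range(2, 16):
    n_e = valence_electrons(n)
    note = "  <-- closed shell, enhanced stability expected" if n_e in JELLIUM_CLOSINGS else ""
    print(f"Al_{n}^+ : {n_e} valence electrons{note}")
# Al_7^+ gives 20 electrons, consistent with its enhanced intensity in Figure 1.
```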
In an attempt to resolve the differences between our observations and those reported by Krasnokutski and Huisken, we present in Figure 3 the response of three ions, He_2^+, Al^+ and Al_2^+, to changes in the oven temperature. In their study, Krasnokutski and Huisken found a weak and essentially linear decline of the He_2^+ signal with the Al vapour pressure, with a gradient which is far lower than would be expected if the Al atoms combined to form Al_n clusters. This is because the Al-Al bond energies are relatively strong and should produce a major change in droplet size as each atom is added, which would evaporate many He atoms and thus produce a large change in the He_2^+ signal (since the He_2^+ signal should be proportional to the geometric cross section of the helium droplet, which in turn is expected to be proportional to the electron impact cross section). On the other hand, addition of each Al atom in the limit of no clustering should add only about 140 meV of energy, which will result in the evaporation of ca. 230 helium atoms. 23 The data shown in Figure 3 are dramatically different from the findings reported by Krasnokutski and Huisken. Even though we use smaller helium droplets than in the study by Krasnokutski and Huisken (whose mean droplet size was 2 × 10^4 helium atoms 18 ), we nevertheless see abundant Al pickup at far lower vapour pressures than those implied by Krasnokutski and Huisken. For example, for the data shown in Figure 3 we use helium droplets with an estimated mean size of 10^4 helium atoms and see a maximum in the Al^+ signal at an oven temperature of 1100 K, which corresponds to an Al vapour pressure of about 6.5 × 10^-6 mbar. Invoking Poisson statistics for the pickup process gives a pickup probability for a single dopant atom of P = z exp(−z), where z = σNl, σ is the helium droplet geometric cross section, N is the pickup gas number density and l is the length of the pickup zone. The probability of picking up a single Al atom is a maximum when z = 1, and so under our experimental conditions (l = 7 cm and σ = 7.2 × 10^-13 cm^2, assuming a spherical droplet 23 ) we predict that the maximum in the pickup of a single Al atom should occur at a vapour pressure of 7.3 × 10^-6 mbar. This is in excellent agreement with the vapour pressure deduced for the maximum of the Al^+ signal in Figure 3. It is important to be aware that there are approximations involved here which may make this level of agreement somewhat fortuitous. For example, there is an estimated error margin of ± 20 K in the oven temperature. Also, contributions to the Al^+ signal from the ionization of larger neutral clusters, such as Al_2, have been ignored. Nevertheless, it is clear that our estimated pickup behaviour is in agreement with the experimental observations and is distinctly at odds with that presented by Krasnokutski and Huisken. It therefore seems that the absence of Al_n clusters in the experiments of Krasnokutski and Huisken is most likely the result of inadequate pickup of Al atoms, either because the helium droplets were smaller than thought or the Al vapour pressure was lower than claimed. We note also that the He_2^+ signal shown in Figure 3 undergoes a rapid decline (note the log scale on the vertical axis) once Al_2 starts to form, as deduced from the appearance of the Al_2^+ signal. As mentioned earlier, this type of behaviour is consistent with Al-Al bond formation and provides further evidence against the foam model for Al in liquid helium.
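The Poisson pickup argument can be made concrete with a few lines of code. The sketch below reproduces only the statistics quoted in the text — P_k(z) = z^k exp(−z)/k!, with the single-atom pickup probability maximised at z = 1 — together with the pickup-gas number density implied by the stated cross section and pickup length; the conversion of that density into the quoted vapour pressure is not reproduced here.

```python
import math

# Poisson pickup statistics quoted in the text: P_k(z) = z**k * exp(-z) / k!,
# with z = sigma * N * l.
def pickup_probability(k: int, z: float) -> float:
    return z**k * math.exp(-z) / math.factorial(k)

# P_1 is maximised at z = 1, since d/dz [z exp(-z)] = (1 - z) exp(-z) = 0 there.
for z in (0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"z = {z:4.2f}:  P1 = {pickup_probability(1, z):.3f},  "
          f"P2 = {pickup_probability(2, z):.3f}")

# Number density at which z = 1 for the stated geometry
# (sigma = 7.2e-13 cm^2 and l = 7 cm, both from the text).
sigma_cm2 = 7.2e-13
l_cm = 7.0
n_at_z1 = 1.0 / (sigma_cm2 * l_cm)   # atoms per cm^3
print(f"N(z = 1) ~ {n_at_z1:.1e} atoms cm^-3")
```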
The dissociation energy of Al_2 is 0.61 eV 24 and its formation should lead to the evaporation of roughly 1000 helium atoms (the addition of each atom should also lead to the evaporation of a further 230 He atoms due to dissipation of their translational energy). As pointed out by Krasnokutski and Huisken, 18 the binding energy per Al atom tends to increase as n increases, at least up until n = 8. 24 For example, the binding energy per atom is 2.1 eV for Al_8 and so its formation should result in the evaporation of ~27,000 helium atoms. Although this is substantially larger than the mean size of the helium droplets used to record the spectrum in Figure 1, the droplets are expected to follow a log-normal size distribution. The long tail in the log-normal size distribution will provide some helium droplets with sizes far larger than the mean, which could account for the observation of the largest Al_n^+ cluster ions seen in the current study.
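The evaporation estimates above follow from a simple energy budget: energy deposited in the droplet divided by the cost of evaporating one helium atom. The per-atom cost of roughly 0.6 meV used below is back-calculated from the "140 meV → about 230 atoms" figure quoted earlier, so it is an inferred assumption rather than a value stated explicitly in the text.

```python
# Energy-budget sketch behind the helium evaporation estimates.
# ~0.6 meV per evaporated He atom is back-calculated from "140 meV -> ~230 atoms".
EV_PER_HE_ATOM = 0.140 / 230     # ~6.1e-4 eV per evaporated helium atom

def evaporated_he(energy_ev: float) -> int:
    return round(energy_ev / EV_PER_HE_ATOM)

print("Al translational energy (140 meV):", evaporated_he(0.140), "He atoms")  # ~230
print("Al2 dissociation energy (0.61 eV):", evaporated_he(0.61), "He atoms")   # ~1000
print("Al8 binding energy (8 x 2.1 eV):  ", evaporated_he(8 * 2.1), "He atoms")
# The last figure comes out close to the ~27,000 helium atoms quoted above.
```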
Social Costs of Iron Deficiency Anemia in 6–59-Month-Old Children in India Introduction Inadequate nutrition has a severe impact on health in India. According to the WHO, iron deficiency is the single most important nutritional risk factor in India, accounting for more than 3% of all disability-adjusted life years (DALYs) lost. We estimate the social costs of iron deficiency anemia (IDA) in 6–59-month-old children in India in terms of intangible costs and production losses. Materials and Methods We build a health economic model estimating the life-time costs of a birth cohort suffering from IDA between the ages of 6 and 59 months. The model is stratified by 2 age groups (6–23 and 24–59-months), 2 geographical areas (urban and rural), 10 socio-economic strata and 3 degrees of severity of IDA (mild, moderate and severe). Prevalence of anemia is calculated with the last available National Family Health Survey. Information on the health consequences of IDA is extracted from the literature. Results IDA prevalence is 49.5% in 6–23-month-old and 39.9% in 24–58-month-old children. Children living in poor households in rural areas are particularly affected but prevalence is high even in wealthy urban households. The estimated yearly costs of IDA in 6–59-month-old children amount to intangible costs of 8.3 m DALYs and production losses of 24,001 m USD, equal to 1.3% of gross domestic product. Previous calculations have considerably underestimated the intangible costs of IDA as the improved WHO methodology leads to a threefold increase of DALYs due to IDA. Conclusion Despite years of iron supplementation programs and substantial economic growth, IDA remains a crucial public health issue in India and an obstacle to the economic advancement of the poor. Young children are especially vulnerable due to the irreversible effects of IDA on cognitive development. Our research may contribute to the design of new effective interventions aiming to reduce IDA in early childhood. Introduction Inadequate nutrition has a severe impact on health in India. According to the WHO Global Burden of Disease project (GBD), 6 out of the leading 15 health risk factors in India are related to inadequate nutrition and are responsible for more than 18% of all Disability Adjusted Life Years (DALYs) lost [1]. The single most important nutritional risk factor in India is iron deficiency, with more than 3% of all DALYs lost [1]. Iron deficiency anemia (IDA) is highly prevalent among Indian children in spite of substantial economic growth and numerous programs aimed at the reduction of anemia [2]. Iron deficiency in early childhood is especially detrimental due to increased mortality and its permanent impact on cognitive development, which leads to an irreversible loss of productivity in adult life [3]. This paper assesses the social costs of IDA in 6-59-month-old children in India in 2013. These social costs have 2 dimensions: 1) The human burden in terms of life years lost due to premature death and in terms of quality of life lost due to morbidity and impaired cognitive development (both measured in DALYs). 2) The economic costs in terms of lower productivity due to impaired cognitive development leading to lower income in adulthood (measured in monetary losses). We consider reversible as well as irreversible health consequences of IDA in 2 age subgroups (6-23 and 24-59-month-old) and assess the distribution of the consequent social costs across 10 socio-economic strata (SES) and 2 geographical areas (urban and rural). 
The distinction between age groups is important because the health consequences and the prevalence of IDA may change with age. The 6-23-month-period is part of the "first 1000 days" period from conception to the second birthday, which has been identified as a crucial period to establish a lasting foundation for health [4]. Nutrition is also likely to differ between the 2 age groups. The distinction between urban and rural households and socio-economic groups is important because the economic and social conditions are likely to affect the levels of IDA in children. Although the majority of Indians still live in rural areas, the urban population is now at around 400 million (m) and rapidly growing [5]. The urban poor may be particularly affected by IDA as they live in unhealthy environmental conditions and are not reached by the same social programs as the rural poor [6,7]. Furthermore, we examine an important methodological issue: The GBD project has recently developed a new methodology and new disability weights for the calculation of DALY losses [8], which may lead to substantial changes in the measurement of the burden of IDA. We compare the burden of IDA according to the old and to the new methodology, a comparison that has not been done before. Methods Our study is based on a health economic model estimating the lifetime health and cost consequences of IDA in 6-59-month-old children which we developed in Wieser et al. [9]. In a first step we stratify Indian households with 6-59-month-old children by SES and geographical area (urban, rural). For this we use the National Family Health Survey of 2005-06 (NFHS-3) [10] and 2011 Census data [5], which are the most recent data available. The NFHS-3 is a large, representative population survey that provides information on 109,041 households in India. The 10 SES are built upon the distribution of households across the 10 deciles of a wealth index calculated according to the Demographic and Health Survey (DHS) methodology [11]. In a second step we calculate the prevalence of mild, moderate and severe IDA in 2 age groups (6-23 and 24-59-month-old). Prevalence of IDA is calculated according to the altitude-adjusted hemoglobin (Hb) values of the NFHS-3 [10] and assuming a normal distribution defined by the mean and the standard deviations (SD) of blood Hb. Although the NFHS-3 data were collected in 2005-06, there is no indication that the prevalence of anemia in India has decreased in the meantime. Prevalence of anemia actually increased between 1998-99 (NFHS-2) and 2005-06 (NFHS-3) and recent studies find comparable prevalence rates [12,13]. Anemia may have other causes besides iron deficiency and the share attributable to iron deficiency varies according to age group and region. Based on a recent systematic review [14] and a WHO report [15], we attribute 60% of cases of anemia in 6-59-month-old Indian children to iron deficiency. The 3 degrees of severity of anemia are defined according to WHO thresholds for children under 5 (mild anemia: Hb < 110 g/l, moderate anemia: Hb < 100 g/l, severe anemia: Hb <70 g/l) [16]. We smooth the distribution of the prevalence of IDA across SES with a linear model in order to remove the sample noise when extrapolating the prevalence of IDA to the whole population. In a third step we attribute specific health consequences to IDA at different levels of severity in the 2 age sub-groups. 
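Before turning to that attribution, the prevalence step just described (the second step) can be sketched as follows: anemia shares are read off a normal haemoglobin distribution at the WHO cut-offs and then scaled by the 60% iron-deficiency attribution factor. The mean and standard deviation used below are illustrative placeholders, not the altitude-adjusted NFHS-3 estimates.

```python
from scipy.stats import norm

# Prevalence of anemia / IDA from a normal Hb distribution (sketch).
# WHO cut-offs for children under 5 (g/l): mild < 110, moderate < 100, severe < 70.
# HB_MEAN and HB_SD are illustrative placeholders, NOT the NFHS-3 values.
HB_MEAN, HB_SD = 104.0, 14.0
IRON_DEFICIENCY_SHARE = 0.60      # share of anemia attributed to iron deficiency

def cdf(x):
    return norm.cdf(x, loc=HB_MEAN, scale=HB_SD)

anemia = {
    "severe": cdf(70),
    "moderate": cdf(100) - cdf(70),
    "mild": cdf(110) - cdf(100),
}
ida = {k: v * IRON_DEFICIENCY_SHARE for k, v in anemia.items()}

for severity in ("mild", "moderate", "severe"):
    print(f"{severity:8s}: anemia {anemia[severity]:5.1%} -> IDA {ida[severity]:5.1%}")
```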
This attribution of health consequences is mainly based on information derived from systematic reviews of iron supplementation trials. We assume that iron supplementation fills the gap of iron intake in iron-deficient children and thus eliminates all the adverse health consequences of IDA, an approach also applied in previous research [9,17]. The health consequences of IDA thus correspond to the difference in health status of the intervention and the control group in the supplementation trials. The influence of IDA on mortality is modeled as an attributable fraction of all-cause mortality. In a fourth step we attribute the costs to the health consequences of IDA by multiplying the number of children affected by mild, moderate and severe IDA with the respective cost effects. This calculation takes into consideration that IDA leads to reversible as well as to irreversible health consequences. We model the costs of IDA as the lifetime-costs of a birth cohort of children, corresponding to all the children born in one year, in order to account for these irreversible consequences. Fig 2 illustrates this approach: The birth cohort is affected by IDA from the age of 6 to 59 months. The health and cost effects of this exposure to IDA arise within the 6--59-month time window as well as in the remaining lifetime of the birth cohort. These costs correspond to the present value of the social costs of IDA in a given year. Intangible costs are calculated as DALYs lost [18] due to current illness, premature death and future permanent disabilities. We calculate the DALY losses according to the new WHO/ GBD 2010 approach and compare the results with those obtained with the previous WHO/ GBD 1996 approach [19,20]. Future life years and DALYs lost are not discounted following the new WHO/GBD approach [21,22]. We do not quantify DALY losses in monetary terms, as this is often criticized as ethically and methodologically questionable [23]. Production losses correspond to the future gross income losses due to impaired cognitive development caused by IDA in early childhood. Future production losses occur during worklife, starting at the age of labor force entry and ending with the end of work-life. Income losses are valued at the average Indian wage rate and not at the specific wages rates of the single SES. We thus implicitly assume some mobility across SES and that today's children may migrate to urban areas in their future life. This assumption is confirmed by a recent report of the World Bank which finds a substantial economic mobility, both upward and downward, in India [24]. We also take account of future economic growth. Wages are converted into US-dollars (USD) using an average 2013 exchange rate (current dollars). Future production losses are discounted to present value, as a dollar today is worth more than a dollar in 20 or 30 years, using a discount rate of 3% which is a widely accepted value in health economic evaluations [25]. Direct Medical costs are not considered in the model as anemia usually goes unnoticed and no treatment is provided. Costs are calculated for the year 2013 and reported separately for the 6-23-month and 24-59-month age groups as costs of health consequences differ depending on the age at which the deficiency occurred. Aggregate information on the population and the economy has been drawn from the World Bank [26]. We run 2 types of sensitivity analysis (SA) in order to test the robustness of our results and understand the influence of single model parameters. 
First, we run a probabilistic SA, which allows us to establish a range of plausible model results by randomly varying all model parameters within predefined distributions and then running the model 10,000 times (see Wieser, Plessow [9] for details on the procedure). Second, we explore the influence of changes in the DALY methodology that were introduced by the GBD 2010 [8]. We analyze the effect of the new disability weights and the new rules on discounting by calculating the intangible costs of IDA in early childhood according to both the old and the new methodology. The model is implemented in R [27]. Ethics statement: The dataset used in this study was obtained from the International Institute for Population Sciences [10]. Review of this study by an institutional review board was not sought as the dataset is anonymous and available for public use with no identifiable information on the survey participants. Table 1 shows the distribution of children in the birth cohort over all households, stratified by SES and area of residence. The last column displays the total number of children in the birth cohort.

Socio-economic patterns and child mortality
The table shows 2 clear patterns: First, the share of poor households is much higher in rural than in urban areas. In urban areas 72% of all children belong to SES 7-10, while in rural areas children are concentrated at the other end of the wealth scale, with 58% of the households belonging to SES 1-4. Second, the number of children per household is higher in poorer households. Only 7% of all children live in the wealthiest 10% of households while 12% of all children live in a household belonging to the poorest 10%. Table 2 summarizes the mortality rates, which have been calculated by quintiles, as the sample for deciles turned out to be too small to consistently estimate this rather rare event. Since mortality rates have decreased since 2005-06, we reduced the rates calculated on NFHS-3 data by 29.5%, which is the reduction observed in the World Bank data for the period from 2005 to 2013 [26]. Mortality decreases as household wealth increases, with a 3 to 7 times higher mortality in the poorest than in the wealthiest quintile of 6-23-month-old children. Mortality decreases from the 6-23 to the 24-59-month age-group and this decline is much more pronounced in wealthier households. Comparing mortality between rural and urban areas, we find a lower mortality rate in rural areas within each wealth quintile but a higher overall mortality rate in rural areas. This apparently contradictory result is explained by the fact that children living in rural areas belong mainly to the poorer SES while children living in urban areas belong mainly to the wealthier SES (see Table 1). Table 3 reports the prevalence of IDA by severity, age-group and area of residence. The overall prevalence of IDA is 49.5% in the 6-23-month and 39.9% in the 24-59-month age-group, with mild IDA at 13.3% and 14.9%, moderate IDA at 33.8% and 24.0%, and severe IDA at 2.4% and 1.0% in the 6-23-month and 24-59-month age-groups, respectively. Moderate and severe IDA are thus higher in the 6-23-month age-group while mild IDA is higher in the 24-59-month age-group. The prevalence of mild IDA increases slightly as wealth increases, as children move from moderate and severe IDA to mild IDA, but this increase is not statistically significant.
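Two of the small data-handling steps mentioned above — the linear smoothing of prevalence across the 10 SES deciles and the 29.5% downward adjustment of the NFHS-3 mortality rates — can be sketched as follows; the decile prevalence values and the mortality rate are invented for illustration only.

```python
import numpy as np

# Illustrative prevalence of moderate IDA across the 10 SES deciles
# (values invented for illustration, not NFHS-3 estimates).
ses = np.arange(1, 11)
raw_prevalence = np.array([0.41, 0.38, 0.40, 0.35, 0.33, 0.34, 0.30, 0.27, 0.28, 0.24])

# Linear smoothing across SES, as described in the Methods, to remove sample noise.
slope, intercept = np.polyfit(ses, raw_prevalence, deg=1)
smoothed = intercept + slope * ses

# Mortality adjustment: NFHS-3 (2005-06) rates reduced by 29.5%, the decline
# observed in World Bank data between 2005 and 2013.
nfhs3_mortality = 0.020                       # illustrative rate
adjusted_mortality = nfhs3_mortality * (1 - 0.295)

print("smoothed prevalence by decile:", np.round(smoothed, 3))
print("adjusted mortality rate:", round(adjusted_mortality, 4))
```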
The comparison between urban and rural households shows that the overall prevalence of mild, moderate and severe IDA is higher in rural than in urban areas. However, the distribution across SES shows that children living in poor urban households have a higher risk of moderate and severe IDA than children living in equally poor rural households. The higher prevalence of severe IDA among the children of the urban poor is particularly striking. However, the overall number of iron-deficient children is considerably higher in rural than in urban areas, as the large majority of poor households still lives in rural areas. Finally, we did not find any major differences in the prevalence of IDA between boys and girls and therefore do not differentiate by gender in our analysis.

Health and cost consequences of iron deficiency anemia
The model includes 3 different adverse health outcomes of IDA: First, IDA in 6-59-month-old children leads to impaired physical activity. This effect is temporary and ends as soon as the deficiency disappears. Second, severe IDA leads to increased mortality [28]. Third, moderate and severe IDA in children of 6-23 months leads to permanently impaired cognitive ability, which leads to a reduction in the adult wage. In applying separate disability weights for impaired physical activity and cognitive impairment, we follow the approach by Stein et al. [29]. Table 4 gives an overview of the effect sizes of the health effects of IDA and the respective sources in the literature. We use the new disability weights according to the GBD 2010 [18]. These new weights are based on large population surveys, while the previous disability weights were based on expert opinion [18,19]. All disability weights for the health consequences of IDA have increased from the old to the new GBD version (Fig 4), and these increases are particularly strong for impaired physical activity, which, according to the GBD 2010, is now also a consequence of mild anemia. The increase in disability weights for cognitive impairment is less pronounced but equally important due to its irreversible nature. As the GBD 2010 does not provide a weight for cognitive impairment due to moderate anemia, we calculate a corresponding weight by increasing the GBD 1996 weight by the same proportion as the disability weight for cognitive impairment due to severe anemia (+29.2%). A further major change occurred in the discounting of future DALYs: while there was an option for discounting in the GBD 1996 version, it has been eliminated from the new GBD 2010 methodology [18]. Table 5 displays the values and sources of additional economic parameters required for the calculation of production losses and DALYs in the model. Most parameters are based on World Bank data [26]. Future income growth is derived from OECD data [30] and daily wages have been obtained from the International Labour Organization (ILO) [31]. Table 6 reports the detailed results by age-group, area of residence, type of social cost, and time dimension of cost (current, future, mortality). [Table source notes: GBD 1996 [19] and GBD 2010 [18] methodology, with adaptations according to Stein et al. [29]; daily wage of 5.3 USD, own calculation based on ILO data [31].] The distinction between current and future costs is important, as future costs are caused exclusively by IDA in 6-23-month-old children due to the irreversible cognitive impairment triggered by moderate and severe IDA in this age-group.
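A hedged sketch of how the future production losses can be brought to present value: annual wage income, based on the 5.3 USD daily wage above, grows with the economy, a fraction of it is lost to cognitive impairment, and the resulting stream is discounted at 3%. The wage-loss fraction, working days, growth rate and work-life span below are illustrative assumptions, not the study's calibrated values.

```python
# Present value of lifetime production losses for one affected child (sketch).
# Only the 5.3 USD daily wage and the 3% discount rate come from the text;
# every other parameter below is an illustrative assumption.
DAILY_WAGE_USD = 5.3
WORKING_DAYS_PER_YEAR = 250       # assumption
WAGE_LOSS_FRACTION = 0.02         # assumed productivity loss from cognitive impairment
WAGE_GROWTH = 0.04                # assumed long-run wage growth
DISCOUNT_RATE = 0.03              # from the text
ENTRY_AGE, EXIT_AGE = 18, 60      # assumed work-life span

present_value = 0.0
for age in range(ENTRY_AGE, EXIT_AGE):
    annual_wage = DAILY_WAGE_USD * WORKING_DAYS_PER_YEAR * (1 + WAGE_GROWTH) ** age
    annual_loss = annual_wage * WAGE_LOSS_FRACTION
    present_value += annual_loss / (1 + DISCOUNT_RATE) ** age

print(f"PV of production losses per affected child: ~{present_value:,.0f} USD")
```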
Future losses due to impaired cognitive development in early childhood dominate both production losses (98% of total) and intangible costs (66.4% of total). Future effects are much larger than current effects as the birth cohort will live most of its life after the age of 5.

Social costs of iron deficiency anemia
The results show that rural areas bear the highest burden of IDA. With a 2.8 times larger population, rural areas have 3.1 times higher intangible costs and production losses than urban areas. Total costs of IDA differ substantially across SES, with 2.4 times higher intangible costs in the poorest than in the wealthiest quintile and 2.1 times higher production losses (Fig 5). The marked differences between urban and rural areas in Fig 5 are mainly due to the higher number of rural households in the poorest SES and of urban households in the wealthier SES. Overall losses of IDA are high even in wealthy households. [Table 6 note: current costs occur at 6-59 months of age; future costs occur after the age of 5 years.]

Sensitivity analysis
We assess the overall uncertainty of our results with a multivariate probabilistic SA by drawing model parameters from appropriately parameterized distributions. Fig 6 shows the ordered results of 10,000 model runs for total intangible costs and total production losses. The figures show the share of all model runs that resulted in a cost at or below a given value. Uncertainty in intangible costs is relatively limited, with 80% of all cases in a range between 1.4 m and 2.6 m DALYs in urban and between 4.4 and 8.2 m DALYs in rural India. Production losses show larger variation, with 80% of all cases between 2,232 and 11,673 m USD in urban and between 7,003 and 37,064 m USD in rural India. Upward variation is substantially larger due to the uncertainty regarding the future development of the Indian economy. In a second SA we explore the effect on intangible costs of the changes in the new GBD method, which include the abolition of discounting of future DALY losses and increased disability weights for the health effects of IDA. Fig 7 shows that the new method leads to a substantial increase of DALY losses. The introduction of the new disability weights leads to a nearly fourfold increase of current losses and a slight increase of future losses. DALY losses due to increased mortality are not affected, as the disability weight of death does not change. Elimination of discounting has no effect on current losses but increases future losses and losses due to mortality more than twofold. The strong effect of discounting is due to the fact that 75% of intangible costs occur in the future. Overall intangible costs of IDA in 6-59-month-old children triple with the introduction of the new GBD method.

Discussion
The prevalence of IDA is still extremely high in 6-59-month-old Indian children, with almost every second child suffering from some degree of IDA. Our results indicate that the total lifetime costs of IDA in a birth cohort affected by IDA between the age of 6 and 59 months amount to intangible costs of 8.3 m DALYs and production losses of 24,001 m USD in 2013. Intangible costs correspond to 125,699 complete lifespans lost, or 0.48% of the healthy life years of the birth cohort, and production losses correspond to 1.3% of gross domestic product. The SA suggests that our results are relatively robust and that the intangible costs of IDA have been substantially underestimated with the previous WHO/GBD methodology.
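As a rough illustration of the discounting effect described in the sensitivity analysis above: a permanent impairment acquired in early childhood accrues DALYs over the remaining lifetime, and summing those years with and without a 3% discount rate shows why dropping discounting roughly doubles the future losses. The disability weight and the remaining life expectancy below are placeholders, not the GBD values used in the model.

```python
# Why eliminating the 3% discount rate roughly doubles future DALY losses (sketch).
# Disability weight and remaining life expectancy are illustrative placeholders.
DISABILITY_WEIGHT = 0.02      # placeholder weight for permanent cognitive impairment
REMAINING_YEARS = 60          # placeholder remaining life expectancy after age 5
DISCOUNT_RATE = 0.03

undiscounted = DISABILITY_WEIGHT * REMAINING_YEARS
discounted = sum(DISABILITY_WEIGHT / (1 + DISCOUNT_RATE) ** t
                 for t in range(REMAINING_YEARS))

print(f"undiscounted DALYs: {undiscounted:.2f}")
print(f"discounted DALYs:   {discounted:.2f}")
print(f"ratio:              {undiscounted / discounted:.2f}x")  # ~2.1x with these values
```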
Both intangible costs and production losses are dominated by future losses due to impaired cognitive development, which is caused by IDA between the age of 6 and 23 months. The social costs of IDA arise in all socio-economic groups, with the largest share occurring in poor rural households but even wealthy urban households carrying a considerable share. Our estimate can be considered a lower bound of the social costs of IDA in 6-59-month-old children because it does not include a number of possible cost consequences: 1) We estimate production losses as future income losses without considering non-market production, such as subsistence agriculture, which may be especially important in rural areas. Poor households will also be particularly affected by income losses, as a wage reduction may have disastrous effects for a subsistence farmer, who produces little more than what is required for bare survival, while wealthier individuals will suffer less from a similar decrease. 2) We do not consider direct medical costs due to IDA. Severe IDA is a serious illness and some children certainly receive treatment. However, we lack data to estimate the extent of these medical costs. Affected poor households are likely to incur catastrophic health expenditures due to required out-of-pocket payments [32]. Our results appear plausible when compared with previous studies. The Institute for Health Metrics and Evaluation (IHME) attributes 3.3 m DALYs to iron deficiency in 1-4-year-old children [33], a result that at first sight appears to be substantially below our result of 8.3 m DALYs for 6-59-month-old children. This difference is, however, explained by the differences in the methodological approach and in the health consequences considered. We employ an incidence-based approach which considers the lifetime impact of impaired cognitive development due to IDA in early childhood, and includes the impact of all-cause mortality. Our estimate of the direct effects of iron deficiency amounts to 60% of the value reported by the IHME and is thus a more conservative estimate. Our results show that although IDA has been on the political agenda for the past 40 years [34], it is still a massive problem in India. IDA is essentially a consequence of the low level of bioavailable iron in the diet of the children and their mothers [35]. Indian diets are low in bioavailable iron due to phytates, which inhibit the absorption of iron, and low ascorbic acid/iron ratios (ascorbic acid facilitates the absorption of iron) [23]. Despite the strong economic growth in India in the last two decades, there has been little improvement in iron intake. In fact, household expenditures on nutrition did not increase with income, while nutritional deficiencies persist [36]. Many interventions appear to be ineffective and do not focus on the age-group of children below the age of 2, which is most in need [37,38]. In theory, public programs should provide every child between the age of 12 and 59 months with iron supplements, but in reality program coverage is very low at around 3-4% [2,39]. IDA has particularly severe socio-economic consequences for poor households as it leads to a health-based poverty trap [37,40]. Due to the impact of IDA on cognitive development, many children will not be able to reach their full potential and will become poor parents in their later life.
The combination of IDA and other nutritional deficiencies with a weak education system and an inadequate provision of health care services is a key obstacle to the economic development of India [41]. There is an urgent need for more frequent data on the health and nutritional status of the Indian population, in particular of children and women at childbearing age. The 10-year-intervals of the NFHS survey are too long for a timely monitoring of crucial public health issues. Our study has a number of limitations: 1) We make a series of assumptions on the longterm development of important model parameters such as the future economic growth rate, which crucially affect the magnitude of production losses. However, we always apply conservative estimates of these parameters and carry out an extensive multivariate SA. 2) We do not take account of the intergenerational effects of malnutrition, which have been the focus of recent research (see for example [42][43][44]). However these findings do not allow us to attribute specific effect sizes to specific types of malnutrition occurring in specific age-groups and can thus not be considered in our model. Conclusion IDA in 6-23-month-old children remains a major health problem in India and its social costs remain extremely high, both in terms of DALYs and income losses. There is an urgent need for effective interventions capable of improving the nutritional status of children under 5 and in particular of the 6-23-month-olds. The results may have important implications for the conception and targeting of future policies aimed at the reduction of IDA prevalence.
Forces between stable non-BPS branes As a step toward constructing realistic brane world models in string theory, we consider the interactions of a pair of non-BPS branes. We construct a dyonic generalization of the non-BPS branes first constructed by Bergman, Gaberdiel and Sen as orbifolds of D-branes on $T^4/\BZ_2$. The force between a dyonic brane and an electric brane is computed and is found to vanish at a nontrivial critical separation. This equilibrium point is unstable. For smaller separations the branes coalesce to form a composite dyonic state, while for larger separations the branes run off to infinity. We suggest generalizations that will lead to potentials with stable local minima. I. INTRODUCTION The existence of compact extra dimensions with sizes of order a millimeter appears to be consistent with all known experiments [1]. If true, the fundamental scale for physics may lie in the range 10 − 100 TeV, and the hierarchy problem becomes explaining why the size of these extra dimensions is so large. For this new view to be consistent, one must postulate that the Standard Model fields are confined to a hypersurface in this higher dimension geometry -a "brane-world" [2]. This idea is well-motivated in the context of string theory, where D-branes play exactly this role. A wide variety of gauge groups and matter content can be found as exact string theory compactifications (see [3,4] for some reviews). However we are still a long way off from reproducing the known Standard Model (with no other light fields) as an exact compactification of string theory. A step in this direction was taken in [5] where stable non-supersymmetric D-brane states were constructed in orbifolds of Type II string theory. Similar constructions developing more realistic gauge and matter contents followed (see [4] for a review). However these compactifications are always accompanied by unwanted light fields associated with rescaling the sizes of internal dimensions (or the dilaton, which in turn is related to the size of an 11th direction in the M-theory viewpoint). These light fields are often referred to as radions. At the phenomenological level, suggestions for stabilizing radion fields have been made in [6,7]. In particular, in [7] we showed the hierarchy problem could be solved by having a crystal structure in the internal dimensions, involving a large number of branes. For this to work, the forces between branes must balance at some finite critical radius, of order the fundamental length scale. In this paper we will take a step toward realizing this mechanism as an exact solution of string theory. We will begin by generalizing the non-BPS branes of [8,9] (see also [10]) to carry additional charge with respect to a p − 1 form field strength. The interaction potential between such a dyonic brane and a purely electrically charged brane takes a highly nontrivial form that does indeed display an extremum at finite brane separation. Unfortunately for the example constructed here, this extremum is a local maximum. The branes may either run off to infinite separation, or they may coalesce. We conjecture the branes will form a stable composite dyonic state. Similar bound states have been discussed for pairs of pure electric case in [11]. A supergravity solution for a stack of a large number of electric or magnetic branes has been constructed in [12]. We suggest that the inclusion of other types of brane charge will lead to a true stable minimum. 
To describe the non-BPS branes considered in this paper we employ the boundary state formalism. Such methods provide a convenient calculational tool for computing the potential between a pair of D-branes. Specifically for a pair of D-branes described by the boundary states |D 1 and |D 2 respectively, the potential between them is given by where D is the closed string propagator In the open string description of D-branes, one would instead have to compute the 1-loop partition function -or annulus diagram -with the open string endpoints on either brane. The advantage of the boundary state formalism is its universality. Once a boundary state is known one need only plug into (1.1) to find the potential between a pair of branes. In the open string description one must recompute the mode expansion for the open string each time one of the D-branes (|D 1, (2) ) is changed as well as find the corresponding projection operator to be inserted in the partition function. The boundary state formalism has been applied in a variety of cases to study the properties of non-BPS branes (some reviews may be found in [13,14,15,16]). II. CONSTRUCTING THE BOUNDARY STATE The next few subsections will be devoted to the construction of the boundary states used in this paper for computing the potential between a pair of non-BPS Dp-branes in type IIB for p even or type IIA for p odd. Before discussing the details let us first pause for a moment to specify the setup. Recall that non-BPS branes are in general unstablethey support tachyonic excitations. In some cases they can be stabilized by an appropriate orbifolding. The relevant example for us will be to take the x 6 , x 7 , x 8 , x 9 directions to lie on a torus T 4 with the (p + 1) directions tangent to the brane lying in the noncompact directions. Modding out by I 4 (−1) F L where I 4 reverses the signs of the T 4 coordinates and F L denotes the contribution to the spacetime fermion number coming from the left-moving sector of the worldsheet removes the tachyon field from the non-winding modes of the string. For torus radii R 6 , R 7 , R 8 , R 9 all larger than the critical value α ′ /2 this is enough to remove all tachyonic modes from winding string. In the next subsection we compute the 1-loop partition function for each of the individual branes that we shall consider. This is a necessary step in order to fix various coefficients in the boundary states, which we then construct in subsection II B. In section III we use the definition (1.1) to construct the potential between the two boundary states of interest. A. 1-loop partition function In this subsection we compute the 1-loop partition function for the non-BPS branes of interest in this paper, specifically one carrying charge associated to the twisted sector (p+1) form RR potential and the other carrying charges associated with the twisted sector (p+1) and (p-1) form RR potentials. The former is a special case of the latter so we begin with it. The inclusion of lower brane charge can be accomplished by turning on a constant B µν (or equivalently a constant F µν ) field. The resulting sigma model action is given by where ǫ AB is antisymmetric, ǫ 01 = 1, and we follow the metric conventions η M N = diag(−1, 1, ..., 1) and η AB = diag(−1, 1). Our index notation will be to use M, N, ... 
indices for 10 dimensional spacetime indices which we decompose as M = (µ, i, a) where µ runs over the brane coordinates µ = 0, ..., p, i runs over the remaining noncompact dimensions i = p + 1, ..., 5, and a runs over the T 4 coordinates a = 6, ..., 9. Also we have used A, B, ... to denote worldsheet indices. For the constant B field to give rise to codimension 2 lower brane charge we must have rank 2 B field. We therefore take B 12 = f = −B 21 with all other components of B set to zero. The boundary conditions on the open string endpoints following from the above action are then The worldsheet fermions are handled in the usual way with one exception. The boundary conditions for the M = 1, 2 fermions are the standard ones, namely the right-moving, ψ + , and left-moving, ψ − , fermions are related by in the Ramond sector and by in the Neveu-Schwarz sector. In order to preserve worldsheet supersymmetry however the M = 1, 2 fermion boundary conditions must be modified to where the upper(lower) sign in the second equation applies in the R(NS) sector. The mode expansions for the worldsheet fields subject to the above boundary conditions are obtained in the standard way, for the details see [17] for the boundary conditions involving an f and eg. [18] for the other boundary conditions. We find the following expansions: 1 n a µ n e −inτ cos nσ, µ = 1, 2 (2.7) for the worldsheet bosons and for the worldsheet fermions. The 1, 2 fields are given in terms of the above fields through the definitions The phase φ in the Z mode expansion (2.6) is given in terms of f by The index sum in the fermion expansions is over half-integers in the NS sector and integers in the R sector. The (anti-)commutation relations of the worldsheet fields imply the following mode (anti-)commutation relations with all other (anti-)commutation relations vanishing. From the above mode expansions it is straightforward to construct the Virasoro generators and in particular one finds for L 0 , where we have made the definitions These a 1,2 oscillators now satisfy the commutation relations in (2.15) for M, N = 1, 2. The partition function is given by where P is a projection operator. Recall that a single non-BPS brane has two Chan-Paton factors, the identity I and Pauli matrix σ 1 (the other possible Chan-Paton factors σ 2 and σ 3 are projected out in the construction of the non-BPS brane from a brane-anti-brane pair, see eg. [9]). Each Chan-Paton factor has its own projection operator. For the orbifold that we are considering, T 4 /I 4 (−) F L , these projections have been worked out [8,9] and are given by where the upper (lower) sign corresponds to the I (σ 1 ) Chan-Paton factor. The partition function is then a sum of partition functions for the I and σ 1 open string sectors. The trace appearing in each of these sectors however is over the same set of states so that the projection operator appearing in (2.23) is just P = P I + P σ 1 , which is simply Evaluating the partition function is now a straightforward task given all the data accumulated previously. The end result is where the functions f i are defined as In obtaining the result (2.26) we have used the covariant formalism. In particular the result (2.26) includes the ghost contribution. Since this contribution is independent of the background B-field we have not bothered to give the details, which can be found in eg. [19,20]. 
The dependence of the partition function (2.26) on the background B M N field is quite simple in that f only enters in an overall multiplicative factor. Taking f → 0 yields the partition function for a non-BPS in vanishing B M N field. This partition function agrees with that computed elsewhere [21] and serves as a useful check on our calculations. B. Construction of the Boundary State The boundary state description of D-branes has been widely used so we shall limit our discussion here to primarily listing the relevant formulae. In particular the construction of boundary states in the presence of external fields has been discussed in [22,23,24]. Some useful reviews on the subject are [13,14,15,16]. The two main problems are to determine the boundary conditions satisfied by the state and to fix the appropriate GSO projection for the orbifold under consideration. The first problem is easily handled by converting the open string boundary conditions in the previous section to the closed string boundary conditions via the procedure reviewed in [14]. The result is for the bosonic fields and for the fermionic fields where the matrix S M N is block diagonal and is the identity in the M, N = 0, 3, ..., 9 block and in the 1,2 block. The constant η can be ±1 and both possibilities arise in the final boundary state. Solving these equations is straightforward given the closed string mode expansions. The latter for the bosonic string coordinates is given by in the untwisted sector wherê in the compact directions andp M L =p M R =p M in the noncompact directions. In the twisted sector the mode expansion in the compact directions is given by X a (τ, σ) = x a + i α ′ /2 n∈Z+1/2 1 n α a n e −i2n(τ −σ) +α a n e −i2n(τ +σ) (2.39) assuming that the branes are located at the one of the orbifold fixed planes x a = 0, πR a . The fermionic mode expansions are given by where the index t satisfies t ∈ Z + 1/2 : untwisted NS or twisted R Z : untwisted R or twisted NS. respectively where the indices are as appropriate for the untwisted R and NS sectors. We will discuss the zero mode contribution |Dp, η ψ shortly. Similarly for the twisted sector we find where the twisted sector matter states |Dp, f, k X,T and |Dp, f, η ψ,T are exactly as in (2.44) and (2.45) with the appropriate changes in the index summations. The zero mode contribution to the ψ boundary state is not difficult to find, but as we shall see later the only non-trivial contribution to it that we need comes from the twisted R sector. In this sector only the M = (µ, i) worldsheet fermions have zero modes. To simplify the notation we let α, β, .. = 0, ..., 5. A convenient representation of the zero mode anticommutation relations is given by [25] where the γ matrices satisfy the SO(1, 5) Clifford algebra {γ α , γ β } = 2η αβ and γ = −γ 0 γ 1 · · · γ 5 . A simple calculation then yields |Dp, f, η ψ,T,R = Cγ 0 γ 3 · · · γ p 1 + (1/f )γ 1 γ 2 1 + 1/f 2 where we have taken an arbitrary normalization (the overall normalization will be fixed below). We have so far ignored the ghost contributions to the boundary states listed above. Since we are using the covariant formalism however it is crucial that we include these terms. As it turns out though the ghost boundary state is independent of the orbifold that we are taking, i.e., it is the same state as derived for the flat Minkowski background in [19,20]. The relevant formulae are nicely collected in the review [14] and we shall not bother to rewrite everything here. 
To construct the boundary state corresponding to the non-BPS brane that we want we must find the correct GSO projection corresponding to the orbifold configuration that we have taken. This has already been done [8]. In the untwisted sector one has the usual type IIA/B GSO projection. For a non-BPS brane this leaves the NS − NS sector part of the untwisted state but removes the R − R piece as it has the "wrong" worldvolume dimension. On the twisted sector side the NS − NS part of the state is projected out while the R − R sector piece remains. The resulting boundary state is where ǫ is ±1 corresponding to a (anti-)brane 1 . The final step in the construction of the boundary state is to compute the normalization factors N 1,f and N 2,f . This is done by computing the one-loop partition function for open strings on the non-BPS brane using the above boundary state and comparing to the open string computation of the previous section. Given the closed string propagator where the normal ordering constant a is 1/2 in the untwisted NS−NS sector and 0 otherwise, then the one loop partition function is given in terms of the boundary state by The matter contribution to the Virasoro generator L 0 is given by with a similar expression forL 0 . The indices here differ in different sectors (NS versus R and twisted versus untwisted) as discussed previously. Evaluation of the partition function (2.52) is now a straightforward task modulo one subtlety involving the zero modes. The point is simply that naive evaluation of the inner product (0) ψ,T,R Dp, f, η 1 |Dp, f, η 2 (0) ψ,T,R (in which we really mean not just the state (2.49) but also the ghost zero mode contribution as well given in eg. [14]) would yield a divergent result. One can however define this inner product [26] through the regularization where F 0 and G 0 are the zero mode contributions to the fermion and superghost number operators. The details of the regularization can be found in [26]. With this regularization we find where the right-hand-side is independent of f . In the next section we shall require this inner product in the case in which one of the boundary states has vanishing f , the result is which is not independent of f and will play an important role later. Finally one can determine the normalization constants N 1,f and N 2,f by comparing the boundary state computation of the partition function (2.52) to the open string evaluation (2.26). We find (2.57) (2.58) Similarly for N 1,0 and N 2,0 one simply takes f = 0 in the above expressions. In obtaining these results we have used the identity as well as the modular transformation properties of the f i 's (2.61) III. COMPUTATION OF THE POTENTIAL We now have all the ingredients to compute the potential between the pair of non-BPS branes of interest, ie., both charged under the twisted sector RR p+1 form potential with only one also charged under the twisted sector RR p − 1 form potential. Similar considerations for the interactions between non-BPS D-particles in type I string theory can be found in [27]. The potential between the two is evaluated using the boundary states through the expression (1.1). 
Specifically for branes located at x i and y i in the noncompact transverse directions (and located at the same orbifold fixed plane in the compact dimensions) we find where we have defined the f i functions with two arguments as and the various arguments of the f i 's are defined as In getting the result (3.1) we have also used the identity Although we have not been able to evaluate the integral in the potential (3.1) analytically, there are a few useful analytic limits that one can extract. The first and most trivial point is to check that V reduces to the potential evaluated in [21] which should just be the f → 0 limit of (3.1). Indeed it is straightforward to show that the two expressions agree. A more interesting result is to note that in the limit of large separation between the branes, i.e., large (x − y) 2 /α ′ , the integral will be dominated by large t. Hence we can determine whether the potential will be attractive or repulsive at large distances by evaluating the sign of the integrand at large t. In particular the quantity in parenthesis in (3.1) reduces in the large t limit to just where we have defined the dimensionless number ξ as Hence for large separation between the branes we should find an attractive potential when (3.9) is positive and a repulsive potential when it's negative. After some algebra it is easy to find the boundary between these two cases, i.e., vanishing asymptotic potential, and it satisfies Recall that the open string tachyon on the non-BPS branes is only projected out for radii R a > α ′ /2, hence the smallest value of ξ compatible with the absence of the tachyon is ξ = 1. For vanishing asymptotic potential this corresponds to f = 0. In fact [21] showed in this case that the potential is identically zero for all brane separations. For a given ξ > 1 we then find an asymptotically repulsive potential for f < f crit and an attractive one for f > f crit . The last piece of analytical data that we can extract is to reverse the above argument, namely for small brane separations, r ≪ √ α ′ , the integral (3.1) should be dominated by small t. Hence, as before, we can determine whether the potential will be attractive or repulsive at short distances. The term in parentheses in (3.1) reduces in the small t limit to t 2 sin(πν) sinh(πν/t) e 2πν/t − 4e πν/t + 2 − 4e −πν/t + e −2πν/t + 9 j=6 e −π(R 2 j /(α ′ /2)−1)/t . The important point to note about this expansion, when combined with the r 2 = (x − y) 2 dependent exponential factor in (3.1), is that it diverges for small enough r! Specifically if the condition holds, then the integral will diverge to minus infinity. On the other hand, we may perform an analytic continuation from large r to define the potential for r smaller than this value. The contribution from the small t part of the integral then is proportional to 2 (r 2 − r 2 crit ) (p−1)/2 . (3.14) The potential therefore behaves in a similar way to the brane-anti-brane potential computed in [28]. The potential (3.14) is finite at the critical value of r (3.13) but then becomes complex, indicating inelastic modes are opening up. At the critical separation, the interbrane As the branes move closer together, a condensate of open string tachyons will form, accompanied by emission of closed string states. We expect the endpoint of this process to be a stable composite dyonic non-BPS brane, at a non-trivial minimum of the non-abelian open string tachyon. 
This kind of tachyon condensation has been considered before for a pair of electric non-BPS branes in [11], and from the supergravity point of view for a large number of coincident branes in [12]. The upshot of the small-t expansion is that the potential must become attractive for r near r_crit. Combined with the large-t expansion, we find that at the very least the potential must have an unstable equilibrium point for small enough values of f. To investigate the potential further we performed the integration numerically using 500-digit precision arithmetic. We plot in Figures 1 and 2 the two generic cases that we find for the potential. Specifically, for the parameters in Figure 1, where f < f_crit so that the potential is asymptotically repulsive, we find a local maximum in the potential at some separation of the branes and then an attractive potential for all smaller separations. In Figure 2 we instead choose parameters such that f > f_crit and find that the potential is attractive at all separations. Finally, let us consider how one might generalize the brane configuration to realize a potential with a local minimum. For purely electric branes, the potential is a monotonic function of the separation, which is repulsive when the individual branes are tachyon-free. By introducing the lower-dimensional brane charge we introduce a new length scale into the interbrane potential, proportional to the string length times a function of the ratio of the charges. As we have seen, this is sufficient to generate a local maximum in the potential. However, the extra charge dominates the behavior at short distances, leading to a short-range attractive force. By introducing additional brane charges we introduce additional length scales into the interbrane potential, and in general a local minimum should be present. A challenge for the future is to construct stable non-BPS brane solutions with these extra charges, which promise new insights into the brane world scenario.
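As an illustration of the kind of high-precision quadrature used for such plots, the sketch below shows how an interbrane-potential-type integral could be evaluated with mpmath at 500-digit precision. The integrand here is a simple placeholder, not the actual expression (3.1) of the paper (whose f_i functions are omitted above); the way ξ and f enter is purely schematic.

```python
# Illustrative sketch only: high-precision numerical evaluation of an
# interbrane-potential-type integral V(r).  The integrand is a placeholder,
# NOT the actual expression (3.1) of the paper.
from mpmath import mp, mpf, exp, quad

mp.dps = 500  # 500-digit working precision, as used for the plots


def integrand(t, r2, xi, f):
    """Placeholder integrand: decaying exponential in r^2/t times a
    schematic charge-dependent prefactor (xi, f are dimensionless)."""
    prefactor = xi * (1 - f) - f * exp(-1 / t)  # hypothetical combination
    return prefactor * t ** mpf("-1.5") * exp(-r2 / (2 * t))


def potential(r, xi=mpf(2), f=mpf("0.1")):
    """Numerically integrate over the cylinder modulus t in (0, inf)."""
    r2 = mpf(r) ** 2
    # split the range so quad resolves both small-t and large-t behaviour
    return -quad(lambda t: integrand(t, r2, xi, f), [mpf("1e-6"), 1, mp.inf])


if __name__ == "__main__":
    for r in ("0.5", "1", "2", "4"):
        print(r, potential(mpf(r)))
```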
5,531.4
2001-08-23T00:00:00.000
[ "Physics" ]
A DECISION LEVEL FUSION METHOD FOR OBJECT RECOGNITION USING MULTI-ANGULAR IMAGERY Spectral similarity and spatial adjacency between various kinds of objects, shadow and occluded areas behind high rise objects as well as complex relationships lead to object recognition difficulties and ambiguities in complex urban areas. Using new multiangular satellite imagery, higher levels of analysis and developing a context aware system may improve object recognition results in these situations. In this paper, the capability of multi-angular satellite imagery is used in order to solve object recognition difficulties in complex urban areas based on decision level fusion of Object Based Image Analysis (OBIA). The proposed methodology has two main stages. In the first stage, object based image analysis is performed independently on each of the multi-angular images. Then, in the second stage, the initial classified regions of each individual multi-angular image are fused through a decision level fusion based on the definition of scene context. Evaluation of the capabilities of the proposed methodology is performed on multi-angular WorldView-2 satellite imagery over Rio de Janeiro (Brazil).The obtained results represent several advantages of multi-angular imagery with respect to a single shot dataset. Together with the capabilities of the proposed decision level fusion method, most of the object recognition difficulties and ambiguities are decreased and the overall accuracy and the kappa values are improved. * Corresponding author. INTRODUCTION The complex nature and diverse composition of land cover types found within urban areas exhibit several difficulties in producing large scale topographic maps from VHR satellite imagery (Shackelford and Davis, 2003).This situation also potentially leads to lower accuracy in pixel based image classification approaches which only use spectral characteristics of remotely sensed data (Myint et al., 2011;Shackelford and Davis, 2003;Zhou and Troy, 2008;Blaschke, 2010;Duro et al., 2012).As many land cover classes in an urban environment have similar spectral signatures, spatial information such as height and topological relationships must be exploited to produce accurate classification maps.Already many researchers have investigated the potential of the object based image analysis (OBIA) approaches for dealing with VHR images and complexities in urban areas (Myint et al., 2011;Zhou and Troy, 2008;Blaschke, 2010;Peets and Etzion, 2010;Jacquin et al., 2008;Laliberte et al., 2012).As it is depicted in previous researches in the field of OBIA, the accuracy of object recognition results in complex urban areas directly depends on the segmentation and rule based classification processes (Ivits et al., 2005;Platt and Rapoza, 2008;Zhou and Troy, 2008;Myint et al., 2011;Laliberte et al., 2012).Moreover, depending on the viewing angle of the sensor, some parts of urban objects may be occluded by high rise objects such as buildings or trees (Habib et al., 2007;Bang et al., 2007).Therefore, using only single shot imagery, it is very difficult to obtain valuable object recognition results.On the other hand, stereo imagery (from just two viewing directions) may partially improve object recognition results, but still there might be missing information for filling up occluded areas.The unique agility and collection capacity of some modern satellite sensors such as WorldView-2 provide image sequences of a single target from many observation angles within one satellite orbit.Fusion of the 
information coming from multi-angular imagery is valuable for filling occluded areas and obtaining reliable object recognition results. Therefore, multi-angular imagery together with contextual information and higher levels of analysis should be utilized in order to improve object recognition results in complex urban areas based on VHR satellite images. In this paper a context aware system is proposed for decision level fusion of the object recognition results and for solving problems arising from occlusions and shadow areas, based on multi-angular VHR remotely sensed data. PROPOSED METHOD As depicted in figure 1, the proposed object recognition method is composed of two main stages: single view and multi view processes. In the first stage, the regions' internal contextual information is utilized for object classification on each of the individual images. Per-segment spectral and textural pixel interactions, together with structural features based on the size, shape and height of a segmented region, generate the region's internal context. This kind of information is utilized for object based image analysis, composed of image segmentation and object classification, on each of the individual images. The second stage of the proposed methodology performs decision level fusion on the multi-view classified regions based on scene context in order to reduce ambiguities and uncertainties in the generated classification map. Processing Based on Single View Object based image analysis requires generating segmented regions as classification units. In this research, the multi-resolution segmentation technique is applied to the content of each of the individual images in order to segment it into regions. The multi-resolution segmentation procedure starts with single image objects of one pixel and repeatedly merges pairs of image objects into larger ones. The merging decision is based on the local homogeneity criterion, describing the similarity between adjacent image objects (Baatz and Schape, 2000). After performing segmentation, a knowledge based classification process should be performed on each of the segmented regions. Therefore, it is necessary to gather proper knowledge composed of per-segment interactions between pixels and structural characteristics of each segmented region in order to provide the region's internal context. The following items represent a sample of these features with their basic mathematics: 2.1.1 Spectral Features: the ratios between reflectance values of various spectral bands generate normalized difference indices (NDI) and simple ratios (SR); energy and entropy are examples of the textural features used. Despite the high potential of the strategies in OBIA, regions wrongly classified due to occlusion and shadow caused by high rise 3D objects decrease the reliability of the object recognition results. Therefore, the proposed decision level fusion of the multi-view object based image analysis can handle these difficulties in an efficient way.
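As a concrete illustration of the per-segment spectral features mentioned above, the following sketch computes a normalized difference index and a simple ratio averaged over each segmented region. It is a minimal example, not the authors' implementation; the band inputs and the simple per-segment averaging are assumptions.

```python
# Illustrative sketch (not the authors' code): per-segment spectral features
# of the kind described above -- normalized difference index (NDI) and
# simple ratio (SR) between two bands, averaged over each segmented region.
import numpy as np


def segment_spectral_features(band_a, band_b, segment_ids):
    """band_a, band_b: 2-D reflectance arrays; segment_ids: 2-D integer
    label image from the segmentation step. Returns {segment: (NDI, SR)}."""
    eps = 1e-9  # guard against division by zero
    ndi = (band_a - band_b) / (band_a + band_b + eps)
    sr = band_a / (band_b + eps)
    features = {}
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        features[seg] = (float(ndi[mask].mean()), float(sr[mask].mean()))
    return features
```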
Processing Based on Multi Views High rise objects together with the viewing angle of the sensor introduce uncertainties in the object classification results which cannot be resolved using only single shot imagery. Therefore, in the proposed object recognition methodology another processing stage is defined based on the context aware decision level fusion of the object based image analysis on multi views. This stage is composed of two main operations: pre-analysis on the object classification results for all images and the decision level fusion of the multi views. 2.2.1 Pre-Analysis: The total visibility map generated from all of the individual visibility maps provides the main tool for performing visibility analysis. Visibility analysis determines, for each pixel (i,j) of the ground space, the number of views in which it is visible. Performing visibility analysis as a pre-analysis operation, one can categorize all of the ground space pixels into three groups: visible in all images, visible in some images and visible in none of the (occluded in all) images. This categorization is useful for the definition of scene context in the proposed decision level fusion algorithm. Decision level fusion of multi views: The first step of the decision level fusion is the back projection from each ground space pixel to its pre-identified visible images (all images for the group visible in none) based on the results of the pre-analysis operation. These back projections find the objects and sub-objects in the multi views that those pixels belong to. Therefore, there are three groups of areas containing various object types and their sub-objects in the multi views. As the scene context of an object can be defined in terms of its co-occurrences with other objects and its occurrences in the whole scene, information regarding the sensor's look angle, the distance from occluded areas, and the heights and areas of the sub-objects that those pixels belong to are the fundamentals of the scene context definition. In this research, scene contextual information is utilized for weighting all class objects in order to assign them to the ground space pixels. Thus, the proposed decision fusion strategy takes place at the level of the pre-defined object classes based on the object classification results. Therefore, if there are n recognizable object classes in the multi views, n different weights should be calculated for assigning each of the object classes to the ground space pixels. Weighting the class objects highly depends on the scene context and on the ground space categorization resulting from the visibility analysis. The following items represent the proposed weighting strategy based on scene context: 1. Off-nadir weight depends on sensor information such as the off-nadir viewing angle of each of the multi view images. Classified regions in the views nearest to nadir have the largest off-nadir weights. 2. Weight of occlusion depends on the longest distance between each object region and its neighbouring occluded areas in the multi view images. A larger distance from occluded areas leads to a larger weight of occlusion. 3. Weight of structural features depends on spatial information such as the area of the object region. For instance, if pixel (x,y) belongs to a large sub-object, the weight of structural features becomes larger.
4. Weight of topological relationships depends on the neighbouring relationships between occluded regions and their nearest visible object regions. Determining the weight of topological relationships for a sub-object occluded in all views depends on its neighbouring visible objects with the smallest height differences. In other words, if there is only a small height difference between two neighbouring object regions, the class label of the visible region is assigned to the occluded one. For ground space pixels that are categorized in the groups visible in all or visible in some images, the weight of scene contextual information for each object class is calculated as the summation of the off-nadir, occlusion and structural weights in their visible views. For ground space pixels that are visible in none of the images, the summation of the off-nadir weight and the weight of topological relationships in all views is calculated as the weight of scene context for each object class. In addition, the classification accuracies of the various object types in each view also affect the decision fusion results. Weights of object classes are determined based on the user and producer accuracies in each of the multi-view object based image analyses. The object class with the largest weight is selected as the winner class label for each of the ground space pixels. If the winner class is shadow, structural and height-based relations are used in order to determine the true object type instead of the shadow area. EXPERIMENTS AND RESULTS The potential of the proposed object recognition methodology is evaluated for automatic object recognition in multi-angular WorldView-2 satellite imagery over Rio de Janeiro (Brazil), which was collected in January 2010 within a three-minute time frame with satellite elevation angles of 44.7° and 56.0° in the forward direction, and 59.8° and 44.6° in the backward direction, as shown in Figure 2.
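The sketch below illustrates, under simplifying assumptions, the visibility-based categorization and weighted class-label voting described in the fusion strategy above. It is not the authors' implementation: the individual weight terms stand in for the off-nadir, occlusion and structural weights, and the topological-relationship rule for fully occluded pixels is left unresolved.

```python
# Illustrative sketch (not the authors' implementation) of the decision level
# fusion described above: ground pixels are categorised by visibility and the
# per-class weights from each view are summed; the weight terms here are
# simplified placeholders for the off-nadir, occlusion and structural weights.
import numpy as np


def fuse_labels(class_maps, visibility, off_nadir_w, occlusion_w, struct_w, n_classes):
    """class_maps: list of 2-D label arrays, one per view (ground geometry).
    visibility: list of 2-D boolean arrays (pixel visible in that view).
    *_w: lists of 2-D weight arrays per view. Returns the fused label map."""
    h, w = class_maps[0].shape
    scores = np.zeros((n_classes, h, w))
    n_visible = np.sum(visibility, axis=0)          # visibility analysis
    for cmap, vis, w1, w2, w3 in zip(class_maps, visibility, off_nadir_w,
                                     occlusion_w, struct_w):
        weight = np.where(vis, w1 + w2 + w3, 0.0)   # only visible views vote
        for c in range(n_classes):
            scores[c] += weight * (cmap == c)
    fused = scores.argmax(axis=0)
    # pixels visible in no view would need the topological-relationship rule
    fused[n_visible == 0] = -1                      # left unresolved here
    return fused
```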
The multi-angular sequence contains the downtown area of the city, including a number of large and high buildings, commercial and industrial structures, and a mixture of community parks and private housing.Moreover, using multi-angular WorldView-2 imagery the DSM is generated from multiple pairs of panchromatic stereo images in epipolar geometryusing the Semi-Global Matching (SGM) algorithm (Hirschmüller 2008;d'Angelo et al., 2010;Sirmacek et al., 2012).In order to obtain OBIA results based on multi-angular images, multi-resolution segmentation algorithm is applied on each of the independent satellite images using eCognition software with the values 90, 0.2 and 0.1 for the scale parameter, compactness and shape parameters, respectively.Then, various spectral, textural and structural features are measured on image regions for the generation of the knowledge base containing internal context and performing object level classification on segmented regions.Building, road, tree, grass land and shadow area are the preidentified object classes based on visual inspections.Despite shadow is not a real object class, detecting real objects under shadow areas based on spectral responses is a difficult task dealing with VHR imagery.Therefore, in this paper shadow is first recognized as separate object class and in a later step, we are going to recover shadow areas based on decision level fusion of topological relationships in multi-angular views.Before decision level fusion of multi-angular images, by performing visibility analysis on total visibility map, ground space categorized into three groups: 924914 pixels are visible in all images, 552286 pixels are visible in some images and just 25200 pixels are visible in none of the (occluded in all) images.After performing decision level fusion on the OBIAs based on the proposed context aware strategy, analysis represents that decision level fusion removes shadow from object recognition results and detects road regions occluded by high rise buildings especially in pixels which are visible in all or some images.Figure 3 compares the object based image analysis from each of the multi-angular images with decision level fusion of them.According to the obtained results, despite the high potential of the utilized regions' internal context, high rise buildings together with small elevation angles of the sensor increase difficulties in object recognition in urban areas.However, using scene context and classification accuracies in the proposed decision level fusion algorithm can improve the classification results.Moreover, for the quantitative evaluation of the results from decision fusion system, some areas of the pre-defined object classes are manually selected by an expert operator on 3D model generated from multi-angular scenes (Figure 4).Sample areas are then compared with their corresponding results of different steps of the proposed object recognition algorithm.As depicted in table 1, the comparison is based on the number of correctly detected pixels (true positive), wrongly detected pixels (false positive), and the not correctly recognized pixels (false negative), determined after performing the object recognition algorithm.After that, using quantitative values for each object class, completeness, correctness and quality criteria are determined for the obtained results (Tabib Mahmoudi et al. 
2013).In order to perform more quantitative analysis on the results, object based image analysis of each of the multi-angular images are compared within sample regions and then, their overall accuracies and kappa are compared with their decision level fusion results.As it is depicted in table 2, the largest values of overall accuracy and kappa belong to the views with largest elevation angles and by decreasing the elevation angles; overall accuracies and kappa are decreasing.Comparing tables 1 & 2 with each other shows that using proposed algorithm for decision level fusion of multi-angular images increase the amount of overall accuracy and kappa values in classification results. CONCLUSION A context aware strategy is proposed for decision level fusion of the object based image analysis results based on multi-angular WorldView-2 satellite imagery. According to the various elevation angles of the sensors, high rise 3D objects such as buildings may cause occlusion and shadow areas in the VHR remotely sensed imagery.In such a situation, true class labels of some parts of object regions cannot be detected.So, large number of false positive and false negative pixels decreases the classification accuracies.Using the proposed decision level fusion method of multi-angular imagery based on the developed context aware system improves the classification accuracy. Results are shown that visibility analysis on class labels before and after fusion and performing decision level fusion on the OBIA results of multi-angular images reduce the amount of wrongly classified pixels for 3D object classes and reveals improved class labelling of the occluded areas.This method still needs further modifications in the field of definition of contextual information.Moreover, incorporating a Lidar DSM and multi-angular satellite or aerial images can be tested for improvement of object recognition results in complex urban areas. Features: the grey value relationships between each pixel and its neighbours in the pre-identified segmented regions. pixel (i,j) N=neighbour size in the segmented region 2.1.3Structural Features: Calculating suitable structural features based on spatial characteristics and heights of segmented regions provide another part of internal context for using in the object classification process.In this paper, 2D structural features such as area, elongation and solidity are used together with relief and surface roughness as 3D structural featuresLengths = Major & minor axes of the bounding polygon Convex area= Area of the smallest convex polygon that can contain the region After generation of the above mentioned spectral, textural and structural features based on the image data and height products, an optimum selection of features for recognizing each individual object classes has the most effective role in generating a rich knowledge base of region's internal context.The object classification can be performed by encapsulating the knowledge base containing region's internal context, into a rule set and definition of a strategy for object recognition.The proposed strategy is a multi-process classification model that is a progressive process composed of multiple steps.In the first step, the entire segmented image dataset is classified based on spectral reasoning rules.Then, in the next steps, classified objects are modified based on textural and structural reasoning rules, respectively (Tabib Mahmoudi et al., 2013). Figure 1 . 
General structure of the proposed object recognition method. The total weight of each object class (object class i in view k) is therefore composed of the weights of scene context and classification in the various views.
Figure 2. Dataset: a) WV-2 imagery with 56° satellite elevation angle, b) WV-2 imagery with 44.7° satellite elevation angle, c) WV-2 imagery with 59.8° satellite elevation angle, d) WV-2 imagery with 44.6° satellite elevation angle, e) Digital Surface Model (ground space).
Figure 3. Results of the object based image analysis and decision level fusion of them: a) OBIA of WV-2 imagery with 56° satellite elevation angle, b) OBIA of WV-2 imagery with 44.7° satellite elevation angle, c) OBIA of WV-2 imagery with 59.8° satellite elevation angle, d) OBIA of WV-2 imagery with 44.6° satellite elevation angle, e) decision level fusion result.
Figure 5 illustrates the considerable improvement in quality of the context aware decision level fusion results with respect to the object based image analysis of each of the multi-angular images.
Table 1. Accuracy assessment of the obtained results from the proposed decision fusion algorithm.
Table 2. Accuracies of multi-angular object based image analysis.
4,046
2013-09-25T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer Adapting a large language model for multiple-attribute text style transfer via fine-tuning can be challenging due to the significant amount of computational resources and labeled data required for the specific task. In this paper, we address this challenge by introducing AdapterTST, a framework that freezes the pre-trained model's original parameters and enables the development of a multiple-attribute text style transfer model. Using BART as the backbone model, Adapter-TST utilizes different neural adapters to capture different attribute information, like a plug-in connected to BART. Our method allows control over multiple attributes, like sentiment, tense, voice, etc., and configures the adapters' architecture to generate multiple outputs respected to attributes or compositional editing on the same sentence. We evaluate the proposed model on both traditional sentiment transfer and multiple-attribute transfer tasks. The experiment results demonstrate that Adapter-TST outperforms all the state-of-the-art baselines with significantly lesser computational resources. We have also empirically shown that each adapter is able to capture specific stylistic attributes effectively and can be configured to perform compositional editing. Passive Past Future Such arguments aren't bought by opponents. Opponents will not buy such arguments. Opponents did not buy such arguments.author writing style (Syed et al., 2020).Nevertheless, most of the existing TST studies are confined to single-attribute TST tasks.Few works have explored multiple-attribute TST tasks, where TST models are designed to control and transfer text in multiple target stylistic attributes.Lample et al. (2019) attempts style transfer with multiple attributes by conditioning on the average embedding of each target attribute and using a combination of denoising autoencoder (DAE) and back-translation techniques.Goyal et al. (2021) propose an approach to initialize an encoderdecoder setup with a transformer-based language model that is pre-trained on a generic corpus and enhances its capability of re-writing to multiple target style dimensions by utilizing multiple style-aware language models as discriminators. A possible approach to perform single and multiple attribute TST tasks is to leverage large pretrained language models (PLMs).The PLMs have been pre-trained on large corpora, which allows them to capture natural language's syntactic and semantic information.This characteristic of PLMs makes them well-suited for TST tasks, where the model needs to understand the content and style of the input text.Syed et al. (2020) fine-tune a denoising autoencoder (DAE) for the stylized re-writing task by initializing the encoder and decoder with a pre-trained language model trained on Masked Language Modeling (MLM) objectives (Devlin et al., 2019).Wang et al. (2019) fine-tune GPT-2 model (Radford et al., 2019) using the text formality transfer rules harnessed from analyzing the GYAFC parallel dataset (Rao and Tetreault, 2018).The fine-tuned GPT-2 model was subsequently used to transfer the formality of text (e.g., informal to formal text).However, fine-tuning PLMs for multiple-attribute TST remains challenging as a significant amount of computational resources and style-labeled data are required to perform TST for each stylistic attribute. 
Research Objectives.To address these research gaps, we propose Adapter-TST, a parameterefficient framework that utilizes BART (Lewis et al., 2020) or T5 (Raffel et al., 2020) as the backbone model and trains neural adapters to capture multiple stylistic attributes for multiple-attribute TST.During the training of Adapter-TST, we freeze the original parameters of the pre-trained BART or T5 model and only update the parameters of adapters to relax the dependence on computational resources and supervised data.The proposed Adapter-TST model is flexible to handle different settings of multiple-attribute TST by configuring the connection method among adapters.Figure 1 illustrates the different settings of multiple-attribute TST tasks.Paralleling the adapters in Adapter-TST can generate multiple outputs in the corresponding target style simultaneously (setting b) and stacking the adapters for compositional editing in terms of different target styles at the same time (setting c).We conduct experiments on the traditional sentiment transfer task and multiple-attribute TST tasks, including multiple stylistic attribute outputs and compositional editing.Results of automatic and human evaluations show that Adapter-TST can outperform the state-of-the-art baselines to transfer and generate high-quality text with lower computational resources. Contributions.We summarize our contributions as follows: (i) We introduce an Adapter-TST, which is a parameter-efficient framework that can perform multiple-attribute TST tasks with significantly lower computational resources.(ii) Included in the Adapter-TST are two TST configurations, parallel and stacking, which support multiple-output TST and compositional editing, respectively.(iii) We conducted extensive experiments on real-world datasets.The automatic and human evaluation results show that Adapter-TST can outperform the state-of-the-art baselines to transfer and generate high-quality text. Text Style Transfer TST is an emerging research topic that has garnered attention from computational linguists and computer science researchers.The recent comprehensive survey (Hu et al., 2022a;Jin et al., 2022) summarizes the existing TST approaches. While the majority of existing studies have focused on performing TST on single attributes such as sentiment (Li et al., 2018;Luo et al., 2019;Fu et al., 2018) or formality (Rao and Tetreault, 2018), recent studies have also explored multiple-attribute TST tasks, where TST models are designed to control and transfer text in multiple target stylistic attributes.Lample et al. (2019) attempts style transfer with multiple attributes by conditioning on the average embedding of each target attribute and using a combination of denoising autoencoder (DAE) and back-translation techniques.Goyal et al. (2021) propose an approach to initialize an encoder-decoder setup with a transformer-based language model that is pre-trained on a generic corpus and enhances its capability of re-writing to multiple target style dimensions by utilizing multiple style-aware language models as discriminators.In this study, we contribute to this limited multipleattribute TST literature by proposing an alternative approach to generate multiple stylistic outputs and perform compositional editing efficiently. 
Due to the lack of parallel training data, most existing TST methods are designed to train with non-parallel style-labeled sentences as input.A popular line of TST approaches aims to disentangle the text's content and style in the latent space to perform TST (Shen et al., 2017;Zhao et al., 2018;Fu et al., 2018;Chen et al., 2018;Logeswaran et al., 2018;Yin et al., 2019;Lai et al., 2019;Vineet et al., 2019).Another common approach is to leverage PLMs.For instance, Syed et al. (2020) fine-tune a denoising autoencoder (DAE) for the stylized re-writing task by initializing the encoder and decoder with a pre-trained language model trained on Masked Language Modeling (MLM) objectives (Devlin et al., 2019).Wang et al. (2019) fine-tune GPT-2 model (Radford et al., 2019) using the text formality transfer rules harnessed from analyzing the GYAFC parallel dataset (Rao and Tetreault, 2018).The fine-tuned GPT-2 model was subsequently used to transfer the formality of text (e.g., informal to formal text).However, finetuning PLMs for multiple-attribute TST remains challenging as a significant amount of computational resources is required to perform the task; multiple PLMs need to be fine-tuned for the different attributes to perform multiple-attribute TST.In this study, we overcome this limitation by proposing Adapter-TST, which is a parameter-efficient framework that leverages on PLMs but requires significantly lesser computational resources to perform multiple-attribute TST. Adapter-based Models PLMs, pre-trained on large-scale text corpus with unsupervised objectives, have established state-ofthe-art performances on various NLP downstream tasks.Many studies fine-tune PLMs with language modeling and downstream task objectives to obtain better performance (Zhang et al., 2019b;Lauscher et al., 2019;He et al., 2019;Xiong et al., 2019).To leverage the powerful PLMs more efficiently, Houlsby et al. (2019) add adapter layers, small neural networks, into each transformer layer to obtain near state-of-the-art performance on the GLUE benchmark while updating only the parameters of adapters.Inspired by this work, more adapter-based models (Wang et al., 2021;Liu et al., 2021;Zhong et al., 2021) are proposed to inject task-specific knowledge into PLMs with adapters.Inspired by the adapter architecture, we propose Adapter-TST, which trains different neural adapters to capture different stylistic attributes to perform the multiple-attribute TST.The proposed adapter framework has two configurations that support multiple stylistic attribute outputs and compositional editing. Methodology This section proposes Adapter-TST, which adds neural adapters into each transformer layer to capture different attribute information for multipleattribute TST.We first introduce the adapter structure used in Adapter-TST and its parameter efficiency.Subsequently, we explain how the adapters are configured for different multiple-attribute TST settings, namely, multiple stylistic attribute outputs and compositional editing.Finally, we describe the training objectives of Adapter-TST. Adapter Structure We present an adapter structure in Figure 2. The adapter consists of a bottleneck that contains few parameters relative to the attention and feedforward layers in the original model.A skip connection is applied across two projection layers.In our proposed Adapter-TST, these adapters will be trained to capture different stylistic attributes.In contrast to Houlsby et al. 
(2019), who add the adapter module twice to each transformer layer, we simplify the approach by adding the adapter layer into each transformer layer only once, making Adapter-TST's architecture more parameter efficient. We use BART-large (24-layer, 1024-hidden, 16-heads, 406M parameters) or T5-large (24-layer, 1024-hidden, 16-heads, 770M parameters) as the backbone model in Adapter-TST. For each adapter layer, we denote the hidden dimensions of the down-projection and up-projection layers as H_d = 64 and H_u = 1024. The bottleneck adapter layers are plugged into each layer of BART-large or T5-large, and different adapter layers do not share parameters. Thus the total number of parameters for each attribute-specific adapter is about 3.17M, which is only 0.78% of the original BART-large model and 0.41% of the T5-large model, making the training process parameter efficient. Note that the original parameters of BART-large or T5-large are frozen during multiple-attribute TST training, and only the parameters of the adapters are trainable and initialized randomly. Adapter-TST Configurations Adapter-TST has two configurations, parallel and stack, which support two multiple-attribute TST task settings: multiple stylistic attribute outputs and compositional editing, respectively. To better understand the two configurations of Adapter-TST, we take the multiple-attribute TST task with tense and voice attributes as an example (tense has three attribute values and voice has two; see the description accompanying Figure 3). In the Parallel connection, in the latter transformer layers we distribute the hidden states to the corresponding adapters to make sure that the input of an adapter in the current layer is the output of the adapter with the same attribute value in the preceding layer. Stack Connection. Compositional editing requires TST models to change multiple attributes simultaneously while preserving the original content. For instance, as shown in Figure 1(c), given an input sentence with present tense and active voice, the multiple-attribute TST model needs to generate one sentence in both future tense and passive voice. Adapter-TST performs compositional editing by using the Stack connection method shown in Figure 3(b). Training Objectives. The TST task aims to transfer the style of inputs while preserving the original semantic content. Thus, we train Adapter-TST with a classification loss L_cls for style transfer and a reconstruction loss L_rec for content preservation. During training, the original parameters of BART-large or T5-large are frozen, and only the parameters of the adapters are trainable. Classification Loss L_cls: The classification loss ensures that the transferred sentence conforms to the target attribute value. To this end, we first pre-train a TextCNN-based (Kim, 2014) binary attribute classifier D for each attribute, then apply the pre-trained attribute classifiers to guide the updates of the adapters' parameters such that the output sentence is predicted to be in the target style, as in Eq. (1), where x′ is sampled from the distribution of model outputs at each decoding time step, and y_t is the target attribute value. The policy gradient algorithm (Sutton et al., 1999) is used for discrete training with the attribute classifiers.
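A minimal sketch of the bottleneck adapter described above is given below, using H_d = 64 and H_u = 1024 as stated; the choice of ReLU as the nonlinearity is an assumption, not taken from the paper. The printed parameter count reproduces the quoted figure of roughly 3.17M parameters per attribute-specific adapter over 24 layers.

```python
# Minimal sketch of the bottleneck adapter described above: down-projection
# to H_d = 64, a nonlinearity (choice of ReLU is an assumption), up-projection
# back to H_u = 1024, and a skip connection across the two projections.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size: int = 1024, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


if __name__ == "__main__":
    adapter = BottleneckAdapter()
    per_layer = sum(p.numel() for p in adapter.parameters())
    print(per_layer, 24 * per_layer)  # ~132K per layer, ~3.17M over 24 layers
```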
Reconstruction Loss L rec : The reconstruction loss attempts to preserve the original content information in the transferred sentences.Specifically, the loss function constricts the adapters to capture informative features to reconstruct the original sentence using the learned representations.Formally, we define L rec as follows: where y i is the i-th attribute value of the input sentence, z i denotes the hidden representation extracted by the corresponding adapter.The input sentences are only reconstructed by the corresponding adapter and transferred by other adapters. Putting them together, the final joint training loss L is as follows: Where λ is a balancing hyper-parameter to ensure that the transferred sentence has the target style while preserving the original content. Experiment Setting Datasets.We evaluate the proposed Adapter-TST model on sentiment transfer and multipleattribute TST tasks using the Yelp1 and StylePTB (Lyu et al., 2021) datasets, respectively.We adopt the train, development, and test split for the Yelp dataset as (Luo et al., 2019).Lyu et al. (2021) introduce StylePTB2 , a large-scale benchmark with compositions of multiple-attribute TST tasks which allow the modeling of fine-grained stylistic changes.In our experiments, we choose four subsets for multiple-attribute TST: Tense-Voice, Tense-PP-Front↔Back, Tense-PP-Removal, and Tense-ADJADV-Removal. Specifically, the four subsets Baselines.For sentiment transfer, we benchmark Adapter-TST against nine state-of-the-art TST models: BackTrans (Prabhumoye et al., 2018), CrossAlign (Shen et al., 2017), DualRL (Luo et al., 2019), Unpaired (Li et al., 2019), UnsuperMT (Zhang et al., 2018), Style Transformer (Dai et al., 2019), DeleteOnly, Template, and Del&Retri (Li et al., 2018).For multiple stylistic attribute outputs task, Style Transformer (Dai et al., 2019), a transformer-based model for single-attribute TST, is selected as a baseline.We train multiple Style Transformer models for each attribute and perform style transfer separately.For compositional editing, we use the trained Style Transformer models to perform sequential editing, which transfers one attribute after another to compare results with our model.We term this baseline as Sequential Style Transformer setup. Training.The experiments were performed on an Ubuntu 20.04.3 LTS system with 24 cores, 128 GB RAM, and Nvidia RTX 3090.The model implementation is based on AdapterHub (Pfeiffer et al., 2020) and Huggingface Transformers (Wolf et al., 2020).For the balancing hyper-parameter λ, we choose the best-performed one from (0.9, 1) as the BART-large and T5-large models can copy the input without training with TST objectives. 
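To make the training objective concrete, the following sketch assembles the two losses described above. Since the exact equations are omitted here, treating the joint loss as L_rec plus λ times a policy-gradient estimate of L_cls is an assumption, as is the specific REINFORCE-style form of that estimate.

```python
# Hedged sketch of assembling the joint objective described above; the exact
# combination follows equations omitted here, so treating the total loss as
# L = L_rec + lambda * L_cls is an assumption, as is the REINFORCE-style
# estimate of the (non-differentiable) classifier reward.
import torch


def joint_loss(rec_loss: torch.Tensor,
               log_probs: torch.Tensor,
               classifier_reward: torch.Tensor,
               lam: float = 0.95) -> torch.Tensor:
    """rec_loss: reconstruction loss from the corresponding adapter.
    log_probs: sum of log-probabilities of the sampled transferred tokens.
    classifier_reward: probability the pre-trained classifier assigns to the
    target attribute value for the sampled sentence (treated as a constant)."""
    cls_loss = -(classifier_reward.detach() * log_probs).mean()  # policy gradient
    return rec_loss + lam * cls_loss
```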
Automatic Evaluation We evaluate the proposed model and baselines on three criteria commonly used in TST studies: transfer strength, content preservation, and fluency.An attribute classifier is first pre-trained to predict the attribute label of the input sentence.The classifier is subsequently used to approximate the style trans- fer accuracy (ACC) of the sentences' transferred attributes by considering the target attribute value as the ground truth.To quantitatively measure the amount of original content preserved after style transfer operations, we employ BERTscore (Zhang et al., 2019a) between style-transferred and original sentences.For fluency, We use GPT-2 (Radford et al., 2019) to measure the perplexity (PPL) of transferred sentences.The sentences with smaller PPL scores are considered more fluent.Finally, we compute the geometric mean of ACC, BERTscore, and 1/PPL.We take the inverse of the calculated perplexity score because a smaller PPL score corresponds to better fluency.When there is more than one accuracy in the multiple-attribute TST tasks, we use the average accuracy to compute G-score. Automatic Evaluation Results Table 2 shows the performance of the Adapter-TST model and the baselines on the sentiment transfer task.Adapter-TST has achieved the best G-score, outperforming the baselines.We observe that Adapter-TST achieves comparable performance on transfer strength and content preservation with 97.3% transfer accuracy and 0.89 BERTscore by only updating the parameters of adapters.With the impressive generative ability of the pre-trained BART-large and T5-large models, the Adapter-TST model can generate high-quality text in terms of fluency and completeness.The experiment results demonstrate Adapter-TST's ability to perform TST well and efficiently with fewer training parameters. Table 3 presents the results of the proposed Adapter-TST model and Style Transformer baselines for the multiple stylistic attribute output task.Our Adapter-TST model achieves the highest G-score across all four datasets by simultaneously modeling multiple attributes using different adapters.Adapter-TST performs well in transferring tense attributes, surpassing the baselines on three datasets.However, modeling multiple attributes together proves to be a more challenging task.While Adapter-TST exhibits a slight performance gap compared to the Style Transformer model in terms of transfer accuracy, it excels in generating fluent and coherent sentences while preserving the original content.This advantage allows Adapter-TST to outperform the baselines in content preservation and fluency.It is also worth noting that training multiple Style Transformers for the multiple-attribute TST tasks is computationally inefficient and expensive, unlike Adapter-TST. 
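As a small worked example of the G-score described above, the sketch below takes the geometric mean of the averaged transfer accuracy, BERTscore and inverse perplexity; the numbers in the example call are illustrative only.

```python
# Minimal sketch of the G-score described above: the geometric mean of style
# transfer accuracy, BERTscore and the inverse perplexity; when a task has
# several attribute accuracies they are averaged first, as stated in the text.
from statistics import mean


def g_score(accuracies, bertscore, perplexity):
    acc = mean(accuracies)                      # average over attributes
    return (acc * bertscore * (1.0 / perplexity)) ** (1.0 / 3.0)


print(round(g_score([0.973], 0.89, 10.0), 3))   # illustrative numbers only
```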
To demonstrate that the attribute-specific adapters capture the corresponding attribute information, we evaluate the proposed Adapter-TST model on the compositional editing task.Note that the parameters of adapters trained in the multiple stylistic attribute outputs task are reloaded, and the connection method is changed to Stack for compositional editing.Table 4 shows the performance of the Adapter-TST and Sequential Style Transformer on the compositional editing task.The Adapter-TST model achieves the highest G-score across four datasets, similar to the results obtained in the multiple stylistic attribute output task.We observe that the average G-score of the multiple stylistic attribute outputs task is 2.24, significantly higher than compositional editing's average Gscore of 1.89.The difference in the average Gscore highlights the challenge of the compositional editing task.Interestingly, Adapter-TST achieves comparable performance on style transfer accuracy over attributes, indicating that the attribute-specific adapters effectively capture the stylistic attributes. Human Evaluation We conducted a human-based evaluation study to assess the performance of the Adapter-TST model in handling multiple-attribute TST tasks.The study involved randomly sampling 200 sentences from the Tense-Voice dataset.Both Adapter-TST and the baselines were used to generate multiple stylistic attribute outputs and perform compositional editing on the sampled sentences.Two linguistic researchers evaluated the generated sentences based on three criteria used in automated evaluation.To measure transfer strength, evaluators indicated whether the sentences were in the target attribute value (e.g., future tense, passive voice) using a true/false indicator.For content preservation, evaluators rated the amount of preserved content on a 5-point Likert scale, ranging from no content preserved (1) to all content preserved (5).Fluency was assessed on a 5-point Likert scale, where 1 represented unreadable sentences with numerous grammatical errors, and 5 indicated perfect and fluent sentences.To reduce biases, the model names were concealed, and the order of the models was randomized when displaying the generated sentences.This ensured that evaluators were unaware of which model generated the sentences they were evaluating. Human Evaluation Results Table 5 shows the evaluation results.The style transfer accuracy of the models was computed using the binary feedback from the evaluators.The average scores for the criteria of content preservation and fluency were calculated using the 5-point Likert scores.Adapter-TST is observed to outperform the baselines in content preservation, fluency, and G-score.Adapter-TST is also rated to generate more syntactically sound and fluent sentences compared to the baselines.We can also observe that there is still a style transfer accuracy drop of Adapter-TST on attribute Voice when modeling multiple attributes at the same time.These results align with the automatic evaluations and demonstrate Adapter-TST's effectiveness in performing multiple-attribute TST well and efficiently. 
Case Study We conducted case studies to showcase the style transferred outputs of both the Adapter-TST and Style Transformer models.Randomly sampled examples and their corresponding outputs are presented in Table 6, specifically for the Tense-Voice dataset.Our findings reveal that Adapter-TST successfully transfers the style while preserving the content and sentence structure in multiple-attribute TST tasks.In contrast, the Style Transformer model generates sentences with grammatical errors, making it challenging to determine if the style transfer was successful.Moreover, the Style Transformer model performs poorly in the task of compositional editing due to its inherent complexity.Despite the difficulty of compositional editing, Adapter-TST is capable of generating fluent sentences that preserve the original content. Conclusion In this paper, we introduced a parameter-efficient framework, Adapter-TST with different neural adapters to capture different attribute information for multiple-attribute TST tasks.During training, the original parameters of BART-large were frozen, and only the adapters' parameters were optimized to relax the dependence on computational resources and supervised data.We conducted extensive ex-periments on traditional sentiment transfer and multiple-attribute TST tasks.The automatic and human-based evaluation results showed that the attribute-specific adapters in Adapter-TST is able to capture relevant stylistic attributes to transfer the style while preserving the original content successfully.Our case studies also demonstrated that Adapter-TST was able to generate high-quality text in the target style.For future work, we will continue to improve TST models' ability to model multiple attributes in terms of quality and efficiency.We will also explore plugging Adapter-TST on other PLMs and evaluate its effectiveness. Limitations This work has two limitations.First, there is a style transfer accuracy reduction on one of the attributes, while the proposed model models multiple attributes simultaneously.Explorations on improving TST models' ability to handle multipleattribute TST tasks and the dependency among attributes are potential directions in this field.Second, even though we have frozen the parameters of the pre-trained BART-large model to improve parameter efficiency, we still need to run BART-large model to extract representations for performing TST tasks. Ethics Statement The ethical implications of using large language models trained on data containing unchecked biases are acknowledged.As with any generative task, style transfer also has the potential for misuse, such as fact distortion, plagiarism, etc.The paper aims to demonstrate the academic utility of the proposed framework.This solution must be paired with strict checks for misrepresentation, offensiveness, and bias to adhere to ethical standards. Figure 1 : Figure 1: Examples of different settings of multipleattribute TST.(a) Existing single-attribute TST models perform sequential editing by transferring the text style sequentially to achieve compositional editing.Multipleattribute TST models can (b) generate multiple outputs simultaneously in the corresponding target style, or (c) perform compositional editing by transferring different target styles.The proposed Adapter-TST enables a single PLM to achieve both settings (b) and (c) by configuring the adapters' connection method. 
Figure 2 : Figure 2: Structure of the adapter layer.The adapter layer consists of a bottleneck with up and down projection layers, and a skip connection between two projection layer. Figure 3 : Figure 3: Adapter-TST Configurations -Left: Paralleling the adapters enables a single PLM to model different attributes simultaneously and generate multiple outputs in the corresponding target style.Right: Stacking the adapters for compositional editing in terms of different target styles at the same time.Stack connection is used for inference to verify the relevant attribute information captured by adapters. has three attribute values (Future, Past, Present), while Voice has two attribute values (Passive, Active).Thus, we add five attribute-specific adapters Adapter(Future, Past, Present, Passive, Active) to the base model for all the possible attribute values, respectively.Each adapter is employed to learn to generate sentences with corresponding attribute values while preserving the semantic content of the inputs.Parallel Connection.We define the multi-ple stylistic attribute outputs as follows: given a sentence x = {x 1 , ..., x n } with n tokens and y tense , y voice labels, the Adapter-TST model is required to generate multiple outputs with all possible other attribute values at the same time.For instance, as shown in Figure 1(b), given an input sentence with present tense and active voice, the multiple-attribute TST models need to generate three sentences in the past tense, future tense, and passive voice simultaneously.The multiple stylistic attribute output setting requires TST models to capture all the stylistic attributes and have the capability of performing style transfer among the attribute values.Adapter-TST performs the multiple stylistic attribute output by utilizing the Parallel connection configuration shown in Figure 3(a).Specifically, we plug the paralleled adapters Parallel(Future, Past, Present, Passive, Active) into each transformer layer of the base model.During training, each training sample passes all the attribute-specific adapters, but adapters will take different actions according to the attribute values of input sentences.The adapter learns to reconstruct the input sentence for training samples with the same attribute value as an adapter.Conversely, when training samples with different attribute val-ues, the adapter learns to transfer the attribute of the input sentence while preserving the original content.The outputs of all the adapters are concatenated together to the next layer.The replication is only performed once in the first transformer layer. (b), where adapters belonging to the same attribute are parallelized because a sentence should only contain one attribute value for a specific attribute.Specifically, we have Parallel(Future, Past, Present) and Parallel(Passive, Active) for tense and voice attributes.The two sets of paralleled adapters are stacked as Stack(Parallel(Future, Past, Present), Parallel(Passive, Active)) to learn to transfer multiple attributes.Similar to the Parallel connection method, the hidden states are replicated according to the number of adapters in the Parallel connection module.It's worth noting that, to demonstrate the attribute-specific adapters captured the attribute information, we only use the Stack connection method in inference time.During inference, we reload the parameters of adapters trained in multiple stylistic attribute outputs tasks and change the connection among the adapters to Stack. 
Table 1: Dataset statistics for Yelp and StylePTB. Table 1 shows the training, validation, and test splits of the Yelp and StylePTB datasets used in our experiments.
Table 2: Performance of models on the Yelp dataset (sentiment transfer task). The best performances are in bold.
Table 3: Automatic evaluation results of models on the multiple stylistic attribute outputs task. The best performances are in bold.
Table 4: Automatic evaluation results of models on the compositional editing task. The best performances are in bold.
Table 5: Human evaluation results of models on both the multiple stylistic attribute outputs and compositional editing tasks. The best performances are in bold.
Table 6: Qualitative results for transfer to different target style combinations across different models. Different colors highlight the transferred segments contributing to the target style.
5,749.8
2023-05-10T00:00:00.000
[ "Computer Science" ]
Muscle fatigue in relation to forearm pain and tenderness among professional computer users Background To examine the hypothesis that forearm pain with palpation tenderness in computer users is associated with increased extensor muscle fatigue. Methods Eighteen persons with pain and moderate to severe palpation tenderness in the extensor muscle group of the right forearm and twenty gender and age matched referents without such complaints were enrolled from the Danish NUDATA study of neck and upper extremity disorders among technical assistants and machine technicians. Fatigue of the right forearm extensor muscles was assessed by muscle twitch forces in response to low frequency (2 Hz) percutaneous electrical stimulation. Twitch forces were measured before, immediately after and 15 minutes into recovery of an extensor isometric wrist extension for ten minutes at 15 % Maximal Voluntary Contraction (MVC). Results The average MVC wrist extension force and baseline stimulated twitch forces were equal in the case and the referent group. After the fatiguing contraction, a decrease in muscle average twitch force was seen in both groups, but the decrease was largest in the referent group: 27% (95% CI 17–37) versus 9% (95% CI -2 to 20). This difference in twitch force response was not explained by differences in the MVC or body mass index. Conclusion Computer users with forearm pain and moderate to severe palpation tenderness had diminished forearm extensor muscle fatigue response. Additional studies are necessary to determine whether this result reflects an adaptive response to exposure without any pathophysiological significance, or represents a part of a causal pathway leading to pain. Introduction Intensive use of mouse and keyboard among professional computer users has been identified as a risk factor for pain in various regions of the upper extremity including the forearm [1][2][3]. The mechanism and pathophysiology of the pain response are not well understood [4]. In most studies pain complaints are poorly associated with commonly accepted criteria for specific clinical diagnoses. Muscle fatigue can be defined as an exercise induced transient decrease in the force generating capacity of the muscle [5]. While electromyography (EMG) can be used to measure muscle fatigue in moderate to high force work [6,6], it is rather insensitive to fatigue developed during the performance of low force occupational activities such as the use of mouse and keyboard [7,8]. The ratio of force output from low and high frequency (e.g. 20 Hz and 100 Hz) electrical stimulation has been used as a measure of muscle fatigue [9][10][11]. In some persons high frequency stimulation is very unpleasant, which makes this method less suitable for epidemiological studies and which may introduce selection problems. In 1998 Johnson demonstrated that the a muscle's force (twitch) response following very low frequency (2 Hz) electrical stimulation of a forearm flexor muscle was a reliable method to measure muscle fatigue [12,13]. Significant but transient levels of muscle fatigue were observed in computer users who applied average forces between 0.7 and 6.5% of the maximal voluntary contraction (MVC) for 3 to 4 hours [12]. However, EMG measurements indicate higher load of the forearm extensor muscle groups compared to the flexor muscle groups during use of the mouse and keyboard [14][15][16][17]. 
Therefore we adapted the 2 Hz stimulation technique used by Johnson and others [13,18] to measure muscle fatigue of the forearm extensor muscles. The purpose of the study was to investigate the hypothesis that computer users with forearm pain have a higher level of extensor muscle fatigue than computer users without forearm pain [17]. The study is part of the Danish nationwide NUDATA study of neck and upper extremity disorders among technical assistants and machine technicians [2]. Selection of participants and assessment of forearm pain and tenderness Computer users with forearm pain (cases) and without forearm pain (referents) were recruited among respondents in the NUDATA study. In January through June 2000 6,943 participants of 9,480 (73%) technical assistants and machine technicians completed a baseline questionnaire on job tasks, lifestyle and pain in the upper extremities including the forearms [2]. Forearm pain within the past seven days was assessed on a nominal scale with eight pain categories (no pain, very little pain, little pain, little to moderate pain, moderate pain, moderate to severe pain, severe pain, and very severe pain). Subjects reporting at least moderate pain in one or both forearms during the past 7 days were defined as symptom cases in the NUDATA study and were offered a clinical examination that took place within 14 days of receipt of the questionnaire data. Palpation tenderness in the proximal lateral aspect of the forearm extensor muscle group was assessed using a digital pressure of approximately 4 kg perpendicular to the surface. The response was scored on a 0-3 scale (0, none; 1, mild without withdrawal; 2, moderate with withdrawal; 3, severe with jump sign). Eligibility criteria Assistants and technicians with forearm pain (cases) Among participants who reported at least moderate pain in the right forearm during the past 7 seven days and at the clinical examination showed moderate or severe palpation tenderness, we invited the first 24 consecutive cases in the Aarhus region to take part in this study. Twenty persons accepted, but two declined participation after enrolment. Specific job tasks as work at video display terminals were not requested but due to the sampling frame all participants used computers to some degree in their daily work. Assistants and technicians without forearm pain (referents) For each accepting case we enrolled concomitantly from the same cohort and geographic area a participant of same sex and age (within a 5 year interval) without any selfreported pain in the upper extremities during the last 12 months. The referents that matched the two case subjects who withdrew after enrolment were kept in the study. None of the referents had become cases during the time elapsed from filling in the questionnaire and enrolment into this study. Exclusion criteria Assistants and technicians with a history of surgery involving the right extremity, trauma sequelae, epicondylitis, carpal tunnel syndrome and arthritis were not eligible for the study. Furthermore, workers with abnormal range of movement in the right shoulder, elbow, wrist or fingers or with signs of acute inflammation (swelling, rubor and increased skin temperature) in these regions were excluded. All examinations were performed at the Department of Occupational Medicine αt Aarhus University Hospital. Cases and controls were intermingled across time and the examiner was blinded to case or control status. The time schedule for the tests for one subject can be seen in Figure 1. 
The regional ethical committee approved the study and all subjects signed an informed consent form prior to enrolment into the present study. MVC measurements The maximal voluntary wrist extension forces were measured using a force measurement apparatus with a standard voltmeter reading the amplifier output voltage. The measurements were calibrated using laboratory grade weights. Each subject performed three maximum exertions while encouraged to extend the wrist and fingers as forcefully as possible. The posture of the forearm is indicated in Figure 2. Participants were allowed to relax for a couple of minutes between each exertion. The highest of the three recorded force readings defined the subject's MVC. Twitch force measurements Muscle twitches were evoked using a custom-built timer and a Digitimer DS7A Constant Current Electrical Stimulation Unit. Figure 2 shows the experimental set-up. The right forearm extensor muscles were stimulated using a 12 mm (active) Stimtrode Ag/AgCl electrode, which was placed on the proximal lateral aspect of the forearm one third of the distance between the elbow and wrist. The electrode was placed over the muscle belly that could be palpated when the third finger was extended. A 20 mm (passive) electrode was placed just anterior-medial to caput radii at the elbow. The muscle was stimulated with 100 microsecond square pulses at a frequency of 2 Hz. The twitch forces were measured with an Omega LC105 force transducer and an Omega DMD 465 amplifier (Omega Engineering Inc.; Stamford, CT USA) placed as indicated in Figure 2. In setting up the experimental procedures it proved more convenient to measure extensor muscle twitch force at the metacarpophalangeal (MCP) joint rather than a position distal to this location. Data were collected with an IBM PC instrumented with a data acquisition card (model AT-MIO-16E; National Instruments; Austin, TX USA) running LabVIEW software (version 3.0; National Instruments; Austin, TX USA). The sampling frequency was 1000 Hz. Figure 1. Experimental design and protocol for measurements of right forearm extensor muscle twitch forces in 38 white-collar workers with and without forearm pain. Figure 2. Placement of electrodes and force transducer in a study of right forearm extensor muscle twitch forces. A vertical plate to stabilize the distal forearm and wrist region sideways is not shown in order not to hide the transducer. Calibration of the stimulation current After the hand had been placed in the apparatus (Figure 2), the current was gradually increased up to or above 40 mA. In a pilot study, this current level was tolerable to most persons and produced contractions with sufficient force (> 1.0 N) to ensure reliable measurements. Nine persons did not tolerate 40 mA, but currents ranging between 32.5 and 37.5 mA produced reproducible twitch responses. In seven other persons 40 mA produced contractions that were insufficient to ensure reliable force measurements. In these subjects the current was increased to 42-50 mA. For each individual, the established current level was kept throughout the rest of the study. To accustom the subject to the procedures, a full pilot measurement was performed 30 minutes prior to the experiment.
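As a rough illustration of how per-twitch forces could be derived from a 1000 Hz force record under 2 Hz stimulation, the following Python sketch segments a force trace into stimulation intervals and takes peak force minus a local baseline as the twitch amplitude. It is a minimal sketch only, not the authors' analysis code: the function name, the baseline window, and the assumption that the record is aligned to stimulus onsets (in practice a stimulator trigger channel would be used) are ours.

```python
import numpy as np

def twitch_amplitudes(force, fs=1000.0, stim_rate=2.0, baseline_ms=50):
    """Estimate per-twitch amplitudes from a sampled force record.

    force       : 1-D array of force samples for one train of twitches
    fs          : sampling frequency in Hz (1000 Hz in the study)
    stim_rate   : stimulation frequency in Hz (2 Hz in the study)
    baseline_ms : length of the relaxed-phase window, taken from the end of
                  each interval (just before the next stimulus), used as the
                  local baseline (window length is an assumption)
    Returns one amplitude (peak minus local baseline) per stimulation interval,
    assuming the record starts at a stimulus onset.
    """
    samples_per_twitch = int(fs / stim_rate)          # 500 samples at 2 Hz
    n_twitches = len(force) // samples_per_twitch
    baseline_len = int(baseline_ms * fs / 1000.0)
    amps = []
    for k in range(n_twitches):
        seg = force[k * samples_per_twitch:(k + 1) * samples_per_twitch]
        baseline = seg[-baseline_len:].mean()          # force just before next stimulus
        amps.append(seg.max() - baseline)
    return np.array(amps)

# Example on a synthetic 2 Hz "twitch-like" trace standing in for real data.
rng = np.random.default_rng(0)
t = np.arange(0, 15, 1 / 1000.0)
fake_force = 0.2 + 1.5 * np.abs(np.sin(np.pi * 2.0 * t)) ** 8 + rng.normal(0, 0.02, t.size)
print(twitch_amplitudes(fake_force).mean())
```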
Experimental procedure In the experiment the subject performed an exercise consisting of a static wrist extension at 15% of MVC for 5 minutes, a 30-second break, and another 5 minutes of wrist extension at 15% of MVC. The duration of the exercises was set according to earlier experience [19,20]. The force was measured in the same way as during the MVC manoeuvre. The subject could see the force reading and was urged by the researcher to keep the force level. Before (baseline), immediately after the exercise (post-exercise), and 15 minutes after exercise (recovery), twitch measurements were collected. It is well known that the muscle force increases during the initial phase of a muscle contraction, whether stimulated or voluntary [21,22]. At each measurement (baseline, post-exercise, and recovery), the subject's muscle was therefore first conditioned with 90 seconds of continuous 2 Hz electrical stimulation in order to reach the plateau of steady-state twitch force [12,13,18]. Immediately after the conditioning, muscle twitch forces were measured during five trains of approximately 30 twitch stimulations. Between each of the five trains, subjects removed and repositioned their hands in the measurement apparatus in order to minimize minor effects that hand position might have on recorded twitch force (Figure 3). The hand repositioning between trains typically took less than five seconds. Statistical analyses For the five trains collected from each subject at each time period, the between-train coefficient of variation (CV) was computed by dividing the standard deviation of the five train values by their mean [23]. Average values were then computed across times for the cases and referents, respectively. To allow comparisons of twitch measurements between subjects, twitch force data were standardized with respect to the baseline measurements performed before the voluntary wrist extension exercise. A repeated measures analysis of variance [SAS version 8.02 Proc Mixed random (SAS Institute, Cary, NC)] was used to determine whether the twitch forces differed after the exercise in comparison with baseline levels among cases and referents and whether the changes relative to baseline values differed between the two groups. In all analyses, the dependent variable was the standardized twitch force of the single contractions and independent variables included case status (case/referent) and time (baseline, post-exercise, recovery). In additional analyses the possible confounding effects of a number of extraneous factors were evaluated in multiple regression analyses. These factors included gender, age (≥40 years: yes/no), current smoking (yes/no), body mass index (BMI) (>27 kg/m²: yes/no) and physical activity in leisure time (three levels). Figure 3. Time-force record (screen printout) from a single measurement consisting of 90 seconds of conditioning followed by five trains of approximately 30 twitch stimulations separated by removal and repositioning of the hand. Results The characteristics of the study population are outlined in Table 1. The majority of participants were women (over 80%) and the average age was 43.4 years (range 25-55 years). The distributions of lifestyle factors were similar in the case and the reference group but the cases reported slightly more working hours, computer work and mouse use.
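The between-train coefficient of variation, the baseline standardization and the repeated measures model described above can be sketched as follows. The original analysis was run in SAS (Proc Mixed); this Python/statsmodels fragment is only an illustrative analogue under assumed column names (subject, group, time, train, twitch_force) and an illustrative model formula, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per stimulated twitch with columns:
#   subject, group ('case'/'referent'), time ('baseline'/'post'/'recovery'),
#   train (1..5), twitch_force (N).  These names are illustrative only.

def between_train_cv(df):
    """CV across the five train means, per subject and time point."""
    train_means = (df.groupby(['subject', 'time', 'train'])['twitch_force']
                     .mean().reset_index())
    return (train_means.groupby(['subject', 'time'])['twitch_force']
                       .apply(lambda x: x.std(ddof=1) / x.mean())
                       .rename('cv'))

def standardize_to_baseline(df):
    """Express each twitch force relative to the subject's own baseline mean."""
    baseline = (df[df['time'] == 'baseline']
                  .groupby('subject')['twitch_force'].mean())
    out = df.copy()
    out['twitch_std'] = out['twitch_force'] / out['subject'].map(baseline)
    return out

def fit_repeated_measures(df):
    """Mixed model analogous in spirit to the SAS PROC MIXED analysis:
    standardized twitch force ~ group x time, random intercept per subject,
    with train included as a covariate (choice of covariates is assumed)."""
    d = standardize_to_baseline(df)
    model = smf.mixedlm('twitch_std ~ C(group) * C(time) + C(train)',
                        data=d, groups=d['subject'])
    return model.fit()
```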
The average force recorded during maximal voluntary extension of the wrist was at the same level among cases and referents. The baseline stimulated twitch forces averaged 2.29 (SD 1.58 N) showing no difference between the case group and the reference group. The coefficients of variation in the three series of measurements are displayed in Table 2. The reliability of the twitch force measurement was very high -on average, the coefficient of variation between 5 trains was 1.2 %. Among the referents we observed extensor muscle fatigue following the 10 minutes exercise at 15% of MVC as indicated by a 17 % decline in twitch force compared to baseline levels. The fatigue developed further during the first 15 minutes of the recovery phase (Table 3). Among the cases we did not observe any reduction in muscle force immediately after the exercise but a slight decrease in twitch force during the recovery period. Current smoking, age and MVC were not related to muscle fatigue but higher BMI was associated with a higher degree of fatigue (p = 0.01). Inclusion of these covariates into the models did not change the displayed effects of exercise on twitch forces. Discussion In this study, computer users with pain and moderate to severe palpation tenderness in the forearm experienced less forearm extensor muscle fatigue after an exercise protocol than a healthy referent group. This finding is contrary to our a-priory hypothesis stating that pain and tenderness of the forearm would be associated with increased muscle fatigue. The observed muscular fatigue response among the referents is consistent with earlier findings. Blangsted observed muscular fatigue as measured with mechanomyography and electromyography in the extensor carpi radialis muscle 30 minutes following a 10 minutes 10% MVC wrist extension [19]. In Johnson's experiments a 10% reduction in flexor muscle twitch force was observed immediately after 10 minutes static contractions at 15% of MVC, which increased to 20% reduction after 30 minutes [12]. Follow- ing a similar provocation of the forearm extensor muscles in our referent group we observed 17% reduction in twitch force immediately after the contraction, which increased to 27% after 15 minutes. In contrast, in our cases we saw virtually no change in twitch forces immediately after the contraction and only a 9% decrease 15 minutes into the recovery period. Referents and cases were investigated in random order and thus a change in measurement conditions or the experimental set-up across the study period is not likely to explain our findings. Moreover, the very low coefficient of variation of less than 5% in the majority of measurements indicates that our methods were reliable. A diminished fatigue response among the persons with forearm pain might result if the pain caused the subjects to produce reduced force at the maximal voluntary contraction and subsequently cause reduced power at the exercise before the twitch measurements. However, the findings are not the result of a lower force output in persons with forearm pain since the absolute force during stimulated contractions, the contraction force during exercise and the maximum voluntary contractions were at the same level in the two groups. Cases and referents were recruited among members of the same trade union and were therefore expected to be socially and economically rather homogeneous. 
Care was taken to ensure equal gender and age distributions in the two samples, and cases and referents were similar with respect to a number of physical and lifestyle characteristics. A higher BMI was associated with increased fatigue, but inclusion of this variable, as well as leisure-time physical activity and current tobacco smoking, did not change the observed associations between study group and degree of fatigue. Nor was the level of statistical significance affected. One to five months had passed from the assignment to the case and referent group until the measurements were made. However, on the day of the experiment the clinical examination confirmed that the referents had not developed any muscle tenderness. Any bias due to the cases becoming non-cases would lead to an underestimation of the true difference between the groups. Accordingly, we do not believe that the findings can be explained by errors in the measurement technique or biased statistical comparisons, but obviously there is a need to corroborate or refute the findings in independent studies. Assuming that the findings reflect genuine biological differences in muscle function among subjects with and without forearm pain, we need new hypotheses to understand the results. Due to the cross-sectional study design we cannot infer whether the diminished fatigue response precedes the development of pain and perhaps makes the muscle more vulnerable to exposure, or whether it is a correlate or consequence of pain. It has been hypothesized that the pathophysiology of upper extremity muscle disorders, including forearm pain in computer users, is caused by disorders of muscle cells or limitations of the local circulation [24][25][26]. The Cinderella hypothesis proposes that the development of chronic muscular pain is due to an overuse of fibers belonging to low-threshold motor units. Indeed, one study demonstrated that some motor units are continuously active during a 25-minute static low-level exertion of the extensor digitorum communis muscle, while the majority of motor units are only partially active over time [25]. Although speculative, it can be hypothesized that forearm pain develops more frequently in workers with a larger proportion of Cinderella fibers, and if such fibers are less likely to be fatigued by exercise this could explain the limited decline in forearm muscle twitch among cases in our study. If so, we would expect to observe the same diminished fatigue pattern in the contralateral forearm without pain. Unfortunately this was not measured in this study. Conclusion Computer users with moderate to severe forearm pain had a diminished forearm extensor muscle fatigue response. It cannot be inferred from this study whether the abnormal fatigue pattern is a result of the pain or is part of the causal mechanisms leading to pain. The findings need to be corroborated and further explored in additional studies.
4,456.2
0001-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Asymptotic behaviour of non-isotropic random walks with heavy tails A random flight on a plane with non-isotropic displacements at the moments of direction changes is considered. In the case of exponentially distributed flight lengths a Gaussian limit theorem is proved for the position of a particle in the scheme of series when jump lengths and non-isotropic displacements tend to zero. If the flight lengths have a folded Cauchy distribution the limiting distribution of the particle position is a convolution of the circular bivariate Cauchy distribution with a Gaussian law. Introduction We consider the problem of random flights in Euclidean spaces defined by a series of displacements r j , the magnitude and direction of each one being independent of all the previous ones. This model was introduced by Karl Pearson in 1905 and has a long and interesting history, both as a purely mathematical problem in probability theory and as a model for various physical and chemical processes [2]. The majority of papers investigate the problem of random flights with the orientation of movements uniformly distributed over a sphere, and deviations separated by exponentially distributed or Dirichlet distributed time lapses (cf. discussions in [5,6,10]). For the most recent developments in the study of random walks in a random environment we refer to [3] and the papers cited therein. In this short note we introduce a novel feature in the form of non-isotropic displacements at the moments of the direction changes. As a model of this non-isotropic perturbation we consider Hadamard's (or componentwise) product of a fixed deterministic vector (∆ 1 , ∆ 2 ) with the unit vector ē = (cos θ j , sin θ j ) in the direction of the previous movement. In the following analysis these perturbations are assumed to be small and the direction changes are frequent enough. In this note we are mainly interested in the case where the distribution of the i.i.d. flight lengths has a heavy tail, say it follows a folded Cauchy distribution. More precisely, we consider a planar, non-isotropic random walk performed by a particle taking steps (X j , Y j ), j ∈ N. We assume that the steps have the form (1), where θ j and R j = |r j | are independent positive random variables (hereafter r.v.'s), and ∆ 1 , ∆ 2 are deterministic positive real numbers. We assume that θ j are uniformly distributed in [0, 2π) and positive r.v.'s R j are identically distributed with density f (r), r > 0. Clearly, after n steps the position reached by the moving particle is given by the partial sums X̂ n = X 1 + · · · + X n and Ŷ n = Y 1 + · · · + Y n (2). A possible sample path of the random walk (2) is depicted in Figure 1 and can be interpreted as the position of a particle taking jumps at integer-valued times, with arbitrary orientation. Fig. 1. A sample path of the random walk. Since ∆ 1 ≠ ∆ 2 , the distribution of the random walk (X̂ n , Ŷ n ), n ≥ 1, as well as that of its asymptotic limiting process, is not rotation invariant. If the angles θ j are non-uniformly distributed on [0, 2π) the resulting random motion is anisotropic as well; this case will be studied elsewhere. Here two qualitatively different examples are considered: 1) the exponential distribution of the i.i.d. flight lengths, and 2) the folded Cauchy distribution, for which all the moments m r , r ≥ 1, are infinite. Naturally, in the former case under a suitable scaling one obtains the Gaussian limit with independent components having different variances. In the latter case the limiting law is a convolution of a circular bivariate Cauchy distribution with a Gaussian law.
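A sample path such as the one in Fig. 1 can be simulated directly. The Python sketch below is illustrative only: since the displayed step definition did not survive extraction, the j-th step is taken here as ((R j + ∆ 1 ) cos θ j , (R j + ∆ 2 ) sin θ j ), i.e. a flight plus the Hadamard-product displacement along the same direction; this reading, the folded Cauchy sampler, and the scalings used in the example are all assumptions made for illustration.

```python
import numpy as np

def simulate_walk(n, delta1, delta2, length_sampler, rng):
    """Simulate one sample path of the planar non-isotropic random walk.

    Each step combines a flight of random length R_j in a uniformly random
    direction theta_j with the componentwise displacement
    (delta1*cos(theta_j), delta2*sin(theta_j)).  The j-th step is therefore
    taken here as ((R_j + delta1)*cos(theta_j), (R_j + delta2)*sin(theta_j)),
    which is an assumed reading of the missing step definition.
    Returns the partial-sum path of shape (n+1, 2)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    r = length_sampler(n, rng)
    steps = np.column_stack(((r + delta1) * np.cos(theta),
                             (r + delta2) * np.sin(theta)))
    return np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

def exponential_lengths(mu):
    """Exponential flight lengths with rate mu (mean 1/mu)."""
    return lambda n, rng: rng.exponential(scale=1.0 / mu, size=n)

def folded_cauchy_lengths(a):
    """Folded (half-)Cauchy flight lengths with scale a, sampled as the
    absolute value of a centred Cauchy variable (density 2a/(pi*(a**2+r**2)))."""
    return lambda n, rng: np.abs(a * rng.standard_cauchy(size=n))

rng = np.random.default_rng(1)
n, C1, C2, b = 10_000, 1.0, 2.0, 1.0
# Illustrative scheme-of-series scalings: displacements C_i/sqrt(n) and an
# exponential rate growing like sqrt(n) (both assumptions); the Cauchy scale
# a_n = pi*b/(2*n) follows the scaling stated for Theorem 2.
path_exp = simulate_walk(n, C1 / np.sqrt(n), C2 / np.sqrt(n),
                         exponential_lengths(mu=np.sqrt(n)), rng)
path_cau = simulate_walk(n, C1 / np.sqrt(n), C2 / np.sqrt(n),
                         folded_cauchy_lengths(a=np.pi * b / (2 * n)), rng)
print(path_exp[-1], path_cau[-1])   # terminal positions (X_n, Y_n)
```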
Main results In case 1) we are working under the following assumptions • (i) The jump lengths R j = |r j |, j = 1, . . . , n are exponentially distributed with parameter • (ii) The asymmetry conditions: for some C 1 , C 2 > 0 (we are interested in the case i.e. the displacement vectors decrease with n and are the same for all 1 ≤ j ≤ n. Condition (i) means that for fixed values of n the step lengths R j are i.i.d. with exponential distribution whose parameter µ (n) is adjusted continuously. One can easily see that EX j = EY j = 0. Next, in view of the equality E[(cos θ) 2 ] = 1 2 . In fact, (5) is an identity. Here and in what follows the symbol ≈ is used to indicate that LHS is an expansion of RHS up to O(n −k ) for some k ∈ Z but the difference RHS-LHS is o(n −k ). Similarly, Moreover, X j and Y j are dependent but not correlated r.v.'s. These facts suggest that a joint limiting distribution is Gaussian and is represented in the form of two independent diffusions. The proof may be provided by the standard methods via the checking Lindeberg's conditions. However, we prefer to use a direct computation to pave the way for further results for r.v.'s with the heavy tails. Theorem 1. Under the assumptions (3) and (4) the sequence (X n ,Ŷ n ) defined in (2) weakly converges to the zero-mean Gaussian vector (X, Y ) where X, Y are independent and possess the variances Our main result in Theorem 2 below establishes the limiting law for the (folded) Cauchy flights: Let us remind that the standard circular bivariate Cauchy distribution has the joint PDF (see [8], Ch.II, formula (1.19) or [9]) Theorem 2. Assume the condition (4) and fix b > 0. Let the parameter of the folded Cauchy distribution (7) for the jump lengths be scaled as a n = πb 2n . Then the distribution of random vector (X n ,Ŷ n ) weakly converges as n → ∞ to the convolution of the cumulative distribution functions F X,Y • F V,W where (X, Y ) is a zero mean Gaussian vector with independent components, and the vector (V, W ) has a circular bivariate Cauchy distribution with the shape parameter b, i.e. Remark 1. Let us represent the position of the moving particle as a result of n random flights and n non-isotropic displacementsX n = U n + T n ,Ŷ n = V n + S n . The expectations in the RHS of (11) may be split as n → ∞, cf. (38) below. Moreover T n and S n are asymptotically independent. However, the pair (U n , V n ) asymptotically follows a circular bivariate Cauchy law. Proofs The initial steps are the same for both Theorems 1 and 2 and valid for any PDF f (r) of flight lengths. They are based on the properties of Bessel functions J ν (x) which are solutions of ODEs and admit the expansion [1,11] Let us fix a small open neighbourhood U of (0, 0). For (α, β) ∈ U the characteristic function of the steps (X j , Y j ) reads Due to the addition formula of Bessel functions ( [4], formula 8.531, page 979) where, in our case,r 4 Proof of Theorem 1 For the sake of brevity we omit the upper index µ (n) , low index ϕ n (α, β), etc., whenever it is possible. We can calculate explicitly the characteristic function ϕ(α, β) by means of integration term by term. Further, we must keep into account the additional result ( [4], formula 6.611, page 707). 
In view of all these formulas we have that for f (r) = µe −µr ϕ(α, β) A crucial point is now to preserve only the relevant terms of the expansion of ϕ n (α, β) in view of the evaluation of the limit for the characteristic function lim n→∞ [ϕ n (α, β)] n , taking into account that µ = µ (n) , ∆ i = ∆ (n) we can cut the terms of the expansion for k ≥ 2. To justify this fact let us use the expansion (13). All the terms with k ≥ 2 in (18) contain the factors (α 2 ∆ 2 1 + β 2 ∆ 2 2 ) k/2 and in view of assumptions (3) and (4) it is easy to check that the sum of these terms is o(n −1 ) uniformly over (α, β) ∈ U . Hence, For n → ∞, ∆ for small values of x we can obtain the following relationship We now take the equalities and thus for small values of x we have that (1 + x) −1/2 ≈ 1 − x 2 . In conclusion, by writing explicitly µ (n) , ∆ (n) i , i = 1, 2, as in the assumptions (i) and (ii) we have that (by assumptions (i) and (ii)) This concludes the proof of the Theorem 1. Remark 2. An interesting question is what are the implications of the assumption that ∆ 2 1 and ∆ 2 2 are negligible with respect to ∆ 1 and ∆ 2 , Let us apply the following formula ( [4], formula 6.616, page 710) By using (25), the characteristic function ϕ(α, β) can be written as Therefore, the characteristic function of (X n ,Ŷ n ), in view of the assumptions on µ (n) and ∆ (n) In the steps above we made use of µ (n) √ (µ (n) ) 2 +α 2 +β 2 ≈ 1 − α 2 +β 2 2(µ (n) ) 2 as n → ∞ in view of (22), and used assumption (i) afterwards. In contrast to (6) the terms are missing from the limiting expression (27). We conclude that the approximation considered above leads to the linearization of the limiting variances with respect to C 1 and C 2 . Proof of Theorem 2 In the case of jump lengths with a folded Cauchy distribution (7) the CLT is not applicable. Again, our goal is to calculate the limiting characteristic function keeping the relevant terms in the asymptotic expansion. We omit the lower index in a n , ϕ n whenever it is possible. The equalities (14) and (15) imply As in Theorem 1 all the terms of the asymptotic expansion (28) with k ≥ 2 contain the multiplyer of (α 2 ∆ 2 1 + β 2 ∆ 2 2 ) k/2 and the remaining sum is o(n −1 ) uniformly over (α, β) ∈ U . Indeed, in view of (13) for k ≥ 2 we have The module of the term cos(kφ) in (15) is estimated by 1. Let c = α 2 + β 2 and δ be the Kronecker delta-function. Note that for any κ > 0 and a similar bound holds from below. Hence, the series is absolutely convergent. For these reasons it remains to consider the first two terms in the expansion (28). For this aim note that where K 1 stands for the Macdonald function [1,11]. So, we obtain that (32) If a = a n = πb 2n then for large n we have K 1 (ac) = 1 anc + anc 4 (2γ − 1) + o(n −1 ) where γ = 0.57721566 stands for the Euler-Mascheroni constant and I 1 = 1 a 2 n c 1 − a n c( 1 a n c + a n c 4 As a result we obtain that the second term in (28) is O(n −3/2 ) and does not contribute asymptotically. Alternatively, according to [4], formula 6.532.1 for non-integer ν whereJ ν (a) stands for the Anger function which is a solution of the inhomogeneous Bessel equation Ly = (x − ν) sin πx π [1]. By definition the Anger functions always coincide with the Bessel functions for the integer values of ν. The following identity is well-known [7] So, we use l'Hôpital's rule to evaluate the integral I 1 (a) = lim ν→1 I ν (a) in (33) and obtain lim n→∞ lim ν→1 π a n sin(πν) J ν (a n ) − J ν (a n ) = 0. 
Anyway, we need to evaluate the first term in expansion (28). According to [4], formula 6.532.6 where I 0 stands for the modified Bessel function and L 0 is the modified Struve function. Let us remind that the modified Struve function satisfies the inhomogeneous Bessel equation [1,11] Ly = x 2 d 2 y By using expansions of the modified Struve functions in a neighbourhood of 0 (see [1,11]) we write two terms of the asymptotic expansion Finally, in view of the formula 6.565 Applying (37) for ν = 0 we obtain that the characteristic function of the circular bivariate Cauchy law has the form e −b √ α 2 +β 2 . So, in the limit we obtain the product of the characteristic functions of the Gaussian law and the circular bivariate Cauchy distribution and Theorem 2 is proved. Acknowledgments The article was prepared within the framework of the Academic Fund Program at the National Research University Higher School of Economics (HSE) and supported by the subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Programme. Appendix section It is interesting that for odd 2n + 1 the integral ∞ 0 J2n+1(cr) r 2 +a 2 dr can be expressed in terms of the Macdonald function K 2n+1 (ac) and for even 2n the integral
3,122
2017-04-06T00:00:00.000
[ "Mathematics" ]
Whole Genome Sequencing of an Unusual Serotype of Shiga Toxin–producing Escherichia coli Shiga toxin–producing Escherichia coli serotype O117:K1:H7 is a cause of persistent diarrhea in travelers to tropical locations. Whole genome sequencing identified genetic mechanisms involved in the pathoadaptive phenotype. Sequencing also identified toxin and putative adherence genes flanked by sequences indicating horizontal gene transfer from Shigella dysenteriae and Salmonella spp., respectively. Shiga toxin-producing Escherichia coli serotype O117:K1:H7 is a cause of persistent diarrhea in travelers to tropical locations. Whole genome sequencing identified genetic mechanisms involved in the pathoadaptive phenotype. Sequencing also identified toxin and putative adherence genes flanked by sequences indicating horizontal gene transfer from Shigella dysenteriae and Salmonella spp., respectively. T here are >400 serotypes of Shiga toxin-producing Escherichia coli (STEC), and >100 of these are known to be associated with severe disease in humans (1). STEC are defined by the presence of 1 or both phage-encoded Shiga toxin genes stx1 and stx2. However, those serotypes associated with more severe disease generally harbor additional virulence genes, such as eae (intimin), which is encoded on the locus of enterocyte effacement, or virulence regulation genes, such as aggR, which is located on the aggregative adherence plasmid. Both of these genes mediate attachment of the bacteria to the host gut mucosa (2). The stx1 gene is also found in Shigella dysenteriae serotype 1. A range of molecular typing methods show that the shigellae belong within the Escherichia coli species (3). Peng et al. (4) described an evolutionary path of Shigella spp. from E. coli involving gene acquisition (virulence plasmid and pathogenicity islands) and gene loss (pathoadaptivity). Gene loss, or loss of gene function, may result from changes to bacterial biosynthesis pathways driven by the abundance of resources in the host or because the genes may encode proteins adverse to bacterial virulence. Olesen et al. (5) described a strain of STEC serotype O117:K1:H7 found in travelers from Denmark who returned from tropical locations. The strain was unusual because it was negative for the production of lysine decarboxylase and b-galactosidase (ortho-nitrophenol test) and positive only for stx1. Since 2004, 19 isolates of STEC O117:K1:H7 have been submitted to the Gastrointestinal Bacteria Reference Unit at the Health Protection Agency in London, UK, from frontline diagnostic microbiology laboratories in England and Wales for confirmation of identification and typing (Table). All isolates were originally misidentified by the submitting laboratory as Shigella sonnei or Shigella spp., probably because of the unusual biochemical phenotype exhibited by this strain. The purpose of this study was to use whole genome sequencing to investigate the evolutionary origins, putative virulence genes, and pathoadaptive mechanisms of this unusual STEC serotype. The Study DNA from 5 isolates (151/06, 371/08, 290/10, 754/10, and 229/11) was prepared for sequencing by using the Nextera sample preparation method and sequenced with a standard 2 × 151 base protocol on a MiSeq instrument (Illumina, San Diego, CA, USA) (6). Sequences were analyzed as described (7). In brief, Velvet version 1.1.04 (www.ebi.ac.uk/∼zerbino/ velvet/) was used to produce an average of 489 contigs with an average N50 length of 38722. 
Illumina reads were mapped to the reference strain (GenBank accession no. CU928145) by using Bowtie2 2.0.0 β-5 (http://bowtie-bio.sourceforge.net/bowtie2/) and a variant call format file was created from each of the binary alignment maps, which were further parsed to extract only single nucleotide polymorphism (SNP) positions that were of high quality in all genomes. Concatenated SNPs generated against the reference strain 55989 were used to produce a maximum-likelihood phylogeny of 5 strains in the Gastrointestinal Bacteria Reference Unit archive and 36 other publicly available E. coli genomes and Shigella spp. (Figure). Despite temporal and spatial diversity of the 5 sequenced isolates, they clustered on the same branch, but they were distant from other publicly available sequences of STEC strains. A phylogenetic tree based on a diverse range of E. coli showed that the 5 strains of STEC O117 have 130 polymorphic positions, and the closest 2 strains (229/11 and 754/10) are 26 SNPs apart (Table; Figure). Furthermore, on the basis of a diverse range of E. coli, genome sequences of EDL933 and Sakai, 2 well-described strains of STEC O157, are ≈35 SNPs apart. The multilocus sequence type ST504 was assigned in accordance with the E. coli multilocus sequence type databases at the Environment Research Institute, University College (Cork, Ireland). Conclusions Alignment of the genome of strain 229/11 with STEC O157 (EDL933) and Shigella dysenteriae serotype 1 (Sd197) indicated gene acquisition, loss, and rearrangement in 229/11. The stx1 gene is adjacent to the yjhS gene in 229/11 and Sd197, and in 229/11 this fragment is flanked by phage-like sequences that are closely related to Stx2-converting phage sequences but not to other Stx1-converting phages. This unusual gene arrangement was described by Sato et al. (8). In Sd197, this region is flanked by integrases and insertion sequences. Other open reading frames homologous to those of Shigella spp. in stx-flanking regions in E. coli have been described, and it is likely that E. coli and the shigellae have exchanged stx many times in their evolutionary past but only certain strains, such as 229/11, have the appropriate genomic background to retain and stably express Stx (9). Cadaverine has an inhibitory effect on enterotoxin activity by preventing full expression of the virulent phenotype, and it has been suggested that there is evolutionary pressure to mutate or delete the cadA gene (12). This gene is missing from S. flexneri (Sf301) and S. boydii (Sb227) because of inversion-associated deletions, and in Sd197 and S. sonnei (Ss046) it is inactivated by a frameshift mutation and an insertion sequence, respectively (12). In 229/11, loss of cadA (lysine decarboxylase) activity is caused by repositioning of the cadA activator gene, cadC, upstream of the cadA gene and a 90-bp deletion at the 5′ end of cadC. The cadA gene and truncated cadC gene are separated by a large fragment of DNA inserted into the cadC gene. This fragment contains several open reading frames, including genes encoding aerobactin siderophore biosynthesis proteins. Lactose fermentation is a biochemical property commonly used for distinguishing Shigella spp. from E. coli because shigellae are non- or late-lactose fermenters. In Sd197 and Ss046 (late lactose-fermenting strains), the key gene, lacZ (encoding β-D-galactosidase) is intact, although lacY (encoding galactose permease) is a pseudogene (12). Like Sf301 and Sb227, lacZ and lacY are deleted in strain 229/11.
The lack of a functional lac operon has been associated with pathogenicity mechanisms in S. enterica (13). E. coli as a species contains a large diversity of adaptive paths. This diversity is the result of a highly dynamic genome, with a constant and frequent flux of insertions and deletions (3). Pathogenicity in STEC O117:K1:H7 is most likely multifactorial and results from a novel combination of lack of cadA and lacZ expression and the presence of stx1 and the intimin-like sivH genes, demonstrating pathoadaptivity and horizontal gene transfer.
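The SNP-filtering step described in the Methods, keeping only single nucleotide polymorphism positions called with high quality in all genomes and concatenating them for the phylogeny, can be sketched roughly as follows. This is not the authors' pipeline: the quality threshold, the simplification of requiring a variant call in every sample, and the plain-text VCF handling are illustrative assumptions.

```python
MIN_QUAL = 30.0   # assumed quality cutoff; the paper does not state one

def load_snps(vcf_path):
    """Return {(chrom, pos): alt_base} for simple biallelic SNP records
    passing the quality cutoff in an uncompressed, plain-text VCF file."""
    snps = {}
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith('#'):
                continue
            chrom, pos, _id, ref, alt, qual = line.split('\t')[:6]
            if (len(ref) == 1 and len(alt) == 1 and alt != '.'
                    and qual != '.' and float(qual) >= MIN_QUAL):
                snps[(chrom, int(pos))] = alt
    return snps

def concatenated_snp_alignment(vcf_paths, reference_bases):
    """Keep positions that are high-quality SNP calls in every sample and
    concatenate them into one pseudo-sequence per sample plus the reference.

    reference_bases maps (chrom, pos) -> reference base (e.g. parsed from
    the reference FASTA); it is only needed to emit the reference row."""
    per_sample = {p: load_snps(p) for p in vcf_paths}
    shared = set.intersection(*(set(s) for s in per_sample.values()))
    positions = sorted(shared)
    alignment = {'reference': ''.join(reference_bases[pos] for pos in positions)}
    for sample, snps in per_sample.items():
        alignment[sample] = ''.join(snps[pos] for pos in positions)
    return positions, alignment
```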
1,600.2
2013-08-01T00:00:00.000
[ "Biology" ]
Fasciola hepatica: comparative metacercarial productions in experimentally-infected Galba truncatula and Pseudosuccinea columella As large numbers of metacercariae of Fasciola hepatica are necessary for research, experimental infections of Galba truncatula and Pseudosuccinea columella with this digenean were carried out to determine the better intermediate host for metacercarial production and, consequently, the most profitable snail for decreasing the cost price of these larvae. Pre-adult snails (4 mm in shell height) originating from two populations per lymnaeid species were individually exposed to two or five miracidia, raised at 23 °C and followed for cercarial shedding up to their death. Compared to values noted in G. truncatula, the survival of P. columella on day 30 post-exposure was significantly greater, while the prevalence of F. hepatica infection was significantly lower. In the four P. columella groups, metacercarial production was significantly greater than that noted in the four groups of G. truncatula (347–453 per cercariae-shedding snail versus 163–275, respectively). Apart from one population of G. truncatula, the use of five miracidia per snail at exposure significantly increased the prevalence of F. hepatica in P. columella and the other population of G. truncatula, whereas it did not have any clear effect on the mean number of metacercariae. The use of P. columella for experimental infections with F. hepatica resulted in significantly higher metacercarial production than that noted with G. truncatula, in spite of a lower prevalence for the former lymnaeid. This finding allows for a significant decrease in the cost price of these larvae for commercial production. Introduction Large numbers of metacercariae of Fasciola hepatica Linnaeus, 1758 [10] are necessary for research to follow qualitative and quantitative variations of morphological and/ or biochemical parameters in experimentally-infected animals, or to study the effectiveness of new anthelminthic agents against fascioliasis in naturally-infected ruminants. To produce these metacercariae, it is necessary to raise the intermediate host, i.e. a freshwater gastropod belonging to the family Lymnaeidae, under laboratory conditions. In Western Europe, the snail Galba truncatula O.F. Müller, 1774 [11] was used in most cases (see review by Rondelaud and Barthe [16]). In the New World, most metacercariae productions were carried out using Pseudosuccinea columella Say, 1817 [6,24,25]. As cost prices for metacercarial production are dependent on the method used to raise infected snails [17], the choice of the lymnaeid species is of a great interest. Indeed, several specific techniques have been proposed in the past to raise G. truncatula because of its amphibious living environment [7,9,13,14,17,19,21,26]. Among them, the use of 14 cm Petri dishes with dried lettuce and dead grass as food for snails and live spring moss for spring water oxygenation gave the best results for survival of infected snails [19]. In contrast, the breeding of the more aquatic P. columella in aquaria or in Petri dishes with fresh lettuce for snail food seems to be easier [1,2,4,8,15,27]. In view of these findings, the following two questions arose: was the use of P. columella easier and more profitable to produce F. hepatica metacercariae? Did an increase of the miracidial dose for each snail at exposure have an effect on metacercarial production? To answer these two questions, experimental infections of pre-adult P. columella with F. 
hepatica were carried out using two or five miracidia per snail at exposure. Controls were constituted by pre-adult G. truncatula infected according to the same protocol. Snails and parasite The first population of P. columella originated from Egypt and was from a water body (29°20 0 2.77 00 N, 31°12 0 17.83 00 E) at Al-Wasta, governorate of Beni Suef. The other was found at two French sites (44°23 0 27.31 00 N, 0°32 0 2.43 00 E and 44°23 0 31.18 00 N, 0°29 0 59.30 00 E) located near Castelmoron along the banks of the Lot River, department of Lot. Previous experimental infections of these two snail populations with cattle-derived miracidia of F. hepatica have demonstrated the high susceptibility (prevalence 42% to 60%) of these snails to this digenean [4,5]. Adult snails, measuring 10-15 mm in height, were collected in March 2013 from the first population and in September-October 2013 from the other. They were transported to the laboratory and placed in 10 L covered aquaria with five snails per litre of permanently oxygenated spring water. These aquaria were subjected to constant conditions: temperature 23°± 1°C; light/dark period 12 h/12 h. Dissolved calcium concentration in spring water was 35 mg/L. Snails were fed on pesticide-free fresh lettuce leaves ad libitum and spring water in aquaria was changed weekly. Egg masses laid by these adult snails were collected and placed into small rearing aquaria. Newly hatched snails were fed on finely powdered lettuce and those that attained 4 ± 0.1 mm in shell height were used. For each P. columella population, a total of 200 snails were subjected to experimental infections. The two populations of G. truncatula originated from central France and were from the communes of Migné (46°40 0 27 00 N, 1°21 0 21 00 E) and Thenay (46°37 0 23 00 N, 1°26 0 2 00 E), department of Indre. Their habitats were located on clay (favourable for snail growth) so that the upper shell height of adults was 11-12 mm. These populations were highly susceptible to experimental infection with F. hepatica (prevalence > 60%), as demonstrated by our team in previous experiments [22,23]. Two hundred pre-adults, measuring 4 ± 0.1 mm in shell height and belonging to the overwintering (Migné) or the spring (Thenay) generation, were collected from each population in February (Migné) and April 2014 (Thenay). They were kept in the laboratory at 20°C for 48 h for temperature acclimatization before being exposed to miracidia. Eggs of F. hepatica were collected from the gall bladders of heavily infected cattle at the slaughterhouse of Limoges, department of Haute Vienne (central France). They were washed several times with spring water and were incubated at 20°C for 20 days in the dark in order to obtain miracidia [12]. Experimental protocol Two experiments were carried out using eight groups of 100 pre-adults each. The first experiment was performed from February to April 2014 with two groups of G. truncatula (Migné) and two of P. columella (Beni Suef). Snails from two groups (one of G. truncatula and the other of P. columella) were individually exposed to bimiracidial infections. The protocol was the same for the other two groups but with five miracidia per snail. The second experiment was performed from April to June 2014 in order to verify the results from the first experiment and was carried out using two groups of G. truncatula (Thenay) and two of P. columella (Castelmoron). Snails were exposed to F. hepatica miracidia according to the protocol used in the first experiment. 
All exposures were performed for 4 h at 23°C in 35 mm Petri dishes, each recipient containing 3.5 mL spring water. The four groups of G. truncatula were then raised for 30 days in 14 cm Petri dishes with 10 snails and 60 mL spring water per recipient. In each dish, small pieces of pesticide-free dried lettuce and dead Molinia caerulea leaf were placed, while several stems of live spring moss (Fontinalis sp.) ensured oxygenation of the water layer. Water and grass or lettuce leaves, if necessary, were changed daily, while cleaning of Petri dishes was done each week [19]. In contrast, the four P. columella groups were raised in covered, aerated 5 L aquaria. Snails were fed on pesticide-free fresh lettuce ad libitum and spring water in aquaria was changed weekly [4]. Petri dishes and aquaria were placed at 23°± 1°C in the same air-conditioned room as parent P. columella. On day 30 post-exposure (p.e.), each surviving snail from the eight groups was put in a 50 mm Petri dish with 10 mL of spring water. In each dish with G. truncatula, small pieces of dried lettuce, dead grass and live spring moss were placed, while pieces of fresh lettuce and several spring moss stems were used for each recipient with P. columella. Petri dishes were then placed at 23°± 1°C as parent snails. Spring water and food were changed, if necessary, every day until snail death. When the first cercarial shedding occurred, surviving snails were subjected to a thermal shock every three days by placing their Petri dishes at 10°-13°C for 3 h to stimulate cercarial exit [20,29]. After their emergence, cercariae were counted and removed from Petri dishes. At the death of each infected snail, its shell was measured using callipers. Data analysis The first two parameters were snail survival on day 30 p.e. and the prevalence of F. hepatica infection calculated using the ratio: number of cercariae-shedding (CS) snails/number of surviving snails on day 30 p.e. A v 2 test was used to compare the differences between snail survival and prevalence rates. The shell height of CS snails at their death, the lengths of the prepatent and patent periods, and the total number of metacercariae were also considered. Individual values recorded for the last four parameters were averaged and standard deviations were established for each snail group. Normality of these last values was analysed using Shapiro-Wilk test [25]. According to results given by this test, one-way analysis of variance (ANOVA), Student's t test or Kruskal-Wallis test was used to establish levels of significance. All the statistical analyses were performed using Statview 5.0 software (SAS Institute Inc., Cary, NC, USA). As the aim of the present study was to determine the better lymnaeid species for metacercarial production of F. hepatica, the statistical tests were only used to compare the differences between values noted for both lymnaeid species or for miracidial doses used at exposure (two or five miracidia/snail). The differences between values recorded for the two snail populations of each lymnaeid were not considered here. As the cost price of F. hepatica metacercariae was dependent on the method used for breeding the snail host [17], it was interesting to determine whether the method using pre-adult P. columella resulted in a decrease in this price. 
The maintenance cost for 100 snails took into account (i) the time spent by a technician for snail exposure to miracidia, the surveillance of breeding recipients, the count of metacercariae and their transfer to Eppendorf tubes, and (ii) the purchase price of consumables and lettuce. The cost price of 100 metacercariae was calculated using the ratio: total cost of maintenance for 100 snails at miracidial exposure/(total number of metacercariae encysted on dish walls and bottoms/number of snails at miracidial exposure). This cost price did not take into account the infrastructure required for commercial production of these larvae. Results Compared to G. truncatula subjected to bimiracidial infections (Table 1), the survival of P. columella on day 30 p.e. was significantly higher (v 2 = 27.79, p < 0.001), while the prevalence of F. hepatica infection was significantly lower (v 2 = 27.96, p < 0.001) and the shell height of CS snails significantly greater (H = 25.53, p < 0.001). The prepatent period was significantly longer (t = 3.41, p < 0.001) in P. columella than in the other species. In contrast, no significant difference between the lengths of patent periods was noted. Lastly, the mean numbers of metacercariae were significantly higher (H = 19.64, p < 0.01) in the two groups of P. columella than in G. truncatula. The highest totals of metacercariae were noted for P. columella, with 19 snails (out of 36 in the Beni Suef group) and 14 (out of 29 in the Castelmoron group) shedding more than 500 larvae per individual (data not shown). In quinquemiracidial infections (Table 2), the survival of P. columella was significantly higher (v 2 = 74.90, p < 0.001) than that of G. truncatula, while the prevalence of F. hepatica was significantly lower (v 2 = 499.88, p < 0.001). At snail death, the shell height of CS P. columella was significantly higher (H = 23.53, p < 0.001) than that of G. truncatula. Significant differences in favour of P. columella were noted for the prepatent period (t = 2.97, p < 0.01), the patent period (t = 4.60, p < 0.001) and the number of metacercariae (H = 8.89, p < 0.05). The highest totals of metacercariae were also noted for P. columella. In the Beni Suef and Castelmoron groups, 34 snails (out of 57) and 27 (out of 43), respectively, produced more than 500 metacercariae per individual (data not shown). The values noted in the eight snail groups were also compared to determine whether the miracidial dose used for snail exposure had any significant effect on the characteristics of F. hepatica infection. In G. truncatula from Thenay, snail survival on day 30 p.e. was significantly higher (v 2 = 17.44, p < 0.01) in the two-miracidia than in the five-miracidia group. In contrast, no significant difference in snail survival between the two-and five-miracidia groups was noted for the other three snail populations. The prevalence of F. hepatica infection was significantly greater in the five-miracidia groups of three populations (Migné: v 2 = 7.59, p < 0.01; Beni Suef: v 2 = 8.60, p < 0.01; Castelmoron: v 2 = 6.34, p < 0.05). In each snail population considered separately, the shell height of CS snails in the two-and five-miracidia groups did not significantly differ from each other. Similar findings were also noted for the lengths of prepatent and patent periods. In G. truncatula from Migné, the number of metacercariae was significantly higher (H = 4.74, p < 0.05) in the fivemiracidia than in the two-miracidia groups. 
In contrast, in the other three snail populations, the differences between the two-and five-miracidia groups were not significant. Table 3 shows the results from the four groups subjected to five miracidia per snail. Compared to cost prices given for G. truncatula (10.30 Euros for both populations used), the values reported for P. columella (4.60 Euros = 5.06 US dollars) were one half lower. Discussion The values noted for the characteristics of F. hepatica infection in G. truncatula agreed with those recorded by our team in previous experimental infections of these two populations with the digenean [21][22][23]. The figures noted in the Beni Suef group were close to data reported by Dar et al. [4] for the same population of P. columella but infected with an Egyptian isolate of F. hepatica miracidia. In the same way, the values noted in the Castelmoron group also correlated well with those reported by Dreyfuss et al. (unpublished data) for the same population infected with a French isolate of miracidia. The slight differences between the Beni Suef and Castelmoron groups might be due to interpopulation variability in the susceptibility of P. columella to this digenean, as reported by Vázquez et al. [28] for the Cuban populations of this lymnaeid. Even though the prevalence of F. hepatica infection in P. columella was significantly lower than that noted in G. truncatula, the use of the former lymnaeid for the production of F. hepatica metacercariae was of interest for the following three reasons: (i) the survival of F. hepatica-exposed P. columella on day 30 p.e. was greater than that of G. truncatula, whatever the miracidial dose used for snail infection; (ii) the mean height of CS snails at their death was greater (10.2-12.1 mm compared to 6.0-6.6 mm for G. truncatula); and (iii) the total metacercarial production in P. columella was two times higher than that of G. truncatula when five miracidia were used for each snail at exposure. As the breeding of P. columella in aquaria during the first 30 days p.e. requires less daily surveillance than maintenance of 14 cm Petri dishes with G. truncatula, the use of P. columella populations susceptible to F. hepatica represents a good alternative to replace G. truncatula as a snail host and, consequently, to enhance metacercarial production under experimental conditions. In the four groups of P. columella, there was a significant increase in prevalence when the number of miracidia per snail went from two to five, whereas snail survival on day 30 p.e. did not differ significantly. In contrast, the results were less clear in controls. In the G. truncatula groups from Thenay, snail survival was significantly lower in the five-miracidia than in the two-miracidia groups, while prevalence did not show any significant variation. The opposite finding was noted in the Migné groups, with a significantly increased prevalence in the five-miracidia snails but without clear variation in snail survival between the two-and five-miracidia groups. These differences between P. columella and controls can partly be explained by the method used to calculate the prevalence, i.e. the ratio between the number of CS snails and that of surviving snails on day 30 p.e. Apart from the Migné groups, the number of metacercariae in the other groups did not significantly differ from each other when the miracidial dose increased from two to five. 
This last result was more surprising because a positive relationship between the number of miracidia used for each snail and that of free rediae developing within the snail body was reported by Rondelaud and Barthe [16]. Competition occurring between free rediae during their development [3,18] induced a delay in the differentiation of intraredial cercariae and their exit from the snail so that the numbers of shed cercariae were within the same scale of values in the two-and five-miracidia infections. Compared to cost prices given for G. truncatula, the values reported for P. columella were one half lower (Table 3). These cost prices might be even lower if surviving snails in these groups were dissected during the patent period to collect metacercariae according to the method used by Rondelaud et al. [21] for G. truncatula. However, the use of this last technique needs to study the viability of these metacercariae in the definitive host because these larvae comprised different cohorts of free cercariae, i.e. those that just exited from their parental rediae, free cercariae whose tegument was covered by the first secretions of their cystogenous cells, and mature cercariae ready for release in water [18]. In conclusion, the use of P. columella for experimental infections with F. hepatica resulted in significantly higher metacercarial production than that noted for G. truncatula, in spite of a lower prevalence for the former snail species. This finding allowed a decrease in the cost price of these larvae for commercial production.
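The two summary quantities defined in the Data analysis section, the prevalence of F. hepatica infection and the cost price of 100 metacercariae, reduce to simple ratios. The Python sketch below restates them with placeholder figures rather than the study's data.

```python
def prevalence(shedding_snails, survivors_day30):
    """Prevalence of F. hepatica infection: number of cercariae-shedding
    snails divided by the number of snails surviving on day 30 post-exposure."""
    return shedding_snails / survivors_day30

def cost_per_100_metacercariae(maintenance_cost_100_snails_eur,
                               total_metacercariae,
                               snails_exposed):
    """Cost price of 100 metacercariae, following the ratio given in the
    Data analysis section: total maintenance cost for the 100 snails at
    miracidial exposure divided by the mean number of metacercariae obtained
    per exposed snail."""
    metacercariae_per_exposed_snail = total_metacercariae / snails_exposed
    return maintenance_cost_100_snails_eur / metacercariae_per_exposed_snail

# Illustrative placeholder figures (not the study's data):
print(prevalence(shedding_snails=45, survivors_day30=90))                 # 0.5
print(cost_per_100_metacercariae(50.0, total_metacercariae=20000,
                                 snails_exposed=100))                      # 0.25 EUR
```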
4,260.2
2015-04-23T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Color Stability of Zinc Oxide Poly(methyl methacrylate) Nanocomposite—A New Biomaterial for Denture Bases (1) Background: The purpose of this in vitro study was to evaluate the color change and stability of a zinc oxide nanoparticle–poly(methyl methacrylate) (ZnO NP–PMMA) nanocomposite for denture base material after immersion in different dietary and cleaning agent solutions. (2) Methods: One hundred samples were prepared and divided into four equinumerous groups depending on the weight content of ZnO NPs. The color coordinates (CIE L*a*b*) were measured using a digital colorimeter, ColorReader (Datacolor AG Europe, Rotkreuz, Switzerland), before and after immersion of the specimens in five different solutions (distilled water, coffee, red wine, black tea, denture cleaning tablet solution) for 6 months. The color changes (ΔE) were calculated using Euclidean distance and analyzed by the Shapiro–Wilk test and the ANOVA/Kruskal–Wallis multiple comparison and adequate post hoc tests. (3) Results: All tested materials showed significant color changes after their exposure to all solutions. Color changes were greatest in the case of red wine and progressed with the duration of the study. (4) Conclusions: The modification of PMMA with ZnO nanoparticles is acceptable in aesthetic terms in 2.5% and 5% weight content; however, color changes are more noticeable with higher nanoparticle content and must be discussed with the patient prior to possible use. Introduction Poly(methyl methacrylate) (PMMA) resins are clinically used in prosthodontics in full and partial denture production for decades [1][2][3][4]. Despite the fact that there are many advantages of these materials, such as ease of laboratory processing, polishability, appropriate physicochemical properties, and no smell or taste after polymerization, the issues of microporosity and thus water absorption and susceptibility to microbial growth are still challenges in considering this material as ideal for removable prosthetic devices [5][6][7][8]. The appropriate composition of PMMA intended for the fabrication of dentures allows for satisfactory aesthetic effects not only in terms of artificial dentition, but also the so-called pink aesthetics, i.e., the color of the denture plate imitating atrophied soft tissues in the oral cavity and designed to distribute the chewing forces over a larger surface of the toothless prosthetic foundation [9]. PMMA can be modified with both organic and inorganic substances to improve its mechanical, tribological, aesthetic, or microbiological properties. Amid the development of nanotechnology, substances are increasingly doped on the nanoscale in order to change specific properties of the material [10,11]. The most desirable feature of such a modification is microbiological activity, which would reduce the possibility of the bacterial and fungal biofilm accumulation on the prosthesis base plate. The use of silver nanoparticles with proven antibacterial and antifungal activity is the most well-known modification of prosthodontic materials nowadays [12][13][14][15]. However, the dark brown color that results from the preparation of PMMA-nanosilver composite disqualifies it in terms of aesthetics, limits its applicability, and eliminates its relevance in routine clinical practice. The addition of nanosilver in composite materials may also affect the stability of the biomaterial in the oral cavity by increasing the release of metal ions with all their positive and negative effects [16]. 
Another well-known modification of acrylic material is the incorporation of nanotitanium particles aimed at improving both mechanical and microbiological properties. In the case of this modification, a whitish color of the nanocomposite was demonstrated, significantly limiting the use of dentures for repair or relining, where the new material is located only in the unsightly zone [17,18]. Regarding the chemical composition of these materials, the mechanical and functional properties including color change in the resin-matrix materials depends on the organic matrix and inorganic particles, and the type of polymerization initiator system [19][20][21][22]. A large filler content reduces the organic matrix's content. Insufficient polymerization, water absorption, or the adsorption of water-soluble colored beverages such as coffee, red wine, etc., can all cause color changes. The type and degree of monomer conversion that establishes the required physicochemical characteristics influence the susceptibility of the organic matrix of the resin-matrix composite to retain coloration [23]. The authors previously reported the production of PMMA modified with zinc oxide nanoparticles and characterized its mechanical, microbiological, and cytotoxic properties [24,25]. Color and its durability when exposed to the coloring agents that are present in the oral cavity environment during everyday usage of dentures will significantly determine the prospective applicability of the aforementioned modifications in dental clinics [26]. Color stability can be assessed using either visual or instrumental techniques. The color change can be measured clinically or with special instruments that remove the subjective disturbance that occurs with visual color perception. It can be assumed that a standard observer notices a color difference as follows [27,28]: • 0 < ∆E < 1-does not notice the difference; • 1 < ∆E < 2-only an experienced observer notices the difference; • 2 < ∆E < 3.5-an inexperienced observer also notices the difference; • 3.5 < ∆E < 5-notices a clear color difference; and • ∆E > 5-the observer has the impression of two different colors. The assessment of color changes can also be analyzed according to the formula proposed by the National Bureau of Standards: NBS = ∆E × 0.92. The range of NBS units is as follows: 3.0-6.0, appreciable; • 6.0-12.0, large (much); and • >12.0, very much. Colorimeters and spectrophotometers are common equipment for detecting color changes in restorative materials while minimizing subjective interference, and they enable the comparison of two colors within the same color space in the form of the ∆E parameter [29][30][31]. The light range of the illumination, the wavelength reflected or transmitted by the object, and the observation characteristics of the human observer can all alter objective evaluation of color parameters. For this reason, it is necessary to perform not only qualitative but also quantitative assessments and comparisons of the color characteristics of the newly created nanomaterial composites with those used on a daily basis in clinical work. The aim of this study was to evaluate the manner in which the incorporation of zinc oxide changes the color properties of PMMA and the permanence of the obtained color after immersion in different dietary and therapeutic solutions. The first null hypothesis was that there is no significant difference in color changes between PMMA modified with ZnO nanoparticles and unmodified PMMA. 
The second null hypothesis was that there are no differences in the color permanence of modified and unmodified PMMA, depending on the external solution environment. Characteristics of ZnO Nanoparticles In this study, the authors' own procedure [32] for microwave solvothermal synthesis (MSS) [33] was used to produce zinc oxide nanoparticles (ZnO NPs). Zinc acetate dihydrate (Zn(CH₃COO)₂·2H₂O, pure for analysis, Chempur, Piekary Śląskie, Poland) and ethylene glycol (C₂H₄(OH)₂, pure for analysis, Chempur, Piekary Śląskie, Poland) were used in the production protocol. Zinc oxide was obtained by dissolving zinc acetate in ethylene glycol. The reaction solution was placed into a covered Teflon vessel and heated using microwave radiation after 45 min of additional stirring. The microwave reactor MSS2 (IHPP PAS (Warsaw, Poland), ITeE-PIB (Radom, Poland), ERTEC (Wrocław, Poland)) was then set to 2.45 GHz with a power density of approximately 10 W/mL [34]. The reactions took 12 min to complete at a constant pressure of 3 bar and a microwave power of 3 kW. After the synthesis, the resulting powder was sedimented, rinsed three times with deionized water (class 1, HLP 20 UV, Hydrolab, Straszyn, Poland), centrifuged (MPW-350, MPW Med Instruments, Warsaw, Poland), and dried in a freeze dryer (Lyovac GT 2, SRK Systemtechnik GmbH, Riedstadt, Germany). The average particle size of the ZnO NPs used in this study was ≈30 nm, with a density of 5.24 g/cm³, a specific surface area of 39 m²/g, and confirmed phase purity. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) examinations revealed great uniformity of the nanoparticles in terms of size and shape. The characterization of the ZnO samples reported here was carried out in a certified research laboratory with the accreditation number AB 1503 [35], which follows the PN-EN ISO/IEC 17025:2018-02 standard. Preparation of Specimens The test material used in this study was thermally polymerized acrylic resin (Superacryl Plus, Spofa Dental, Jicin, Czech Republic). The mixing procedure involves a 3:1 powder-liquid volume ratio, which corresponds to 22 g of polymer and 10 mL of liquid monomer. A calculated amount of nanopowder was suspended in the liquid acrylic resin monomer and mechanically mixed for 60 s using a metal spatula. The estimated amount of PMMA was then added to achieve final weight concentrations of 2.5%, 5%, and 7.5%. Table 1 shows the precise weight composition of the constituent components. Modeling wax (Vertex Regular, Vertex-Dental BV, Centurionbaan, The Netherlands) was used to create 13 mm × 13 mm × 2 mm samples, which were then processed into acrylic samples using Class III hard plaster (Stodent, Zhermack, Badia Polesine, Italy) according to the standard flasking procedure. The wax samples were covered with a 0.025 mm thick polyethylene sheet (Divosheet, Vertex-Dental BV, Centurionbaan, The Netherlands) to obtain a flat surface and make the possible roughness of the material independent from machining and polishing. Afterwards, the material was subjected to traditional thermal polymerization in a polymerizer (PS-2, PEM, Warsaw, Poland), as recommended by the manufacturer (gradual temperature increase to 97 °C; polymerization period at 97 °C: 30 min). The control group consisted of acrylic specimens without nanoparticles. Color Measurement One hundred samples were created and divided into four equinumerous groups based on the weight content of ZnO NPs in order to compare the color of PMMA with the PMMA-ZnO nanocomposite.
A colorimetric test was performed on the samples using a digital colorimeter (ColorReader, Datacolor AG Europe, Rotkreuz, Switzerland) with a wireless Bluetooth (BT) interface allowing connection to dedicated software. The data from the device were sent to the software, and then encoded in the CIE L*a*b* color space. The recording device's specifications correspond to CIE (Commission internationale de l'éclairage-International Commission on Illumination) colorimetric test standards, which include a 10° observation angle and a D65 illuminant, with a built-in light source in the form of six light-emitting diodes, ensuring that the technical requirements are met. Measurements were made by one operator (M.S.) in the same room and under the same lighting conditions (dark room) and the samples were placed on the same test bench. The colorimeter was calibrated as recommended by the manufacturer with the included white standard before each series of data collection. CIE L*a*b* color space is a three-dimensional measurement system, where L* represents the clarity of an object ranging from black (0) to white (100), a* represents a measurement for the quality of red (a > 0) or green (a < 0), and b* represents a measurement for the quality of yellow (b > 0) or blue (b < 0). Each sample was subjected to five independent colorimetric measurements in various areas and on both sides of the prepared samples. The values obtained in this manner were compiled with the use of descriptive statistics (mean, SD), and then differences in the obtained colors between groups were calculated by means of the ∆E parameter according to the distance Formula (1). The ∆E value is the Euclidean distance between two colors in the color space, assuming that both colors have been described in the same space, and it is expressed as a number.
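As an illustration of the color-difference computation just described, the short sketch below computes ∆E as the Euclidean distance between two CIE L*a*b* colors (which corresponds to the CIE76 form implied by the description of Formula (1)), the NBS rating NBS = 0.92 × ∆E, and the standard-observer category quoted in the Introduction. The function names and the worked example are ours; the example reuses the mean L*a*b* values reported later for the control and 7.5% groups and is only a sketch, not the software actually used in the study.

```python
import math

def delta_e(lab_ref, lab_test):
    """CIE76 color difference: Euclidean distance between two CIE L*a*b* colors."""
    dL, da, db = (t - r for r, t in zip(lab_ref, lab_test))
    return math.sqrt(dL**2 + da**2 + db**2)

def nbs_units(delta_e_value):
    """NBS rating as used in the paper: NBS = 0.92 * deltaE."""
    return 0.92 * delta_e_value

def perceptibility(delta_e_value):
    """Standard-observer categories quoted in the Introduction."""
    if delta_e_value < 1:
        return "not noticeable"
    if delta_e_value < 2:
        return "noticeable only to an experienced observer"
    if delta_e_value < 3.5:
        return "noticeable to an inexperienced observer"
    if delta_e_value < 5:
        return "clear color difference"
    return "impression of two different colors"

# Worked example using the mean L*, a*, b* values reported in the Results
# for the PMMA control group and the 7.5% ZnO NP-PMMA group
control = (47.41, 12.67, 10.62)
zno_75 = (61.86, 10.12, 5.01)
dE = delta_e(control, zno_75)
print(round(dE, 2), round(nbs_units(dE), 2), perceptibility(dE))
# dE is roughly 15.7, matching the upper end of the reported 7.705-15.708 range
```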
In order to analyze color permanence depending on the external environment, the material prepared while comparing the color of unmodified and modified PMMA was used. Samples were divided first in terms of the weight percentage content of ZnO NPs in the PMMA matrix, and subsequently into five groups depending on the sample immersion environment. Four solutions were applied: coffee (CO), red wine (RW), denture cleaning tablets (CT), and black tea (BT), which, when used by patients, could potentially change the color of the denture base plate. The control group contained samples immersed in distilled water (DW). The division by ZnO NP content, groups, and solutions used is illustrated in Table 2. The samples were fully immersed in the prepared solutions in separate polystyrene molds in order to prevent the samples from contacting each other, and they were stored without light access at a temperature of 23 °C ± 1 °C. Color measurements were performed before the immersion and subsequently after 1, 2, 3, 4, 5, 6, 7, 14, 28, 56, and 182 days. Each series of measurements was preceded by removing the samples from the vessels, rinsing them with distilled water, and drying them with dust-free cellulose towels. The solutions in which the samples were immersed were replaced every 24 h. The discrepancies in the obtained colors are presented in the form of the ∆E parameter. The unit ∆L, ∆a, and ∆b values were calculated for each sample as the difference between the value obtained before the immersion and after a given time of sample immersion in the solution. Statistical Analysis The statistical analysis of the obtained data was performed with Statistica 13 software (ver. 13.3, Tibco Software Inc., Palo Alto, CA, USA). Descriptive statistics including means and standard deviations were calculated. The normal distribution of data was verified using Shapiro-Wilk tests. ANOVA or Kruskal-Wallis tests and then post hoc tests (Tamhane's or Conover-Iman) were performed in groups with statistically significant differences. The level of significance for the tests was set at p < 0.05. Comparison of Color of Modified and Unmodified PMMA All tested samples in this part of the study, regardless of the group, showed the highest recorded mean values for the CIE L* component (lightness), and they increased with an increase in the weight content of ZnO nanoparticles, ranging from 47.41 for the control group (PMMA) to 61.86 for the PMMA-ZnONPs-7.5% group. The differences in the L* value range were statistically significant between each group (ANOVA p < 0.000001; Tamhane's post hoc p < 0.000001). The mean results of the CIE a* (green ↔ red) component decreased with the weight content of nanoparticles, reaching 12.67 for PMMA and 10.12 for PMMA with the highest content of ZnO NPs. The CIE b* (blue ↔ yellow) component also varied depending on the nanoparticle content, ranging from 10.62 for PMMA to 5.01 for the 7.5% ZnO NP content. The differences for the CIE a* and b* components were statistically significant for the individual groups (ANOVA p < 0.000001) in addition to the post hoc comparison between the PMMA-ZnONPs-5% and PMMA-ZnONPs-7.5% groups. The calculated ∆E showed differences in the range of 7.705-15.708 compared to the control group.
The NBS parameter allowing for the qualitative presentation of the ∆E discrepancy results showed that the color of the samples in individual groups differed noticeably from high to very high. The mean values with standard deviations, ∆E in relation to control group, NBS, statistical data of PMMA samples, and digital representation of mean color of materials with different ZnO NP content are shown in Table 1 and Figure 1. Color Permanence of Modified and Unmodified PMMA, Depending on the External Environment All the tested samples showed color changes from the first day of the experiment, which progressed with the duration of the tests. The greatest color differences were noted for samples stored in red wine, reaching a ∆E value of over 31. The material placed in a black tea solution was characterized by the highest color stability. Regardless of the dye medium used, the greater the color changes, the greater the weight content of zinc oxide nanoparticles in the nanocomposite. Considering only the duration of the test, the minimum color change was noted on the 1st day of the research for samples in group B (2.5 wt % ZnO NP-PMMA) and was ∆E 0.96.
The maximum color change for this criterion was seen for the material with 7.5 wt % ZnO NP-PMMA (group D) on day 56 of the study (∆E 10.61), but the difference in this case between day 56 and day 182 of the study was not statistically significant. The test time significantly influenced the changes in color regardless of the medium in which the samples were soaked: on the first day, the mean change for all samples was ∆E 1.83, and it progressed to ∆E 8.63 by the last day of the test. It is worth noting that the average ∆E changes progressed much faster for the samples with a higher content of ZnO NPs in the nanocomposite, while the increase in changes stabilized over time, unlike the pure PMMA samples, whose color changed gradually throughout the test. The exact results of the differences in mean ∆E values, along with statistically significant differences within and between the groups, are presented in Table 2. A graphical representation of changes in the ∆E parameter, broken down by groups and days of measurement, is shown in Figure 2. Discussion The main goal of dental prosthetics is the reconstruction of lost tissues within the stomatognathic system. In order for the restorations to be fully accepted by the patient, they must look natural and be imperceptible to the environment. The materials used in the oral cavity must therefore meet not only the strength and biocompatibility standards for the host tissues, but also the increasingly restrictive biomimetic standards for imitating the tissues of the tooth or mucosa. We were the first to create a ZnO-PMMA nanocomposite to reduce the accumulation of microorganisms on the material's surface [36]. Satisfactory microbiological properties have been demonstrated, together with acceptable mechanical properties and low cytotoxicity [24,25]. Before introducing the above modification into wide use, it is necessary to evaluate the aesthetics of the material, including its color and color stability. Both null hypotheses were rejected, owing to the existing statistical differences between the colors of the PMMA samples and those modified with ZnO nanoparticles, and because the color changed statistically significantly regardless of the coloring agent/external environment used in the study. The research showed that the CIE L* component increases with the increase in the zinc oxide content in individual nanocomposites, which indicates whitening of the material. Each test group showed a color change defined by the standard observer as two different colors (∆E > 5). In the NBS classification, this change is rated from large to very large. The value of the L* component for the 2.5%, 5%, and 7.5% nanocomposites increased by 13%, 22%, and 30%, respectively, compared to the control group. The whitening of the material after ZnO incorporation was also noticed by Rudolf et al. for a 2% PMMA-ZnO nanocomposite as well as by Cierech et al. in preliminary studies preceding this publication [37,38]. Smaller differences were observed in the CIE a* component, where the red intensity decreased; for the 7.5% nanocomposite, the decrease was 20%. In the CIE b* component, the largest decrease in value compared to the control group was observed for the 7.5% nanocomposite and amounted to as much as 47%. The results are consistent with the work of Kamonkhantikul et al., where the color change of PMMA modified with silanized and non-silanized ZnO was investigated, creating 1.25%, 2.5%, and 5% composites [39].
The ∆E for the 5% nanocomposite was 19-22, which gives the impression of two different colors. For the 5% silanized composite, an increase in the CIE L* parameter by 26% and a decrease in the CIE a* and CIE b* parameters by 23% and 65%, respectively, were observed. In the subjective opinion of the authors, as dentists with clinical experience in dental prosthetics, the 2.5% and 5% nanocomposites are aesthetically acceptable and could be used in the fabrication of removable denture plates. Before potential clinical use, however, a color key should be prepared to obtain patient approval of the expected shade of the denture plate. The problem of color change after ZnO incorporation can also be solved by adding an appropriate pigment, which, by changing the color of the material, would make the prosthesis even more similar to the shade of the patient's mucosa. The authors are not aware of any previous publications dealing with this issue. Acrylic resin is considered the least color-stable material compared to other materials used in the production of long-term restorations, such as composites or ceramics [40]. Even though the prosthesis plate is constantly exposed to the coloring agents present in food and cleaning agents, with proper hygiene and compliance with medical recommendations, it is possible to maintain the obtained aesthetic effect for many years. In this study, the wax patterns were covered with a polyethylene sheet prior to their conversion into acrylic in order to standardize the specimens and make their surface roughness independent of subsequent processing and polishing steps. In reality, however, properly carried out and repeated activities aimed at adequately reducing the roughness of prosthetic restorations seem to be one of the most important measures to prevent staining, external discoloration, and the deposition of denture plaque. In addition, as the hygienic activities performed by patients, such as brushing dentures, or modifications of dentures by dental professionals also affect the formation of micro- or macro-scale surface roughness, the assessment of the need to polish the prosthetic restoration should be one of the obligatory stages during follow-up visits. The study used various coloring media that the denture plate may most often come into contact with. The duration of the study was arbitrarily set at 182 days. Unfortunately, there are no studies that would directly relate the duration of laboratory PMMA discoloration to the clinical application scenario. The range of factors influencing possible discoloration, related to individual food preferences, leads to the conclusion that such indicators would be difficult to define; we consider this to be one of the limitations of the study. From our searches of the available literature, similar studies on color changes differed significantly in terms of the staining period, which ranged from 7 to 180 days [41][42][43][44]. In our study, we wanted to show the relationship between the duration of exposure to the coloring agent and the actual color change of ZnO NP-PMMA. Considering that materials with an admixture of nanoparticles are relatively new solutions, and that their physico-chemical and aesthetic properties are still the subject of many studies, we also wanted our study to help decide whether these materials are acceptable in the aesthetic context, or at least to indicate their limitations.
General knowledge about removable prosthetic restorations and the limitations resulting from the characteristics of classic poly(methyl methacrylate) indicates that full dentures should be replaced after about 5 years of use. However, the material we are testing could also potentially be used as an element of cast metal partial dentures or even of long-term full-arch reconstructions based on intraosseous implants, whose service life is significantly longer than that of classic removable plate restorations [45][46][47]. It was found that the dynamics of the color change of the composites decreased significantly after 4 weeks of observation. After this period, the color was basically saturated, and the color changes of the materials were negligible. The 7.5% nanocomposite discolored the fastest, achieving results similar to its final coloring on the 7th day, which is why the 7th day of the test was considered the most appropriate for comparing the color stability of the composites. It was observed in the study that distilled water and black tea as coloring media had the least effect on the color change of the materials. For the 7.5% nanocomposite on the 7th day of the study, the color changes in these media were 1.44 ± 0.23 and 0.95 ± 0.33, respectively, which correspond to a slight change in the NBS classification. Red wine was the strongest staining medium, as it was the only one for which, on the 28th day of observation, the ∆E result exceeded 20 both for the control group and for the individual study groups. Compared to pure PMMA, the nanocomposites with lower concentrations (2.5% and 5%) behaved similarly or underwent slightly more discoloration. The 7.5% nanocomposite behaved completely differently and turned out to be much less color-stable than the other nanocomposites. This tendency was most visible for highly colored media such as CO, RW, and CT. For CO, the ∆E for the 2.5% and 5% nanocomposites ranged from 1.74 to 2.65, compared with 8.53 for the 7.5% nanocomposite. For RW, the range was 9.89-12.91 compared to 15.93, and for CT, it was 2.39-3.12 compared to 7.00. A possible explanation of these results is that the PMMA matrix is able to accept and stably integrate only a certain amount of ZnO nanoparticles into its structural network. This was shown in studies of ZnO NP release into the environment, where the 7.5% nanocomposite after 7 days of incubation showed release at the level of 3.5 mg/mL, while the release from the 2.5% and 5% nanocomposites was at a similar level and amounted to 2.2 mg/mL and 2.1 mg/mL, respectively [25]. The increase in release by 62% shows that a large proportion of the nanoparticles is unstable in the polymer matrix and migrates to the external environment. The voids created by this mechanism can incorporate pigments from the environment and thus be responsible for a significant color change. Another possible mechanism of the nanocomposites' discoloration may be the absorbability of the material. Along with the absorption of water, microorganisms penetrate the interior of the polymer, as do dyes from the environment [48]. The evident tendency toward significant discoloration in the case of red wine, independent of the weight content of ZnO nanoparticles in PMMA, cannot be explained solely by its ethanol content. Alcohols, but also water, acting as solvents, tend to penetrate the polymer network and can chemically soften polymeric dental materials.
Water, as a solvent capable of strong interaction with the polymer owing to its polarity and ability to form hydrogen bonds, has a tendency to cluster and cause plasticization of the material matrix. Ethanol enhances plasticization and dissolution and causes irreversible degradation of dental composites by penetrating the matrix and expanding the space between polymer chains. The greatest discoloration in the case of red wine may be caused by the penetration of strong dyes in the form of polyphenols, and more precisely anthocyanins, into the spaces in the polymer network created by the ethanol solvent. In addition, the deposition of these highly colored substances can also take place mainly in the outer layer of the material due to the long-lasting tendency of ethanol to dissolve the unbound resin surface [49][50][51]. In the authors' previous studies, it was shown that the water absorption of the nanocomposites is up to 2%, which meets the requirements of ISO 20795-1:2013 (Dentistry - Base polymers - Part 1: Denture base polymers) for dentures. Small fluctuations in water absorption for the individual nanocomposites do not explain the increased color change of the 7.5% nanocomposite. The tendency toward discoloration may also result from the very nature of zinc oxide, the particles of which, when incorporated in the matrix, can change color. This phenomenon is often observed in clinical settings, where ZnO-based temporary fillings left in place for a longer period turn brown or become discolored upon contact with the oral environment. To overcome the limitations of the present study, it would be useful in future studies to evaluate a dynamic model that uses an experimental temperature range similar to human intraoral conditions. In our study, a static laboratory model was chosen, which seems to be one of the main limitations of this study. The colorants we chose are also consumed at different temperatures (hot or cold drinks, denture cleaners), so the model used in subsequent studies should be supplemented with the temperature fluctuations associated with the consumption or use of these agents. The second limitation was the arbitrarily imposed duration of the action of the coloring agents: as described above, the lack of unambiguous models for converting laboratory exposure times into the duration of the agents' action in the oral cavity during the use of acrylic dentures requires that the results of this study be interpreted as a material tendency rather than a definite effect during in vivo use. The third limitation, which is at the same time a direction for further studies, is the lack of microscopic or spectroscopic studies to assess the actual possibility of coloring agents interfering with the structure of poly(methyl methacrylate), particularly with the admixture of nanoparticles in the form of a ZnO NP-PMMA nanocomposite. Conclusions Modification of PMMA with ZnO nanoparticles is aesthetically acceptable. However, the slight whitening of the material (especially for the 2.5% and 5% nanocomposites) must be discussed with the patient before potential clinical application. The use of the 7.5% nanocomposite is not recommended in clinical practice due to the strong whitening of the material, which worsens its biomimetic properties, and due to its poor color stability, which weakens the aesthetic effect.
9,125.2
2022-11-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Determinants of the job satisfaction of PhD holders: an analysis by gender, employment sector, and type of satisfaction in Spain We analyze the determinants of job satisfaction of PhD holders in Spain. Specifically, we consider overall job satisfaction as well as basic and motivational satisfaction, following Herzberg's typology (based on Maslow's hierarchy of needs). Using representative data for Spain's PhD population, collected from the Spanish Survey on Human Resources in Science and Technology (2009), we report an analysis by gender and the institutional sector (university and non-university) in which employees work. We employ Ordinary Least Squares (OLS) regressions to identify the determinants of basic and motivational satisfaction in the workplace and an ordered logit model for overall job satisfaction. Results do not allow us to confirm Herzberg's factor differentiation for Spanish PhD holders since the factors of basic motivation (including salary or working conditions, i.e., "safety" needs) have a bearing on all types of job satisfaction (and not solely on the basic satisfaction of PhD holders). Our results do not show any significant differences by gender. However, it seems that meeting these "basic" needs is less important for the job satisfaction of PhD holders working in universities. The results seem reasonable in a Southern European country where the monetary conditions of the labor market are worse than those in other developed countries. Introduction Spanish doctorate holders (PhDs) enjoy better economic conditions in the labor market, both in terms of rates of employment and earnings (Benito et al. 2014; European Commission 2007). Unemployment in Spain among PhD holders is, on average, 4.3% compared to 16.0% among university graduates and 25.7% among the active population (INE 2013), while PhD holders on average earn 60% more than those who have finished secondary school education and roughly twice as much as those who only finished compulsory education (INE 2010). However, job satisfaction has been shown to depend not solely on earnings but on a number of non-monetary factors, including job stability, promotion opportunities, conciliation between labor and family life, self-fulfillment, etc. (Vila 2000). In addition, several changes have occurred in Spain that may affect the labor conditions of PhD holders and, therefore, their job satisfaction. Firstly, Spain has experienced a growth rate above the level of OECD countries in regard to the number of doctorate holders (Benito et al. 2014). Secondly, Spanish universities have been submitted to legal and socio-economic changes that have toughened the access and tenure of doctorate holders to university positions (Beltramo et al. 2001; Cruz-Castro and Sanz-Menéndez 2005; Sanz-Menéndez et al. 2013). Finally, Spain has suffered a very important economic crisis since 2008, which has reduced significantly the budgets of universities and public administration where most of the doctorate holders work (Benito et al. 2014; Cruz-Castro and Sanz-Menendez 2015). The importance of job satisfaction is twofold: on the one hand, employees maximize their well-being; on the other, it is associated with increased productivity and organizational commitment, lower absenteeism and turnover, as well as with greater organizational effectiveness (Ellickson and Logsdon 2001). Job satisfaction can be measured either objectively or subjectively.
Objective measures typically refer to the position attained in the hierarchy and, especially, the salary level (see Canal-Domínguez and Wall 2013); subjective measures ask workers about their degree of satisfaction in several areas related to their job. In this article, we adopt this second approach based on Maslow's typology of job satisfaction and Herzberg's subsequent revision (Maslow 1943, 1954; Herzberg et al. 1959; Herzberg 1968). Maslow famously established a hierarchy of needs (in the shape of a pyramid): from top to bottom, they comprise esteem, affection, safety, and physiological needs. Maslow stressed that the basic levels (safety and physiological needs) have first to be met before the individual can start to crave the higher level needs (although he was at pains to clarify that the levels are interrelated). Additionally, he showed that esteem comprises two levels: a lower one, which includes the need for the respect of others, status, and recognition; and a higher one (that of self-actualization), which comprises the need for self-respect, mastery, self-confidence, independence, and freedom. Herzberg (1968) incorporated an additional dual approach whereby a worker who lacks job satisfaction is not necessarily dissatisfied, but rather simply has no satisfaction. Thus, failing to fulfill the lower-order needs in Maslow's pyramid generates dissatisfaction, but achieving them does not serve as a motivator. Rather, motivation is achieved when higher-level needs (related to the job itself) are satisfied. Herzberg defined the factors related to working conditions as "hygiene" factors, which in turn are related to the work environment and which may result in job dissatisfaction. These needs have to be satisfied before higher-level needs emerge and affect motivation. Thus, personnel policies should focus on the satisfaction of higher-level needs (once those of the lower levels have been fulfilled) in order to increase the individuals' motivation. Herzberg's taxonomy still has considerable influence on job satisfaction studies. Recent efforts undertaken by Hagedorn (2000) to build a theoretical model in order to explain the job satisfaction of faculty members acknowledge, among other variables, motivators and hygiene factors. At the same time, the model also takes into account what Hagedorn termed trigger variables, defined as significant life events affecting job satisfaction. The influence of a changing environment on job satisfaction and job stress was analyzed in 19 higher education systems by Shin and Jung (2014), who concluded that market-oriented managerial reforms are the main source of academic stress, while the high social reputation of academics in their society and academic autonomy are the main sources of job satisfaction. More recently, an empirical study conducted by Bentley et al. (2015) examines job satisfaction from an international and comparative perspective through Hagedorn's theoretical model. Their results for 19 countries show that the available time for research and institutional resources are among the variables that have a positive incidence on academics' job satisfaction in most countries. The authors interpret these results as being related to the recent changes in university systems and the pressure for universities around the world to do more with fewer resources.
On the other hand, the positive effect of available time for research on job satisfaction coincides with the "taste for science" (Roach and Sauermann 2010) found in previous studies that examined the preference for research among PhD holders who work in academia. In this study, we analyze the determinants of job satisfaction among PhD holders in Spain. Specifically, we consider overall job satisfaction, and we also look at satisfaction in terms of Herzberg's dual-factor theory of basic (hygiene) and higher (motivation) needs levels. We conduct the analysis for the whole sample and also by subsamples of gender and work sector (university and non-university). The gender gap analysis is based on previous studies that analyzed gender differences in the job satisfaction of highly educated individuals, which did not reach a clear consensus. While some studies found that few or no significant differences exist between male and female faculty (Smith and Plant 1982; Ward and Sloane 2000), other studies identified differences in both directions (Bender and Heywood 2006; Hagedorn 1996; Oshagbemi 1997, 2000, 2001). In addition, for the specific case of Spain, the analysis by gender is relevant because the number of female doctorate holders has undergone a progressive increase since the 1990s; therefore, differences in job satisfaction have been reported (Canal-Domínguez and Wall 2013). With regard to the employment sector, we highlight the following. Firstly, OECD-Knowinno (2013) shows that the likelihood of working as a researcher (or in a job related to doctoral studies) is higher in the university sector than in other sectors. This difference is especially significant in the Spanish case, where the odds of working as a researcher are 19.2 times higher for those working at a university than for those in the business sector. Likewise, the probability of holding a job related to doctoral studies is also 9.1 times higher among those working at a university than in the business sector. Secondly, although the university sector concentrates the largest share of PhD employees in the labor market, the non-university sectors (industry, government, and non-profit organizations) together represent approximately 58% of the total employed. Moreover, this figure will probably increase in the future, especially in the business sector, given the limited capacity of universities and public organizations to incorporate doctorate holders (Cruz-Castro and Sanz-Menéndez 2005; Cruz-Castro and Sanz-Menendez 2015). It has to be pointed out that private R&D spending may be a major driver of highly qualified jobs. Thus, Benito and Romera (2013) indicate that a 1% increase in R&D spending leads to a 3.7% increase in highly qualified jobs. Accordingly, non-academic jobs are important not only for Spain, where doctorate holders employed in the business sector are still a small proportion of total doctorate holders, but for the EU countries in general, where doctoral programs are looking for a better match with industry needs. The study we report here is, we believe, of value for a number of reasons. First, as is shown in the next section, few studies to date specifically consider the job satisfaction of PhD holders, apart from a line of research that examines the job satisfaction of faculty members. Second, unlike most of the research in this field, the survey used in this study includes many responses related to job satisfaction (a total of 13 items, in fact).
This is a highly relevant point, given that job satisfaction has been shown to be a complex concept comprising several dimensions. Indeed, Oshagbemi (1999, 2006) and Sabharwal and Corley (2009) suggest that multiple-item scales are preferable to single-item scales in the case of job satisfaction. Third, we divide the sample into several groups so as to obtain a better understanding of the determinants of the job satisfaction of different types of employees. Finally, the results are also relevant and have implications in terms of human resource policies. Thus, of all the variables, the ones that are really relevant are those related to the labor market. In particular, Herzberg's distinction is not so clear: factors that are defined as basic needs (especially wages) are relevant for all types of job satisfaction, and not only for the basic ones, as would be expected from Herzberg's model. Notwithstanding, the factors related to basic needs are less relevant among university staff. Likewise, there are minor differences in the determinants of job satisfaction between men and women. The determinants of the job satisfaction of PhD holders The majority of studies analyzing job satisfaction adopt a wide-angled focus, with few examining PhD graduates in isolation. Moguerou (2002) and Bender and Heywood (2006) analyze job satisfaction, defined as a categorical response to a general question about the feelings an individual has for their job, in the United States. The authors consider the same data sample: the Survey of Doctorate Recipients (SDR) in the United States, which contains 35,000 individuals with a PhD in the sciences ("hard" and social) and engineering. Both studies report a U-shaped age profile for job satisfaction (especially among males). The gender analyses conducted by Moguerou (2002) and Bender and Heywood (2006) show that female PhD graduates enjoy greater job satisfaction than men. This result is in line with what has been referred to as the "paradox of the contented female worker," whereby it is argued that higher levels of job satisfaction among women are related to their lower expectations (see Clark 1997 and Bender et al. 2005). A specific analytical framework seems to be associated with the job satisfaction of academics. Thus, Sabharwal and Corley (2009), in a review of 14 studies, report that the majority show male faculty members as having higher levels of overall job satisfaction than female faculty members, particularly as regards the benefits and salary received and opportunities for promotion. Considering age and gender, Sloane and Ward (2001), who analyze academics in Scotland, report a negative effect of being female among academics younger than 35 but a positive effect among an older cohort. In a previous analysis, also conducted in Scotland, Ward and Sloane (2000) show that gender (being a man) only has a bearing on promotion prospects. However, Kifle and Desta (2012) report that no consensus has been reached on gender differences in job satisfaction among academics. Moreover, in the analysis of faculty members, different discipline areas have to be taken into consideration, since there are different attitudinal and behavioral patterns that are shaped by their distinctive epistemology, organizational commitments, and members' social relationships (Xu 2008). Also, when the disciplines are considered, the results are not conclusive with regard to gender.
Thus, while some empirical evidence did not find differences in job satisfaction in male and female faculty by disciplines (Hagedorn 2000), other studies show discipline as an important predictor of male and female job satisfaction (Sabharwal and Corley 2009;Canal-Domínguez and Wall 2013). Thus, the literature shows that the effect of gender on job satisfaction may vary among different contexts. Moguerou (2002) emphasizes job security for both men and women-defined in terms of the temporary nature of a job-as an important predictor of job satisfaction. However, Bender and Heywood (2006) report just the opposite for those who work in the business sector. Within this same framework of analysis, Oshagbemi (2006)-who considered the university instructors in the United Kingdom-shows that although the length of employment in higher education does not correlate with job satisfaction, the longer an individual has been employed at their current university, the higher their level of job satisfaction. Moguerou (2002) reports that the number of hours worked has a positive effect on the job satisfaction of males (especially those employed in the industrial sector) but a negative effect on females. However, Bender and Heywood (2006) report no effect of the number of hours worked for the whole sample, being positive only for those working for the Government. Likewise, earnings increase the job satisfaction of all those interviewed in both analyses. Finally, studies that consider the sector in which PhD holders work show that this factor affects levels of job satisfaction and that some of the determinants of job satisfaction may vary according to their discipline (Sabharwal and Corley 2009). Likewise, in their study for the United States, Bender and Heywood (2006) show a slightly higher level of job satisfaction among those working in the university as opposed to a non-academic sector. This positive effect is also reported by Moguerou (2002) in his subsample of those holding PhDs in science and engineering from the whole sample used by Bender and Heywood (2006). As Moguerou (2002) pointed out, PhD graduates are expected to be more satisfied if they develop the expected work for PhD holders. In a sectoral analysis that considers individuals working at the university and elsewhere, as the one developed here, we should expect higher levels of satisfaction, especially in the case of motivational satisfaction. In the case of Spain, Cruz-Castro and Sanz-Menéndez (2005) find that Spanish PhD graduates not working in a University value job stability. Canal-Domínguez and Wall (2013), using a previous wave (2006) of the survey used in this study, create an indicator of job satisfaction based on the responses of PhD holders to questions about intellectual challenge, contribution to society, and social status. The authors show that, in contrast to private sector jobs, working in the public sector or for non-profit institutions increases the level of satisfaction of both male and female PhDs. Likewise, in line with international evidence, women express higher degrees of satisfaction. Moreover, age and having a permanent contract have a positive effect on employee satisfaction. However, the presence of over-education or over-qualification creates dissatisfaction, again in line with international evidence. The latter is also true for seniority. 
Finally, civil status has a bearing on satisfaction: compared to single women, married women are more satisfied than married men, whereas the opposite is the case for widows or divorced women. In this context, Di Paolo (2016) considers the specific case of Catalonia (a Spanish region) and shows that, compared to faculty members, PhD recipients working in other sectors (public or private) are more satisfied with their earnings but have a lower level of non-monetary satisfaction. Data and econometric strategy The database used for this study is the second edition of the Survey on Human Resources in Science and Technology (2009), conducted by the Spanish National Statistics Institute (INE, by its Spanish initials) in coordination with the OECD, UNESCO, and Eurostat. The Survey is part of a major effort to examine and compare careers, international mobility, and satisfaction of doctorate holders across different countries. According to the INE (2009), the main objective was to develop the analysis of human resources engaged in research and, to this end, doctorate holders below 70 years of age were identified as the statistical unit. The survey provides exhaustive information about PhD holders from Spanish doctoral programs, grouped into six scientific disciplines (agricultural science, natural sciences, engineering and technology, medical science, humanities, and social sciences), awarded by public and private institutions, and who were resident in Spain between 1990 and 2009. For every Spanish region, an independent sample was designed to obtain a better representativeness of the national population of doctorate holders (INE 2009). The final sample consisted of 4123 doctorate holders. The data are cross-sectional, i.e., they compare individuals at a single point in time, taking December 2009 as the date of reference. As discussed in the "Introduction" section, Maslow and Herzberg's typology is used to analyze the self-perceived level of satisfaction expressed by respondents in the sample. The survey includes 13 questions that enquire about self-perceived levels of satisfaction with various aspects of work. Accordingly, two composite scales were constructed to proxy two dimensions of satisfaction, namely basic (or "hygiene," in Herzberg's terms) and motivational satisfaction. Both constructs follow previous studies based on Herzberg's taxonomy (Bentley et al. 2015; Hagedorn 2000). Basic satisfaction captures lower-order levels of need in Maslow's pyramid and is associated with extrinsic factors of the job, including physiological needs (salary and fringe benefits in our questionnaire) and safety (labor stability, work location, and labor conditions). Motivational satisfaction considers the following variables: career opportunities, intellectual challenge, responsibility, level of autonomy, contribution to society, social status, and work-life balance. These items are related to higher-order needs in Maslow's pyramid and refer to membership and recognition, as well as self-actualization. Each of these items is assessed with a Likert-type scale, ranging from 1 (no satisfaction) to 4 (highly satisfied). Thus, each satisfaction category is a composite scale calculated as the arithmetic mean of the variables that it includes. In order to validate the internal consistency of these constructs, we compute Cronbach's alpha. In the case of basic satisfaction, the scale reliability was 0.68; in that of motivational satisfaction, it was 0.73.
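As a minimal sketch of how the two composite scales and their internal consistency could be computed from a respondent-level table of the 1-4 Likert items: the column names in `basic_items` and `motiv_items` are hypothetical stand-ins for the survey variables, and the unstandardized form of Cronbach's alpha is assumed.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Unstandardized Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical column names for the 1-4 Likert items described in the text
basic_items = ["salary", "fringe_benefits", "job_stability", "work_location", "work_conditions"]
motiv_items = ["career_opportunities", "intellectual_challenge", "responsibility",
               "autonomy", "contribution_to_society", "social_status", "work_life_balance"]

def build_scales(df: pd.DataFrame) -> pd.DataFrame:
    """Add the two composite scales (arithmetic means, as in the paper) and report alpha."""
    out = df.copy()
    out["basic_satisfaction"] = df[basic_items].mean(axis=1)
    out["motivational_satisfaction"] = df[motiv_items].mean(axis=1)
    print("alpha (basic):", round(cronbach_alpha(df[basic_items]), 2))
    print("alpha (motivational):", round(cronbach_alpha(df[motiv_items]), 2))
    return out
```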
Cronbach's alpha values of this magnitude are considered acceptable in the literature (Malhotra 2010). In addition to these composite scales, the survey included a question about overall job satisfaction, which is also considered in our analysis. This variable takes values ranging from 1 to 4, in correspondence with the individuals' perception of job satisfaction according to the following scale: none, low, medium, and high level of satisfaction. The survey includes information about several individual characteristics of PhD holders as well as information about doctorate training and labor conditions. In order to identify the determinants of basic, motivational, and overall satisfaction in PhD holders, we consider five categories of variables related to individual characteristics, current labor conditions, doctoral training, academic job-related characteristics, and region of residence (input/output relationships are shown in Fig. 1). As shown in Table 1, men are more satisfied than women in terms of their overall job satisfaction, although this difference is not statistically significant. Men are also more satisfied in relation to all the elements making up the scale of basic satisfaction (except in the case of salaries, where male and female levels are the same). In the case of motivational satisfaction, when the differences are statistically significant, men are more satisfied in terms of career opportunities and job autonomy. By employment sector, PhD holders working in the university are more satisfied in relation to most elements of both basic and motivational satisfaction. In contrast, those working outside the university have higher levels of satisfaction only in relation to salary and fringe benefits (basic satisfaction) and responsibility (motivational satisfaction). Our analysis also considers gender by employment sector. In universities, men are more satisfied in terms of basic satisfaction, but the levels of motivational satisfaction are more similar in both genders. Men employed outside the university are more satisfied than women in terms of their basic job satisfaction, but we find hardly any differences in terms of their motivational satisfaction. We use three measures of job satisfaction: two correspond to the scales created to assess basic and motivational satisfaction (as previously described), and one is derived from a specific question concerning overall job satisfaction in the questionnaire. In the former cases, both indexes are continuous variables calculated as arithmetic means of different items. We propose ordinary least squares regression to identify the determinants of basic and motivational satisfaction in the workplace. In addition, considering the ordered response for the overall job satisfaction variable, an ordered logit model is estimated, in line with the literature (Bender and Heywood 2006; Ward and Sloane 2000). We propose the following estimations (see Eqs. 1 to 3): Job Sat_i = α + β1·IC_i + β2·LC_i + β3·DT_i + β4·AE_i + β5·RR_i + ε_i (1); Job Sat_i^NonUniversity = α + β1·IC_i + β2·LC_i + β3·DT_i + β4·AE_i + β5·RR_i + ε_i (2); Job Sat_i^University = α + β1·IC_i + β2·LC_i + β3·DT_i + β4·AE_i + β5·RR_i + ε_i (3). Job satisfaction is analyzed for the total sample of PhD holders (1), PhD holders working in the non-university sector (2), and PhD holders working at universities (3). Each equation is estimated by gender and for the three different types of job satisfaction (overall, basic, and motivational). On the right side of the equations, the explanatory variables are represented by elements from different vectors corresponding to the following categories: IC (individual characteristics), LC (labor conditions), DT (doctoral training), AE (academic employment), and RR (Spanish region of residence).
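A minimal sketch of the estimation strategy described above, assuming a respondent-level DataFrame `df` that already contains the two composite scales, the overall satisfaction variable, and dummy-coded explanatory variables; the column names and the `explanatory` list are hypothetical, and statsmodels' OrderedModel is used as one possible implementation of the ordered logit.

```python
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical regressors drawn from the five vectors (IC, LC, DT, AE, RR)
explanatory = ["female", "married", "wage_high", "wage_low", "permanent_contract",
               "weekly_hours", "full_time", "job_related_to_phd", "university_sector"]

def estimate(df):
    X = sm.add_constant(df[explanatory])

    # OLS for the two continuous composite scales (basic and motivational satisfaction)
    ols_basic = sm.OLS(df["basic_satisfaction"], X).fit()
    ols_motiv = sm.OLS(df["motivational_satisfaction"], X).fit()

    # Ordered logit for overall job satisfaction (1 = none ... 4 = high);
    # OrderedModel estimates its own thresholds, so no constant is included here
    ologit = OrderedModel(df["overall_satisfaction"], df[explanatory],
                          distr="logit").fit(method="bfgs", disp=False)
    return ols_basic, ols_motiv, ologit
```

The same function could then be applied to the university and non-university subsamples, and separately by gender, mirroring Eqs. (1) to (3).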
Table 5 in the Appendix shows the descriptive statistics of the variables. Results Tables 2, 3, and 4 show the determinants of the job satisfaction of Spanish PhD holders. In each table, the analysis considers the determinants of overall, basic, and motivational satisfaction by gender. Table 2 considers the entire sample; Table 3 includes only those working in universities and Table 4 those employed elsewhere. As stated above, for overall satisfaction, we consider ordered logit estimations, taking "low level of satisfaction" as the base for comparison, whereas our analyses of the determinants of basic and motivational satisfaction follow OLS estimations. Table 2 shows the results for the whole sample. In the case of individual characteristics, only one of the variables is significant (married men)-and then at the 10% significance level-in one of the six regressions. The negative sign may be associated with the lack of promotion or work-life balance. However, it is hardly significant, as mentioned before, and we can conclude in general that the variables related to the individual characteristics are not significant in determining job satisfaction. The same is true for doctoral training variables. Thus, the job satisfaction of PhD holders seems not to be related to the experiences before their incorporation into the labor market (which may have occurred much earlier). However, most labor conditions have a bearing on job satisfaction: thus, the higher the wage level, the higher the level of overall, basic, and motivational satisfaction in both men and women employees. Likewise, having a permanent contract increases all types of satisfaction, whereas the number of hours worked reduces them. The other variables condition job satisfaction to a lesser extent. Thus, having a full-time job increases overall and basic satisfaction in female employees. A close relation between the job performed and the PhD holder's studies increases overall and motivational satisfaction, while a weak relation reduces basic and motivational satisfaction (although only for males in the case of basic satisfaction). We find that a mismatch between the job performed and the PhD holder's qualifications reduces all types of satisfaction in men (as well as motivational satisfaction in the case of women). The institution where an employee works and the existence of an educational mismatch have hardly any impact on job satisfaction. Finally, some regional variables are significant (results available upon request). To sum up, some labor conditions have a bearing on job satisfaction (with the expected sign): a higher wage level, having a permanent contract, working fewer hours, and being employed in a job that is related to the field of specialization all relate positively to job satisfaction. In the case of these factors, we find no differences by type of job satisfaction or gender. Indeed, only a few variables present different outcomes by gender: the mismatch between qualifications and job is especially relevant among men, whereas having a full-time job is significant only for females. Thus, the job satisfaction of PhD holders is very much related to the direct conditions they face at work and not to their previous experiences in life (individual, school and family characteristics as well as doctoral training). 
Finally, there are hardly any differences between the types of satisfaction (overall, basic, or motivational) for those variables expected to have a particular bearing on basic satisfaction (i.e., those related to income and basic needs, such as type of contract, hours worked, and workday). However, the variables most closely associated with motivational satisfaction have hardly any impact on basic motivation. The results in Table 3 consider only those PhD holders working in universities. As for the whole sample, individual and doctoral training variables are not statistically significant. In the case of labor conditions, we obtain the following outcomes: the influence of wages is less clear Overall satisfaction was estimated following an ordered probit model; basic and motivational satisfactions were estimated by OLS *p < 0.1, **p < 0.05, ***p < 0.01; standard errors are in parentheses in the case of the university sector (especially in relation to overall and motivational satisfaction); a permanent contract has a positive effect on both male and female satisfaction in the case of overall and basic satisfactions; the number of weekly hours worked reduces all kinds of satisfaction, but only for men; and a full-time job is significant especially in the case of basic satisfaction. The relationship between the job and doctoral studies is positively significant only in the case of a close match (and especially in the case of male PhD holders). No effect of education/qualification mismatch is found, given that this is unlikely among PhD holders employed in universities. As for those variables related to the job position held, being a professor increases all types of satisfaction among male PhD holders. Additionally, being a supervisor of a Master's or PhD thesis increases the motivational satisfaction of men. As for the whole sample, residence in certain regions is also found to be significant. Results for PhD holders not working in universities are shown in Table 4. Here, again the estimations show few differences with respect to the previous analyses in the case of individual characteristics and doctoral training. Labor conditions, however, are more relevant. The results in Table 4 show that wages have a similar bearing on satisfaction to that described for the whole sample. As such, this variable is more relevant than for those employed in the University sector: low wages significantly reduce the three types of satisfaction among men and women. High wages increase satisfaction (especially in the case of male PhD holders). A permanent contract increases (especially overall and basic) satisfaction. A full-time job also increases job satisfaction (above all basic satisfaction) for women. However, this variable is statistically significant with a negative sign (although at 10%) for men. This may be related to Overall satisfaction was estimated following an ordered probit model; basic and motivational satisfactions were estimated by OLS *p < 0.1, **p < 0.05, ***p < 0.01; standard errors are in parentheses the fact that dissatisfied female employees find easier to leave their current jobs than equally dissatisfied male employees (Sanz-De Galdeano 2002). As above, the number of hours worked reduces satisfaction. The closeness of the relationship between a worker's doctoral studies and the performed job is again significant; yet, it mainly increases levels of motivational satisfaction. 
Our results for the educational and qualification mismatch confirm, to some extent, those found for the total sample: overqualified women have lower levels of motivational satisfaction, whereas overqualified men have lower levels of basic and overall satisfaction. However, the results are significant at a very low level. As for the institution where PhD holders work, only men in non-profit organizations show higher levels of motivational satisfaction. For nonuniversity employees, some regional variables are also significant. Conclusions In this study, we analyzed the determinants of job satisfaction among PhD holders in Spain. Satisfaction is defined in overall, basic, and motivational terms following Herzberg's typology. We carried out our analysis for the whole sample as well as for subsamples based on gender and work sector (university or elsewhere). The analysis has revealed several interesting results. With regards to basic and motivational satisfaction-when the whole sample is considered-the variables that can be related to basic motivation (salary, type of contract, and workday) have a bearing, with the expected signs, on both basic and motivational job satisfaction (as well as overall satisfaction). However, the variables that can be related to motivational satisfaction affect mainly this type of job satisfaction. As such, it seems that the differentiation between basic and motivational Overall satisfaction was estimated following an ordered probit model; basic and motivational satisfactions were estimated by OLS *p < 0.1, **p < 0.05, ***p < 0.01; standard errors are in parentheses satisfaction in Spain is not so clear in the case of the former since wages and labor stability increase all three types of job satisfaction among Spanish employees. This is perhaps a reasonable finding in a Southern European country 2 where, according to European Commission (2007), the monetary conditions of researchers are lower than the wages of Northern Europe and Nordic countries, and where labor relations are not as "sophisticated," to use the terminology of Purcell and Sisson (1983). Yet, these factors related to basic needs do not seem to be so relevant for PhD holders employed in universities. In fact, university reforms, and especially the 2001 University Law (named Ley Orgánica de Universidades or LOU), shaped the PhD labor market, bringing widespread discontent among senior and particularly junior researchers. The LOU established a requirement for the national accreditation of faculty members, which implied a mechanism that enforce PhD holders to develop evidence of quality progress in teaching and research, putting more pressure on both competencies. As this study did not compare job satisfaction by age cohorts, this constitutes an interesting line of research for the future. These types of changes may have an impact on PhD holders' satisfaction as Shin and Jung (2014) show for different university systems. In addition, there was a sharp increase in the number of graduates together with a lack of capacity of the public universities to absorb this increasing flow (Cruz-Castro and Sanz-Menéndez 2005). This increase generates more pressure on the supply side of the market on both university and non-university sectors. In the case of universities, the system is constrained by the increase of budget deficits which reduces the capacity to create new positions. 
As for the non-university systems, the figures for Spain reveal a mismatch between the doctorate holders' training and the market demands, as Garcia-Quevedo et al. (2012) point out, since the demand of doctorate holders in the industry is limited due to the poor knowledge about the potential value added by this type of employees. The evolution of the PhD labor market, then, helps to explain the results obtained in this research. Given the deterioration of working conditions in a large number of PhD graduates, it seems adequate that factors related to "basic satisfaction" are relevant for all types of satisfaction. This result is also consistent with the work of Cruz-Castro and Sanz-Menéndez (2005), who showed that PhD holders employed in the private sector indicated job stability as an important factor of satisfaction (traditionally reported by the public sector staff). With regards to gender, our results do not reveal any significant differences, except for the motivational variables of PhD holders employed at the university. This is due to the absence of significant differences in the labor market by gender for PhD holders in Spain. Thus, no relevant differences are observed in terms of the sector of activity, whether the job is related to the doctoral studies, if the job is full time or part time as well as the number of hours worked (INE 2009). Likewise, with regards to the wages, differences are very small as well. Our sample wage is divided into four levels, 2.8 being the average value level for men and 2.5 for women. In addition, as the INE (2010) shows, women's salaries represent 96.5% of that of men's in the education sector and 93.6% in the public sector (the two sectors of activity where 82% of our PhD holders are employed). Thus, it seems reasonable that labor market factors have a similar effect on both groups by gender in respect to their main labor conditions. These results are consistent with Sanz-Menéndez et al. (2013), who found no significant differences by gender when analyzing the time to tenure in academic labor market in Spain. Results show some differences by working sector. Thus, our subsample of university employees presents a clearer differentiation between the factors related to basic and motivational satisfaction. Among these workers, moreover, we find that wage levels have a minor impact on their satisfaction, while variables related to mismatch are not significant (as expected among PhD holders working at the University). Certain "motivational variables" related to status and mastery-such as being a professor or a PhD advisor-are found to increase male motivational satisfaction (as well as basic satisfaction in the case of being a professor). However, among PhD holders not working at a university, wages are relevant for all types of satisfaction (as they are for the whole sample). The rest of the variables (personal and related to training), in common with the other sample, are hardly significant. Finally, the results show that the job satisfaction of PhD holders is very much related to the direct conditions they face at work and not to previous experiences in life. As expected, they prefer higher wage levels and less working hours. Likewise, other factors positively related to job satisfaction relate to "basic" or "hygiene" needs, such as having a permanent contract and a full-time job. The implications of the results are quite clear. 
In order to increase the job satisfaction of PhD holders in Spain, not only "sophisticated" human resource management policies (e.g., motivational policies) are required but also salary and other labor variables related to basic satisfaction, such as to guarantee permanent contracts with full-time employment. This seems to be strongly influenced by the reduction of probabilities to work at the public research system, traditionally considered as the first option for PhD graduates. Given the public sector constraints, public policies should try to improve the industry capacity to hire PhD holders in order to contribute to produce a better matching between the doctoral programs and the jobs offered by the firms. In addition, motivational policies are also required. Once a permanent position is guaranteed, PhD holders want to develop a professional career within the institution they work. Thus, career opportunity plans should be available to retain the best in the company. In this context, and especially for those employed outside the university, firms should be aware of the presence of educational mismatch and, mainly, qualification mismatch since they reduce individual's satisfaction. Likewise, companies should not recruit PhD holders if they need less educated individuals. The future evolution of the PhD job market without considering the factors that increase job satisfaction may lead to reduce the supply of doctorates. For university staff, actions related to motivational satisfaction seem to be more relevant, especially for male employees. Thus, to retain individuals to this type of work, it seems crucial to invest in factors related to motivational satisfaction. As reported before, gender differences are only observed for the motivational variables of PhD holders employed at the university. Therefore, human resource management should consider most personnel policies indicated above for both male and female employees. Finally, regional location should be taken into consideration since job satisfaction is also conditioned by the place where PhD holders work. This study has some limitations. Firstly, we use cross-section data instead of a panel. Thus, some of the results may be conditioned to the specific economic circumstances at the time of the interview. Secondly, we develop a quantitative analysis that shows the effect of some independent variables on the dependent ones. The reasons behind these relationships cannot be always answered with our data; hence, further analysis would be required. However, we believe that our analysis helps to understand the determinants of PhD holders in Spain, which may be similar to the ones in other Southern European countries.
8,631
2017-11-01T00:00:00.000
[ "Economics", "Sociology" ]
THE PRINCESS PROJECT: FROM DIFFERENTIAL TO INTEGRAL EXPERIMENTS Following the shutdown of the CEA Valduc experimental facilities, where, for more than 50 years, IRSN used to perform experiments related to criticality safety, IRSN initiated a new project named PRINCESS (PRoject for IRSN Neutron physics and Criticality Experimental data Supporting Safety). The objective is to continue collecting experimental data necessary for the IRSN missions in nuclear safety. For this purpose, collaborations with various national and international laboratories have been established. The PRINCESS project covers various nuclear physics fields from nuclear data to criticality-safety and reactor physics providing information to both differential and integral data improvements. INTRODUCTION In the frame of its missions, IRSN performed various experimental programs related to criticality-safety in collaboration with the CEA Valduc. Following the shutdown of the Valduc facilities, IRSN has initiated a new project named PRINCESS [1] for "PRoject for IRSN Neutron physics and Criticality Experimental data Supporting Safety". The aim is to continue gathering experimental data necessary for criticality safety by the mean of collaborations with various national and international laboratories and to extend them to reactor physics. The PRINCESS project covers the acquisition of new experimental data, but also old ones that are not freely available. This paper presents an overview of the status of the IRSN PRINCESS project. NUCLEAR PHYSICS FIELDS OF THE PRINCESS PROJECT AND ON-GOING COLLABORATIONS The PRINCESS project covers various nuclear physics fields from nuclear data to criticality safety and reactor physics providing information to both differential and integral data improvements. The following main technical domains related to the project have been identified according to the various needs. Nuclear data A precise knowledge of neutron cross sections is of great importance to accurately calculate reaction rates and detailed neutron flux distributions in many nuclear applications. Reducing uncertainties in the neutron cross-section data can result in an enhanced safety of present and future nuclear systems. Therefore, differential experiments with high energy resolution over the whole energy spectrum are required. These measurements are mainly obtained at neutron Time-Of-Flight (TOF) facilities. Water thermal scattering cross-sections measurements The water cross-sections and the distribution of neutrons in the thermal energy range are determined by the interatomic bindings of the hydrogen atoms in the molecular system. In the standard thermal scattering libraries, these effects are described by a S(D E) function which is often termed as thermal scattering law (TSL). The differences between the most recent TSL evaluations and the lack of clear understanding of the temperature behavior of the TSL highlight the need of new experiments for light water to prepare and validate new TSL evaluations. Thus, IRSN has carried out Inelastic Neutron Scattering measurements of light water with two time-offlight spectrometers at the Institut Laue-Langevin (ILL) in France to generate the frequency spectrum at several high temperatures and pressures [2]. 
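The thermal scattering law referred to above is conventionally tabulated as S(α, β), a function of dimensionless momentum-transfer (α) and energy-transfer (β) variables. As a brief orientation for non-specialists, the sketch below converts an incident energy, outgoing energy, scattering cosine, and temperature into (α, β) using the standard definitions; it is an illustrative aid only, and the mass ratio A (bound scatterer mass to neutron mass) is the one parameter the reader must supply for a given material.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def alpha_beta(E_in, E_out, mu, T, A=0.99917):
    """Standard dimensionless variables of the thermal scattering law S(alpha, beta).

    E_in, E_out : incident and outgoing neutron energies in eV
    mu          : cosine of the scattering angle
    T           : temperature in kelvin
    A           : scatterer-to-neutron mass ratio (default: hydrogen bound in light water)
    """
    kT = K_B * T
    beta = (E_out - E_in) / kT                                            # energy transfer
    alpha = (E_out + E_in - 2.0 * mu * math.sqrt(E_in * E_out)) / (A * kT)  # momentum transfer
    return alpha, beta

# Example: 25 meV neutron up-scattered to 30 meV at 90 degrees in water at 550 K.
# print(alpha_beta(0.025, 0.030, 0.0, 550.0))
```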
Additionally, series of light water inelastic neutron scattering experiments were also conducted at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) covering temperatures ranging from 295 K to 600 K and pressures of 1 bar and 150 bar [3], which correspond to ranges of pressurized light water reactors. Considering the requirement of high resolution inelastic neutron scattering measurement, it appeared that the SEQUOIA spectrometer at the SNS is one of the best suited facilities to study the dynamical structure properties of light water at high temperature and pressure with a satisfactory signal-to-noise ratio. Molybdenum cross section measurements In reactor physics, molybdenum isotopes are mainly encountered in irradiated fuel as fission products or in molybdenum alloys in research or naval reactors. In the nuclear fuel cycle, 95 Mo is also taken into account in criticality safety studies considering burn-up credit for transport casks or irradiated fuel storage, 95 Mo being one of the 15 main absorbing fission products in Light Water Reactors (LWR) irradiated fuel assemblies. Besides, in reprocessing plants, during the dissolution process, some UPu-MoZr deposits appear in specific equipments. Thus, accurate nuclear data of molybdenum isotopes in a wide energy range are of great importance for nuclear safety. As the available nuclear data for molybdenum included in the nuclear data libraries are not of sufficient quality and information about uncertainties and covariance are missing, IRSN and the Japan Atomic Energy Agency (JAEA) performed experimental measurements on molybdenum at the J-PARC (Japan Proton Accelerator Research Complex) facility in Japan at the beginning of 2019 [4]. J-PARC is a proton accelerator facility operated by JAEA and KEK in Tokai-Mura. Neutron capture cross section and transmission measurements were performed on natural molybdenum with ANNRI (Accurate Neutron-Nucleus Reaction measurement Instrument) in MLF (Material Life and science Facility for the purpose of using neutrons to investigate material properties) of J-PARC. Five metallic natural molybdenum samples with various thicknesses of 0.1 mm, 0.5 mm and 2 mm for capture and 0.5 mm and 5 mm for transmission were considered. Additional measurements were performed to determine the background and the normalization factors. A NaI detector (flight length of about 28 m) was used for capture measurements and a Li-glass detector (flight length of about 28.7 m) for transmission measurements. After the data reduction process, the measured data are being analyzed in order to produce more accurate cross sections and associated uncertainties. Additional measurements on 95 Mo, 96 Mo, and 97 Mo enriched samples are already envisioned for the next years. The JAEA-IRSN collaboration in nuclear data field will also be strengthened with the measurements of neutron capture cross section of iron isotopes (using enriched samples of 56 Fe, 57 Fe and 54 Fe) that are planned to be conducted at J-PARC in the beginning of 2020. Criticality risk prevention The objective is to contribute to criticality calculation packages and nuclear data validation. The experiments of interest are mainly slightly sub-critical approaches extrapolated to critical conditions using split-tables, pool tanks or sub-critical experiments dealing with noise measurement techniques. 
Within the PRINCESS project, IRSN continues its long-standing collaboration with US Department of Energy (DOE) laboratories on various experimental programs under the auspices of the US Nuclear Criticality Safety Program (NCSP). Additionally, the long term collaboration established with JAEA has been strengthened notably with the study of neutronic characteristics of fuel debris in the new STACY Facility. Design of critical experiments dealing with Mo and Rh in SPRF/CX In nuclear fuel cycle, molybdenum and rhodium are sometimes taken into account in criticality safety studies for transport casks, irradiated fuel storage or in reprocessing plant (use of Burn-up credit with fission products or UPu-MoZr deposits in reprocessing plants equipment after dissolution, for example). Thus, having accurate nuclear data of Mo and Rh isotopes validated for a wide energy range is important for criticality-safety practitioners. Taking into account that very few integral experiments are available in thermal and epithermal spectra and in order to have uncorrelated additional data for Mo and Rh, IRSN studied the benefit of performing some experiments in the SPRF/CX installation at Sandia National Laboratories (SNL). Integral Experimental Requests (IER) were submitted to the NCSP for both isotopes in 2015 and preliminary designs of experiments [5] were proposed by IRSN. For Molybdenum experiments (IER 305), it is envisioned to perform experiments using molybdenum metallic sleeves around SANDIA UO 2 fuel rods or molybdenum metallic foils, which would be inserted between UO 2 pellets in SANDIA specific fuel rods (known as BUCCX). Regarding Rhodium validation (IER 306), preliminary design shows that using rhodium foils in BUCCX fuel rods brings potential improvements to the sensitivity of k eff to the total cross section of 103 Rh when comparing with former existing experiments and that experiments with UO 2 fuel rods in a rhodium nitrate solution offer the opportunity to cover the sensitivity at the first resonance peak of 103 Rh. Experiments involving a rhodium resin block pierced with holes hosting UO 2 rods are also being investigated. TEX experiments The need for epithermal and intermediate energy range critical benchmarks is an established international criticality safety data need. Indeed, a high sensitivity to the cross sections of interest in the intermediate energy range (0.625 eV -100 keV) is very desirable to validate nuclear data in the resonances range. The goals of the TEX (Thermal and Epithermal eXperiments) program [6], performed under the auspices of the NCSP, are to address these needs by executing critical experiments with NCSP fissile assets that span a wide range of fission energy spectra, from thermal (below 0.625 eV), through the intermediate energy range (0.625 eV to 100 keV), to fast (above 100 keV). Three test bed assemblies that will be assembled on a vertical lift machine (namely PLANET or COMET), have been designed by Lawrence Livermore National Laboratory (LLNL): a 239 Pu assembly that uses Zero Power Physics Reactor (ZPPR) plutonium metal plates, a 235 U assembly that uses highly enriched uranium (HEU) metal Jemima plates, and a 233 U assembly that uses 233 U oxide ZPPR plates. The assemblies were designed to be easily modified and consist of layers of fuel interspersed with varying amounts of polyethylene moderator, which is used to tune the neutron fission spectrum, and a thin polyethylene reflector to reduce the effects [7], IRSN being the independent reviewer. 
Since the TEX kick-off meeting, IRSN is collaborating with NCSP in experiments design and evaluation and is currently leading the design of the TEX-MOX experiments [8]. The aim of the TEX-MOX program is to obtain critical experiments representative of UO 2 -PuO 2 powder mixtures (11wt% -30wt% PuO 2 content) with a varying isotopic content of 240 Pu (5wt% -25wt%) and different water contents (between 0wt% and 5wt%). The set of the TEX-MOX experiments should also cover different energy ranges from thermal to fast neutrons. Considering some experimental constraints and using optimization algorithms implemented in the IRSN PROMETHEE tool [9], preliminary critical configurations that satisfy the goals were found. Mixing available fuel plates allows varying the plutonium content and the isotopic 240 Pu content and the use of different materials as moderator and reflector allows covering different parts of the intermediate energy range. Thus, using Al 2 O 3 as moderator allows covering the 1 keV-100 keV energy range, whereas polyethylene, borated polyethylene and graphite plates allow increasing the k eff sensitivity to cross sections for energies ranging from 5 eV to 1 keV. Additionally, it was shown that critical thermal and fast configurations can be achieved. It is important to mention that the decay heat of these configurations is higher than those of the TEX-Ta configurations. Therefore, thermal and radiation dose issues associated to the strong decay heat should be analyzed in further studies. Fuel debris experiments in the new STACY Facility In case of severe accident, it is important to evaluate the neutronic characteristics of molten-coreconcrete-interaction (MCCI) products, which are composed of a mixture of fuel, concrete, and alloys (steel, zircaloy, etc.) to facilitate criticality risk assessments during retrieval operations from the reactors. Following the Fukushima accident, the Japan Atomic Energy Agency (JAEA) has developed activities on the criticality control of fuel debris within a contract with National Regulation Authority (NRA). For that purpose, it was decided to re-start and renew the Static Critical Experiment Facility (STACY), whose core is being converted from a solution-fuel type to a fuel rod and water-moderator type [10]. The first criticality of the new STACY is planned in the beginning of 2021. For many years, IRSN and JAEA have renewed regularly their general collaboration agreement and have had technical exchanges on various subjects in the frame of criticality safety. Therefore, in the frame of the PRINCESS project, IRSN and JAEA are collaborating in the core design and performance assessment of the fuel debris experiments [11]. Preliminary calculations were performed to optimize the design of core configurations of the new STACY to measure the criticality characteristics of pseudo fuel debris focused on Molten Core Concrete Interaction (MCCI) debris. The design method applied to define preliminary configurations is based on the optimization of k eff sensitivity to the cross section using the IRSN PROMETHEE tool [9]. It was proposed that, for the first experimental configurations, pseudo fuel debris rods would be composed only of concrete. To have experiments adapted to the diversity of possible concretes, experiments for different concrete compositions could be considered. These compositions could be based on the samples taken from Fukushima Daiichi NPP. 
Sub-critical experiments using neutron noise techniques Because sub-critical experiments are time-dependent they are helpful to validate fission multiplicity treatments in calculation packages. Besides, validating neutron noise techniques measurements are also of great interest for characterization of multiplying systems, which could be useful to the nuclear counterterrorism and non-destructive analysis communities. In the frame of the PRINCESS project, IRSN is collaborating with LLNL and Los Alamos National Laboratory (LANL) on the design, evaluation and analysis of various sub-critical experiments involving different fissile materials and reflectors. Thus, IRSN contributed to the ICSBEP evaluation [7] and the analysis (using the IRSN MORET code [12]) of the 5 subcritical benchmark experiments conducted by LLNL with the Inherently Safe Subcritical Assembly (ISSA) [13], a highly enriched uranium (HEU) system moderated and reflected by water. Additional experiments using HEU fuel elements with lower uranium content are planned in 2019. Moreover, IRSN and LLNL will collaborate on the design of new experiments mixing fuel assemblies of different uranium content in order to study the feasibility of detecting a fuel loading error with neutron noise techniques. In recent years, LANL has performed subcritical benchmark evaluations that comprise many configurations with beryllium-reflected plutonium (BERP) ball, referred as to FUND-NCERC-PU-HE3-MULT benchmarks in the ICSBEP handbook [7]. Some configurations include Cu (known as SCRaP experiments [14], for Subcritical Copper-Reflected Alpha-phase Plutonium), Ni and W reflectors. These experiments were performed at the National Criticality Experiments Research Center (NCERC) at the Nevada National Security Site (NNSS). IRSN and LANL collaborated on the evaluation and the analysis of these experiments using their own codes. To expand the range of subcritical benchmarks, a new experimental program, named MUSIC [15], will be performed by LANL in collaboration with IRSN using highly enriched uranium shells, known as the Rocky Flats shells. This experiment will be performed using the Planet vertical lift machine, which will combine an upper and lower subassembly together to assemble the full experimental configurations. The aim is to validate subcritical measurement and simulation methods for deeply subcritical to critical configurations. Criticality accident detection and consequences assessment In the frame of its missions in nuclear safety, IRSN should evaluate the expected radiological consequences of a potential criticality accident, in order to verify that preventive measures planned by the operators (detection, evacuation, mitigation of the criticality accident, etc.) can limit as much as possible these consequences to the staff, the population and the environment. Besides, in case of a criticality accident, doses to the victims should be estimated within a short delay to contribute to the diagnosis and to help to define the best therapeutic strategy. For that purpose, it is necessary to understand the phenomenology of the criticality accidents and to validate the Radiation Protection Instrumentations and doses estimation. It is also tremendous to maintain competences by practicing criticality accident dosimetry regularly and to train staff to perform such dosimetry. 
Since 2014, IRSN has contributed to many experiments and dosimetry inter-comparison exercises [16] with DOE laboratories, the first one being organized by IRSN with the CALIBAN and PROPSERO reactors at the CEA Valduc research center [17]. This was followed by criticality accident dosimetry exercises that occurred in the US Nevada National Security Site using GODIVA-IV and FLAT-TOP reactors under the auspices of the NCSP. One of the main conclusions of these exercises is that they need to be performed regularly (ideally every one or two years) in order to maintain skills of the labs for this very specific dosimetry. Further collaboration is envisioned with LLNL in the next years using a well characterized 252 Cf source, in order to investigate of Criticality Accident Alarm System (CAAS) response. IRSN would perform the neutron kerma and photon dose measurements and neutron spectrum using various detectors. A full dosimetry exercise that would be performed under the auspices of the NCSP is also under discussions. The aim is to play an exercise in conditions as close as possible of a real accident in order to test not only the classical criticality dosimetry systems, but also retrospective accident techniques in order to estimate the dose distribution in the body. Several phantoms (with various orientations and distances from the reactor) may be placed around GODIVA or FLATTOP reactor depending on their availability. These phantoms will be equipped with various "items" and samples (such as hairs, nails, teeth, blood sample, etc.), which will be measured and used for the estimation of doses. Validation of depletion calculation codes Depletion codes are used in reactor physics to determine the composition of the spent fuel assemblies and also in criticality-safety studies for burn-up credit calculations. In order to validate the calculation packages (including codes, nuclear data and calculation schemes), Post-Irradiated Examinations (PIE), which consist of chemical analyses of irradiated fuel samples, are needed. On November 2014, SCK•CEN (the Belgian Nuclear Research Centre) and IRSN signed a new framework cooperation agreement in the field of nuclear safety, radiation protection and nuclear waste management, which was actually the renewal for another 5 years of an on-going longstanding collaboration that was initiated in 2008. At the same time, a bilateral collaboration agreement was signed under this framework agreement that outlines the collaboration between SCK•CEN and IRSN in the REGAL (Rod Extremity and Gadolinia AnaLysis) program [18]. It aims investigating two challenging issues in terms of characterizing the isotopic inventory of nuclear fuel: rod extremity effects and the use of gadolinium-doped fuel. Reactor physics The aim is mainly to contribute to neutron physics codes validation and to instrumentation and measurements techniques validation. The experiments considered are mainly reactor mock-up experiments. In the frame of the PRINCESS project, measurements are being performed in order to support safety assessment of reactors start-up and to study material ageing. Simulations have shown that, at low power levels in a critical system, neutrons may start to cluster. Therefore, measurements were performed in 2017 at the Rensselaer Polytechnic Institute (RPI) Walthousen Reactor Critical Facility (RCF) in the frame of LANL, IRSN, and RPI collaboration in order to measure this phenomenon in a real reactor [20]. 
Measurements were conducted at different reactor powers, from less than 1 mW to 0.85 W at various reactivity states. Neutron counting was performed using two NOMAD detectors placed one on the other but also 3 He tubes and RCF detectors (uncompensated ion chambers). 3 He tubes were used to evaluate correlations versus distance in the core and NOMAD detectors for the analysis of spatial correlations versus the reactor power. Measurement times varied from 30 seconds to 2 hours long. Experimental results are being analyzed taking into account experimental uncertainties. Density laws In criticality-safety studies, density laws are often used to determine the composition of fissile media as a function of the moderation ratio. Chemical analyses of fuel solutions at various concentrations, acidities and temperatures are thus required in order to generate and validate density laws. Some measurements on plutonium solutions are envisioned in the frame of the collaboration with the DOE-NCSP. IRSN also takes part to the European Horizon 2020 GENIORS project [21], which focuses on improving current recycling of spent nuclear fuel and future multiple recycling strategies to be implemented in the 4 th generation of reactors. In this frame, chemical analyses of fuel solutions using solvents envisioned for new reprocessing process at various densities and concentrations are planned to be performed. CONCLUSIONS IRSN initiated a project named PRINCESS (PRoject for IRSN Neutron physics and Criticality Experimental data Supporting Safety). The objective is to collect experimental data necessary for its missions in nuclear safety. The PRINCESS project covers various nuclear physics fields from nuclear data to criticality-safety and reactor physics providing information to both differential and integral data improvements. It concerns the acquisition of new experimental data, but also old ones that are not freely available. Strong bilateral collaborations have already been put in place with DOE/NCSP, JAEA and SCK-CEN, allowing IRSN to maintain its capabilities in nuclear criticality-safety and reactor physics. Collaborations with other institutes and other countries are foreseen in the next years.
4,708.2
2021-01-01T00:00:00.000
[ "Physics" ]
Node embeddings in dynamic graphs In this paper, we present algorithms that learn and update temporal node embeddings on the fly for tracking and measuring node similarity over time in graph streams. Recently, several representation learning methods have been proposed that are capable of embedding nodes in a vector space in a way that captures the network structure. Most of the known techniques extract embeddings from static graph snapshots. By contrast, modeling the dynamics of the nodes in temporal networks requires evolving node representations. In order to update node representations that reflect the temporal changes in the local graph structure, we rely on ideas for data stream algorithms. For example, we assess neighborhood overlap by a MinHash fingerprint-based algorithm. To evaluate our methods, in addition to the standard link prediction task, we provide dynamic ground truth data for the quantitative evaluation of similarity search by using online updated node embeddings. In our experiments, we constructed tennis tournament Twitter mention graphs as edge streams and compiled dynamic ground truth by using tournament schedule as external source. Our new algorithms outperformed snapshot-based batch methods for both link prediction and similarity search. Introduction The need for machine learning over data streams is motivated by a rapidly growing number of industrial applications of graph algorithms (Wang et al. 2017;Nie et al. 2017;Zhou et al. 2017;Wei et al. 2017) and online machine learning (Bifet et al. 2010;De Francisci Morales et al. 2016;Zhu and Shasha 2002;Žliobaite et al. 2012). In graph streams, the combination of the two areas, edges arrive continuously over time from a large network and have no duration (McGregor 2014). We intend to apply online machine learning (Bifet et al. 2010) for link prediction and similarity search by learning and updating node feature representations on the fly from graph streams. The principal task of online machine learning is to learn a concept incrementally by processing data immediately after creation (Widmer and Kubat 1996), for example, after each mention in a Twitter mention graph. Traditional, batch learners build static models from finite, static data sets, which do not change over time. By contrast, stream learners build models that evolve over time. For example, in graphs, more recent edges can form a more relevant picture of the current network structure than older ones. The final model will strongly depend on the order of examples generated from a continuous, non-stationary flow of data. Modeling is therefore affected by potential concept drifts or changes in distribution (Gama et al. 2013). Online learning seems more restricted than batch learning, which can iterate over the data set several times, and thus one could expect inferior results from online methods. By contrast, in some cases (Frigó et al. 2017), online methods perform surprisingly strongly. To track node properties in a graph stream, we adapt the highly successful technique of node representations. Representation learning methods on graphs encode the nodes of the network to points in a low-dimensional vector space. In general, representations in the embedded space should reflect the structure of the original graph. The research area of node embeddings has been recently catalyzed by the Word2Vec algorithm , developed for natural language processing. Several node embedding methods have been proposed recently (Perozzi et al. 2014;Tang et al. 
2015;Grover and Leskovec 2016;Qiu et al. 2018) and applied successfully for multi-label classification and link prediction in a variety of real-world networks from diverse domains. In order to generate node embeddings, we have to solve the challenge of maintaining node embeddings for tracking and measuring node properties and similarities as the edges arrive. Most graph algorithms are difficult to update online. For example, to compute random walk-based embeddings (Grover and Leskovec 2016), we have to be able to maintain not just the embedding but also the set of walks whenever a new edge appears in the stream. Time-aware relevance evaluation also becomes troublesome in the presence of fast changes in network structure. For link prediction (Liben-Nowell and Kleinberg 2007), in an edge stream we can update our model immediately after the new edge arrives, and predict a completely new list in the next step. For this so-called predictive sequential (abbreviated as prequential) evaluation (Dawid 1984), we have to define new evaluation metrics. The same difficulties for evaluating streaming recommenders was first observed in (Lathia et al. 2009). To fully utilize the power of graph embedding, our main focus for evaluation is similarity search, in which we assess the information encoded in the embedding about node pairs rather than just a global property required for predicting links. For similarity search, the prime source of difficulty lies in ground truth compilation. Static relevance measures such as precision, recall, or NDCG already require ground truth labeling, which itself often requires tedious human effort such as the effort that has been made for TREC topics (Clarke et al. 2004). In a dynamic graph stream, depending on time granularity, the same human data curation may be required in each time step. Algorithms for temporal graphs have already started to emerge in publications; however, very few graph learning algorithms are capable of immediately updating their models from edge streams. Similarly, in the literature we rarely find real graph streaming methods where node labels are highly dynamic: even link prediction tasks are evaluated in batches for sets of edges that appear over a longer period in time. Any embedding method can be applied in dynamic graphs by considering graph snapshots in time. However, such solutions do not only react slowly, but also build new representations for every snapshot, hence they require an entire model retraining for downstream machine learning tasks (Hamilton et al. 2017a). The more natural part of the task is updating the embedding: gradient descent is a commonly used optimization procedure, which naturally lends itself to online learning algorithms (Juang and Lin 1998) as well. However, walk-based embedding methods published so far could only efficiently rebuild the walks for snapshots or larger batches of insertions and deletions, and were not able to update the set of walks for every single new edge in the stream. Present work. StreamWalk, our first algorithm updates the node embedding online to track and measure node properties and similarity from a graph stream (Rozenshtein and Gionis 2016). StreamWalk is based on the recent concept of temporal walks containing edges ordered in time. As illustrated in Fig. 1, the StreamWalk algorithm picks node samples for a single node from its temporal neighborhood, using temporal walks ending in the node. Given the sample, we optimize to make the embedding of the sample similar to the source node. 
Our algorithm performs online machine learning (Bifet et al. 2010) by continuously updating a model as we read the graph stream. Its key ingredients are online gradient descent optimization (Juang and Lin 1998) and time respecting temporal walks (Rozenshtein and Gionis 2016). StreamWalk improves over static node representations as applied to graph streams in three key ways: • It accounts for the ordering of edges by sampling only from time respecting random walks, capturing richer information about graph structure. • It includes an efficient data structure to sample random walks online without storing the entire edge set. • The node representations evolve over time to reflect changes in network structure. Our second algorithm directly learns the neighborhood similarity of node pairs in the graph stream, which we call second order similarity. By MinHash fingerprinting (Fogaras and Rácz 2005), we efficiently approximate the neighborhood Jaccard similarity of any two nodes at a given time. Then we optimize the embedding to make pairs similar, proportional to the overlap of their neighborhood. Our main results are twofold: • We design two algorithms that can update their node representations quickly in large graph streams, and outperform the baselines among others in link prediction tasks. • We design a quantitative experiment for accessing the quality of temporal node embeddings based on the Twitter tennis tournament mention graphs of (Béres et al. 2018), which include temporally changing node labels. In a supervised experiment, Fig. 1 Concept of StreamWalk. When uv edge arrives in the stream, vertices from the temporal neighborhood of u are sampled via temporal random walks. The method optimizes for the similarity of v and the sampled node w we show that online updateable embeddings capture node similarities better than static embeddings. The rest of this paper is organized as follows. First we summarize the related works in "Related works" section. In "Dynamic vector space embedding methods in edge streams" section we introduce StreamWalk and our method for learning neighborhood similarities directly from graph streams. In "Similarity search experiments" section we first examine the quality of dynamic node embeddings on the RG17 and UO17 Twitter data sets. Finally, in "Online link prediction" section we consider the online link prediction problem as another evaluation of our methods. Related works Temporal networks. A large variety of temporal network algorithms have appeared for connectivity, spanning trees, matchings, and many more, which are surveyed, for example, in (Holme and Saramäki 2012; Aggarwal and Subbian 2014). The usual approach for analyzing temporal graphs is to use timestamps to create a series of static graph snapshots (Kumar et al. 2010). High temporal granularity networks are considered in the edge or graph stream model (McGregor 2014) where edges must be processed once they arrive in the stream; for example, a random walk algorithm is described in (Sarma et al. 2011). The concept of time respecting paths, in which adjacent edges must be ordered in time, is key in our results and directly used in one of our embedding models. The concept was perhaps introduced in (Moody 2002) for analyzing diffusion in networks. In another terminology, temporal walks were used to construct time-aware centrality metrics in (Rozenshtein and Gionis 2016;Béres et al. 2018). Online machine learning. 
The area of online machine learning covers algorithms that work from data streams with only a limited possibility to store past data (Bifet et al. 2010). We define our models over graph streams where the data stream consists of the edges of the graph. Our models are online updateable, hence they are capable of adapting to concept drift. Link prediction from graph streams is one of our tasks where the goal is to predict the next edge appearing in the edge stream. The problem is closely related to recommender systems where the strength of online machine learning has been observed recently (Ling et al. 2012;Frigó et al. 2017). Note that for link prediction we use the prequential evaluation, which was published more than thirty years ago (Dawid 1984) but has only recently come into widespread use (Gama et al. 2013) for streaming algorithms. Representation learning on graphs. Embedding methods on graphs encode the nodes of the network to vectors in a low-dimensional vector space. In general, representations in the embedded space should reflect the structure of the original graph. Perhaps the most well-known method is Laplacian eigenmaps (Belkin and Niyogi 2002). Another class of models is based on the adjacency matrix of the graph; one popular example is graph factorization (Ahmed et al. 2013). Recently, random walk-based approaches have been proposed, like Node2Vec (Grover and Leskovec 2016), LINE (Tang et al. 2015), and Deep-Walk (Perozzi et al. 2014). These methods sample node pairs that co-occur in random walks, and then optimize for their similarity in the embedded space. Walk sampling is motivated by the skip-gram model from natural language processing . Furthermore, the aforementioned techniques can be unified under a matrix factorization framework (Qiu et al. 2018). We briefly review the methodology of the above approaches by following (Hamilton et al. 2017b). Static embedding methods learn an embedding vector q u for each node u in the graph. Usually the objective is to learn vectors that are similar for neighboring nodes. Let s(u) denote the neighborhood of u; then our goal is to satisfy q v ≈ q u for v ∈ s(u). Shallow embedding approaches for static graphs differ in the objective function they use to ensure the similarity of the embeddings, and in the definition of the network neighborhood s(u). Graph factorization (Ahmed et al. 2013), GraRep (Cao et al. 2015), and HOPE (Ou et al. 2016) optimize for the squared error (SE) over node pairs in the neighborhood: where sim(u, v) is the similarity of two nodes measured from the graph structure. The definition of the neighborhood is based on the adjacency matrix. Graph factorization calculates with adjacent neighbors, while GraRep uses higher powers of the adjacency matrix, for example, two-hop neighbors. As a different method, random walk-based approaches (Grover and Leskovec 2016;Perozzi et al. 2014) sample vertices from the neighborhood of a node. Sampling is done by initiating random walks from node u. Instead of SE, these approaches optimize for cross-entropy loss: where s * (u) is a random sample from the neighborhood of u. Many of the above mentioned algorithms use the Word2Vec model as an underlying abstraction by training the model, using sampled walks analogously to sentences, and using the learned embeddings as node embeddings. We follow this approach, and also investigate the use of either the input (W 1) or the output embedding (W 2) of the model (Press and Wolf 2016) as the vector space representation of the graph. 
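To make the two loss functions concrete (they are referred to later as Eqs. (1) and (2)), the following sketch shows one stochastic gradient step for a node pair under either a squared-error objective, which pushes the inner product of the two embeddings towards sim(u, v), or a skip-gram style cross-entropy objective with negative samples. It is a schematic illustration assuming dot-product similarity in the embedded space, not the exact update rule of any particular method cited above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_pair_update(q, u, v, sim_uv=1.0, lr=0.05, loss="xent", neg_nodes=()):
    """One online gradient step pulling the embeddings q[u] and q[v] together.

    q         : mapping (dict or 2-D array) from node id to its embedding vector
    sim_uv    : target similarity of the pair for the squared-error loss
    neg_nodes : negative samples for the cross-entropy loss
    """
    if loss == "se":
        # Squared error (Eq. 1 style): (q_u . q_v - sim(u, v))^2
        err = q[u] @ q[v] - sim_uv
        gu, gv = 2 * err * q[v], 2 * err * q[u]
        q[u] = q[u] - lr * gu
        q[v] = q[v] - lr * gv
    else:
        # Cross-entropy with negative sampling (Eq. 2 style, skip-gram flavor).
        g = 1.0 - sigmoid(q[u] @ q[v])                    # push the positive pair together
        q[u], q[v] = q[u] + lr * g * q[v], q[v] + lr * g * q[u]
        for n in neg_nodes:                               # push negative pairs apart
            g = sigmoid(q[u] @ q[n])
            q[u], q[n] = q[u] - lr * g * q[n], q[n] - lr * g * q[u]
    return q
```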
The above models learn static embeddings on graph snapshots; however, some of them mention extensions towards online learning from graph streams. In DeepWalk (Perozzi et al. 2014), the possibility of an online incremental update is proposed but not analyzed. An incremental update for LINE with a batch of edge insertions and deletions is described in (Yu et al. 2018), but no attempt is made to analyze the online, single-edge insertion behavior. Closest to our work is the continuous-time dynamic network embedding result (Nguyen et al. 2018), which does not learn online but computes an embedding for a single point in time. Similarly, the HTNE algorithm (Zuo et al. 2018) produces temporal node embeddings, but training is done in batch instead of executing online updates. A promising direction for computing the embedding dynamically involves recurrent neural networks, for example, Long Short-Term Memory networks; however, their applicability to graphs is not yet explored. Dynamic vector space embedding methods in edge streams We describe two node embedding approaches that are applicable in edge streams. The input of both algorithms is an edge stream (u, v, t) ordered by time t, in which each edge can occur multiple times. As required by the data stream algorithmic model, we process the edges in the order of arrival without storing the entire input. Our goal is to dynamically learn node representations that reflect the current node similarity structure of the evolving graph, by continuously changing the location of the nodes in the vector space. To this end, we give two embedding methods in the next two subsections. Two nodes are required to be mapped close in the vector space whenever they lie on short paths formed by recent edges in the first model, and whenever the sets of their recent neighbors are similar in the second model. Similarity based on reachability through short temporal walks In our first algorithm, the goal is to enforce that the embedding of node v be similar to the embeddings of nodes that can reach v across edges that appeared recently, as shown in Fig. 1. In other words, the embedding of a node should be similar to the embeddings of nodes in its temporal neighborhood. We define time respecting temporal walks (Rozenshtein and Gionis 2016) in order to sample, for each node u at any time t, nodes from its temporal neighborhood. As seen in Fig. 2, a temporal walk consists of adjacent edges ordered in time: each edge of the walk must appear no earlier than the previous one. For example, there are three temporal walks leading to node v in Fig. 4: e1; e3; and the two-edge walk e2, e3. Since edges can appear multiple times, we consider the edge set as a multiset and distinguish the walk (e2, e3), which is a temporal walk, from (e2, e1), which is not, since e1 comes earlier than e2. To define the similarity, we want to give more weight to shorter walks and more weight to fresh edges. Towards this end, for a temporal walk z whose edges appeared at times (t_1, t_2, ..., t_j), we define the probability of the walk at time t as p(z, t) = β^j · γ(t_2 − t_1) · γ(t_3 − t_2) · ... · γ(t_{j+1} − t_j), where we set t_{j+1} = t (4), β ≤ 1 is an exponential decay on the length of the walk, and γ(τ) is a time-aware weighting function based on the delay τ between adjacent edges. The concept of (4) is that a walk is more likely if the edges along the walk appeared close to each other in time. We use the exponential time weight γ(τ) = exp(−c·τ). The notation is summarized in Table 1.
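As a small illustration of Eq. (4), the function below computes the weight of one temporal walk given the arrival times of its edges and the observation time t. The length decay β and the rate c of the exponential time weight γ(τ) = exp(−c·τ) are illustrative values, not the settings used in the experiments.

```python
import math

def walk_weight(edge_times, t, beta=0.9, c=1e-4):
    """Weight p(z, t) of a temporal walk whose edges arrived at the (sorted) times in edge_times."""
    if not edge_times:
        return 1.0                               # the zero-edge walk has weight 1
    w = beta ** len(edge_times)                  # decay in the number of edges
    times = list(edge_times) + [t]               # set t_{j+1} = t, the observation time
    for t_i, t_next in zip(times, times[1:]):
        w *= math.exp(-c * (t_next - t_i))       # gamma(tau) = exp(-c * tau) for every delay
    return w

# Example: a two-edge walk whose edges arrived at times 900 and 990, observed at time 1000.
# print(walk_weight([900.0, 990.0], 1000.0))
```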
Temporal walk sampling from edge stream Given a node v, a naive idea would be to compute the walk weight {p(z, t) : z is a temporal walk from w to v} for all other nodes w and set the embedding of w close to that of v proportional to the walk weight. The problem with this approach is that it requires a time consuming walk enumeration procedure at each time instance, and has no ability to update the similarity measure by focusing only on the new edges as they arrive. Given the new edge uv that arrives at time t, we would like to only consider walks from any w that reach v by the new edge uv. Towards this end, we propose a sampling update procedure for temporal walks as follows. We select a start node w of a random temporal walk z ending in u with probability proportional to p(z, t) in (5); see Fig. 3. We generate the walks by taking steps backwards from u. To make sure the walks are temporal, we always use edges that appeared before the previous one. Among the possible edges entering the current node, we select proportional to the time-aware weighting function γ . For example, in Fig. 3, we select t 5 backwards from u, and then t 3 backwards from the next node. Finally, we also compute a stopping probability corresponding to the length decay β so that we select no new edge from w in the example; the actual formula (10) is explained later. The actual implementation is somewhat tricky in that we have to handle multi-sets of edges. A way to illustrate the implementation is to consider an edge uv that appears before another wu and then reappears, see Fig. 4. The second instance can form a temporal walk w, u, v, while the same walk is not temporal with the first instance of uv. However, the second instance of uv has a higher edge weight γ , hence we have to store the weight of the first instance as well to be able to correctly compute the weight of all temporal walks that reach node v. The implementation of the StreamWalk algorithm In Algorithm 1, we describe StreamWalk, our implementation of temporal walk sampling. Recall that the notation is summarized in Table 1. For every edge uv in the multi-set of edges arriving in the stream, we maintain the total weight of all walks ending at v at time t(uv): where we sum over all temporal walks z ending in v using edges arriving no later than t(uv). The actual computation in procedure UPDATEWALKS accumulates the weight of the walks seen in Fig. 5. There is a new single edge temporal walk uv with weight β. Furthermore, we can continue each temporal walk z that ended in u before t(uv) with uv. The total weight of these walks is p (u, t u where t u is the most recent timestamp for which p(u, t u ) is known. In other words, t u denotes the last time an edge entering u arrived in the edge stream. The exponential term accounts for the time decay of temporal walk weights since the arrival of this last edge entering u. Finally, we add all the walks that terminated at v before, with exponential time decay. The final formula becomes where t v is the most recent timestamp for which p(v, t v ) is known. The update rule is illustrated in the last step in Figs. 4 and 5. For each edge uv in the stream, we finally update the embedding of v by sampling a fixed number of temporal walks ending in u; we do this by calling procedure SAMPLEWALKS k times as described at the end of this section. Given the start node w of a walk in the sample, we optimize for the similarity of the embedding pair (q v , q w ) with stochastic gradient descent. 
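To make the streaming mechanics concrete, the sketch below maintains the decayed total walk weight p(v, t) per node and samples the start node of a temporal walk backwards from u, combining the update step described here with the per-edge weights and stopping rule that the text derives below (Eqs. 9 and 10). It assumes that the final update formula takes the form p(v, t) = β(1 + p(u, t_u)·exp(−c(t − t_u))) + p(v, t_v)·exp(−c(t − t_v)), which is one reading of the description above rather than a verbatim transcription of Algorithm 1, and it uses linear scans where the text mentions binary search over the multi-set of edges.

```python
import math
import random
from collections import defaultdict

class StreamWalkState:
    """Temporal-walk weights p(v, t) and backward walk sampling over a graph edge stream."""

    def __init__(self, beta=0.9, c=1e-4):
        self.beta, self.c = beta, c
        # Per node: incoming edge instances (time, source, p(node, time) recorded right after the edge).
        self.in_edges = defaultdict(list)

    def p_at(self, y, tau):
        """p(y, tau): most recently recorded weight at a time t <= tau, decayed to tau."""
        best = None
        for t, _, p_after in self.in_edges[y]:
            if t > tau:
                break
            best = (t, p_after)
        if best is None:
            return 0.0
        t, p_after = best
        return p_after * math.exp(-self.c * (tau - t))

    def update(self, u, v, t):
        """Edge (u, v, t) arrives: one new walk uv, all walks ending at u extended by uv,
        plus the decayed weight of walks that already ended at v (assumed form of Eq. 7)."""
        p_v = self.beta * (1.0 + self.p_at(u, t)) + self.p_at(v, t)
        self.in_edges[v].append((t, u, p_v))

    def sample_start(self, u, tau):
        """Sample the start node of a temporal walk ending at u, observed at time tau."""
        y = u
        while True:
            if random.random() < 1.0 / (1.0 + self.p_at(y, tau)):     # stopping rule (Eq. 10)
                return y
            weights, steps, prev_t, prev_p = [], [], None, 0.0
            for t_xy, x, p_after in self.in_edges[y]:
                if t_xy >= tau:
                    break
                decayed_prev = (prev_p * math.exp(-self.c * (t_xy - prev_t))
                                if prev_t is not None else 0.0)
                # Weight of walks using exactly this edge instance (Eq. 9), decayed to tau.
                weights.append((p_after - decayed_prev) * math.exp(-self.c * (tau - t_xy)))
                steps.append((t_xy, x))
                prev_t, prev_p = t_xy, p_after
            if not steps:
                return y
            t_xy, x = random.choices(steps, weights=weights, k=1)[0]
            y, tau = x, t_xy
```

In the full algorithm, each arriving edge triggers one update call followed by k sample_start calls, and the embedding of v is then optimized towards the embeddings of the sampled start nodes.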
For the loss function, we either use MSE or cross-entropy as in Eqs. (1) and (2). In the case of MSE, for each w we apply online negative sampling (Pálovics et al. 2014) by selecting pairs vw proportionally to the popularity of w in the edge stream up to the current timestamp. We refer to (Kaji and Kobayashi 2017) for online incremental updates of the cross-entropy based loss.
Fig. 4 Computation of p(v, t) for the arrival times t 1 < t 2 < t 3 of the three edges e 1 , e 2 , and e 3 . The bottom right cell illustrates the update formula (7)
Fig. 5 Whenever a new edge uv appears, a new walk starts from u (red), and each temporal walk (z 1 , z 2 , z 3 ) that ended in u up to time t u continues via uv (blue). We get p(v, t(uv)) by summing up the contribution of the previous two types of walks (red and blue) with the decayed weight of walks that have already reached node v (purple) before time t(uv)
Since we train by sampling k walks per edge, the time complexity is affected by the cost of sampling temporal walks. To reduce storage, we can work over a sliding window of the stream and periodically remove the oldest edges; these edges will already have a very small γ value. Finally, we describe the algorithm to sample temporal walks as implemented in Procedure SAMPLEWALKS of Algorithm 1. Our goal is to sample proportionally to p(y, τ) at a given time τ. We define a random walk backwards from y. We select a backward edge with probability proportional to the weight of walks ending with that edge, which we define as p(xy) = Σ_z p(z, t(xy)), where z ranges over the temporal walks ending with the given instance of the edge xy that appeared at time t(xy). Recall that the edges are taken from a multi-set. The value of p(xy) can be calculated as follows. From the total temporal walk weight ending in y at time t(xy), we have to subtract the total weight of all walks ending in y before t(xy); the difference contains the weight of only those walks that use the edge instance xy of timestamp t(xy): p(xy) = p(y, t(xy)) − p(y, t̄) · exp(−c(t(xy) − t̄)),   (9) where t̄ < t(xy) is the timestamp of the last edge in the stream entering y before t(xy). The exponential term corresponds to the time decay of the walk weight since time t̄. We also define the termination of the walk, which is based on the contribution of the single node y as a zero-edge walk relative to all other walks that end at y. At any time of observation τ, the weight of the zero-edge walk is 1, and the total weight of the remaining walks is p(y, t) for the last recorded time t ≤ τ, decayed proportionally to the elapsed time τ − t. Hence, with the probability below, we take no further steps but stop the walk: 1/(1 + p(y, τ)).   (10)
Algorithm 1 StreamWalk
procedure UPDATEWALKS(u, v)   ▷ Update the weight of all walks ending at v
   t_u, t_v ← last timestamps such that p(u, t_u) and p(v, t_v) are known
   p(v, t(uv)) ← β · (1 + p(u, t_u) · exp(−c(t(uv) − t_u))) + p(v, t_v) · exp(−c(t(uv) − t_v))
end procedure
procedure SAMPLEWALKS(y, τ)   ▷ Recursively sample a temporal walk ending at y
   t ← most recent timestamp with t ≤ τ such that p(y, t) is known
   p(y, τ) ← p(y, t) · exp(−c(τ − t))
   with probability 1/(1 + p(y, τ)) do return y
   else
      for all xy multi-edges with t(xy) < τ do
         select x with probability p(xy) · exp(−c(τ − t(xy)))/p(y, τ)
      end for
      return SAMPLEWALKS(x, t(xy))
end procedure
procedure STREAMWALK(u, v)   ▷ Update the embedding of v
   call UPDATEWALKS(u, v)
   repeat k times
      w ← SAMPLEWALKS(u, now)
      optimize the representations q_w and q_v by Eqs. (1) or (2)
end procedure
The steps of Procedure SAMPLEWALKS are summarized as follows. 1 We start the random walk from y ← u and set τ = now. 2 With probability as in Eq. (10), we stop the walk and return the current node y.
3 Optionally, we can also terminate the walk if its length reaches a predefined limit. 4 Else, we select an edge xy with t(xy) < τ with probability proportional to the time-decayed total weight of walks ending with xy, which is p(xy) · exp(−c(τ − t(xy))) by definition. 5 We repeat from step 2 by setting y ← x and τ ← t(xy). As a final implementation detail, we can sample by selecting a random value between zero and p(y, τ) and binary searching in the multi-set of xy edges ordered by t(xy). For a given edge xy, we compute p(xy) by Eq. (9) and continue the binary search based on the time-decayed value p(xy) · exp(−c(τ − t(xy))). Lastly, it can happen that the sampling intends to select a very old edge that was already deleted from the sliding window. This happens when the binary search does not terminate at the oldest t still kept in the records. In this case, we can repeat the sampling with a new random value. Online learning of second order node similarity Our next online algorithm optimizes the embedding to match the neighborhood similarity of the nodes, which we call second order proximity following (Tang et al. 2015). Our goal is to optimize for (1) online, by considering sim(u, x) as a time-aware Jaccard similarity of the neighborhoods of u and x, as illustrated in Fig. 6. We consider the neighbors y of u as a multi-set N(u, t) in which we use the decayed weight of edge uy as the weight of y: w(y) = exp(−c(t − t(uy))),   (11) where t(uy) is the time the corresponding instance of edge uy appeared in the stream. Whenever we add a new edge to u, we discard elements y ∈ N(u, t) with probability 1 − w(y). This way we emphasize the importance of new edges and also limit the size of N(u, t) by discarding old edges with low weight that have little effect on the similarity values. In order to design a streaming algorithm to compute second order similarity, we face the same problems as in the StreamWalk algorithm: we want to focus on the increase of similarity when we add a new edge uv, and we want to avoid the costly full computation of the similarities of u with all neighbors x of v. Note that the similarity of x and u depends on their neighborhoods, which means that all nodes at distance two from v would have to be enumerated for the full computation. In the next subsection, we describe a randomized approximation method for neighborhood similarities based on (Fogaras and Rácz 2005), which will be used in our final algorithm. Approximation by fingerprinting Our algorithm relies on MinHash fingerprinting (Broder et al. 2000) to approximate the Jaccard similarity. The notations are summarized in Table 2. Let there be k independent random permutations π i over the nodes, for i = 1 . . . k. We define the k fingerprints of a set A as h_i(A) = argmin{π_i(y) : y ∈ A},   (12) and we call h_i(N(u, t)) the i-th fingerprint of u at time t; for short, we write h_i(u) = h_i(N(u, t)). We maintain the k fingerprints defined in (12) for the neighborhood of each node, where the weights of the elements are defined by (11). We approximate the time-aware Jaccard similarity of any node pair with the fraction of common fingerprint values: sim(u, x, t) ≈ |{i : h_i(u) = h_i(x)}| / k.   (14) We illustrate the fingerprinting idea in Fig. 7 for k = 2. The two fingerprints of u, h 1 (u) and h 2 (u), are defined based on two permutations π 1 and π 2 of the entire vertex set. The permutations are fixed, but the fingerprints change in time as new edges arrive and past edges become too old and get removed from N(u, t). Next, we show how the similarity of u and a neighbor x of v can be approximated in the example of Fig. 7. Assume that h 1 (x) = v and h 2 (x) = v 1 .
By using formula (14), before edge uv arrives, the similarity approximation is sim(u, x, t 3 ) ≈ (0 + 1)/2, as h 2 (x) = h 2 (u) = v 1 at time t 3 . When edge uv arrives, the similarity will on one hand increase, since h 1 (u) gets assigned the value v. On the other hand, the similarity can decrease as edges become too old. For example, if we drop edge uv 1 , the equation h 2 (x) = h 2 (u) = v 1 will no longer hold. However, since we want to avoid the cost of updating h i (x) for all i and all neighbors x of v, we heuristically only consider the increase of similarity, which can be caused by adding v as a new fingerprint of u.
Fig. 7 Illustration of how the fingerprints of node u change when adding the new edge uv. Neighbors of u are ordered in time as t 1 < t 2 < t 3 < t. Two fixed random permutations π 1 and π 2 define the fingerprints h 1 (u) and h 2 (u). In π 1 (red), v has the minimum value, hence the previous h 1 (u) will be reassigned to v. In π 2 (purple), the minimum is the oldest node v 1 , which becomes too old and gets removed from N(u, t). The correct value for h 2 (u) would be v 2 ; instead we heuristically set h 2 (u) = v after the removal of v 1
Algorithm 2 Online learning of second order similarity: procedure UPDATEFINGERPRINTS(u, v) updates the fingerprints of u for the new edge uv; procedure GETSIMILARITYDELTA(u, v, x) counts the matching fingerprints with value v, and the representations q u and q x are optimized the corresponding number of times by using Eqs. (1) or (2); the same steps are then repeated with v and u swapped and the edge directions reversed
As a final heuristic, in our implementation we always replace the fingerprints corresponding to pruned neighbors by v, since obtaining the π i values of the entire neighborhood is computationally costly. In the example of Fig. 7, we drop edge uv 1 as t 1 ≪ t. The correct new value of h 2 (u) would be the next oldest vertex v 2 , however, this can only be calculated by enumerating all neighbors of u. Instead, in our implementation we heuristically assign h 2 (u) ← v. Algorithm for online learning second order similarity Our method is described in Algorithm 2 by using the notations in Table 2. Our goal is to approximate the change of similarity between u and the in-neighbors x of v, and to modify the embedding vectors whenever a certain x gets more similar to u after adding the new edge uv. Note that x becomes more similar if the edge xv also appeared recently; in terms of fingerprints, this means that for some fingerprint index i, both x and u have v as their fingerprint node. We perform the steps below to update the fingerprints of u and check for v as a fingerprint in the in-neighbors x of v: 1 Procedure UPDATEFINGERPRINTS. Fingerprint h i (u) can take the new value v for the new edge uv if it is too old or if π i (v) becomes the new MinHash value for permutation i. In the former case, we can either heuristically replace h i (u) with the new neighbor v or compute the true MinHash value argmin{π i (y) : y ∈ N(u)}. 2 Finally, for each in-neighbor x ∈ N(v), we compute in Procedure GETSIMILARITYDELTA the number of fingerprints that match those of u and have value v, and we optimize the representations q u and q x that many times by using Eqs. (1) or (2). Symmetrically, we also check for the similarity increase of v with the out-neighbors of u by performing the same steps, replacing u and v on the reverse direction graph. Similarity search experiments In this section, we describe our main evaluation, in which we assess how well the closeness of two nodes in the embedding reflects their similarity against an external ground truth.
Towards this end, we first describe a network enriched with time-dependent external similarity ground truth information. Then, at a given time instance, we compute the list of nodes closest to selected ones in the embedding and compare these lists against the similarity ground truth. We analyze node embedding methods for similarity search over the Twitter tennis tournament collections of (Béres et al. 2018). For the quantitative analysis, we use the annotation of the nodes for the accounts of the tennis players that participate in a game on a given day. In this sense, we expect that the players of the same day are more similar to each other than to other players and non-player accounts, as we will describe in the "Evaluation metrics" section. We compare the performance of StreamWalk and online second order similarity with online and static baseline methods, which we will describe in the "Baseline models" section. Tennis tournament Twitter collection data In (Béres et al. 2018), we compiled two separate tweet collections: RG17 for Roland-Garros, the French Open Tennis Tournament, and UO17 for US Open, the United States Open Tennis Championships, which we use in our first experiment. We use the mention graphs extracted from the last 15 and 14 days of RG17 and UO17, respectively. Based on the approximate time of the games, we consider a Twitter account active on the given day if it belongs to a tennis player who participated in a completed, canceled, or resumed game. Evaluation metrics We evaluate similarity search by a supervised experiment in which, for each active account, we consider the other active accounts on the given day as similar. For each embedding algorithm, we generate 128-dimensional node representations every six hours (6:00, 12:00, 18:00, 24:00). For the online methods, we perform continuous updates over the edge stream. For the static methods, we build the corresponding graph snapshots. We use NDCG (Al-Maskari et al. 2007) to evaluate how similar the other active accounts are to a selected one. NDCG is a measure for ranked lists that assigns a higher score if active accounts appear with higher rank in the similarity list. In our experiments, we compute the average of NDCG@100 over the active accounts used as query nodes to measure the performance of a single model in any given snapshot. Baseline models We compare StreamWalk and online second order similarity to online (or time-aware) and static (or batch) embedding methods. Online models are updated after the arrival of each edge. By contrast, static representations are only updated once every six hours when the graph snapshot ends. At hour t, a static model is computed on the graph constructed from the edges arriving in the time window [t − T, t] of the edge stream. For each batch baseline, we experimentally select the best value of T. We consider four static centrality measures as baselines, among them:
• DeepWalk (Perozzi et al. 2014)
• Decayed indegree, defined for node u at time t as deg(u, t) = Σ_{zu ∈ E(t)} exp(−c(t − t_zu)), where E(t) is the multi-set of edges that occurred up to time t, with edge activation time t_zu.
We use the 128-dimensional representations of StreamWalk, second order similarity, DeepWalk, Node2Vec, and LINE to measure node similarity over time. For the two degree methods, we rank by degree without reference to the query node in the NDCG@100 formula. Results In our experiments, we measure how the similarity of node representations evolves over time by a supervised evaluation in which the active nodes should be similar to each other.
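As a concrete illustration of the NDCG@100 evaluation described above, a short Python sketch with binary relevance (an account is relevant if it belongs to a player active on the given day); function and variable names are illustrative.

```python
import math

def ndcg_at_k(ranked_nodes, relevant, k=100):
    """NDCG@k with binary relevance: `relevant` holds the accounts of players
    active on the given day; `ranked_nodes` is the similarity list for one
    query account (the query itself excluded)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, node in enumerate(ranked_nodes[:k]) if node in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0

# Model score for one snapshot: the mean of ndcg_at_k over all active query accounts.
```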
We show two different ways to describe the performance of a single model: 1 For each day, we present the mean NDCG@100 of the snapshots evaluated at 6:00, 12:00, 18:00, and 24:00. 2 As a single global value (NDCG@100), we take the average of NDCG@100(u) for each daily player u in every snapshot. For a given parametrization of every embedding-based method, we always show the average performance of ten independent instances. During our experiments, we found that the following parameters had a great impact on the quality of online node embeddings, see Table 3: W1 and W2: In Word2Vec, we have the option to optimize node representations for the input (W 1) or the output (W 2) matrices (Press and Wolf 2016). It is application dependent whether W 1 or W 2 yields the better representation. For SW, we achieved the best results by W 2. Initialization: We experimented with Xavier (Glorot and Bengio 2010) and uniform random initialization of W 1 and W 2. Mirror: In our algorithms, the input to Word2Vec consists of node pairs. Given a training instance (x, y), we mirror if we feed both (x, y) and (y, x), not just (x, y). Decay: We heuristically map the representations of nodes with no recent activity to the null vector. Negative sampling rate and past positive samples used: Key parameters of Word2Vec analyzed separately in Figs. 8-9. Past positive samples are edges that appeared longer time ago; using such edges for negative training helps forgetting the past. We also combine the output of StreamWalk and second order similarity by using the weighted average of the corresponding inner products as similarity. This method denoted as SW+SO outperforms SW and SO, as seen in Fig. 12. The optimal weight of SO in the combination is 0.3 for both RG17 and UO17. In Table 4 we present the best global mean performance for each model. Fig. 13 shows the daily mean performance of the best models. For illustration, in Table 5 we present the 20 accounts most similar to that of Rafael Nadal for different node embeddings on 2017-May-31 18:00. Since Rafael Nadal played on this day, the active accounts (yellow) belong to tennis players who participated in a game on this day. The combined model SW+SO has the highest number of active player accounts. Furthermore, accounts present in both SW and SO columns (e.g. BMATTEK, DjokerNole, GrigorDimitrov, etc.) typically achieve higher position by SW+SO than SW. It is interesting to see that SW and SO find different active accounts, which explains why the combination SW+SO achieves superior performance. While static LINE and Node2Vec have less relevant hits than our online methods, most of the irrelevant accounts still belong to tennis players (e.g. andy_murray, stanwawrinka, etc.). The main difference is that the daily active players are better found by the online than the static methods. Online link prediction Next, we address the online, time-aware variant of the link prediction problem, in which we give a prediction for the next edge as we process the stream edge by edge. Our goal is to predict a new link at a given time based on all events that appeared before, including the arrival of the most recent edges. Compared to the predictions given at time t, at time t + 1, we can potentially reconfigure our model based on the edges appeared at time t and give a very different prediction for the next set of links. If we compare with a traditional model based on graph snapshots, the traditional model will output the exact same ranked list of links between two snapshots. 
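A minimal sketch of this online predict-then-update loop (the evaluation protocol is made precise in the next paragraphs); the model interface (predict/update) is hypothetical, and the 1/log2(rank + 1) score is one standard single-point DCG form assumed here for illustration, applied to first appearances of edges only.

```python
import math

def online_link_prediction(edge_stream, model, k=20):
    """Predict-then-update loop: before each edge (u, v, t) is revealed, the
    current model ranks candidate targets for u; afterwards the model is
    trained on the observed edge. `model.predict(u, t, k)` and
    `model.update(u, v, t)` form a hypothetical interface."""
    scores, seen = [], set()
    for u, v, t in edge_stream:
        if (u, v) not in seen:                    # evaluate first appearances only
            ranking = model.predict(u, t, k)      # top-k guesses for u's next target
            rank = ranking.index(v) + 1 if v in ranking else None
            scores.append(0.0 if rank is None else 1.0 / math.log2(rank + 1))
            seen.add((u, v))
        model.update(u, v, t)                     # reveal the edge, train online
    return sum(scores) / max(len(scores), 1)
```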
While our modeling technique provides much stronger time awareness, it poses a challenge for evaluation, since we cannot compare just a single prediction against a larger set of edges, but a large set of potentially very different predictions ordered in time. To evaluate online link prediction methods, we use the prequential evaluation framework (Dawid 1984). As explained in Fig. 14, before a new edge (u, v, t) arrives in the graph stream, we first give an attempt to predict this edge, then reveal the edge and update the model using the new edge. In this way, we can incorporate information on the most recent edges in our model and evaluate potentially completely different predictions, coming from modified models, at every new time tick. Fig. 11 The effect of sampled walks (top) and hash functions (bottom) on the global mean performance (NDCG@100) for SW and SO respectively. These parameters control the number of sampled node pairs we feed to online Word2Vec at every edge arrival For evaluation, we use a single-point variant of NDCG (Pálovics et al. 2014) in which there is always exactly one relevant item, the actual new edge, and the higher the rank of the relevant edge, the higher the score. The overall evaluation of the model is the average of the single-point DCG@20 values over all events in the graph stream. We can also assess performance trends by computing daily or weekly averages of the DCG. We note that we ignore reappearing edges and only evaluate the prediction for those edges that appear the first time in the stream. Data sets We experiment on three standard network data sets from KONECT (Kunegis 2013). We selected networks from the collection with timestamped edges evenly distributed in time. For example, we discarded networks that were crawled for several weeks, but a significant part of their edges appeared within one day due to anomalies in the crawl. We discarded self-loops and similar edges with the same timestamps from each data set and processed the links in temporal order. Enron: The Enron email network consists of emails sent between employees of Enron. Nodes in the network are individual employees and edges are individual emails. The data has 308,708 edge events in 365 days between 27,972 nodes with 90,177 unique edges. Fig. 14 The online link prediction problem. For each edge in the stream, first we query the model to predict the next interaction. Then we train the model on the observed edge. Whenever node u interacts with another node, the model may generate different, updated top-k predictions Linux kernel: The communication network of the Linux kernel mailing list. The nodes are people, and each directed edge represents a reply from a user to another. The data has 487,355 edge events in 1380 days between 16,449 nodes with 88,855 unique edges. Facebook: The nodes of this network are Facebook users, and each edge represents one post, linking the user writing a post to the user whose wall the post is written on. The data has 16,868 edge events in 658 days between 16,868 nodes with 61,582 unique edges. Baseline methods We compare our methods to three batch baselines: Node2Vec, DeepWalk (DW), and Graph Factorization (GF). For these three models, we retrained the model over all past data periodically. We set the periodicity for one day on the Enron data set. We trained models with weekly batch updates on the Linux and Facebook data sets. Furthermore, we give two online baselines. 
Besides simply updating the degree of each node and using it as a predictor, we experimented with the online version of Graph Factorization. This corresponds to the version of StreamWalk where we do not take any samples but optimize only for the similarity of the node pairs along the edges in the stream. Results We analyze the results for online link prediction for the best parameter setting of each algorithm. The results are summarized in Table 6 and in Fig. 15. Our key observation is that the online learning methods, online graph factorization (GF), StreamWalk (SW) and SecondOrder (SO) show very strong performance compared to the batch methods. Note that batch methods such as GF read the list of edges several times to perform stochastic gradient descent, while online methods must process the edges in the order of arrival, without the possibility to access past edges again. The surprisingly good performance of the online methods is due to the fact that they put emphasis on the more recent edges. For gradient descent with negative sampling, the embedding is always optimized towards the freshly arrived edges, while negative sampling has the effect of forgetting the past. The advantage of model freshness is strikingly strong for the link prediction experiments where the low time granularity prequential evaluation method is used. Online methods also outperform batch embeddings for the Twitter tennis data sets. Note that ground truth data is only available daily in our collections. Higher performance difference can be obtained with labels of higher time granularity, as seen for the link prediction. When comparing the methods that update the embeddings from an edge stream, online graph factorization (GF), StreamWalk (SW), and SecondOrder (SO), we observe that they perform very similarly and their relative order depends on the data sets. Also note that GF is a special case of SW with walks of a length of one. Conclusion We introduced two online machine learning algorithms to extract temporal node representations from graph streams. The StreamWalk algorithm optimizes for the similarity of node pairs extracted along temporal walks from the data stream, whereas online second order similarity efficiently learns neighborhood similarity over graph streams by MinHash fingerprinting. We measured the quality of these models in two tasks. In the RG17 and UO17 Twitter collections, we analyzed the similarity of node representations over time for both online and static node embedding algorithms. Our methods SW and SO significantly outperformed static Node2Vec, LINE, and simple degree related baselines. The combination of SW and SO achieved superior performance in the supervised evaluation task that we implemented using daily changing node relevance labels. In a second experiment, we addressed the temporal link prediction task in three commonly used network data sets. We observed that online learning methods are superior to the snapshot-based batch algorithms. For some graphs, our walk-based embedding methods performed better than online matrix factorization. Abbreviations DCG: Discounted cumulative gain; DW: DeepWalk (Perozzi et al. 2014); GF: Graph factorization (Ahmed et al. 
2013); NDCG: Normalized discounted cumulative gain; RG17: Our twitter data set about Roland-Garros 2017, the French Open Tennis Tournament; SO: online second order similarity, our model based on temporal node neighborhoods; SW: StreamWalk, our proposed model based on temporal walks; SW+SO: The combination of StreamWalk and online second order similarity; TREC: Text retrieval conference; UO17: Our twitter data set about US Open 2017, the United States Open Tennis Tournament
10,925.6
2019-08-23T00:00:00.000
[ "Computer Science", "Mathematics" ]
Fufang Zhenzhu Tiaozhi Capsule Prevents Intestinal Inflammation and Barrier Disruption in Mice With Non-Alcoholic Steatohepatitis Nonalcoholic steatohepatitis (NASH) has become a major cause of liver transplantation and liver-associated death. Targeting the gut–liver axis is a potential therapy for NASH. The Fufang Zhenzhu Tiaozhi (FTZ) capsule, a traditional Chinese medicine commonly used in clinical practice, has recently emerged as a promising drug candidate for metabolic diseases such as NASH. The present study aimed to investigate whether FTZ exerts an anti-NASH effect by targeting the gut–liver axis. Mice were fed with a high-fat diet (HFD) for 20 weeks to induce NASH. HFD-fed mice were daily intragastrically administrated with FTZ at 10 weeks after tbe initiation of HFD feeding. The mRNA levels of genes associated with the intestinal tight junction, lipid metabolism, and inflammation were determined by the q-PCR assay. Hepatic pathology was evaluated by H&E staining. The gut microbiota was analyzed by 16S rRNA gene sequencing. FTZ attenuated HFD-induced obesity, insulin resistance, and hepatic steatosis in mice. FTZ treatment decreased the elevated levels of serum aminotransferases and liver triglyceride in NASH mice. Furthermore, FTZ treatment reduced hepatic inflammatory cell infiltration and fibrosis in mice. In addition, FTZ attenuated the intestinal inflammatory response and improved intestinal barrier function. Mechanistically, FTZ-treated mice showed a different gut microbiota composition compared with that in HFD-fed mice. Finally, we identified eight differential metabolites that may contribute to the improvement of NASH with FTZ treatment. In summary, FTZ ameliorates NASH by inhibiting gut inflammation, improving intestinal barrier function, and modulating intestinal microbiota composition. INTRODUCTION Nonalcoholic fatty liver disease (NAFLD) is a disease spectrum that ranges from nonalcoholic fatty liver (NAFL) to nonalcoholic steatohepatitis (NASH) characterized by progressive fibrosis (1). Hepatic steatosis and chronic low-grade inflammation caused by prolonged exposure of liver lobules to high-flux free fatty acid (FFA) lead to metabolic liver injury in patients with NASH. NASH can progress over time to irreversible cirrhosis or hepatocellular carcinoma (HCC), and there is no better treatment than liver transplantation for them (2)(3)(4). With constant lifestyle changing, the burden of NAFLD in China is accordingly increasing. From 1999 to 2018, the prevalence of NAFLD in China has increased by 8%-9%, with the total prevalence now reaching 29.1%, which makes it an essential social health issue (5). East Asian populations may have a higher risk of developing NASH due to a more frequent mutation of lipid metabolism genes (6). The improvement of metabolic inflammation and progressive liver fibrosis caused by disorders of lipid metabolism is a major clinical therapy for NASH (7,8). Although the pharmacology industry has invested heavily in this in recent years, there are still no approved therapeutic drugs for NASH on the market (9). At present, the only effective treatment is changing a poor lifestyle and diet or engaging in physical activity or bariatric surgery (10,11). Therefore, there is a severe challenge for the development of drugs to treat NAFLD, particularly targeting the inflammation and progressive fibrosis of NASH. 
There is growing evidence that demonstrated intestinal mucosal inflammation, intestinal flora, and barrier function are implicated in the pathogenesis of NASH (12,13). Mice with defects in intestinal epithelial permeability developed a more severe steatohepatitis. Additionally, the clinical studies revealed that patients with NASH exerted increased gut epithelial permeability, a decreased expression of tight junction proteins, and higher levels of inflammation (14)(15)(16), contributing to the progression of NAFLD. Furthermore, NASH is closely associated with changes in the composition of the gut microbiota. Many species in human gut microbiota are thought to be associated with the progression of NAFLD such as Ruminococcus, Roseburia, and Bacteroidia (17). In summary, the complex interaction of intestinal flora, intestinal permeability, and inflammation-mediated "liver-gut axis" regulate the progression of NAFLD and NASH (14,18,19). Fufang Zhenzhu Tiaozhi (FTZ), a traditional Chinese medicine formula for treating metabolic syndromes, is composed of eight traditional Chinese medicinal herbs, including Rhizoma coptidis, Fructus Ligustri Lucidi, Herba cirsii japonici, Radix Salvia miltiorrhiza, Radix Notoginseng, Cortex Eucommiae, Fructus Citri Sarcodactylis, and Radix Atractylodes macrocephala (20). Our previous studies have shown that FTZ effectively alleviates hyperglycemia and reduces hyperlipidemia by the regulation of cytochrome P450 family 7 subfamily A member 1 (CYP7A1), cytochrome P450 family 7 subfamily A member 1 (HMG-CoA), and insulin receptor substrate 1 (IRS1-GLUT2) (21,22). Previous studies on FTZ for NASH tended to be less systematic and focused only on the liver, neglecting the important role played by extrahepatic tissues such as the gut (23,24). However, whether FTZ prevents NASH through the regulation of the liver-gut axis is still unclear. In the present study, we demonstrated that FTZ attenuated hepatic triglyceride (TG) accumulation, hepatic inflammation, and fibrosis in mice with NASH. In addition, FTZ showed a more potent anti-NASH property than atorvastatin involved in the attenuation of hepatic inflammation and fibrosis. Furthermore, FTZ ameliorates the intestinal mucosal barrier and intestinal microbiota disorder in mice with NASH. Preparation and Quality Control of FTZ FTZ was obtained from the First Affiliated Hospital of Guangdong Pharmaceutical University (Guangzhou, China). The preparation of FTZ was consistent with the protocol described previously (22). The quality analysis of the FTZ extract was performed by UPLC-MS/MS as described previously (25). Glucose Tolerance Tests Glucose tolerance test (GTT) and insulin tolerance test (ITT) were performed in mice after an overnight fast. Blood glucose concentration was measured after 0, 15, 30, 45, 60, 90 and 120 min with a glucometer. Serum Assays Serum TG, TC, high-density lipoprotein (HDL), low-density lipoprotein (LDL), ALT, and AST levels were measured by a commercial kit according to the manufacturers' instruction (Jiancheng Bioengineering Institute, Nanjing, China). Quantitative Analysis of Hepatic Triglyceride Hepatic TG was extracted from liver tissues with a mixture of chloroform and methanol. The contents of hepatic TG were measured by a commercial kit (Jiancheng Bioengineering Institute, Nanjing, China) and normalized by the liver wet weight. 
Analysis of Body Composition Lean and fat mass were determined by the EchoMRI body composition analyzer (EchoMRI ™ , Shanghai, China) in mice according to the manufacturers' instruction. Histopathology Liver and ileum tissues were routinely fixed in 4% paraformaldehyde solution (4°C) overnight, embedded in paraffin, and then sectioned (4 mm) for H&E staining. Lipid droplets were detected by Oil Red O staining in frozen liver sections. Frozen livers were embedded in optical coherence tomography (OCT) and divided into 12 mm sections. All sections were stained by 0.3% Oil Red O for 15 min and hematoxylin for 2 min. Picrosirius red (PSR, 26357-02; Hede Biotechnology Co., Ltd., Beijing, China) staining was carried out to visualize the degree of liver fibrosis. The positive areas were quantified using the Image J. Histological images of section tissues were captured with a light microscope (Olympus, Tokyo, Japan). Immunohistochemistry Liver specimens fixed in 4% paraformaldehyde solution were embedded in paraffin blocks. Liver sections (4 mm thick) were processed using a standard immunostaining protocol. For immunohistochemical analyses, liver sections were separated, rehydrated, and sequentially incubated with primary antibodies and secondary antibodies. The area of positive staining was measured in high-power fields on each slide and quantified using Image J. Western Blot Analysis Tissues were homogenized in radioimmunoprecipitation (RIPA) lysis buffer. The supernatant was collected after centrifugation at 12,000 rpm for 30 min at 4°C. Total proteins (20-40 µg) were electrophoresed on sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gels and transferred to polyvinylidene fluoride membranes (Millipore, MA, Boston, USA; Bio-Rad, CA, Berkeley, USA; New Life Science Products, NY, New York City, USA). Afterwards, the membranes were blocked with 10% nonfat dry milk, followed by incubation with primary and secondary antibodies. Membranes were detected by Clarity Western electrochemiluminescence (ECL) Substrate (Bio-Rad, USA) in conjunction with a chemiluminescence system (New Life Science Products, USA). Quantitative Real-Time RT-PCR The total RNA of liver or ileum tissues was extracted using a TRIzol reagent (Invitrogen, MA, Waltham, USA) and then subjected to reverse transcription and quantitative real-time PCR (qPCR). From the extracted mRNA, cDNA was synthesized using a PrimeScript ™ RT reagent kit with gDNA Eraser (Takara, Beijing, China). All the sequence information was shown in Supplementary Table 1. Quantitative real-time PCR measurements were performed using the SYBR Green Supermix (Bio-Rad, CA, Berkeley, USA). The primers used were described in Supplementary Table 1. The relative amount of each mRNA was calculated by using the comparative threshold cycle (Ct) method. Ct values were normalized to gapdh. 16S rRNA Gene Sequencing In this study, 16S rRNA gene sequencing was used to extract microbial DNA from mice feces, and V3 and V4 were selected for amplification to detect the variation and relative abundance of the target region and to obtain information on the bacterial population for analysis. The PCR reaction system was configured by taking 30 ng of qualified genomic DNA samples and the corresponding fusion primers, setting the corresponding PCR reaction parameters for PCR amplification, using Agencourt AMPure XP magnetic beads to purify the PCR amplification products and dissolve them in an elution buffer to complete the library construction. 
The libraries were tested for fragment range and concentration using an Agilent 2100 Bioanalyzer. Libraries that pass the assay are sequenced on the appropriate platform (HiSeq/iSeq) depending on the insert size. The data are filtered to remove the low-quality reads, and the remaining high-quality clean data can be used for later analysis; the reads are stitched into tags by the overlap relationship between the reads; the tags are clustered into operational taxonomic units (OTUs) at a given similarity, and then, the OTUs are compared with the database, and they are annotated with species; based on the OTUs and species annotation results, the complexity of the sample species and the species differences between groups were analyzed. UPLC-QTOF-MS-Based Metabolomics To clarify the changes produced by the metabolites of the intestinal flora of NASH mice after treatment with FTZ, we selected three representative samples from the HFD and FTZ (600 mg/kg) groups for a metabolomic study among the mice feces samples in the 16S rRNA gene sequencing by combining the results of previous experiments. The sample and extract solution (acetonitrile: methanol:water = 2:2:1) (1,000 ml) containing the internal standard (L-2-chlorophenylalanine, 2 mg/ml) were mixed, then homogenized, sonicated, and centrifuged. The supernatant was dried in a vacuum concentrator. The samples were reconstituted in 200 ml of 50% acetonitrile and centrifuged for 10 min, and the supernatant was ready for UPLC-MS/MS analysis. The UPLC and MS/MS instrument settings are shown in Supplementary Tables 2, 3. The UPLC separation was carried out using a 1290 Infinity series UPLC System (Agilent Technologies, CA, Palo Alto, USA). Statistical Analysis All data were presented as mean ± standard deviation (mean ± SEM) and statistically analyzed using Graphpad Prism 8.0 and SPSS 19.0 software. The differences between the two groups were compared using a t-test. A Mann-Whitney U test or Wilcox. test was used for two groups with different variances. The differences between multiple groups were compared using one-way ANOVA. Pathway-associated metabolite sets Small Molecule Pathway Database (SMPDB) were calculated by a hypergeometric test to obtain the P-value. p < 0.05 indicates a statistically significant difference. FTZ Ameliorates HFD-Induced Obesity and Insulin Resistance in Mice The occurrence of NASH is closely related to systemic metabolic disorders such as obesity and insulin resistance (26). To determine the therapeutic effects of FTZ on NASH, we first investigated whether FTZ administration affected the systemic metabolic state. Mice with NASH were induced by HFD for 20 weeks ( Figure 1A). Body weight was significantly increased in HFD-fed mice compared with NCD, whereas FTZ and ATV-treated mice had less weight gain than HFD-fed mice administrated with vehicle ( Figure 1B). Similarly, FTZ and ATV treatment reduced the total fat mass in HFD-fed mice ( Figure 1C). These data suggested that both FTZ and ATV treatment alleviated obesity in HFD-fed mice. Furthermore, FTZ treatment decreased the fasting blood glucose in HFD-fed mice ( Figure 1D). GTT and ITT assays showed that FTZ and ATV markedly alleviated impaired glucose intolerance and insulin resistance induced by HFD ( Figures 1E, F). Collectively, these data suggest that FTZ attenuates obesity and insulin resistance in mice with NASH. 
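For the group comparisons reported throughout these results, a minimal sketch of the testing scheme from the Statistical Analysis subsection, assuming SciPy; the Shapiro-Wilk and Levene checks shown are illustrative stand-ins for the normality and variance checks described above.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """t-test when both groups pass a normality check with similar variance,
    otherwise Mann-Whitney U (two-sided)."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        return stats.ttest_ind(a, b).pvalue
    return stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

def compare_many_groups(*groups):
    """One-way ANOVA for comparisons across more than two groups."""
    return stats.f_oneway(*groups).pvalue
```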
FTZ Attenuates Hepatic Steatosis and Lipotoxic Injury Induced by HFD in Mice The size and weight of the livers from HFD-fed mice were significantly increased compared with those in the NCD group, while they were significantly reversed by FTZ treatment (Figures 2A, D). Liver morphology analysis showed that notable hepatic lipid deposition was observed in HFD-fed mice, whereas it was significantly attenuated in the FTZ treatment group. Additionally, NAS scores also indicated that FTZ treatment ameliorated liver lipotoxicity and injury in the mice induced by HFD. Oil red O staining also showed that the density of hepatic lipid droplets as well as the size of the lipid droplets was obviously decreased by FTZ treatment compared to those of HFD-fed mice ( Figures 2B, C). Moreover, the upregulation of hepatic TG, a key indicator of causing hepatic lipotoxicity, was significantly rolled back by FTZ ( Figure 2E). In addition, FTZ attenuated the upregulation of serum ALT and AST in mice caused by HFD feeding ( Figure 2F). Furthermore, FTZ alleviated the abnormal serum levels of TG, TC, HDL, and LDL caused by HFD ( Figures S1A-D). These data suggest that FTZ attenuates HFD-induced hepatic steatosis and lipotoxic injury in mice. FTZ Normalizes Hepatic Lipid Metabolism Genes in Mice Fed With HFD To verify the role of FTZ in hepatic lipid metabolism, we examined the genes involved in lipid biosynthesis, lipolysis, and uptake. Indeed, the mRNA levels of lipogenic genes such as Srebp-1c, Hmgcr, Acca, Fasn, Scd1, and Ppar-g were increased by HFD treatment, whereas they were decreased by FTZ and ATV treatment ( Figure 3A). In addition, FTZ and ATV treatment reduced the mRNA levels of hepatic lipid transport genes such as Fatp4, Fabp1, and Cd36 in HFD-fed mice ( Figure 3B). Conversely, the hepatic mRNA levels of oxidative phosphorylation genes such as Atgl, Hsl, and Ppar-a were decreased by HFD, whereas they were upregulated by FTZ and ATV treatment ( Figure 3C). Immunoblotting analysis confirmed that the protein expression of ATGL and PPAR-g was markedly restored in mice treated by FTZ or ATV, compared with mice fed by HFD ( Figure 3D). These data suggested that FTZ ameliorates the disorder of hepatic lipid metabolism induced by HFD in mice. FTZ Attenuated Liver Metabolic Inflammation and Progressive Fibrosis Induced by HFD in Mice To further investigate the protective effects of FTZ against NASH in vivo, we examined the effects of FTZ on NASH-associated inflammation and fibrosis. The activation of hepatic stellate cells (HSCs) and collagen deposition are important distinctions between NAFL and NASH (27). Consistently, picrosirius red (PSR) staining showed less collagen deposition in the liver sections of FTZ treatment. Furthermore, FTZ attenuated hepatic inflammatory cell infiltration, as demonstrated by CD68 immunohistochemistry staining ( Figure 4A). The q-PCR assay showed that FTZ treatment significantly attenuated the increase in the hepatic mRNA levels of inflammatory genes such as Il-b, Il-6, Tnf-a, Cxcl10, Ccl2, and Ccl5 ( Figure 4B). Additionally, FTZ attenuated the hepatic mRNA levels of profibrotic genes such as Col1a1, Tgf-b1, and Timp1 in NASH mice ( Figure 4C). The protein levels of CD68 and TLR4 were significantly downregulated by FTZ treatment ( Figure 4D). Furthermore, the expression of a-SMA, a marker of activated hepatic stellate cells, was also attenuated by FTZ treatment (28). In parallel, the phosphorylation of Smad2/3 was suppressed in FTZtreated mice ( Figure 4E). 
These data suggested that FTZ attenuated hepatic steatosis, inflammation, and fibrosis in mice under a metabolic stress condition. FTZ Restores Intestinal Mucosal Barrier Disruption and Inflammation Induced by HFD in Mice Increasing evidence demonstrated that increased gut inflammation and compromised intestinal epithelial permeability promote the enhanced translocation of a multitude of gut microbial products involved in the progression of NAFLD (29). Next, the H&E staining of intestinal histopathology showed that intestinal mucosal barrier disruption and inflammation induced by HFD were effectively mitigated by FTZ and ATV treatment ( Figure 5A). Epithelial tight junction molecules such as ZO-1, occludin, claudin 4 and claudin 2, and E-cadherin, are the markers of the epithelium integrity (30). As expected, the mRNA levels of Cldn-2, Cldn-4, Zo-1, E-cadherin, and Ocln were significantly downregulated in HFD-fed mice compared with those in NCD-treated mice ( Figure 5B). In contrast, FTZ and ATV treatment significantly reversed the decrease in these tight junction molecules in HFD-fed mice. Immunoblotting analysis confirmed that the protein expression of ZO-1 and E-cadherin was markedly restored in mice treated by FTZ or ATV, compared with mice fed by HFD ( Figure 5C). Moreover, intestinal inflammatory cell infiltration was markedly increased in NASH mice but remarkably decreased by FTZ and ATV treatment. The q-PCR assay showed that FTZ and ATV treatments significantly attenuated the HFD-induced interstitial mRNA levels of inflammatory genes such as Tlr4, Tlr2, Tab1, Il-1b, Il-6, and Ccl2 in mice ( Figure 5D). With regard to the gut-liver axis, abnormal lipid metabolism in the intestine may lead to excessive lipid flow into the portal circulation, resulting in more hepatic lipid accumulation (13). As expected, FTZ significantly reduced the abundance of ileum lipid transport-related genes induced by HFD, such as Cd36, Fabp1, Fatp4, and Ffar3 ( Figure S2). Collectively, these data suggested that FTZ treatment protected mice against intestinal mucosal barrier disruption and inflammation induced by HFD treatment. FTZ Reverses the Differences in the Composition of the Intestinal Flora of HFD-Induced NASH Mice Strategies to restore the intestinal microbiota to prevent and cure metabolic diseases have been proposed (31). To better understand the complex interactions between FTZ treatment and gut microbiota in NASH mice, the phylogenetically informative 16S rRNA genes (high-variability regions V3-V4) were amplified from the total DNA extracted from mouse fecal samples. The principal component analysis (PCA) clearly separated the samples from the NCD-fed mice, HFD-fed mice, and FTZ-treated mice. ( Figure 6A). Furthermore, the separation of non-metric multidimensional scaling (NMDS) and the principal coordinate analysis (PCoA) of The expression of PPAR-g, ATGL, and GAPDH was analyzed by Western blotting. GAPDH served as a loading control (n=3). Data are represented as means ± SEM. # indicates a significant difference between the NCD group and the HFD group (t-test); * indicates a significant difference between the FTZ (600 mg/kg)/FTZ (1,200 mg/kg)/ATV (10 mg/kg) group and the HFD group (one-way ANOVA). # P < 0.05, ## P < 0.01 versus NCD mice; *P < 0.05, **P < 0.01 versus mice fed by HFD. ns indicates no significance. the weighted UniFrac was consistent with PCA ( Figures S3A-B). The calculated relative abundance of bacteria at the class level (top 15) in the five groups is shown in Figure 6B. 
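A minimal sketch of the class-level relative-abundance summary and PCA ordination described above, assuming a per-sample taxon count table (pandas/scikit-learn); the actual pipeline used OTU clustering, species annotation, and UniFrac-based ordinations, so this is only an illustration of the downstream computation.

```python
import pandas as pd
from sklearn.decomposition import PCA

def class_level_profile(counts: pd.DataFrame, top=15):
    """Relative abundance per sample from a taxon count table
    (rows = fecal samples, columns = taxa at the class level),
    restricted to the `top` most abundant classes on average."""
    rel = counts.div(counts.sum(axis=1), axis=0)
    top_taxa = rel.mean().sort_values(ascending=False).head(top).index
    return rel[top_taxa]

def pca_ordination(rel: pd.DataFrame, n_components=2):
    """PCA ordination of the relative-abundance profiles."""
    coords = PCA(n_components=n_components).fit_transform(rel)
    cols = [f"PC{i + 1}" for i in range(n_components)]
    return pd.DataFrame(coords, index=rel.index, columns=cols)
```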
The intestinal flora composition of NASH mice was changed compared with control mice. Especially, the abnormal elevation of class Bacteroidia, Verrucomicrobiae, and Epsilonproteobacteria in NASH mice was significantly attenuated by FTZ treatment. In contrast, the decrease in Clostridia from NASH mice was significantly increased by FTZ treatment (Figures 6C, D). Consistently, a similar recovery was observed at the family level of the flora ( Figure S3C). In addition, the abnormal reduction of the difference in alpha diversity in NASH mice was significantly attenuated by FTZ treatment (Figures S3D, E). These data suggested that FTZ alleviated the gut microbiota disorder in NASH mice. FTZ Treatment Exhibits Different Intestinal Flora Metabolites From HFD-Induced NASH Mice Dramatic differences in intestinal metabolites were demonstrated between NCD and HFD mice by metabolomics analysis, suggesting that intestinal microbiota readily mediated the pathogenesis of NASH in mice (Supplementary Figures 4A-C). To further analyze the differences in metabolic profiles between the FTZ treatment group and the HFD group, a supervised discriminant model, orthogonal partial least squares discriminant analysis (OPLS-DA), was used to perform the analysis. Based on the OPLS-DA model, score plots and the permutation test illustrated an obvious separation between the HFD and FTZ groups ( Figure 7A). A statistical stacked histogram of the relative abundance of medium values for each metabolite category in each group of samples demonstrates the significant differences in intestinal flora metabolites, including amino acids, fatty acids, bile acids, and SCFAs, between the HFD and FTZ treatment groups ( Figure 7B). In contrast, the Z-score heat map shows the trend in the concentration of each class of metabolites in the samples of NASH mice versus FTZ-treated mice. The results showed that the FTZ group of mice had greater differences in the content of intestinal flora metabolites compared to the HFD group, with higher levels of bile acids, fatty acids, and SCFAs overall, as well as greater variability ( Figure 7C). The above results indicate that the intestinal metabolites of NASH mice appear significantly different after FTZ treatment compared to HFD, suggesting the presence of potential biomarkers. Metabolite Profile Associated With Bile Acid Metabolism Was Mediated by FTZ Treatment in NASH Mice Taking the merges or intersections of differential metabolites obtained unidimensionally and multidimensionally can be used to select potential biomarkers that may have a biological significance based on unidimensional and multidimensional analysis. Based on the results of the OPLS-DA model, the volcano plot was used to screen for reliable metabolic markers. The results showed that a total of 66 metabolites met Variable importance in projection (VIP) > 1 in the HFD compared to the FTZ treatment group, and the top 10 differential metabolites have been color-coded ( Figure 8A). We also used a unidimensional test (We selected the t-test or Mann-Whitney U test based on the normality and chi-square of the data) to obtain the differential metabolites between the two groups mentioned above. The results showed that a total of eight differential metabolites were obtained from this unidimensional analysis ( Figure 8B). 
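A compact sketch of the two-stage screening just described, intersecting the multivariate criterion (OPLS-DA VIP > 1) with the univariate test (p < 0.05); the inputs (per-metabolite VIP scores and p-values) are assumed to be computed beforehand.

```python
def screen_biomarkers(vip, pvals, vip_cut=1.0, p_cut=0.05):
    """Intersect the multivariate screen (OPLS-DA VIP > vip_cut) with the
    univariate screen (t-test or Mann-Whitney U, p < p_cut).
    `vip` and `pvals` map metabolite name -> VIP score / p-value."""
    multivariate = {m for m, v in vip.items() if v > vip_cut}
    univariate = {m for m, p in pvals.items() if p < p_cut}
    return multivariate & univariate
```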
For the combined analysis of the two tests above, there were 8 identical differential metabolites in the OPLS-DA and unidimensional analyses; a further 58 differential metabolites were present only in the OPLS-DA results, and 0 differential metabolites were present only in the unidimensional analysis results ( Figure 8C). The Z-score heat maps for the eight metabolites that met the screening criteria showed significant differences with NASH mice after FTZ treatment: NorDCA, UDCA, DCA, CDCA, mandelic acid, and 3-(3-hydroxyphenyl)-3-hydroxypropanoic acid were elevated, while alpha-aminobutyric acid and dodecanoic acid The comparison of the taxonomic abundance of the indicated groups (n=5). Data are represented as means ± SEM. # indicates a significant difference between the NCD group and the HFD group (t-test); * indicates a significant difference between the FTZ (600 mg/kg)/FTZ (1,200 mg/kg) group and the HFD group (one-way ANOVA). # P < 0.05, ## P < 0.01 versus NCD mice; *P < 0.05, **P < 0.01 versus mice fed by HFD. ns indicates no significance. were decreased ( Figure 8D). We used the differential metabolites obtained that met the screening criteria for pathway enrichment analysis. The pathway enrichment analysis of differential metabolites using selected SMPDB libraries showed multiple pathways including bile acid metabolism, the beta oxidation of very long chain fatty acids, mitochondrial beta oxidation of medium-chain saturated fatty acids, fatty acid biosynthesis, and propanoate metabolism ( Figure 8E). Finally, the absolute concentrations of the above potential biomarkers were measured in the samples to verify and obtain results consistent with the previous section (We selected the t-test or Wilcox. test based on the normality and chi-square of the data) ( Figures 8F-J). DISCUSSION In the present study, we demonstrated that the FTZ treatment markedly ameliorated NASH in mice. In addition to its commonly accepted protective effect on hepatic lipid metabolism, metabolic inflammation, and progressive fibrosis, we found that the mechanisms by which FTZ alleviated NASH are involved in the attenuation of gut inflammation and microbiota disorders, as well as the improvement of intestinal barrier function. Lifestyle modification is beneficial for the treatment of NAFLD and should be managed on a long-term basis (32). However, poor adherence to lifestyle modification makes NAFLD management a difficult task. Unfortunately, there are no specific drugs to date for the treatment of NASH (33). FTZ is a clinical formula created by Professor GJ, a renowned Chinese herbalist, based on the theory of "Tiaogan Qishu Huazhuo Rule," which has been used to treat hyperlipidemia and metabolic syndrome, and related complications such as atherosclerosis and non-alcoholic fatty liver disease. "Tiaogan Qishu Huazhuo Rule" offers its distinctive treatment options based on the "differentiation of symptoms and signs." The complexity of the material basis of Chinese medicine determines the multiplicity of its effects, which leads to the diversity of its medicinal properties (34). In a recent study, some components of FTZ were found to prevent the development of fatty liver in rats (22,23). In the present study, we have demonstrated that FTZ alleviates metabolic stressinduced NASH by regulating the gut-liver axis and preventing steatosis, inflammation, and fibrosis. 
Furthermore, FTZ is more effective than ATV in combating metabolic inflammation and progressive fibrosis, which may be attributed to the large number of active ingredients in it, such as berberine and oleanolic acid (35,36). Our data are consistent with previous studies that FTZ has a hypolipidemic effect under metabolic stress conditions, and its therapeutic effect is similar to that of ATV, which is recommended for NAFLD treatment in several national clinical guidelines, including the United States, and can effectively reduce lipid deposition and lipotoxicity caused by HFD (37)(38)(39). Our results revealed that FTZ-regulated hepatic fatty acid synthesis by the suppression of fatty acid synthesis regulators such as FAS, acetyl-CoA carboxylase (ACC), and SREBP-1c, suggesting that FTZ may limit hepatic TG availability by inhibiting lipogenesis. FTZ also regulates a wide range of genes involved in lipid catabolism and transport. PPAR-g expression is increased in high-fat diet-induced hepatic lipid accumulation, and the hepatic deficiency of PPAR-g expression inhibits HFD-induced NAFLD progression in obese mice (40,41). Meanwhile, PPAR-g directly regulates CD36 transcript levels, thus promoting hepatic lipid uptake and affecting lipid metabolism, resulting in hepatic lipid deposition and steatosis in NAFLD (42,43). In this study, our data showed that FTZ alleviated the expression of PPAR-g and CD36 in livers from HFD-fed mice at least in part, which may impact the severity of hepatic steatosis as well as improve whole-body insulin sensitivity. As a hallmark of the pathogenesis of NASH, the inflammatory response involves two critical events: the release of numerous types of proinflammatory cytokines and damage to hepatocytes (44). In terms of liver inflammation, the number of CD68positive cells reflected macrophage recruitment; the mRNA levels of inflammatory factors such as Il-1b, Il-6, Ccl2, Ccl5, Cxcl10, and Tnf-a were significantly increased in response to HFD diet induction (45)(46)(47). TLR4 is capable of triggering the rapid activation of its downstream signaling, NF-kB, which upregulates the production of proinflammatory cytokines such as IL-1b and IL-6. Previous studies have extensively reported the TLR signaling pathway as a key target to communicate with the liver and intestine (48,49). The infectious agent/LPS-mediated activation of TLR leads to the release of hepatic inflammatory factors that exacerbate NASH, which is one of the biological bases for FTZ-mediated liver-gut interaction in the treatment of NASH (50). Hepatic fibrosis is one of the key histopathological features in NASH patients, suggesting a more severe and progressive liver injury (51). The vast majority of drugs developed for NASH have been rejected because they do not reverse progressive fibrosis (9,52). In the present study, FTZ attenuated collagen deposition and the hepatic expression of profibrotic growth factors in mice fed by the HFD diet, suggesting that FTZ protects against NASHrelated fibrosis through the suppression of HSC activation. Interestingly, the protein expression of p-Smad3 and Smad2/3 was significantly attenuated by FTZ treatment. These results suggest that FTZ reduces HFD-induced progressive liver fibrosis in mice by inhibiting the activation of Smad3 pathways and HSCs. Consistently, a large number of studies, in both NASH patients and animals, have confirmed the involvement of the dysbiosis of the intestinal flora in the progression of NAFLD (53). 
In this study, we found that the composition of the gut microbiota of FTZ-treated mice was different from that of HFD-fed mice but similar to that of NCD-fed mice. It is noteworthy that in previous reports, the relative proportion of the phylum Bacillus mimicus in the intestinal flora of obese individuals was reduced in comparison with slim individuals, and that this proportion recovered after weight loss on a controlled diet (54). However, a clinical study of microbiota composition in NAFLD reported an increased proportion of Bacteroidetes in obese and NASH patients compared with healthy controls (55). These species differences may be caused by interspecies metabolic differences between humans and mice, the composition and brand of the diet, and the modeled feeding times. Furthermore, Epsilonproteobacteria, named after the Greek god Proteus, is a common pathogenic organism, and previous studies have found an increase in Epsilonproteobacteria in obese patients and NASH patients compared to healthy controls; it is reassuring to note that FTZ treatment can counteract these adverse effects of HFD (55). We also observed favorable improvements in the differential strains at the family level of taxonomy consistent with previous reports, such as Porphyromonadaceae, Bacteroidaceae, Ruminococcaceae, and Verrucomicrobiaceae (56-58). In addition, there are numerous animal experiments with results consistent with our differential strain changes. Previous studies have noted that the oral administration of branched-chain amino acids (3% kcal) induced a significant increase in Ruminococcaceae and portal acetic acid levels, and it reduced hepatic fat accumulation in HFD rats (59). Another NASH study suggested that a Chinese herbal formula reduced hepatic steatosis, possibly by decreasing certain gut bacteria (such as Verrucomicrobiaceae), alleviating intestinal endotoxemia, and reducing NLRP3 inflammasome activation (60). Supplementation with the probiotic VSL#3 may improve NASH by reducing the relative abundance of Porphyromonadaceae and Bacteroidia and increasing the relative abundance of Ruminococcaceae, thereby reshaping the intestinal flora of NASH mice (61). Our results confirm that FTZ alleviates dyslipidemia, hepatic steatosis, and lipotoxicity in NASH mice, and FTZ was found to remodel the intestinal flora of NASH mice by affecting the above-mentioned differential strains. Consistent with expectations, as FTZ restored homeostasis to the intestinal flora of NASH mice, it also produced significant changes in the composition of their metabolites. We subsequently identified eight potential biomarkers by taking the intersection of the univariate and multivariate analyses. Bile acid-related pathways may be involved given that the gut microbiota, through defined enzymatic activities (such as deconjugation, dehydroxylation, oxidation, and epimerization, among others), is a critical modulator of the pool size and composition of bile acids and can significantly modify the chemical and signaling properties of bile acids (62,63). Of the differential strains we identified for the FTZ treatment of NASH, Clostridium, Bacteroides, and Ruminococcaceae were reported to be involved in the metabolic process of bile acids (64,65). The role of bile acid metabolism in regulating glucose and lipid metabolism is well established (58,66,67).
Interestingly, we found four bile acids (CDCA, UDCA, NorDCA, and DCA) among the eight differential metabolites. Of these, the primary bile acid CDCA has been reported as a natural ligand for FXR, confirming its involvement in lipid metabolism, immunomodulation, and anti-inflammatory protection (68). Furthermore, UDCA is considered a potential metabolite for the treatment of NASH because of its anti-inflammatory, anti-apoptotic, and antioxidant properties (69). In another study of Chinese herbal compounds for the treatment of NASH, NorDCA was also observed to increase in the feces of mice (70). In addition, a human study showed that serum cholesterol levels decreased in every subject (by an average of 15%) during DCA administration (71). LC/MS-based metabolomics has led to new biomarker discoveries and a better mechanistic understanding of FTZ for NASH. However, several major challenges remain. Partial, incomplete metabolomes persist due to factors such as limitations in mass spectrometry data acquisition speeds, the wide range of metabolite concentrations, and intestinal flora-specific changes that confound our understanding of metabolite perturbations (72,73). Since LC/MS-based metabolomics data usually vary greatly, caution is warranted when drawing conclusions from only three samples, and such conclusions should be validated. Larger sample sizes are required for metabolomic analysis in future studies to make the data and conclusions more convincing. Collectively, our current findings demonstrate that FTZ ameliorates the pathogenesis of NASH by inhibiting hepatic lipid accumulation, inflammation, and fibrogenesis while improving intestinal flora disruption and barrier dysfunction. Therefore, our findings will provide insights into the development of Chinese medicine treatments for NASH (Figure 9). CONCLUSION In conclusion, our current study provides compelling evidence that FTZ can attenuate lipid deposition, metabolic inflammation, and hepatic fibrosis in NASH, possibly by modulating intestinal bacterial microbes and their metabolic homeostasis. FTZ may be a potential candidate for the treatment of patients with NASH. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, SRP350756. ETHICS STATEMENT The animal study was reviewed and approved by the Research Ethical Committee of Guangdong Pharmaceutical University.
7,691.6
2022-06-16T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Control Algorithm for Parallel Connected Offshore Wind Turbine Generators A control algorithm for Parallel Connected Offshore Wind Turbines with Permanent Magnet Synchronous Generators (PCOWTG) is presented in this paper. The algorithm estimates the optimal collective speed of the turbines based on the estimated mechanical power of the wind turbines without direct measurement of wind speed. In the proposed topology of the wind farm, direct-drive Wind Turbine Generators (WTG) are connected to a variable low-frequency AC Collection Grid (ACCG) without the use of individual power converters. The ACCG is connected to a variable low-frequency offshore AC transmission grid using a step-up transformer. In order to achieve optimum wind power extraction, the collective speed of the WTGs is controlled by a single onshore Back to Back converter (B2B). The voltage control system of the B2B converter adjusts voltage by keeping a constant Volt/Hz ratio, ensuring constant magnetic flux of electromagnetic devices regardless of changing system frequency. With the use of PI pitch compensators, wind power extraction for each wind turbine is limited within rated WTG power limits. Lack of load damping in offshore wind parks can result in oscillatory instability of PCOWTG. In this paper, damping torque is increased using P pitch controllers at each WTG that work in parallel with PI pitch compensators. Introduction The first offshore wind farm installed on the coast of Sweden, with a rated power of 220 kW, dates back to 1990. Since then, the number and size of offshore wind farms have constantly been increasing, and their installation has become increasingly remote from the mainland. The main reasons for installing offshore wind turbines are the reduction of visual and acoustic impact on the environment, less wind turbulence, and higher average wind speeds than onshore wind turbines. Possible grid topologies for offshore wind farms include wind parks with High Voltage AC (HVAC) offshore transmission lines and wind parks with High Voltage DC (HVDC) offshore transmission lines [1][2][3]. Most offshore wind parks installed today use the HVAC offshore transmission grid. In offshore wind parks with HVAC transmission lines, Wind Turbine Generators (WTG) are connected to an AC collector grid using an AC/AC power converter. The offshore substation contains a step-up transformer by which the collector grid voltage is raised to the high voltage of the transmission AC grid. Onshore substations also have a transformer. Wind parks with HVDC transmission grids usually have an identical collector grid solution as wind parks with HVAC transmission lines. However, the offshore transmission line is DC, where the collector grid frequency is decoupled from the mainland AC frequency. In this case, offshore and onshore substations contain AC/DC and DC/AC power converters in addition to transformers. Deployment of power electronics in harsh offshore environments has been identified as a major challenge associated with any offshore wind system. Recent trends in the offshore wind industry have been directed toward minimising the complexity of offshore assets, aiming to provide an adequate level of reliability in the operation of wind farms. In the Parallel (directly) Connected Offshore Wind Turbine Generator (PCOWTG) wind park configuration proposed in this paper, direct-drive Permanent Magnet Synchronous Generators (PMSG) are connected to a variable low-frequency AC Collection Grid (ACCG) without using a power converter.
By using a step-up transformer, the ACCG is connected to a Variable Low-Frequency offshore AC transmission grid (VLFAC). The Back to Back (B2B) power converter acts as the point of connection of the VLFAC grid to the mainland grid, regulating the Wind Turbine Generators' (WTG) collective speed and the voltage at the B2B Point Of Interconnection (POI) to the VLFAC grid. The control system of the mainland grid side of the B2B converter is responsible for controlling the DC link voltage of the B2B converter and the reactive power injected into (or absorbed from) the mainland grid. The DC link voltage of the B2B converter, without loss of generality, is regarded as constant, and consequently, the control system of the mainland grid side of the B2B converter is outside the scope of this paper. A similar wind park (farm) configuration is proposed and analysed in [4,5] with constant speed operation of the Wind Turbine Generators. Another similar configuration of offshore wind parks, under the name "slim wind turbine concept", is proposed and analysed in [6] at the level of a feasibility study, without designing the system's controller. This paper extends this concept by developing a detailed simulation model using the Matlab/Simulink and Simscape Specialized Power System toolbox, designing the closed-loop control system and solving the problem of undamped oscillations of rotor angles. Each component of the system is carefully designed to make it as close to real-world components as possible. Due to the absence of gearboxes and offshore power converters, the proposed wind park configuration concept decreases both capital expenditure (CAPEX) and operational expenditure (OPEX), reduces subsystem complexity, and increases overall system reliability [7,8]. According to [9,10], the main reasons for offshore wind park outages are gearbox and power electronics failures. Underwater offshore AC cables have very high shunt capacitance, so the use of HVAC transmission cables is limited to 50-80 km distances due to large charging currents [11]. Consequently, the cables' capability to transfer active power decreases either due to thermal limits or voltage stability. The simplified formula for the calculation of cable charging currents is: I c = 2·π·f·C·l·V (1) where f is the frequency; C is the capacitance of the cable per unit length; l is the length of the cable; V is the voltage. From (1), it can be seen that the reduction of cable charging currents is achieved by decreasing the system frequency. The reactive power generated by a transmission cable with prevailing capacitive reactance can, by simplification, be calculated as: Q c = 2·π·f·C·l·V 2 (2) From (2), it can be seen that Q c depends on the product of frequency and cable length. Accordingly, the transmission cable length can be increased by decreasing the system frequency for the same reactive power generated by the cable. The typical choice for the nominal frequency in Low-Frequency AC systems (LFAC) is 50/3 Hz. This frequency is used in some countries for railway power supply (fractional frequency transmission systems). LFAC transmission systems are proposed and analysed in many papers [12][13][14]. In [15], a comparison of the energy losses and the total CAPEX and OPEX of LFAC and HVDC systems was performed. It was concluded that LFAC, with the extended length of the transmission, is comparable to HVDC (CAPEX costs are 6% lower than HVDC, and OPEX costs are 3% lower than HVDC).
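To make the effect of frequency on cable charging concrete, the short sketch below evaluates Equations (1) and (2) for an illustrative 150 km export cable at 50 Hz and at 50/3 Hz. The cable capacitance and voltage used here are assumed round numbers chosen only for illustration, not parameters taken from this paper.

import math

def charging_current(f_hz, c_farad_per_km, length_km, v_volt):
    # Cable charging current, Eq. (1): I_c = 2*pi*f*C*l*V.
    return 2 * math.pi * f_hz * c_farad_per_km * length_km * v_volt

def charging_reactive_power(f_hz, c_farad_per_km, length_km, v_volt):
    # Reactive power generated by the cable, Eq. (2): Q_c = 2*pi*f*C*l*V^2.
    return 2 * math.pi * f_hz * c_farad_per_km * length_km * v_volt ** 2

# Assumed illustrative values: 0.2 uF/km, 150 km, 150 kV.
C_km, L_km, V = 0.2e-6, 150.0, 150e3
for f in (50.0, 50.0 / 3.0):
    ic = charging_current(f, C_km, L_km, V)
    qc = charging_reactive_power(f, C_km, L_km, V)
    print(f"f = {f:5.2f} Hz: I_c = {ic:7.1f} A, Q_c = {qc / 1e6:6.1f} Mvar")
# Reducing the frequency to 50/3 Hz cuts both quantities to one third.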
In a general case, the wind speed at each wind turbine may be different; this means that, in the PCOWTG wind park configuration, MPPT of the individual WTGs cannot be achieved. The decreased amount of wind power extracted by the individual WTs is compensated by the absence of the thermal losses of individual WTG power converters, which are not present in this configuration. A simpler system with less equipment also reduces the number of potential failures and reduces the WTG's downtime. Consequently, it can be concluded that the proposed configuration not only reduces the CAPEX and OPEX but could also potentially improve the Levelised Cost of Energy (LCOE). A medium-frequency wind farm and modular multi-level converter-based high-voltage direct current transmission (MMC-HVDC) integration system is proposed in [16]. One of the features of the proposed scheme is the adoption of the rectifier station and inverter station due to their high scalability and low switching frequencies with satisfactory output harmonic characteristics. One of the most important features of the proposed scheme is the increase of the operating frequency of the offshore AC system to the medium-frequency range (100-400 Hz) instead of the conventional 50/60 Hz. The feasibility of employing the LFAC system for subsea transmission and distribution of 58 MW of power is investigated in [17]. The authors developed a simulation model of the LFAC-based subsea transmission and distribution system, which includes a hexverter as a frequency converter. A novel control strategy was developed to optimise its zero-sequence circulating current. The authors designed a dedicated control system to reduce the impact of real-time disturbances, such as wind speed fluctuations and harmonics due to a heavy inductive load operating at 16 Hz. In [18], the main characteristics of the control algorithm of PCOWTG are presented. The control algorithm is applied to an offshore wind park with a transmission grid 20 km in length and is used for the limited wind speed region between the cut-in wind speed and the nominal wind speed. The wind speed is estimated from the Wind Turbine (WT) speed and the active power of the Wind Generator (WG). Due to the lack of load damping, a significant amplitude of oscillations of the WG rotor angles was observed. The rest of the paper is organised as follows. Section 2 provides an overview of the methodology of the paper. A detailed PCOWTG system description is given in Section 3, including a problem statement and the control system architecture. In Section 4, the proposed PCOWTG control system is validated based on a realistic wind park simulator, with a detailed discussion of the simulation results. Finally, Section 5 summarises the concluding remarks and provides indications for further research. Methodology In this paper, the control algorithm is implemented on a PCOWTG with a 150 km long transmission grid. Additionally, the control algorithm is applied in the case of wind speed changes from the cut-in wind speed to above the nominal wind speed. Instead of using the WG's active power, as was the case in the PCOWTG control system presented in [18], the calculated mechanical power of the WTG is used to estimate the wind speed of each WT. By using a P pitch controller with adequately adjusted parameters, the damping torque of the WTG is increased and significant deviations of rotor angles are prevented, ensuring the angle stability of the wind generators.
In order to compensate for the reactive power generated by the transmission cable, shunt reactors are added on both sides of the transmission grid cable. In order to achieve optimal wind power extraction (Maximum Power Point Tracking-MPPT), the collective rotational speed of the WTGs is controlled by the B2B converter depending on wind speed. In order to keep the magnetic flux in electromagnetic devices (transformers etc.) constant regardless of changes in system frequency, constant Volt/Hz ratio operation is adopted. This control strategy is well known in the scalar control of asynchronous motors. It is also proposed in some papers that consider the variable frequency operation of wind parks [16,17], where the authors implemented constant Volt/Hz operation on the AC collection grid for offshore wind with an HVDC transmission grid. Additionally, the wind speed is estimated rather than measured, in contrast to the approaches proposed in [19,20]. In this paper, constant Volt/Hz operation is implemented in both the collection and transmission offshore grids. In order to find the optimal collective rotational speed of the WTGs, the wind speed at each wind turbine is estimated by applying an original method, which is presented in this paper for the first time. Overview A schematic diagram of PCOWTG is presented in Figure 1. In Figure 1, n is the number of WTGs (or the number of clusters of WTGs); V r is the space vector of the regulated voltage (voltage at POI); I is the space vector of the converter current; θ f1 , θ f2 , . . . , θ fn are the PMSG rotor field angles; ω t1 , ω t2 , . . . , ω tn are the rotational speeds of the WTGs; β 1 , β 2 , . . . , β n are the pitch angles of the WTs; ω opt is the optimal collective speed of the WTGs in the wind park; ω t is the collective (average) speed of the WTGs in the wind park; I 1 , I 2 , . . . , I n are the space vectors of the currents of the wind generators; V 1 , V 2 , . . . , V n are the space vectors of the voltages of the wind generators; P mc1 , P mc2 , . . . , P mcn are the estimated mechanical powers of the WTs; v w1 , v w2 , . . . , v wn are the wind speeds at each WT; R c and L c represent the resistance and inductance of the converter coupling inductor, respectively. The rotor field angles and turbine speeds are measured using encoders.
Problem Statement Generally speaking, the PCOWTG is a complex nonlinear multiple-input multiple-output system with spatially distributed elements and a random source of mechanical (wind) energy. Due to the variable speed operation, the reactances of the electrical elements of the system change with the system frequency, making the analysis of the system and the control system design more complex. Although the exclusion of power converters at the WTGs reduces overall system complexity, at the same time it reduces the controllability of the system. Consequently, the design of a control system for PCOWTG is a demanding task. The main controlled variables of the system are the collective speed of the WTGs, the voltage at the POI of the B2B converter, and the pitch angles of the WTs. The PCOWTG control system should simultaneously ensure the angle stability of the WGs and voltage stability, ensuring that voltages and currents are within the rated limits of all equipment. Reactive power flow should be minimised in order to reduce thermal losses. Additionally, by adjusting the speed of the turbines to the wind speed, maximum extraction of wind power should be ensured. In order to accomplish these tasks, a novel control system architecture combined with reactive power compensation and constant Volt/Hz operation is proposed. Control System Architecture A block diagram of the offshore B2B converter control system of the Voltage Source Converter (VSC) on the wind park side is presented in Figure 2. The inputs into the control system are the measured WTG speeds and PMSG rotor field angles, the space vectors of the wind generator currents (that is, the phase currents of the PMSGs), the space vector of the regulated voltage V r (that is, the phase voltages of the regulated voltage), and the space vector of the current I of the B2B converter (that is, the phase currents of the B2B converter). The control system is realized as a multi-loop control system. The inner control loops represent a decoupled current controller for the direct (d) and quadrature (q) components of the converter current.
The outer loops realize the WTG (collective) Speed controller and the B2B converter Voltage controller. The WTG Speed controller calculates the reference I d current command (I dcmd ) for the Current controller, while the Voltage controller calculates the I q current command (I qcmd ) for the Current controller (see Figure 3). The WTG Speed controller, the Voltage controller, and the Current controller are realized in a rotating dq frame, where the d axis is aligned with the space vector of voltage V r . The Phase-Locked Loop (PLL) is used to calculate the system frequency and the angle θ r (the angle of the space vector V r ) of the abc to dq transformation. In this way, the q component of voltage V r equals zero. Consequently, the active and reactive powers of the B2B converter are (in per unit-pu): P = V rd ·I d (3) Q = −V rd ·I q (4) where V rd is the d component of the regulated voltage V r ; I d and I q are the d and q components of the B2B converter currents, respectively. The reference value of the regulated voltage V r is found according to constant Volt/Hz ratio operation using the formula: V r,ref = V nom ·(ω t /ω tn ) where V nom is the nominal converter voltage (pu); ω tn is the nominal WTG rotational speed (pu), and ω t is the WTG (collective) rotational speed (pu). The PI Voltage controller calculates the reactive power order Q ord , while the I qcmd signal is found using Formula (4). The reference value of the WTG collective speed ω opt is found by the Calculator block shown in Figure 2.
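The outer-loop reference calculations described above can be summarised in a few lines; the sketch below computes the Volt/Hz voltage reference and converts the power orders produced by the Speed and Voltage controllers into dq current commands. The reactive-power sign convention and the current limit are assumptions made for illustration, and the actual P, Q priority logic is the one described in Appendix A of the paper.

def voltage_reference(omega_t_pu, v_nom_pu=1.0, omega_tn_pu=1.0):
    # Constant Volt/Hz ratio: the voltage reference scales with the collective WTG speed.
    return v_nom_pu * omega_t_pu / omega_tn_pu

def current_commands(p_ord_pu, q_ord_pu, v_rd_pu, i_max_pu=1.1):
    # With the d axis aligned to Vr (Vrq = 0): P = Vrd*Id, Q = -Vrd*Iq (assumed sign).
    i_d = p_ord_pu / max(v_rd_pu, 1e-3)
    i_q = -q_ord_pu / max(v_rd_pu, 1e-3)
    # Crude P-priority current limiting: keep Id, then fit Iq into the remaining headroom.
    i_d = max(-i_max_pu, min(i_max_pu, i_d))
    iq_room = (i_max_pu ** 2 - i_d ** 2) ** 0.5
    i_q = max(-iq_room, min(iq_room, i_q))
    return i_d, i_q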
The Calculator block calculates the WT mechanical power, the wind speed at each WT, and the optimal collective speed of the wind turbines ω opt . The WTG Speed controller calculates the active power order P ord , and the I dcmd signal is found using Formula (3). The block diagram of the decoupled Current controller of the B2B converter is shown in Figure 4. In Figure 4, R c and L c represent the resistance and inductance of the converter coupling inductor, respectively; ω is the system rotational frequency. The "P, Q priority" block in Figure 4 limits the I q converter current component (P priority) or the I d converter current component (Q priority) in order to limit the converter current according to the P, Q priority algorithm described in Appendix A, Figure A1. In the same figure, V dcmd and V qcmd represent the command signals for the d and q components of the voltage V c of the B2B converter, as shown in Figure 1. These components are the inputs into the PWM modulator of the B2B converter. I d and I q represent the d and q components of the space vector of the current I of the B2B converter. The converter coupling circuit is described by a set of differential equations for the d and q current components, from which the block diagram of the decoupled Current controller of the B2B converter follows directly, as shown in Figure 4. The decoupled Current controller of the B2B converter, as shown in Figure 4, is a "standard" block, which can be found in many textbooks discussing power converters in wind energy conversion systems, and is not further explained here. Figure 5 shows the block diagram representing the Pitch control system. It consists of a PI Pitch compensator and a P pitch controller, which work in parallel. If the extracted mechanical power of a WT is larger than the rated WT power P rated , then the PI pitch compensator increases the pitch angle β and limits the extracted mechanical power to P rated . The algorithm for estimating the WT mechanical power P mci (P mcipu denotes P mci in pu) is explained in Section 3.4. The P pitch controller increases the damping torque of the WTG, preventing oscillatory instability of the wind generators. The reference signal for this controller is the averaged rotational speed of the WTGs, ω t . In Figure 5, P mci and ω ti represent the estimated mechanical power and the rotational speed of the ith WT, respectively.
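A minimal sketch of the pitch loop of Figure 5 is given below: a PI compensator limits the estimated mechanical power to the rated value, while a proportional controller acting on the deviation of the turbine speed from the park average adds damping torque. The gains, limits, and sampling time are illustrative assumptions rather than the tuned values used in the paper, and the adaptive notch filters discussed in the next paragraph are omitted.

class PitchControl:
    # PI power-limiting compensator in parallel with a P damping controller.

    def __init__(self, kp_pi=5.0, ki_pi=1.0, kp_damp=20.0,
                 p_rated_pu=1.0, beta_max=30.0, dt=0.02):
        self.kp_pi, self.ki_pi, self.kp_damp = kp_pi, ki_pi, kp_damp
        self.p_rated, self.beta_max, self.dt = p_rated_pu, beta_max, dt
        self.integrator = 0.0

    def step(self, p_mc_pu, omega_i_pu, omega_avg_pu):
        # PI branch: acts only when the estimated mechanical power exceeds rated power.
        err_p = p_mc_pu - self.p_rated
        self.integrator = min(max(self.integrator + self.ki_pi * err_p * self.dt, 0.0),
                              self.beta_max)
        beta_pi = max(0.0, self.kp_pi * err_p) + self.integrator
        # P branch: damps deviations of this turbine's speed from the collective speed.
        beta_damp = max(0.0, self.kp_damp * (omega_i_pu - omega_avg_pu))
        return min(beta_pi + beta_damp, self.beta_max)   # pitch angle command (deg)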
In order to mitigate blade passing effects and prevent excessive pitch activity in the practical realization of the proposed control system, adaptive notch filters for both error signals in Figure 5 should be used, i.e., adaptive notch filter blocks should be added before the P and PI blocks. The notch frequencies of these filters should be selected as multiples of the WTG's rotational speed and as the resonant frequencies of the WTG's drive system. Estimation of WTG Mechanical Power, Wind Speed and Optimal WTG Speed The extracted mechanical wind power P mi of the ith WT is provided as: P mi = (1/2)·ρ·A·C p (λ i , β i )·v wi ^3, i = 1, . . . , n (7) where: ρ is the air density (kg/m 3 ); A is the area swept by the blades of the WT (m 2 ); v wi is the wind speed at the ith WT (m/s); C p is the power coefficient (coefficient of performance); λ i is the tip speed ratio (the ratio of the speed at the tip of the blades and the wind speed); β i is the pitch angle (deg); n is the number of WTs. In Equation (7), it is assumed that the air density is uniform over the area of the offshore wind park and that all WTGs are identical. According to [21], the power coefficient is provided as a nonlinear function of the tip speed ratio and the pitch angle (Equation (8)), where c 1 , c 2 , . . . , c 9 are constants that depend on the type of WT used. Different approximations of Formula (8) can be found in the literature. For convenience, the polynomial approximations provided in [22] are used in this paper: C p (λ i , β i ) = Σ j Σ k α jk ·β i ^j·λ i ^k, with λ i = ω ti ·R/v wi (9) where R is the radius of the blade of the WT (m); ω ti is the ith WTG speed (rad/s); and the coefficients α jk are provided in [22]. These coefficients depend on the type of WT used. If the coefficients α jk are chosen according to [22], the maximum power coefficient C pmax = 0.517 is achieved at the tip speed ratio λ opt = 8.805 and β i = 0 (deg). A common control strategy of a variable speed WTG is shown in Figure 6, which is also applied in the PCOWTG control system [23]. In the region of wind speed [0, v wcut-in ] (region I), the wind turbine is turned off by moving the blades toward the feather position. When the wind speed is between v wcut-in and the base wind speed v wb (region II), the WTG control system adjusts the WTG speed in a way that achieves the optimum tip speed ratio: λ opt = ω ti ·R/v wi (11) where at v w = v wb the nominal WTG speed ω tn is achieved. In this region, the PCOWTG control system keeps the pitch angle β i = 0 (deg). In this way, the optimum mechanical power (P m ) extraction from the wind is achieved.
At v w = v wb , the extracted mechanical power of the WTG is P mb . When the wind speed is between v wb and the nominal wind speed v wn (region III), the PCOWTG control system keeps the WT speed at the nominal level ω tn and the pitch angle at 0 deg. Consequently, the tip speed ratio is not at the optimal level; that is, C p is slightly below C pmax . This region was introduced to achieve a better transition from region II to region IV. In region IV, the PCOWTG control system limits the extracted mechanical power to the nominal WTG mechanical power P mn while keeping the WTG speed at the nominal level ω tn ; this is achieved by increasing the pitch angle. In region V, the WTG is turned off by moving the blades toward the feather position. According to the previous considerations, it follows: P mipu = P mi /S b = ρ·A·C p (λ i , β i )·v wi ^3/(2·S b ) (12) where S b is the base system power (W). The subscript "pu" in (12) is added to differentiate per-unit quantities. In a direct-drive WTG, the generator speed ω g equals the turbine speed ω t . If the base WTG speed is chosen as ω b = ω tn , then for any wind speed and the ith WTG we have: λ i = λ opt ·ω tpu /v wipu (13) where ω tpu is the collective rotational speed of the turbines per-unit of the base rotational speed ω b , and v wipu is the wind speed of the ith WT per-unit of the base wind speed v wb . By combining (12) and (13), the per-unit mechanical power of the ith WT can be expressed as a function of ω tpu and v wipu (Equation (14)). The overall extracted mechanical power of the wind park, which consists of n parallel connected WTGs, is the sum of the individual contributions (Equations (15) and (16)). From (14), Equation (17) follows, and Formula (17) can be written in the form of Equation (18), with coefficients defined for k = 0, . . . , 4 and i = 1, 2, . . . , n. The left-hand side of Equation (18) is a polynomial of the 4th order in v wipu . The positive zero of this polynomial, between v cut-in and v cut-out , represents the wind speed of the ith WT.
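To illustrate this root-finding step, the sketch below solves the per-unit power balance for the wind speed of one turbine given its estimated mechanical power and the collective speed. A quadratic placeholder C p (λ) curve is assumed here instead of the polynomial of [22], so the resulting equation is cubic rather than 4th order in v; the base values P mbpu = 0.755 pu, C pmax = 0.517 and λ opt = 8.805 are taken from the text, everything else is illustrative.

import numpy as np

LAMBDA_OPT, P_MB_PU, CP_MAX = 8.805, 0.755, 0.517

# Placeholder Cp(lambda) = a0 + a1*lambda + a2*lambda**2, peaking at LAMBDA_OPT.
a2 = -0.008
a1 = -2.0 * a2 * LAMBDA_OPT
a0 = CP_MAX + a2 * LAMBDA_OPT ** 2

def p_m_pu(omega_t_pu, v_w_pu):
    # Per-unit mechanical power of one WT at zero pitch (Eqs. (12)-(14) in spirit).
    lam = LAMBDA_OPT * omega_t_pu / v_w_pu
    return P_MB_PU / CP_MAX * (a0 + a1 * lam + a2 * lam ** 2) * v_w_pu ** 3

def wind_speed_from_power(p_mc_pu, omega_t_pu):
    # Solve p_m_pu(omega, v) = p_mc for v (pu) as the positive root of a polynomial.
    k, lw = P_MB_PU / CP_MAX, LAMBDA_OPT * omega_t_pu
    coeffs = [k * a0, k * a1 * lw, k * a2 * lw ** 2, -p_mc_pu]
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    admissible = real[(real > 0.25) & (real < 1.5)]   # between cut-in and cut-out (pu)
    return float(admissible.min()) if admissible.size else None

print(wind_speed_from_power(p_m_pu(0.8, 0.8), 0.8))   # recovers v = 0.8 pu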
The issue with Equation (18) is that the mechanical power P mipu of a WT is challenging to measure. In this paper, the estimation of the mechanical power of the WT by using the WTG swing equation (Equation (19)) is proposed, where T eipu is the electromagnetic torque (pu) of the ith WG; T mipu is the mechanical torque (pu) of the ith WT; H is the WTG inertia constant in seconds (MWs/MVA), and D is the WTG damping coefficient. The electromagnetic torque of a non-salient pole PMSG can be found using the formula: T eipu = λ fpu ·i qipu (20) where λ fpu is the rotor flux linkage (pu) produced by the permanent magnets, and i qipu (pu) is the q component of the ith PMSG stator current in the rotor field synchronous dq frame, whose angle θ fi is measured using an encoder. From (19), it follows that the mechanical power of the ith WT can be found from Equation (21). Since a filtered derivative (an approximation of the ideal derivative) is used in the simulations, the estimated P mcipu is an approximation of P mipu . After estimating the wind speed at each WT, the optimal collective speed of the WTs can be found from (18) as follows. The overall extracted mechanical power of the wind park is: P mpu = P mbpu ·[(c 10 ·v w1pu ^3 + · · · + c n0 ·v wnpu ^3) + (c 11 ·v w1pu ^2 + · · · + c n1 ·v wnpu ^2)·λ opt ·ω tpu + (c 12 ·v w1pu + · · · + c n2 ·v wnpu )·λ opt ^2·ω tpu ^2 + (c 13 + · · · + c n3 )·λ opt ^3·ω tpu ^3 + (c 14 /v w1pu + · · · + c n4 /v wnpu )·λ opt ^4·ω tpu ^4] (23) The right-hand side of (23) is a fourth-order polynomial in ω tpu . The maximum of P m is achieved for ω t = ω opt , which can be determined by using a derivative-based or derivative-free optimisation algorithm to find the maximum value of the right-hand side of (23). The estimated ω opt is used as the reference value of the WTG Speed controller. A schematic diagram of the Calculator block is shown in Figure 7. In Figure 7, the variable s represents the Laplace operator and T f is the time constant of a low-pass filter.
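The fragment below sketches the remaining Calculator block logic: the mechanical power of one turbine is estimated from the swing equation using a filtered derivative of the measured speed (in the spirit of Equations (19)-(21)), and a grid search then finds the collective speed that maximises the total park power, which is what is fed to the Speed controller as ω opt. The damping coefficient, filter factor, and the reuse of the placeholder power curve from the previous sketch are assumptions for illustration only.

import numpy as np

H_WTG, D_WTG = 3.48, 0.01        # inertia constant (s) from the text; damping assumed
LAMBDA_OPT, P_MB_PU = 8.805, 0.755

def p_m_pu(omega_pu, v_pu):
    # Same placeholder per-unit power curve as in the previous sketch.
    lam = LAMBDA_OPT * omega_pu / v_pu
    return P_MB_PU * (1.0 - 0.008 / 0.517 * (lam - LAMBDA_OPT) ** 2) * v_pu ** 3

class MechanicalPowerEstimator:
    # P_mc ~ omega * (T_e + D*omega + 2H*domega/dt), with a filtered derivative.
    def __init__(self, dt=0.01, alpha=0.1):
        self.dt, self.alpha = dt, alpha
        self.prev_omega, self.domega_f = None, 0.0

    def step(self, omega_pu, lambda_f_pu, i_q_pu):
        t_e = lambda_f_pu * i_q_pu                         # Eq. (20)
        if self.prev_omega is None:
            self.prev_omega = omega_pu
        raw = (omega_pu - self.prev_omega) / self.dt
        self.domega_f += self.alpha * (raw - self.domega_f)
        self.prev_omega = omega_pu
        return omega_pu * (t_e + D_WTG * omega_pu + 2.0 * H_WTG * self.domega_f)

def optimal_collective_speed(v_w_pu_all):
    # Grid search standing in for maximising the right-hand side of Eq. (23).
    w_grid = np.linspace(0.3, 1.0, 1500)
    total = sum(p_m_pu(w_grid, v) for v in v_w_pu_all)
    return float(w_grid[np.argmax(total)])

print(optimal_collective_speed([0.75, 0.8, 0.85]))         # approximately 0.8 pu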
System Parameters The simulations are based on realistic models of all wind park components using Matlab/Simulink and the Simscape Specialized Power System toolbox. The wind park consists of 3 WTGs, in order to simulate only the main characteristics of the system. The model of the WTGs is based on the NOWITECH 10 MW direct-drive WTG model [24]. In this model, the optimum (maximum) tip speed ratio is 7.8, the number of poles of the WG is 198 and the rated WTG speed is 13.54 rpm. In order to comply with the new maximum (optimum) tip speed ratio λ opt = 8.805 used in this paper, some parameters of the WTG were changed, which is explained in the following text. These changes do not influence the main results achieved in this paper in any way. Since it is supposed that the system nominal frequency is f n = 16.67 Hz, it follows that the nominal WTG speed is: ω tn = 2·π·f n /(p/2) (24) where p is the number of generator poles. For convenience, a number of poles equal to 130 is proposed in this paper. Accordingly, it follows that ω tn = 1.61 rad/s = 15.39 rpm. It is also supposed that the base wind speed is v wb = 12 m/s. From (11), the radius of the WT blades is found to be R = 65.75 m, while the WT mechanical power achieved at base wind speed is P mbpu = 0.755 pu, found using (12). From (18), by setting P mipu = 1 pu, the nominal wind speed is equal to v wn = 13.24 m/s. Since we changed the nominal WTG speed compared with the original NOWITECH WTG, the inertia constant of the WTG was also changed to H = 3.48 MWs/MVA. It is also assumed that the air density is ρ = 1.25 kg/m 3 . The rest of the parameters of the WTG are shown in Table 1. The base system power is S b = 10 MVA. The parameters of the transformers T r1 , T r2 and T r3 are similar to the parameters of the transformers in the Matlab/Simulink "power_wind_dfig" example, and are given in Table 2. The resistance and inductance of the converter coupling inductor are R c = 0.0015 pu and L c = 0.15 pu. These values are also taken from the same example. The transmission and collection cables used are three-core cross-linked polyethylene (XLPE) cables, modelled using a three-phase π section line in Matlab/Simulink. The length of the transmission network used in this paper is 150 km, and the lengths of the collection network lines are 1 km, 3 km, and 5 km. The parameters of the XLPE cables are given in Table 3 [25]. In order to compensate for the reactive power generated by the cables, shunt reactors L s1 = 0.5 pu and L s2 = 1.25 pu were applied at both ends of the transmission grid cable. All simulations were performed based on per-unit system base quantities. The simulation time was set to 360 s. Figure 8 shows the voltages of the nodes of the system at steady-state for different wind speeds. In Figure 8, E int1 represents the internal voltage of PMSG1. The approximate reactive power absorbed by the reactors L s1 and L s2 can be represented as: Q L = V 2 /(2·π·f·L s ) (25) where f is the system frequency and V is the voltage applied to the shunt reactors. Since constant Volt/Hz ratio operation is employed and the system frequency is proportional to wind speed, it follows that V ∝ f (26).
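The frequency dependence expressed by Equations (2), (25) and (26) can be checked with a few lines of arithmetic. The sketch below compares how cable-generated and reactor-absorbed reactive power scale when the collective speed, and hence the frequency and (through the Volt/Hz law) the voltage, drops to half of its nominal value; the per-unit base values are illustrative placeholders rather than the values of Table 3, and the conclusion drawn in the following paragraph follows from the same scaling.

def cable_q(f_pu, v_pu, qc_nom=1.0):
    # Eq. (2): Qc ~ f * V^2; with V proportional to f (Eq. (26)), Qc ~ f^3.
    return qc_nom * f_pu * v_pu ** 2

def reactor_q(f_pu, v_pu, ql_nom=1.0):
    # Eq. (25): QL ~ V^2 / f; with V proportional to f, QL ~ f.
    return ql_nom * v_pu ** 2 / f_pu

for f in (1.0, 0.5):
    v = f                                   # constant Volt/Hz operation
    print(f"f = {f:.1f} pu: Qc = {cable_q(f, v):.3f} pu, QL = {reactor_q(f, v):.3f} pu")
# At half speed the cable generates only 1/8 of its nominal reactive power,
# while the reactors still absorb 1/2 of theirs, so the inner-node voltages sag.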
From (2), (25) and (26), it follows that when the wind speed decreases, the capacitive reactive power production (proportional to f^3) decreases at a higher rate than the inductive reactive power consumption (proportional to f); this is the reason why, in Figure 8, with decreasing wind speed the voltage at the inner nodes of the system decreases to a more significant extent than the voltages at the outer (generator and converter) nodes of the system. In order to achieve even better voltage profiles than those presented in Figure 8, tapped reactors can be used [26] (p. 630). Figure 9 displays the wind profiles applied to the wind park; as shown in Figure 9, the wind speed changes from 9.5 m/s to 14 m/s. In this way, the wind speed varies from region II to region IV (see Figure 6), in which the control system is active. Regions I and V were not used since the WTGs are shut down in these regions. Additionally, the wind profiles are delayed relative to each other to consider the spatial distribution of the WTGs and the limited travel speed of the wind disturbance front. The time between the decreasing and increasing wind profiles is long enough to allow the system to settle in a steady-state. Dynamic Responses Deviations of the rotor angles around the average rotor angle of all WTGs in the wind park, and the responses of the B2B converter current and WG currents, for the PCOWTG control system with a disabled P pitch controller, are shown in Figures 10 and 11, respectively. Figure 11. Currents of the B2B converter and WGs for the control system with disabled P pitch controller. Due to the lack of load damping, the oscillations of the rotor angles in Figure 10 increase, leading to system instability (unbounded oscillations).
In this case, the simulation time is shortened to 260 s since, due to the oscillatory instability, the signals rapidly increase after 220 s. Figures 12-19 show the simulation results with the enabled P pitch controller. Figure 12 shows the power coefficients of the WTs. It can be seen that, in the steady-state and for the case v w ≤ v wn , the power coefficients are close to the optimal value C pmax = 0.517. Pitch angles of the WTs are shown in Figure 13. When the wind speed is below v wn , it can be seen that the pitch angles are equal to zero for most of the time. During wind disturbances, the P pitch controllers slightly increase their pitch angles in order to damp rotor angle oscillations. If the wind speed is above v wn , the PI pitch compensators increase their pitch angles to limit the extracted mechanical power of the WTs to the rated power of the WTGs. The reference (optimal) speed (estimated by the Calculator block) and the speeds of the WTGs are shown in Figure 14. Since the PMSGs rotate synchronously, only a single time response of the WTG speed can be observed. It should be emphasised that the speeds of the WTGs may be slightly different during disturbances of active power, that is, during wind disturbances. Otherwise, the WTGs rotate with the same collective speed ω t . Waveforms of the regulated voltage V r and the PMSG voltages V 1 , V 2 , and V 3 are presented in Figure 15. It can be seen that, due to the "electrical vicinity" and the voltage control by the B2B converter, the voltages of all generators and the B2B converter are in close proximity to each other. Figures 14 and 15 show that the waveforms of the WTG speed and of the voltages of the WTGs and the B2B converter are very similar, which is to be expected. The internal voltage of a WG is inherently proportional to the WTG speed, so the time constant of the internal voltage of the WG equals the mechanical time constant of the WTGs. Due to constant Volt/Hz ratio operation at steady-state, the regulated voltage V r is proportional to the WTG speed.
Since we have chosen the same parameters for the PI WTG Speed controller and the PI Voltage controller, the waveform of the regulated voltage is very similar to the WTG speed waveform during wind disturbances (that is, during transients). Figure 15 shows that the voltages achieve their maximum values during wind disturbances and when the wind speed is close to or above v wn . The achieved maximum is less than 110% of V nom , which is acceptable in most practical realisations. In Figure 16, the active power of the B2B converter and of the PMSGs is shown. Due to the chosen direction of the B2B converter's active power, shown in Figure 1, the active power of the B2B converter is negative. It is interesting to note that after the decrease in wind speed at WTG1, the active powers of PMSG2 and PMSG3 immediately start to increase. In this case, PMSG1 temporarily runs slower than the other two generators, so the angular position of PMSG1, relative to the faster PMSG2 and PMSG3, lags behind. Similar behaviour can also be observed in Figure 17, where the deviations of the rotor angles of the WGs around the average rotor angle of the WGs are shown. As stated in [26] (p. 22), the resulting angular difference transfers part of the load from the slow machine to the faster machines. The situation is reversed if the wind speed increases.
Waveforms of the B2B converter currents and PMSG currents are shown in Figure 18. These waveforms are qualitatively similar to the waveforms of the active power of the B2B converter and the active powers of the PMSGs. The current of the B2B converter is negative due to the chosen direction of the current. Due to the use of the P pitch controller, oscillations of rotor angles and currents are mitigated, which can be seen in Figures 17 and 18, respectively. Waveforms of the reactive power of the B2B converter and the PMSGs are shown in Figure 19. It can be seen that, according to the chosen directions of reactive power in Figure 1, the reactive powers are negative-that is, the B2B converter and the generators absorb reactive power. A unit-variance band-limited white noise signal passed through a forming filter, an approximation of the Von Kármán velocity spectra (MIL-F-8785C), is used as input into the system, as shown in Figure 20 [27]. The mean value of this random (turbulent) wind disturbance is 12 m/s while the variance is 1.5 (m/s) 2 , that is, the turbulence intensity is 10.2%. Responses of the system are shown in Figures 21-28. In Figure 22, it can be noticed that more pitch action occurs than in the non-turbulent case, which can lead to bearing and actuator wear as well as mechanical fatigue. One of the measures to reduce these effects is introducing a dead zone in the P pitch controller. In this way, the P pitch controller will not react to small wind disturbances. Other techniques to increase system damping will be the subject of future investigation. In Figure 21, it can be seen that the power coefficient in the region below nominal wind speed, as a result of the non-zero pitch angle, is below the optimal level for most of the time. This behaviour is quite different from wind parks with variable speed WTGs that use a power converter per WTG, and it leads to less wind power being extracted by the individual WTs than is optimal. This loss is partially compensated by the absence of thermal losses in the individual WTG power converters in the PCOWTG configuration.
Conclusions and Further Work This paper presents a new control algorithm for PCOWTG with a 150 km long transmission grid for cases where the wind speed changes from the cut-in wind speed to above the nominal wind speed. The problem of large charging currents in underwater cables is mitigated by decreasing the system frequency. In order to achieve maximum extraction of wind power, the WTG speed changes as the wind speed changes, while the reference WTG speed is determined using an optimisation algorithm. Instead of measuring the wind speed, an algorithm for wind speed calculation is used at each WT. Shunt reactors are added to both sides of the transmission grid cable to compensate for the reactive power generated by the transmission cable. The magnetic flux of electromagnetic devices is kept constant regardless of changes in system frequency by applying the constant Volt/Hz ratio controller. Damping torque is increased by using P pitch controllers at each WTG that work in parallel with PI pitch compensators. The use of a well-tuned P pitch controller guarantees the stability of the PCOWTG. The main contributions and novelty of this paper are summarised as follows: • The first published control algorithm for control of PCOWTG; • A novel algorithm for estimation of wind speed and optimal collective speed for PCOWTG. In further research, more analytical studies of PCOWTG and the stability of PCOWTG will be investigated. Additionally, more advanced control algorithms such as adaptive control and fuzzy logic control of PCOWTG will be designed. Analysis of thermal losses in PCOWTG will be conducted, and a comparison with other wind topologies will be performed. Due to the low and variable voltages and frequencies associated with PCOWTG, we will consider the use of nonstandard equipment and power conversion topologies. Further increases in system damping using alternative methods combined with pitch angle control (to decrease pitch action and turbine element fatigue) will also be the subject of future research, together with an investigation of alternative methods to reduce bearing wear and mechanical fatigue.
13,223
2021-08-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Enhanced Light Extraction from a GaN-based Light Emitting Diode with Triangle Grating Structure We propose a simple method to improve light extraction in GaN-based light emitting diodes. A conventional light emitting diode has limited extraction because of the total internal reflection that occurs at the interface between GaN and air. By using a periodic grating etched into the GaN layer, more of the light emitted by the active layer can be coupled out. Tapering the grating structure facilitates the impedance matching between the GaN light emitting diode and air, which can enhance broadband light extraction. We use the finite difference time domain method to numerically find the best tapered grating structure. The numerical experiments demonstrate an enhancement factor of about 4 for our proposed structure compared with the conventional one over a broad spectrum. INTRODUCTION An illumination revolution is now under way, reshaping the conventional ways in which we light up our lives. As one of the important semiconductor materials, the Gallium nitride (GaN) based Light Emitting Diode (LED) receives a lot of attention due to its applications in digital systems, flat panel displays, signal transformation and optical interconnects in optoelectronic systems (Nakamura et al., 1994; Nakamura et al., 1995; Nakamura et al., 1997; Cho et al., 2006; Kojima et al., 2007; Kim et al., 2007; Wang et al., 2009; Wierer et al., 2009). As a particular and important application, using GaN-based LEDs for general illumination has become popular because of their long lifetime, robustness and energy-saving properties (Schubert, 2006). Although the GaN-based LED has several advantages compared with other materials, it also suffers from some limitations. In the early stage of GaN-based LED development, the rapid reduction of output efficiency was caused by the degradation of transparency as the device temperature increased. It is well known that ultraviolet radiation, which arises together with the interband radiative recombination of the GaN material, degrades the transparency of the encapsulating polymer. Metal electromigration is also a crucial factor, which can even destroy the working device. There are other factors that degrade the performance of GaN-based LEDs, such as p-type ohmic contact degradation and the increase of deep levels and recombination centers. All the mentioned factors are electrical properties of the device that intrinsically affect the efficiency of the LED. On the other hand, the high dielectric constant (5.76) of the p-GaN poses a limitation on the optical extraction efficiency of GaN-based LEDs. For a typical LED device, air is the medium of the output region. Therefore, the high dielectric contrast between p-GaN (5.76) and air (1) results in a critical angle at the interface between p-GaN and air. Most of the radiation whose emission direction does not lie inside the cone defined by the critical angle of total internal reflection is reflected at the interface, and a large amount of the energy is confined in the p-GaN layer. As a result, the External Quantum Efficiency (EQE) of conventional p-GaN based LEDs is as low as a few percent (Sheu et al., 2007).
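The escape-cone argument behind this low extraction efficiency can be quantified directly from the quoted dielectric constant; the short check below is illustrative only and ignores Fresnel transmission losses inside the cone.

```python
import numpy as np

n_gan = np.sqrt(5.76)                      # refractive index of p-GaN from epsilon = 5.76 -> n = 2.4
theta_c = np.arcsin(1.0 / n_gan)           # critical angle at the GaN/air interface
# Fraction of isotropically emitted light falling inside the top escape cone
escape_fraction = 0.5 * (1.0 - np.cos(theta_c))
print(f"critical angle = {np.degrees(theta_c):.1f} deg")      # ~24.6 deg
print(f"escape-cone fraction = {escape_fraction:.3f}")        # ~0.045, i.e. a few percent
```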
There have been various efforts to enhance the extraction ratio of LEDs. Modification of the emitting surface (Huh et al., 2003; Fujii et al., 2004; Jin et al., 2012) could be the most convenient way to improve the efficiency. The mechanism of improvement is to enhance the interaction between the guided modes inside the LED and the outer space. With a grating attached to the LED, waves propagating outside can be coupled into the LED more easily; because of time-reversal symmetry, the radiation inside the LED should likewise obtain an extraction ratio enhancement due to the presence of the grating. At the same time, a tapered structure attached on top of the LED can facilitate the impedance matching between the LED and free space. In this study, we investigate the effect of different grating structures fabricated on the surface of the LED. METHODOLOGY The Finite Difference Time Domain (FDTD) method is based on finite differences. It represents derivatives by finite differences and solves the electromagnetic boundary problem through discrete sampling of the spatial and temporal derivatives. The distributions of the electric and magnetic fields are arranged on the Yee unit cell, which is the basic form for numerically solving the Maxwell equations with the FDTD method. Let us consider the Maxwell equations in a medium without electric or magnetic currents. For an isotropic medium, the constitutive relations are D = εE and B = μH. In Cartesian coordinates, Eq. (1) can be written component by component. For the two-dimensional (x-y plane) case, where ∂/∂z = 0 for all field components, we obtain two sets of decoupled equations. One set is referred to as the transverse magnetic polarization, and the other is the transverse electric polarization, with the electric field lying in the two-dimensional (x-y) plane. It means that in the two-dimensional case, the polarization states can be represented by two kinds of polarization: transverse electric polarization (TE) and transverse magnetic polarization (TM). The combination of these polarizations can be used to qualitatively represent the three-dimensional effect. These two sets of equations can be solved by applying the finite difference method in the space and time domains. The finite difference scheme is shown in Fig. 1. As can be seen from Fig. 1, there is a displacement between the electric field and magnetic field grids, which facilitates the calculation of the finite differences, and the sampling times of the electric and magnetic fields are shifted by half a time step. The grids of the TE and TM polarizations are similar, with the electric field substituted by the magnetic field. An arbitrary function f(x, y, t) is discretized on this grid, and its derivatives are represented by the central difference method. Taking the first equation of Eq. (4) as an example, its FDTD form is given by Eq. (7). Fig. 1: Yee unit cell for the two-dimensional FDTD scheme of the transverse electric and transverse magnetic polarizations. (i, j) stands for the location of the spatial grid, and the components of the electromagnetic field are indicated in the insets.
The other equations of Eq. (4) can be written in a similar form: Equations (8)-(10) give the FDTD realization of the transverse electric polarization. Using this formalism, we can numerically calculate the electromagnetic dynamics of the two-dimensional system. The transverse magnetic polarization case is treated in a similar way. By summing up the results of the two polarizations, we can get a qualitative result for the three-dimensional case. We use a perfectly matched layer to simulate an infinite system, which absorbs any radiation coming from the source and the scatterers. The FDTD scheme must satisfy the Courant stability condition, and the dielectric constant is averaged over the nearest spatial grid points wherever a dielectric boundary is crossed. The LED structure is shown in Fig. 2a. It consists of a multi-quantum-well (MQW) structure in which the active layer is located. The height of the p-GaN layer is denoted as h1, the height of the n-GaN layer as h3, and h2 represents the height of the MQW. At the wavelengths of interest, the refractive index of GaN is set to 2.4. Without loss of generality, we choose h1 = h3 = 500 nm and h2 = 200 nm as a reference system, and we neglect the conventional thin transparent ITO layer for convenience of modelling. As can be seen from Fig. 2a, air is the output medium. This means that some of the light emitted from the active layer undergoes total internal reflection; that part of the light is confined in the GaN layer, which reduces the extraction ratio. If we add a microstructure at the surface of the p-GaN, we can modify the condition for total internal reflection. In order to determine the extraction ratio, we solve the electromagnetic problem with the FDTD method. We place the source at the top of the MQW layer and choose its size close to that of a point source, which mimics all possible real k-vectors. The extraction ratio is determined by normalizing the transmission of the structures of Fig. 2b to d to that of Fig. 2a, i.e., the non-patterned one. As can be seen from the figure, h1 and h3 denote the thicknesses of the p-GaN and n-GaN layers, respectively, and h2 represents the thickness of the MQW. The period of the grating is set by p, while the features of the grating unit are marked by t and d. For the case of fixed p = 500 nm and d = 200 nm, we calculate the enhancement factor when t is varied. Fig. 3a to c show the results corresponding to the structures of Fig. 2b to d. As can be seen from these figures, the grating structures clearly enhance the extraction ratio except for wavelengths near 500 nm. All of them give a broad and flat enhancement response when the wavelength is larger than 550 nm. We can also identify that the triangle grating structure with feature t = 100 nm is the best one for the enhancement above 530 nm. It can also be concluded from these figures that such grating structures can give an extraction ratio enhancement of up to 6 times at wavelengths near 400 nm. This means that such a surface-texturing method can also be applied to the field of solar cells, where enhanced light harvesting near the ultraviolet is needed. The dips in the enhancement factor near 500 nm are due to the geometry of this structure; the size of the source and the period of the grating p are related to this phenomenon.
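To make the update scheme concrete, the following is a minimal, self-contained sketch of a standard two-dimensional Yee update for the TE polarization (Ex, Ey, Hz) in vacuum, with a Courant-stable time step. It is not the authors' code: it omits the PML, the dielectric averaging and the LED geometry described above, and all numerical values are assumptions chosen only for illustration.

```python
import numpy as np

# Minimal 2D FDTD sketch for the TE polarization (Ex, Ey, Hz) on a Yee grid.
# Illustrative only: uniform grid, vacuum everywhere, soft point source, no PML.
c0 = 299792458.0
eps0 = 8.8541878128e-12
mu0 = 4e-7 * np.pi

nx, ny = 200, 200
dx = dy = 10e-9                                      # 10 nm cells, well below visible wavelengths
dt = 0.99 / (c0 * np.sqrt(1/dx**2 + 1/dy**2))        # Courant stability condition

Ex = np.zeros((nx, ny + 1))
Ey = np.zeros((nx + 1, ny))
Hz = np.zeros((nx, ny))

for n in range(500):
    # Update the magnetic field from the curl of E
    Hz += (dt / mu0) * ((Ex[:, 1:] - Ex[:, :-1]) / dy - (Ey[1:, :] - Ey[:-1, :]) / dx)
    # Update the electric fields from the curl of H (interior nodes only, PEC edges)
    Ex[:, 1:-1] += (dt / eps0) * (Hz[:, 1:] - Hz[:, :-1]) / dy
    Ey[1:-1, :] -= (dt / eps0) * (Hz[1:, :] - Hz[:-1, :]) / dx
    # Soft point source mimicking emission from the active (MQW) layer
    Hz[nx // 2, ny // 2] += np.sin(2 * np.pi * (c0 / 460e-9) * n * dt)
```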
The above results show that, by choosing suitable features of the grating structure, we can control the extraction ratio of the LED. As the next step, we consider the influence of the grating period. Fig. 4 shows the result when we fix t = 100 nm and d = 200 nm. As can be seen from the figure, the best period for the triangle grating structure is 100 nm, which gives more than a 4-fold enhancement at wavelengths larger than 500 nm. The enhancement factor decreases as the period p increases. DISCUSSION As a representative example, we show the electric field distribution for t = 100 nm and p = 500 nm in Fig. 5. As can be seen from the figure, the triangle grating structure facilitates the impedance matching between the GaN layer and free space. The enhancement of the near-field electric field is also demonstrated. Adding a grating structure on top of the LED enhances the extraction ratio. However, it makes the structure more difficult to fabricate and also raises the cost of a typical LED. At present, electron-beam lithography is the most precise method to fabricate such grating structures; however, it takes a long time to fabricate large samples and it is expensive as well. Therefore, the search for new and simple methods of texturing the LED surface is one of the important open problems in this field. CONCLUSION We propose a simple method to enhance the extraction ratio of GaN LEDs. The triangle grating structure is found to be the best candidate for extraction ratio enhancement among the square, triangle and sphere grating structures. By choosing suitable parameters, we can obtain more than a 4-fold enhancement at wavelengths larger than 550 nm. The enhancement is due to the increased interaction between the guided modes and the radiation. We expect this kind of method can be generalized to other fields such as solar cells. Fig. 2: Cross-section views of the conventional LED, square grating LED and tapered grating LED. The thickness of the MQW is h2 = 150 nm, while the thicknesses of the p-GaN and n-GaN layers are h1 = h3 = 500 nm. We can change the grating period p and the thicknesses of the square, triangle and sphere features to optimize the extraction ratio. Fig. 3: (a) Enhancement factor vs. t, corresponding to Fig. 2b, the square grating structure; (b) enhancement factor vs. t, corresponding to Fig. 2c, the triangle grating structure; (c) enhancement factor vs. t, corresponding to Fig. 2d, the sphere grating structure; the unit of t is nm.
2,858.8
2013-05-30T00:00:00.000
[ "Physics" ]
Improving the efficiency of mass-exchange between liquid and steam in rectification columns of cyclic action 2 Introduction Technical progress in the alcohol industry is inextricably linked to the development and implementation of highly efficient column apparatuses (Shyian et al., 2009;Kyziun et al., 2006) and energy-saving ways of mass transfer between the liquid and steam on their plates (Martseniuk et al., 2019).One of the ways of solving the mass transfer process problem is the use of cyclic mode of phase motion, which is based on alternate change of two periods: the steam passing up the column period and the period of liquid pouring on its plates (Maleta, et al., 2011;Kiss et al., 2012).Implementation of controlled cycles of liquid retention on the plates allows to prolong the time of its contact with steam, to create conditions in order to achieve a phase state close to equilibrium and to bring the efficiency of each real plate closer to the theoretical one (Buliy et al., 2019).This significantly reduces the specific consumption of heating steam, decreases the volume of alcohol-containing waste and minimizes the cost of equipment (Kiss, 2015). There are well-known ways of increasing the residence time of liquid on the plates by organizing the flow of separate steam-liquid jets with their mutual collision (Pǎtruţ et al., 2014) or additional installation of baffles and reflectors, directing the steam through the appropriate bypass pipelines, etc. (Krivosheev et al., 2015).Despite the obtained positive results in reducing energy costs, the known methods and apparatuses of cyclic operation have not found wide practical application due to the lack of mass exchange in the steam period (Lita et al., 2012), the steam pressure dependence of pouring devices' operation (Toftegard et al., 2016), the fluctuations of the steam pressure in the collector, the inability to stabilize the hydrodynamic mode of plates (Flodman et al., 2012), the mixing of liquid on adjacent plates during its pouring, the low apparatuses' steam and liquid throughput capacity, and the complexity of constructive solutions (Bastian et al., 2018). The authors proposed an innovative rectification technology, which excludes earlier mentioned disadvantages (Buliy et al., 2016) and provides periodic liquid pouring from one plate to another at continuous supply of liquid and heating steam into the column (Ukrainets et al., 2018).To implement the technology, the design of a rectification column equipped with plates with variable free cross-section was developed (Bulii et al., 2019).For stable operation of plates in the column hydrodynamic regimes were maintained, providing effective mass transfer between liquid and steam without entrainment of liquid on upper plates during the fluid retention period and its intensive pouring through pouring and barbotage holes after the end of the retention time. The aim of the work was to study the efficiency of mass-exchange between liquid and steam in column apparatuses of cyclic action: to determine the grade of extraction and the concentration ratio of volatile impurities of alcohol during its extraction from alcoholcontaining fractions and to identify the specific rate of heating steam in the studied rectification column. Research objectives: 1.To determine the grade of extraction and the concentration ratio of alcohol impurity concentrations under conditions of typical and cyclic rectification (in columns equipped with moving valves and turning plate sections); 2. 
To determine the optimal technological parameters of the studied column and the residence time of the liquid on its plates, by which the maximum extraction of volatile impurities is provided without reducing the liquid throughput of the column; 3. To determine the specific rate of heating steam in a rectification column of cyclic action. Rectification columns of cyclic action with moving valves (RC) The RC is made of stainless steel AISI 304, equipped with flaky plates of arched type.Technical characteristics: diameter -426 mm; number of plates -30; distance between the plates -300 mm; the cross-sectional area of flakes' holes -19,42 mm 2 ; thickness of the plate fabric -2 mm; free cross-section of the plate: 5,5%during the residence of the liquid on the plates -5,5%; during the liquid pouring -51,7%. A fragment of the RC with movable rods, valves and hydraulic shutters is shown in Figure 1a (patent UA 116565.Rectification column with controlled cycles).The operation of the column provided the conducting of the adjustable in time cycles of liquid residence on the plates and its synchronous pouring from one plate to another over the entire height of the column in two successive stages, repeating periodically in time, alternately, according to the specified algorithm without interrupting the liquid and steam supply (patent UA 89874.Method of liquid pouring on plates of column apparatus in the process of mass transfer between steam and liquid).The interval of liquid retention was being determined experimentally depending on the grade of extraction of volatile alcohol impurities and their concentration ratio. The experimental RC was included in the scheme of the bragorectificational plant (BRP).The column contained corps 1, plates 2 with contact elements 3, movable rods 4 and 5, on which valves 6 and springs 7 were mounted.The rods moved up and down under the action of drive mechanisms (double-acting pneumatic cylinders of DNT type manufactured by FESTO).At that, valves 6 closed and opened the holes of pouring pipes 8 alternately.The operation of pneumatic cylinders was managed in accordance with the М340 controller program of 'Schneider Electric' company.Pipes 8 were inserted into sleeves 9 and together with them served as water traps, which prevented steam breakthrough through all the holes during liquid pouring. Figure 1b shows an experimental RC with movable rods and valves without hydraulic shutters (patent UA 139228.Column mass-exchange apparatus of cyclic action).The technical solution allowed one-stage (full) and two-stage methods of liquid pouring on plates (Figure 2).The one-step method involved pouring all the liquid from one plate to another (Figure 2a).According to the two-stage method (patent UA 141245.Method of pouring the liquid on the plates of mass-exchange column apparatus) part of the liquid had been pouring from the upper plate to the lower one (30-70% of its volume), and after a specified delay time, its remnants were poured (Figure 2b). Plant for ethyl alcohol extraction from alcohol-containing fractions The scheme of the implementation of the studied RC into the BRP one is shown in Figure 3. 
The plant included the experimental column 6, the upper and lower parts of which are connected to the vacuum breakers 4, evaporator 5, dephlegmator 7, condenser 8, alcoholcollecting vessel (trap) 9, softened water collector 1, intermediate collectors of still residue 15 and alcoholic fractions 18, flow-meters 3, 11, 12 and 13, centrifugal pumps 2, 16, 17 and decantator 10.The 950 mm diameter RC was equipped with flaky plates with pivoting sections connected to the pneumatic cylinders and modern computer-integrated means (patent UA 136561.Mass-exchange contact plate) (Figure 4).The moving sections opened and closed the pouring holes of the plates so that the liquid pouring occurred periodically.The coaxial placement of the flakes made it possible to eliminate the 'one-way' steam and liquid flow and the chance of forming stagnant zones.Pneumatic cylinders and technological parameters operation control (i.e. temperature, pressure) was carried out with the help of automatic sensors, the signal from which was transmitted to the microprocessor controller. The head fraction of ethyl alcohol, steam condensate from the condensers of the distillation column and carbon dioxide separator, as well as fusel alcohol and fusel rinse water were served on the feeder plate of the column in total amount of 688.3 dm 3 /h (250 dm 3 /h in terms of anhydrous alcohol (a.a.).The aldehyde-methanol concentrate from the condenser and the fusel-ester-aldehyde concentrate from the upper part of the decanter were sorted to the impurity concentrate collector. Research methods Liquid consumption.The consumption of alcohol-containing fractions, water for hydroselection, the still residue and rectified alcohol was monitored using PM flow-meters (Yarovenko et al., 1981). Concentration of ethyl alcohol in water-alcohol solutions.The concentration of ethyl alcohol in the still residue of the RC was determined by areometric method (Polygalina, 1999). Concentration of volatile alcohol impurities. The concentration of volatile impurities in the head fraction, in the condensate steam from the condensers of the distillation column and carbon dioxide separator, in fusel alcohol, in fusel rinse water and in the feed of the column were implemented on a gas chromatograph with an HP FFAP 50 m × 0.32 m column (Plutowska et al., 2008;Polyakov, 2007).Three-time repetition samples were taken for chromatographic analysis.The mean values were chosen as the determining ones. Grade of extraction and concentration ratio of volatile alcohol impurities. The grade of extraction (α) and concentration ratio (β) of key organic alcohol impurities were calculated as follows: where Хfp, Хic, Хsrthe concentration of volatile alcohol impurities on the feed plate, impurities concentrate and still residue, mg/dm 3 in terms of a.a.(Shyian et al., 2009). Studied modes It is known, that for flaky plates the lower critical speed of steam in holes, at which liquid spilling stops, is 6,5-7,5 m/s, linear speed in free сross-section of the column in barbotage mode is 0,5-0,9 m/s, in transitional 0,9-1,3 m/s and in jet 1,3-2,0 m/s.Upper critical speed of steam is 15-16 m/s (Stojkovic et al., 2018).Intensive liquid pouring through the holes of the plates occurs at steam velocities of 1.5-1 m/s (Gerven et al., 2009). Considering the above, the velocity of steam in the holes of flakes during the liquid residence on the plates of the studied RC was maintained within 12-14 m/s. 
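The display formulas for α and β defined above did not survive extraction. The sketch below therefore uses the definitions implied by the surrounding text (grade of extraction as the fraction of an impurity removed from the feed into the still residue, concentration ratio as the enrichment of the impurity concentrate relative to the feed); the exact expressions should be treated as assumptions rather than a transcription of the paper.

```python
def extraction_grade(x_fp: float, x_sr: float) -> float:
    """Assumed definition: share of an impurity removed from the feed, in %.
    x_fp, x_sr: impurity concentration on the feed plate and in the still residue
    [mg/dm^3 in terms of anhydrous alcohol]."""
    return (x_fp - x_sr) / x_fp * 100.0

def concentration_ratio(x_ic: float, x_fp: float) -> float:
    """Assumed definition: enrichment of the impurity concentrate over the feed."""
    return x_ic / x_fp

# Hypothetical concentrations, for illustration only
print(extraction_grade(x_fp=120.0, x_sr=6.0))        # 95.0 %
print(concentration_ratio(x_ic=1800.0, x_fp=120.0))  # 15.0
```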
The extraction of ethyl alcohol from alcohol-containing fractions was carried out under the circumstances of moderate and deep hydroselection.Therefore, the upper plate of the column was provided with steam condensate, the temperature of which was 90-92 °C.The condensate consumption was being increased from 2000 to 4500 m 3 /h.Yet the concentration of ethyl alcohol in the still residue of the column varied from 2.8 to 8% vol.Depending on the quantity of liquid, the residence time on the plates was being varied from 20 to 60 s, the pouring timefrom 7 to 1.7 s.The height of the liquid layer on the plates was 35-40 mm.Depending on the quantity of alcohol-containing fractions and water for hydroselection the pressure at the bottom of the column was being varied between 12 and 18 kPa.For an effective separation of the heterogeneous mixture, the decanter temperature of the RC was being maintained around 30-35 °C (Shyian et al., 2009).The aldehyde-methanol concentrate and the fusel-ester-aldehyde concentrate were being changed from 12 to 1 dm 3 /h, while controlling the quality parameters of the RC still residue and rectified ethyl alcohol. Stages of research At the first stage, the efficiency of the mass transfer process in the typical (Mishchenko et al., 2020) and cyclic (Maleta et al., 2015) rectification in the existing and experimental RC with hydraulic gates was investigated (Figure 1a).The head fraction of ethyl alcohol, steam condensate from the condensers of the distillation column and carbon dioxide separator, as well as fusel alcohol were served on the feeder plate of the column in total amount of 96 dm 3 /h in terms of anhydrous а.а.Heating steam was continuously provided to the lower part of the column and hot softened waterto the upper plate in order to hydroselect the impurities, which ranged the concentration of ethanol in the still residue from 4-5% vol.The residence of the liquid on the plates was 23 s and the pouring time through the hydraulic shutters was 7 s. At the second stage the efficiency of mass exchange between liquid and steam in an experimental RC of cyclic action without hydraulic shutters was investigated (Figure 1b).The technical solution suggested by the authors provided time-controlled cycles of liquid residence on the plates and its pouring from the upper plates to the lower ones, thanks to instantaneous change of steam velocity in the holes from 12-14 to 1,5-1 m/s by changing free cross-section of the plates from 5,5 to 51,7 %.While the valves were being lifted at the moment the pouring holes were opened, the steam velocity in the holes became lower than critical and the liquid was pouring simultaneously through all the holes to the underlying plates. At the third stage of the research the optimal parameters of mass exchange process of the experimental RC operation (Figure 3), equipped with flaky plates with turnaround sections, presented in Figure 4.The research included liquid sampling at the feeder plate (FL) as well as samples of the head fraction (HF), fusel alcohol (FA), fusel rinse water (FRW), fractions from the distillation column condenser (DCC) and carbon dioxide separator condenser (CDSC).To determine the efficiency of processing alcohol-containing fractions in a given hydrodynamic mode, the concentration of volatile impurities of alcohol in the still residue (SR), impurities concentrate (IC), epyurate (E), and rectified ethyl alcohol (REA) were studied.The results of the chromatographic analysis of the studied samples are shown in Tables 1 and 2. 
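The quoted drop in hole velocity when the valves open is consistent with a simple continuity estimate: at a constant vapour flow rate, the velocity in the holes scales inversely with the open (free) cross-section of the plate. A quick check with the stated values:

```python
# Continuity estimate: v_pour = v_hold * A_hold / A_pour at constant vapour flow rate.
v_hold = 13.0                      # m/s, mid-range of the 12-14 m/s quoted during liquid residence
free_hold, free_pour = 5.5, 51.7   # % free cross-section of the plate before and after opening
v_pour = v_hold * free_hold / free_pour
print(f"{v_pour:.2f} m/s")         # ~1.4 m/s, in line with the stated 1-1.5 m/s during pouring
```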
Study on the efficiency of mass exchange between liquid and steam in RC, equipped with moving valves and plates with hydraulic shutters. Studies have shown that in the experimental column the esters were completely removed.The grade of extraction of higher alcohols of fusel alcohol and methanol in the cyclic mode increased by 25%, the concentration ratio of head impuritiesby 21%, higher alcohols and methanolby 30% in comparison with the column operating in the stationary mode.That said, it reduced the specific heating steam consumption by 38% and 1.2 kg/dal of a.a.introduced into the column.This is explained by the fact that when the phase contact time had been prolonged from 13 to 23 s, the difference in concentration of volatile impurities in steam and liquid decreased, thus increasing the grade of phase equilibrium (Bozeya et al., 2013). The disadvantages were the low liquid capacity of the column (750 dm 3 /h), its mixing on adjacent plates during pouring and a 15% reduction in the working area of the plate due to the presence of the hydraulic shutters. Study on the efficiency of mass exchange between liquid and steam in RC, equipped with moving valves and plates without hydraulic shutters Design changes allowed to increase the liquid throughput by 34% (750 to 1000 dm 3 /h) without reducing the liquid retention time by lessening the pouring time from 7 to 2 s.Due to the absence of hydraulic shutters, the contact area of the phases on each plate has increased by 15%, which has improved the performance of the plates and the efficiency of the mass exchange: the grade of extraction of higher alcohols of fusel alcohol and methanol was increased by 29%, the concentration ratio of aldehydes was increased by 23%, higher alcoholsby 33 % and methanol by 34 % compared to a column operating in stationary mode. One-stage (full) and two-stage pouring methods The one-stage (full) pouring method did not provide an even distribution of liquid on the plates due to a lack of liquid on the paired plates while it being held on the unpaired plates and vice versa (Figure 2a).This technical decision made it impossible to maintain a stable hydrodynamic regime along the height of the column (Chu et al., 2013). In order to optimize the operation of the RC and to increase the efficiency of mass exchange, the pouring of liquid from plate to plate was carried out in two stages (Figure 2b).The method allowed to operate all the plates simultaneously, to ensure that the liquid level on the plates is the same throughout the height of the column and to stabilize the hydrodynamic mode of their operation.At that, the RC liquid throughput has increased by 20% (from 1000 to 1200 dm 3 /h), the grade of extraction of higher alcohols of fusel alcohol and methanolby 38%, the concentration ratio of head impurities has increased by 25%, higher alcoholsby 38%, methanolby 37% compared to a column operating in stationary mode. The disadvantage of the one-and two-stage methods of liquid pouring is the impossibility of autonomous regulation of liquid residence time on each individual plate, because moving elements of pouring devices of paired and unpaired plates were set in motion by one drive mechanism. 
Studies on the efficiency of mass exchange between liquid and steam in RC, equipped with plates with rotary sections To eliminate the disadvantages mentioned above, the authors have proposed a method of processing alcohol-containing fractions in a column equipped with plates with rotary sections (patent UA 136560.Method of mass-exchange between liquid and steam in a column apparatus).The results of chromatographic analysis of alcohol-containing fractions entering the column and the distribution of impurities in its still residue, concentrate, epyurate and rectified alcohol are presented in Tables 1 and 2. The criterion for the RC optimization was the concentration of acetaldehyde, higher alcohols of fusel alcohol (including isopropyl alcohol) and methanol in the still residue and in the rectified ethyl alcohol.The determinants of mass exchange efficiency between liquid and steam were the grade of extraction and concentration ratio of volatile alcohol impurities in the studied RC.According to the results of the study, optimal technological parameters of RC operation were: the liquid retention time on the plates is 40 s; the time of liquid pouring from the upper plate to the lower one is 1.7 s; the pressure at the bottom of the column is 11.5-12 kPa; pressure at the top of the column is up to 0.03 kPa; the temperature at the bottom of the column is 100.5-101.5 о С; the temperature in the steam phase above the upper plate is 93.5-94 о С; temperature in the steam phase on the plate of feed is 93.2-94 о C; the water temperature for hydroselection is 95-98 о С; the temperature of the mixture in the decanter is 30-35 о С; water consumption for hydroselection is 4050-4500 dm 3 /h; the temperature in the tube space of the condenser is 45-50 о С; water temperature for cooling after the dephlegmator is 85-87 о С; concentration of ethyl alcohol in the still residue is 3-4 % vol.; withdrawal of aldehyde-methanol concentrate (AMC) from the RC is 7-9 dm 3 /h; concentration of ethyl alcohol in the AMC is 70.5% vol.; withdrawal of the fusel-ester-aldehyde concentrate (FEAC) from the decanter is 2-3 dm 3 /h. The calculated values (α) and (β) at RC operation in the selected hydrodynamic mode and the specified optimal technical parameters are shown in Table 3. Result analysis Analysis of the obtained results showed that by increasing the contact time of steam and liquid on the RC plates to 40 s the grade of extraction and concentration ratio of volatile alcohol impurities increased by 25-38%.At the same time, complex esters, methylacetate and isopropyl alcohol are completely extractedthose are the impurities that significantly degrade the quality of rectified alcohol in small amounts.This can be explained by the fact that there was more of a complete steam saturation with it volatile components on the plates of the column and liquid with volatile steam components, the mixing of liquid on adjacent plates during its pouring was excluded, so that the grade of phase equilibrium achievement was increased (Chen et al., 2010;Shyian et al., 1991).The prolonging in the residence time of the liquid on the plates longer than 40 s proved to be impractical due to an increase in the specific heating steam consumption without a significant increase in the grade of impurity extraction. 
Specific consumption of heating steam in experimental RC decreased by 40% (from 20 to 12 kg/dal of a.a.injected to the feed plate) compared to the column operating in the stationary mode.This is explained by the fact that the free cross-sectional area of the plates in the column of cyclic action was 50-75% smaller than that of the column operating in the stationary mode, and was 2.5-5.5% (Bausa et al., 2001). After the experimental distillation column for concentrating impurities was put into operation, the yield of rectified ethyl alcohol increased by 3.8% due to its extraction from the head fraction and other alcohol-containing waste without deteriorating its qualitative indicators.The use of the RC liquid purified from volatile impurities for carrying out hydroselection in the epyurating column made it possible to reduce the consumption of hot softened water by 2000 dm 3 /h (patent UA 119277.Method of producing rectified alcohol; Ukrainets et al., 2006). Conclusion 1. To increase the efficiency of mass exchange between liquid and steam in rectification columns the expediency of using a cyclic rectification technology that provides periodic pouring of liquid from plate to plate at continuous supply of alcohol-containing fractions and steam in the column is proved.2. To implement the technology, the plates have to be equipped with moving sections connected to driving mechanisms (e.g., pneumatic cylinders), which are controlled according to the program of the controller in consonance with a predetermined algorithm.3. Equipping the columns with flaky plates allows to increase their capacity by 34% due to intensification of liquid pouring by doing so simultaneously through the pouring and barbotage holes.4. At the moment of liquid pouring, steam velocity in barbotage holes should be 1,5-1 m/s.At this speed the pouring occurs within 1.7 s. 5. To ensure stable operation of the plates during the period of liquid retention and in order to intensify its pouring, their free cross-section area should instantly change from 5.5 to 51.7%.6.In working environment, the optimal technological parameters of column operation of cyclic action were established.It is experimentally proven, that prolonging the contact time of steam and liquid up to 40 s allows to increase the grade of extraction and concentration ratio of volatile impurities of alcohol by 25-38% compared to a column operating in stationary mode.In doing so, the complete extraction of esters, methylacetate and isopropyl alcohol is provided.7. The coaxial placement of the flakes on the plate fabric allows to eliminate the possibility of formation of stagnant zones and intensify the mass transfer between steam and liquid. 8. The use of innovative technology makes it possible to reduce the specific consumption of heating steam during processing of alcohol-containing fractions by 40% compared to the known ones.9.It is advisable to use the results of the research to design column mass exchange apparatuses of cyclic action.
5,111.4
2021-06-01T00:00:00.000
[ "Engineering", "Agricultural and Food Sciences" ]
A Study of Vanadate Group Substitution into Nanosized Hydroxyapatite Doped with Eu3+ Ions as a Potential Tissue Replacement Material In this study, nanosized vanadate-substituted hydroxyapatites doped with 1 mol% and 2 mol% Eu3+ ions were obtained via the precipitation method. To evaluate the structure and morphology of the obtained compounds, the XRPD (X-ray powder diffraction) technique, Rietveld refinement, SEM-EDS (scanning electron microscopy-energy-dispersive spectrometry) and TEM (transmission electron microscopy) techniques as well as FTIR (Fourier transform infrared) spectroscopy were performed. Moreover, the chemical formula was confirmed using the ICP-OES (Inductively coupled plasma optical emission spectroscopy spectroscopy). The calculated average grain size for powders was in the range of 25 to 90 nm. The luminescence properties of vanadium-substituted hydroxyapatite were evaluated by recording emission spectra and excitation spectra as well as luminescence kinetics. The crucial step of this research was the evaluation of the biocompatibility of the synthesized nanomaterials. Therefore, the obtained compounds were tested toward sheep red blood cells and normal human dermal fibroblast to confirm the nontoxicity and biocompatibility of new nanosized Eu3+ ion-doped vanadate-hydroxyapatite. Moreover, the final step of the research allowed us to determine the time dependent ion release to the simulated body fluid environment. The study confirmed cytocompatibility of vanadium hydroxyapatite doped with Eu3+ ions. Introduction The skeletal system of vertebrates is a very complex structure, not only due to the number and the variety of size or shape of the bones themselves, but also due to the complexity of tissues that are in the constant and inseparable neighborhood of bone [1,2]. The functional unit of bone is formed by concentric circles that surround a Haversian canal; the whole structure is called the osteon or Haversian system. This system creates space for nerves and blood vessels, thus enabling neurotransmission and nutrient delivery as well as the removal of metabolic products [2]. An equally important and inseparable part of bone structure is cartilage tissue, which adheres to bone structure and forms the articular surface. Bones also provide an attachment point for tendons, ligaments, and skeletal muscles and via the cooperation of all of these components, we can move our bodies around in a three-dimensional space [3]. Bone injuries, especially in the case of serious breakage such as open fractures are a problem not only in the regeneration of the bone tissue itself, but also in the tissues adjacent to the damaged bone such as cartilage, muscle, or nervous tissue as well as skin tissue [4][5][6]. In optimal conditions, repair processes can lead to the complete renewal of bone and soft tissue structure, however, the repair capacity of the nervous tissue is quite limited [7]. It is associated with a long recovery process and often does not bring the desired effects, cess of apoptosis [41,42]. Another study confirmed that vanadium plays an essential role in the metabolism of rodents and determines proper physiological development in rats [43,44]. Regarding this study, it was also confirmed that local treatment with vanadate leads to strengthening of healing wounds by increasing cellular organization in the tissue structure [45,46]. 
The aim of this study was to obtain the europium doped hydroxyapatite materials substituted with vanadate groups (VO 4 3− ) and evaluate their luminescence and biological properties primarily as a tissue filler in the wound healing process and their eventual use as a bioimaging material. For the first time, hydroxyapatite doped with Eu 3+ ions and substituted with (VO 4 3− ) groups were obtained. Structural study, luminescence properties as well as biocompatible features were evaluated, and the results clearly showed that two phased materials such as vanadate hydroxyapatites doped with Eu 3+ ions are promising biocompounds for medical use as a skin tissue filler material. The example synthesis for 2 g of Ca 9.9 Eu 0.1 (PO 4 ) 5 (VO 4 ) 1 (OH) 2 nanopowder material, 4.5142 g of Ca(NO 3 ) 2 ·4H 2 O was dissolved in 50 mL of distilled water and then 1.2748 g of (NH 4 ) 2 HPO 4 was dissolved separately in 50 mL of distilled water. To dissolve NH 4 VO 3 in distilled water, the substrate (0.2258 g) was mixed together with 75 mL of distilled water and then placed in a Teflon vessel. The dissolving process was carried out in a microwave reactor (ERTEC MV 02-02) for 30 min at a temperature of 150 • C and under autogenous pressure (8-11 bar). The stoichiometric amount of Eu 2 O 3 (0.0339 g) was digested in 0.2 mL HNO 3 (≥65.0%, Sigma-Aldrich, Saint Louis, MO, USA) and 3 mL of distilled water to obtain water-soluble europium nitrate (Eu(NO 3 ) 3 ). The product was recrystallized three times to eliminate HNO 3 residues by adding distilled water three times and evaporating at the temperature of 100 • C. Then, the obtained Eu(NO 3 ) 3 was dissolved in 25 mL distilled water and mixed with a water solution of Ca(NO 3 ) 2 ·4H 2 O and then both substrates were mixed with a previously amalgamated water solution of (NH 4 ) 2 HPO 4 and water solution of NH 4 VO 3 . After rapidly amalgamating all substrates together, the pH was adjusted with 1.5 mL of ammonia (NH 3 ·H 2 O 25% Avantor, Gliwice, Poland) to achieve pH = 9. Synthesis was conducted using magnetic stirring (500 rpm) at a temperature of 150 • C. After the synthesis, the obtained composites were washed out in distilled water to obtain pH = 7 and were further dried for 2 days at 70 • C. Afterward, powders were thermally treated at a temperature of 600 • C for 6 h and the build-up temperature and cooling temperature was set up at 3 • C per minute. The syntheses of the remaining nanomaterials were analogous. Material Characterization With the use of the X-ray diffraction (XRD) technique, the vanadium-substituted hydroxyapatite doped with Eu 3+ ion powders were examined to determine the crystalline structure of the obtained compounds. X-ray diffraction patterns were performed using a PANalytical X'Pert Pro X-ray diffractometer (Malvern Panalytical Ltd., Malvern, UK) with Ni-filtered Cu Kα radiation (U = 40 kV, I = 30 mA) in the 2θ range of 5-70 • . The step time for XRD analysis was estimated with 0.05 and the time per step was estimated as 0.7 second per step. The XRD-recorded patterns were compared with the reference hydroxyapatite pattern from the Inorganic Crystal Structure Database (ICSD). The concentrations of Eu, Ca, V, and P in the resulting sample solutions were determined by the inductively coupled plasmaoptical emission spectrometer (ICP OES) Agilent 720 (Santa Clara, CA, USA)(with standard setting). 
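As a consistency check on the example synthesis described above, the reagent masses for 2 g of Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 can be back-calculated from the target stoichiometry. The snippet below is a rough sketch using standard atomic weights; it reproduces the quoted weigh-ins (4.5142, 1.2748, 0.2258 and 0.0339 g) to within rounding.

```python
# Back-of-the-envelope check of the reported reagent masses for 2 g of
# Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 (illustrative; standard atomic weights).
M = {"Ca": 40.078, "Eu": 151.964, "P": 30.974, "V": 50.942,
     "O": 15.999, "H": 1.008, "N": 14.007}

def molar_mass(counts):
    return sum(M[el] * n for el, n in counts.items())

target = molar_mass({"Ca": 9.9, "Eu": 0.1, "P": 5, "V": 1, "O": 5*4 + 1*4 + 2, "H": 2})
n_target = 2.0 / target                      # moles of apatite to prepare

reagents = {
    "Ca(NO3)2*4H2O": (molar_mass({"Ca": 1, "N": 2, "O": 6 + 4, "H": 8}), 9.9),
    "(NH4)2HPO4":    (molar_mass({"N": 2, "H": 9, "P": 1, "O": 4}), 5),
    "NH4VO3":        (molar_mass({"N": 1, "H": 4, "V": 1, "O": 3}), 1),
    "Eu2O3":         (molar_mass({"Eu": 2, "O": 3}), 0.1 / 2),     # 2 Eu per formula unit of oxide
}
for name, (mm, stoich) in reagents.items():
    print(f"{name}: {n_target * stoich * mm:.4f} g")
```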
The samples were prepared by dissolving 100 mg of nanopowder material in 2 mL of 70% HNO 3 (ASC, Sigma-Aldrich, Saint Louis, MO, USA) at the temperature 120 • C and by gradual adding of deionized water to the volume of 50 mL. The concentration of P, Eu, and V ions were measured in the solutions diluted 20 times and the concentration of Ca ions was measured in the solution diluted 500 times. For the measurements, three parallel samples of the solution were prepared and analyzed by the ICP-OES method (Agilent, model 720, Santa Clara, CA, USA), (with standard setting) and compared with standard curves in the concentration range of 0.05 to 5.00 mg/mL for Ca, Eu, and 100 to 200 mg/mL for P ions. To evaluate the presence of phosphate and vanadate groups in the structure of obtained compounds, IR spectra were measured in the range of 4000-400 cm −1 (mid-IR) at 295 K. The measurements of attenuated total reflectance ATR-FTIR were recorded with resolution 4 cm −1 (32 scans) using a Nicolet iS50 infrared spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). The analysis of the elemental mapping of the selected sample was determined by using an FEI Nova NanoSEM 230 scanning electron microscope operating at an acceleration voltage in the range 3.0-15.0 kV and spot size of 4.0-4.5. The samples were prepared by evenly spraying a layer of graphite before observation. The morphology and nanostructure of the nanoparticles were investigated via high resolution transmission electron microscopy (HRTEM) using a Philips CM-20 Super Twin microscope operated at 200 kV. The selected material samples were prepared by the dispersion of powders in methanol. Then, a drop of suspension was deposited on a copper microscope grid covered with perforated carbon. Luminescence Properties The luminescence kinetics, emissions, and excitation spectra of vanadium-substituted apatite compounds doped with Eu 3+ ions were determined with an FLS980 fluorescence spectrometer (Edinburgh Instruments, Kirkton Campus, UK). During the measurements of emission and excitation spectra, a 450W Xenon lamp was used as an excitation source and the radiation from the lamp was filtrated with a 300 mm monochromator equipped with a holographic grating (1800 grooves per mm, a blaze of 250 nm). To record the luminescence kinetics, a microsecond flashlamp (uF2) was used as a source of excitation and a Hamamatsu R928P photomultiplier (Hamamatsu, Hamamatsu City, Japan) was used as a detector. Both excitation and emission spectra were adjusted to the intensity of the excitation source according to the specifications of the device. The excitation spectra and luminescence kinetics were recorded at 618 nm according to the most intense electric dipole transition (from level 5 D 0 → 7 F 2 level) and excited at 396 nm [47][48][49]. Preparation of Nanosized Vanadium-Substituted Hydroxyapatite Suspension The stocks of nanosized vanadium-substituted hydroxyapatites doped with Eu 3+ ions were prepared by the suspension of the used compounds in distilled water in the concentration of 1 mg/mL. Then, each stock was bath-sonicated for 1h at RT. Freshly prepared colloids were used in biological experiments. 
Cell Culture and Cytotoxicity Assay Normal human dermal fibroblasts (NHDF, Sigma-Aldrich, Saint Louis, MO, USA) cell line was maintained in high glucose Dulbecco's modified Eagle medium (DMEM) with L-glutamine (Biowest, Nuaillé, France) and supplemented with 200 U/mL penicillin and 200 µg/mL streptomycin and 10% heat-inactivated fetal bovine serum (FBS, South America origin, Biowest, Nuaillé, France). The cell line was incubated in standard conditions in a humified atmosphere of 95% air and 5% CO 2 at 37 • C. The cell line was passaged three times before the experiments were performed. To evaluate their potential nontoxicity, the obtained compounds were tested on normal human dermal fibroblasts (NHDF) cell line (Sigma-Aldrich, Saint Louis, MO, USA) via the MTT cell viability assay. MTT, also known as the cytotoxicity assay, is a colorimetric assay used for establishing the percent of metabolically living cells. NHDF cells were seeded at a density of 10,000 cells per well in a 48-well plate and, after 24 h, when confluency was obtained, 60% to 70% cells were treated with selected compounds at two different concentrations of 50 µg/mL and 100 µg/mL. After 24 h of treatment with vanadium substitutes, hydroxyapatite composites doped with Eu 3+ ions, the medium containing the tested compounds was removed and cells were washed out twice with sterile PBS (Biowest, Nuaillé, France) to remove detached and dead cells and to accurately rinse nanoparticles. After washing, freshly prepared MTT (Sigma-Aldrich, Saint Louis, MO, USA) reagent at a concentration of 0.5 mg per 1 milliliter was dissolved in sterile PBS and added to the cells that were treated with compounds and to the non-treated cells, which were used as a control group and were set up at cell viability of 100%. Cells were incubated for 3 h at 37 • C in a humified atmosphere of 95% air and 5% CO 2 . After the incubation process, PBS containing MTT was removed, and formazan crystals produced by metabolically active cells were dissolved by adding DMSO (Chempur, PiekaryŚląskie, Poland). Absorbance was read at 560 nm and 670 nm (background reference). The experiment was conducted three times. The viability of the used cell lines was estimated using the following formula: Hemolysis Assay Sterile and defibrinated sheep blood (Pro Animali) was washed out three times in sterile PBS and ultimately suspended in sterile PBS (Biowest, pH 7.4) at a ratio of 1:1. Selected vanadium-substituted hydroxyapatite nanoparticles were tested toward sheep red blood cells at concentrations of 50 µg/mL and 100 µg/mL. To establish positive control, sheep erythrocytes were combined with 10% SDS (sodium dodecyl sulfate) and treated as 100% of hemolysis, negative control was obtained by mixing sheep erythrocytes with sterile PBS. After 2 h of incubation at 37 • C, positive and negative control and red blood cell samples treated with selected hydroxyapatite-based composites were centrifuged (5000 RPM, 5 min) to obtain supernatant and the optical density was measured at 540 nm (Varioscan Lux). The hemolysis percentage was calculated using the formula below: Red blood cell morphology as well as the integrity of cell membrane were observed via confocal microscopy (Olympus IX83 Fluoview FV 1200, 10× magnification with additional 4× digital magnification). Sheep erythrocytes were prepared as described above and treated with selected compounds; positive and negative controls were also prepared. 
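The two formulas referred to above were lost in extraction. The sketch below implements the forms these assays conventionally take (viability relative to the untreated control, hemolysis relative to the SDS and PBS controls); the exact expressions are assumptions rather than a transcription of the paper.

```python
def viability_percent(a_sample: float, a_control: float, a_blank: float = 0.0) -> float:
    """Assumed MTT form: background-corrected absorbance relative to the untreated control."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

def hemolysis_percent(a_sample: float, a_neg: float, a_pos: float) -> float:
    """Assumed form: OD540 relative to the PBS (negative) and 10% SDS (positive) controls."""
    return (a_sample - a_neg) / (a_pos - a_neg) * 100.0

# Hypothetical absorbance readings, for illustration only
print(viability_percent(a_sample=0.82, a_control=0.90, a_blank=0.05))   # ~90.6 %
print(hemolysis_percent(a_sample=0.08, a_neg=0.05, a_pos=1.20))         # ~2.6 %
```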
After 2 h of incubation at 37 • C, red blood cells were centrifuged (5000 RPM, 5 min), the supernatant was gently removed, and cell precipitate was suspended with sterile PBS at a ratio of 1:1. A blood smear was prepared by transferring 5 µL of the sample onto a microscope slide and using a coverslip to obtain the smear. Time Dependent Ion Release to SBF Simulated body fluid (SBF) solution with an optimal value of pH and physiological temperature closely mimics blood plasma in the human body. By using SBF, the rate of ion release from the tested biomaterials to the fluid environment can be easily evaluated, especially when further in vitro and in vivo tests are planned [50,51]. To evaluate ion release, the two representatives from the two obtained nanopowder series materials were selected. Ca 9.9 Eu 0,1 (PO 4 ) 5 (VO 4 ) 1 (OH) 2 and Ca 9.8 Eu 0,2 (PO 4 ) 5 (VO 4 ) 1 (OH) 2 nanopowder materials were used in this experiment because the XRD diffractograms showed a clear hexagonal structure of hydroxyapatite. These were placed in the Falcon tubes separately and the previously prepared simulated body fluid (pH = 7.40) was gently added to obtain a final concentration of 1 mg/mL. The simulated body fluid was prepared by accurately following the procedure created by Kokubo et al. [50]. The samples were placed in the shaker incubator and the temperature was set to 37 • C with a rotation of 100 rpm. The period when samples were collected was set to 0 min, 5 min, 15 min, 30 min, 45 min, 60 min, 360 min, and 1440 min of incubation with simultaneous rotation. Each time, 3 mL of the fluid sample was collected in a new, separated Falcon tube, and 3 mL of fresh SBF was added to the remaining solution to refill the missing volume. Subsequently, when all fluid samples were collected, 0.2 mL of 70% HNO 3 (ASC, Sigma-Aldrich) was added to all representatives and deionized water was added to obtain a final volume of 25 mL. When all samples were prepared, the presence of investigated ions such as Ca, P, Eu and V was identified by the inductively coupled plasma-optical emission spectrometer (ICP OES) Agilent 720 instrument. Analysis of Structure and Morphology Two series of hydroxyapatite-based nanopowders doped with 1 mol% Eu 3+ ions and 2 mol% Eu 3+ ions and substituted with different amounts of vanadate groups were synthesized using the precipitation method. XRD diffractograms clearly showed the hexagonal structure of hydroxyapatite for powder materials that contained 1 mol% Eu 3+ ions and 2 mol% Eu 3+ ions and substituted with up to two vanadate groups substituted in the place of phosphate groups (Figure 1a,b). The delicate signals of hydroxyapatite hexagonal structure can be noticed among samples that contain 1 mol% Eu 3+ and 2 mol% Eu 3+ and are substituted with three (VO 4 3− ), however, the more vanadium appeared in the sample, the more the calcium pyrovanadate phase was visible. The results for two series of nanosized materials containing up to two vanadate groups, corresponded to the standard ICSD database diffractogram pattern for hydroxyapatite crystals (ICSD-262004). For the above-mentioned powders, signals in the range of 32 • to 34 • in the experimental patterns corresponded to distinctive phosphate groups of the hydroxyapatite crystal structure (ICDS-262004). Broader bases of the peaks, especially in the range of 32 • to 34 • , may indicate the nanosized structure of the nanosized materials substituted with (VO 4 3− ) 1-3 and doped with 1 mol% and 2 mol% of Eu 3+ ions (Figure 1a,b) [52,53]. 
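The link between the broadened reflections noted above and a nanometric crystallite size can be illustrated with the Scherrer relation. The grain sizes reported in this work come from Rietveld refinement, so the snippet below is only an order-of-magnitude sketch with assumed peak-width values for the reflections near 2θ = 32-34° (Cu Kα radiation).

```python
import numpy as np

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size [nm] from the Scherrer relation D = K*lambda / (beta*cos(theta))."""
    beta = np.radians(fwhm_deg)             # peak FWHM in radians
    theta = np.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * np.cos(theta))

# Assumed FWHM values of 0.2-0.35 deg for a reflection at 2theta = 33 deg
for fwhm in (0.2, 0.35):
    print(f"FWHM {fwhm} deg -> D ~ {scherrer_size(fwhm, 33.0):.0f} nm")
```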
The gradual increase in the number of vanadate groups in samples of the obtained nanopowder materials eventually led to the gradual decrease in the intensity of the signal from the phosphate groups and the increase in the intensity of the signal from the vanadate groups, which is certainly observed in the XRD diffractograms (Figure 1a,b). Interestingly, the occurrence of another phase was noticeable in the range of 27 • -32 • for the XRD experimental patterns of both series of nanopowder materials that were substituted with three and more vanadate groups. It appears that a progressive increase in the intensity of the signal from the range 27 • -32 • came from Ca 2 V 2 O 7 (calcium pyrovanadate) (Figure 1a,b). Experimental data of the obtained materials substituted with (VO 4 3− ) 3-6 groups were compared with the ICSD XRD pattern of Ca 2 V 2 O 7 (ICSD-421266). Our results indicate that the precipitation method used in the experiment was sufficient to obtain materials substituted with up to two (VO 4 3− ) groups in place of (PO 4 3− ) groups in the crystal hydroxyapatite structure. The hydrothermal synthesis method seems to be more adequate to incorporate more than three vanadate groups into the hydroxyapatite crystalline framework [54]. Nonetheless, there were also some data that indicate that the ammonia environment is not suitable to obtain vanadate-substituted hydroxyapatite. To obtain an alkaline environment of a chemical reaction during the synthesis of vanadate-substituted hydroxyapatite, NaOH should be substituted for NH 3 ·H 2 O [54,55]. Moreover, other substrates can be used during the synthesis of vanadate-substituted hydroxyapatite. Some data suggest that V 2 O 5 could be used as a substitute of NH 4 VO 3 and P 2 O 5 as a substitute of (NH 4 ) 2 HPO 4 [54][55][56][57]. 2 , where x is equal 1, 2, 3, 4, 5, and 6. The obtained materials were thermally treated at a temperature of 600 • C for 6 h. The XRD results were compared with the ICSD database hydroxyapatite and calcium pyrovanadate patterns. The signals from hydroxyapatite are labeled with (*) and signals from calcium pyrovanadate are labeled with (•). The structural refinement was calculated by the Maud program (version 2.99) and was based on the hexagonal structure of hydroxyapatite and triclinic calcium pyrovanadate crystals indexing of the CIF (Crystallographic Information File) [58,59]. The quality of the structural refinement was evaluated via R-values (see Table S1 and Figure S1). Presence of apatite structure as well as secondary phase formation of calcium pyrovanadate among the nanopowder materials Ca 9.9 Eu 0.1 (PO 4 ) 6−x (VO 4 ) x (OH) 2 , and Ca 9.8 Eu 0.2 (PO 4 ) 6−x (VO 4 ) x (OH) 2 (where x is equal 1, 2, 3, 4, 5 and 6) was confirmed. Moreover, the calculated average grain size for powders was in the range of 25 to 90 nm. More details regarding Rietveld refinement are presented in the Supplementary Materials. The FTIR spectra of the second series of nanopowder materials containing Ca 9.8 Eu 0.2 (PO 4 ) 6−x (VO 4 ) x (OH) 2 (x = 1, 2, 3, 4, 5 and 6) confirmed the crystalline hydroxyapatite structure due to the presence of characteristic active vibrational bands that refer to hydroxyl groups (OH − ) and most importantly to phosphate groups (PO 4 3− ) ( Figure 2). The absorption bands of the phosphate group at 560.70 cm −1 and 600.24 cm −1 corresponded to the double degenerate bending mode (ν2) of the P-O-P bonds and triply degenerate bending mode (ν4) of the P-O bonds, respectively [18,60]. 
The absorption bands of the phosphate group at 962.34 cm −1 and 1086.20 cm −1 correlated with the non-degenerative symmetric stretching mode (ν 1 ) of P-O and the triply degenerative asymmetric stretching mode (ν 3 ) of the P-O bond, respectively [60][61][62]. All positions of the bands corresponded exactly to the hydroxyapatite structure, but only in compounds that contain up to three (VO 4 3− ) groups. It is also noticeable (Figure 2) that the additional incorporation of (VO 4 3− ) groups into the hydroxyapatite framework unalterably entails the shift in the absorption bands toward lower wavelengths [63]. Gradual increment of the number of vanadium groups substituted for phosphate groups leads to the appearance of characteristic vibrational bands that invoke the appearance of Ca 2 V 2 O 7 crystal structure [63]. It is particularly observed for the sample with the highest number (x = 6) of vanadate groups substituted for phosphate groups. Typical vibrational bands at 417.03 cm −1 (ν 4 ) and 561. The TEM images clearly show the nanostructure of Ca 9.8 Eu 0.2 (PO 4 ) 5 (VO 4 ) 1 (OH) 2 , Ca 9.8 Eu 0.2 (PO 4 ) 4 (VO 4 ) 2 (OH) 2 , and Ca 9.8 Eu 0.2 (PO 4 ) 3 (VO 4 ) 3 (OH) powders ( Figure 3). Additionally, according to the SAED (selected area electron diffraction) technique, all selected materials presented well developed spotty rings, which signify the crystalline structure of the obtained powders ( Figure 3) [67,68]. As also observed on the images (Figure 3), nanosized particles had the tendency to form into larger agglomerates. The results of ICP-OES measurements showed the presence and concentration of Eu 3+ ions in the compounds containing 2 mol% of the doped lanthanide in the samples of Ca 9.8 Eu 0.2 (PO 4 ) 6−x (VO 4 ) x (OH) 2 where "x" is equal 1, 2, and 3. The presence of the vanadium element was also confirmed as well as calcium and phosphorous (Table 1) and the content of the elements approximately matched the theoretical number of particular elements in the obtained nanosized materials. Indeed, all desired ions were present in the nanomaterials, and the ICP-OES measurements showed an almost identical content of elements when compared with theoretical calculations. However, the content of vanadium ions seemed to be less when compared to the formula that was previously established. These data resulted from the formula for the hydroxyapatite crystalline calculation, hence it can be observed that phosphorus is indeed substituted for vanadium, but in a lower amount than assumed. Simultaneously, the second phase of calcium pyrovanadate appeared (Figures 1 and 2) and was probably the result of more vanadium ions being incorporated into this structure and not into the hydroxyapatite lattice. Moreover, the SEM-EDS mapping of Ca 9.8 Eu 0.2 (PO 4 ) 4 (VO 4 ) 2 (OH) 2 confirmed the presence of all theoretical components such as oxygen, phosphorous, vanadium, and europium in the hydroxyapatite crystalline structure. The performed analysis also showed that all the components were equally distributed over the entire surface of the tested material (Figure 4). Luminescence Properties Based on the XRD results ( Figure 1) and partially on the FTIR spectra results (Figure 2), it is clear that in both series of obtained compounds, only those that contained 1 mol% and 2 mol% of europium (III) ions and were substituted with one and two (VO 4 3− ) groups characterized by crystalline hydroxyapatite structure. 
The more phosphate groups are substituted by vanadate groups, the less of the pure hydroxyapatite structure is visible and the more calcium pyrovanadate appears. Taking into consideration the two-phase character of the obtained compounds, we wanted to evaluate whether the different crystalline phases influenced the luminescence properties of Eu3+ ions and whether they diminished or enhanced these features. Therefore, the presence of Eu3+ ions incorporated into the structure of the hydroxyapatite-based materials was confirmed by luminescence studies. Good-quality emission spectra of both series of vanadate hydroxyapatite compounds, Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2 (x = 1, 2, 3, 4, 5, and 6), were measured in the spectral range of 500 to 750 nm (Figure 5a,b). For both series of materials, the spectra were recorded under 396 nm excitation as a function of the concentration of optically active ions. The recorded spectra were normalized to the characteristic europium transition 5D0 → 7F1. Five typical transitions of Eu3+ ions were present in the spectra, at wavelengths of 575 nm, 585 nm, 618 nm, 660 nm, and 710 nm, corresponding to transitions from the excited 5D0 level to the 7F0-4 levels. The transitions were assigned as 5D0 → 7F0, 5D0 → 7F1, 5D0 → 7F2, 5D0 → 7F3, and 5D0 → 7F4, respectively, with increasing wavelength. The most intense peak corresponded to the 5D0 → 7F2 transition, for which emission was observed at wavelengths in the range of 600-625 nm, with the maximum intensity at 618 nm (see Figure 5a,b) [47,60,69]. A clear red emission from Eu3+ ions incorporated into the vanadate hydroxyapatite materials was observed. According to the Judd-Ofelt theory, the 5D0 → 7F0 transition is strictly forbidden, and its occurrence indicates a violation of the selection rules of that theory [47]. By analyzing the 5D0 → 7F0 transition, the number of crystallographic sites occupied by europium ions in the host structure can be inferred. The appearance of this transition indicates that Eu3+ ions are located in a low-symmetry environment, and it can only be observed when Eu3+ ions occupy sites with Cn, Cnv, or Cs local symmetry [70]. It can be seen that, for the 5D0 → 7F0 transition, the band of the materials containing only one (VO4 3−) group substituted for a (PO4 3−) group clearly stood out from the spectra of the other compounds. The 5D0 → 7F0 transition of the materials Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2 (x = 1) was noticeably split into three components, which indicates that Eu3+ ions occupy three different crystallographic sites with Cn, Cnv, or Cs local symmetry in the hydroxyapatite structure [71]. As the 7F0 level is non-degenerate, it does not exhibit crystal-field splitting, so the observed splitting of this band indicates at least three different emitting Eu3+ species [48]. In the case of the 5D0 → 7F1 transition, the same tendency was observed: for the above-mentioned materials with only one substituted (VO4 3−) group, the band was broad and not visibly split, unlike in the other materials that contained two or more (VO4 3−) groups.
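The emission spectra discussed above were normalized to the 5D0 → 7F1 transition, which is a magnetic-dipole transition and relatively insensitive to the local environment. A common complementary way to quantify local symmetry is the asymmetry ratio R = I(5D0 → 7F2)/I(5D0 → 7F1); the paper does not report R explicitly, so the sketch below is purely illustrative and uses a synthetic spectrum rather than measured data.

```python
import numpy as np

def integrated_intensity(wavelength_nm, intensity, lo, hi):
    """Integrate an emission band between lo and hi (nm) with the trapezoid rule."""
    m = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    x, y = wavelength_nm[m], intensity[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Synthetic spectrum: two Gaussian bands standing in for
# 5D0->7F1 (~585-600 nm) and the hypersensitive 5D0->7F2 (~600-625 nm).
wl = np.linspace(500.0, 750.0, 2501)
spec = (1.0 * np.exp(-((wl - 591.0) / 4.0) ** 2)     # 5D0 -> 7F1
        + 2.6 * np.exp(-((wl - 618.0) / 5.0) ** 2))   # 5D0 -> 7F2

i_f1 = integrated_intensity(wl, spec, 580.0, 600.0)
i_f2 = integrated_intensity(wl, spec, 600.0, 640.0)

# Normalize to 5D0->7F1, as done for the measured spectra, then report R.
spec_norm = spec / spec[np.argmin(np.abs(wl - 591.0))]
print(f"Normalized 7F2 peak height: {spec_norm[np.argmin(np.abs(wl - 618.0))]:.2f}")
print(f"Asymmetry ratio R = I(7F2)/I(7F1) = {i_f2 / i_f1:.2f}")
```

A larger R generally indicates a more distorted, less centrosymmetric Eu3+ environment, which is why it is often tracked alongside the 5D0 → 7F0 splitting discussed above.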
For the compounds Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2 where x = 2, 3, 4, 5, or 6, the 5D0 → 7F1 transition is split into three components, and these results correspond to other studies in which such characteristic splitting occurred [72,73]. The most intense transition for both series of the tested nanomaterials was the so-called hypersensitive 5D0 → 7F2 transition. For this transition, the same trend was maintained: for the compounds Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and Ca9.8Eu0.2(PO4)5(VO4)1(OH)2, one broad band was visible. The hypersensitive transition of the remaining compounds was split into two distinct bands, in agreement with the results obtained in other studies [63,72]. The excitation spectra were measured in the wavelength range of 240-600 nm with the emission monitored at 618 nm (see Figure 6). The recorded spectra were normalized to the characteristic europium transition 7F0 → 5D2. The spectra of both series of compounds, Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2 (x = 1, 2, 3, 4, 5, 6), showed peaks from the 7F0 → 5H(3-7), 7F0 → 5L6, and 7F0 → 5D2 transitions. In the spectra recorded for Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2, where x ranged from 1 to 6, peaks corresponding to the 7F0 → 5F(1-4) and 7F0 → 3P0 transitions could also be seen. In the spectra of the (VO4 3−)x materials where x ranged from 1 to 5, apart from the transitions characteristic of Eu3+ ions, an intense peak was visible at approximately 270 nm, corresponding to the charge transfer of an electron between the oxygen ion and the europium ion (O2− → Eu3+). Our results are consistent with previous reports of charge transfer between O2− and Eu3+ ions in the hydroxyapatite structure [60,70,74]. In the spectra of the (VO4 3−)6 materials, this peak was masked by a more intense peak whose maximum lay at a wavelength below 240 nm. With an increasing number of vanadate groups in the hydroxyapatite structure, the intensity of the broad signal originating from electron transfer from the oxygen ion to the vanadium ion (O2− → V5+) increased. This is an expected tendency, since the charge-transfer intensity grows with the number of vanadate groups substituted for phosphate groups. Similar results have been presented in other studies and correspond well to ours [63,72,73,75,76]. The maximum intensity of this peak was observed in the spectra of the Ca9.9Eu0.1(VO4)6(OH)2 and Ca9.8Eu0.2(VO4)6(OH)2 samples and appeared at wavelengths in the range of 380-400 nm. It is clearly visible for both series of samples that, for the compounds Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2 with x = 1, the O2− → Eu3+ charge transfer was the most prominent feature of the europium excitation spectra. The more vanadate groups were present, the more intense the O2− → V5+ charge-transfer band became, and this held for both series of materials [72,73,75]. Moreover, the charge transfer from oxygen to vanadium seems to be slightly shifted toward higher wavelengths, which may be caused by the incorporation of the vanadate groups into the hydroxyapatite framework.
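The decay times compared in the next passage are normally extracted by fitting the measured decay curves with an exponential model. The sketch below generates a synthetic single-exponential decay and recovers its lifetime with a linear fit in log space; the lifetime value and noise level are assumptions, not data from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic decay curve I(t) = I0 * exp(-t / tau) with a little multiplicative noise.
TAU_MS = 0.85                      # assumed lifetime, milliseconds
t = np.linspace(0.0, 5.0, 200)     # time axis, ms
signal = np.exp(-t / TAU_MS) * (1.0 + 0.01 * rng.standard_normal(t.size))

# For a single-exponential decay, ln I(t) is linear in t with slope -1/tau,
# so a first-order polynomial fit recovers the lifetime.
slope, _intercept = np.polyfit(t, np.log(signal), 1)
tau_fit = -1.0 / slope
print(f"Recovered lifetime: {tau_fit:.3f} ms (true value {TAU_MS} ms)")
```

For Eu3+ in multi-site hosts, a bi-exponential or average-lifetime analysis is often needed, but the same fitting logic applies.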
Nevertheless, the peak positions in the excitation and emission spectra were in agreement with those expected for Eu3+ ions incorporated into the hydroxyapatite structure in exchange for calcium(II) ions [30,31,47,74]. For both the 1 mol% and 2 mol% Eu3+-doped hydroxyapatites, there was a significant difference in the decay time when more than one vanadate group was substituted for phosphate groups. For the 1 mol% Eu3+:CaHAp and 2 mol% Eu3+:CaHAp samples with (VO4 3−)x groups where x ranges from 2 to 6, the decay times were similar and much shorter than for the hydroxyapatites containing only one vanadate group (see Figure 7). This tendency corresponds to the emission and excitation spectra recorded for both series of compounds (Figures 5 and 6), where a clear difference can be observed between the compounds containing only one vanadate group in the hydroxyapatite structure and the rest of the compounds with a higher number of (VO4 3−) groups.
Biological Properties
We decided to evaluate the biocompatibility of selected compounds from both series. Therefore, from the first series, which contained a constant concentration of Eu3+ ions (1 mol%), compounds with 1, 2, 3, and 6 vanadate groups were selected. From the second series, with a 2 mol% concentration of europium(III) ions, the compounds were selected analogously. The compounds with 1-3 vanadate groups substituted for phosphate groups were selected for the evaluation of potential toxicity because our compounds maintained the hexagonal hydroxyapatite structure only up to three substituted groups. Moreover, the compounds containing the greatest number of vanadate groups were chosen for this experiment to establish whether the increased vanadium content had a potential toxic effect. The hemolysis assay (Figure 8) showed that the tested compounds were non-toxic, with hemoglobin release remaining below the 5% level of acceptable hemolysis that naturally occurs in the blood system [77]. The results were compared with red blood cells treated with 1% SDS (sodium dodecyl sulfate), which served as the positive control and caused 100% hemoglobin release and cell membrane disruption (data not shown). The negative control, which was maintained at ~1% hemoglobin release, was obtained by treating purified erythrocytes with PBS buffer (pH = 7.4). The data revealed that all of the compounds were biocompatible. Moreover, the red blood cells remained round in shape, and no pathological alterations of the cell membrane were observed (Figure 9). The results also clearly showed that the selected compounds were biocompatible toward the NHDF cell line (normal human dermal fibroblasts). Based on the hemolysis assay results, the marginal compounds of the two series (those with the lowest and the highest vanadate content) were selected for testing. Cell viability was maintained at around 100% even when the cells were treated with the highest concentration of the prepared double-distilled-water-based colloids (Figure 10). The safest concentration of the tested compounds was 50 µg/mL, at which the viability of the NHDF cell line was slightly higher than that of cells treated with 100 µg/mL.
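Both the hemolysis assay and the MTT viability assay reduce to simple ratios of absorbance readings against controls. The snippet below shows the standard formulas with made-up absorbance values; the exact readout wavelengths and plate layout used in the study are not reproduced here, so the numbers are placeholders.

```python
def hemolysis_percent(a_sample: float, a_negative: float, a_positive: float) -> float:
    """Percent hemolysis relative to PBS (negative) and 1% SDS (positive) controls."""
    return 100.0 * (a_sample - a_negative) / (a_positive - a_negative)

def mtt_viability_percent(a_treated: float, a_untreated: float, a_blank: float = 0.0) -> float:
    """Percent viability relative to untreated cells (blank-corrected absorbances)."""
    return 100.0 * (a_treated - a_blank) / (a_untreated - a_blank)

# Placeholder absorbance readings (e.g. ~540 nm for hemoglobin, ~570 nm for MTT formazan)
print(f"Hemolysis: {hemolysis_percent(0.08, 0.05, 1.20):.1f} %")      # well below the 5 % limit
print(f"Viability: {mtt_viability_percent(0.92, 0.90, 0.05):.1f} %")  # around 100 % viability
```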
While many studies indicate the toxicity of vanadium compounds such as ammonium metavanadate, calcium orthovanadate, and calcium pyrovanadate toward the mammalian gastrointestinal, respiratory, urinary, and reproductive systems, our compounds showed non-toxic behavior toward red blood cells and normal human dermal fibroblasts [78][79][80]. The toxicity of vanadium is mainly associated with overdosage of this element, as has been evaluated in some studies [78]. On the other hand, there is much evidence indicating a positive influence on living organisms, for example, neuroprotective and neuroregenerative properties [41,42,44]. Nevertheless, neither the high concentration used in the case of our materials (100 µg/mL) nor the increased number of vanadate groups incorporated into the hydroxyapatite structure caused a harmful effect on erythrocytes or the NHDF cell line.
Figure 8. Hemolysis assay of the compounds Ca9.9Eu0.1(PO4)6−x(VO4)x(OH)2 and Ca9.8Eu0.2(PO4)6−x(VO4)x(OH)2, where x = 1, 2, 3, 6. The concentration of the tested compounds was 50 µg/mL and 100 µg/mL. The red line marks the 5% level of acceptable hemoglobin release. The results were compared with red blood cells treated with PBS buffer (1% hemolysis, negative control) and 1% SDS (100% hemolysis, positive control).
Figure 9. Red blood cell smears were performed with the use of purified sheep erythrocytes. For this experiment, selected compounds were used from the first series, Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and Ca9.9Eu0.1(VO4)6(OH)2, and from the second series, Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 and Ca9.8Eu0.2(VO4)6(OH)2. The final concentration of the tested compounds was 100 µg/mL. The morphology of red blood cells treated with the tested compounds was compared with that of red blood cells treated with PBS buffer (negative control) and 1% SDS (positive control).
Figure 10. MTT cytotoxicity assay of compounds selected from the first series, Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and Ca9.9Eu0.1(VO4)6(OH)2, and from the second series, Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 and Ca9.8Eu0.2(VO4)6(OH)2. The MTT assay was performed using the NHDF cell line; the final concentration of the tested compounds in the culture medium was 50 µg/mL and 100 µg/mL.
Ion Release to SBF
Ion release into the simulated body fluid environment was evaluated for two selected nanopowder materials, Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 (Tables 2 and 3). The Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 nanopowder materials were used in this experiment because their XRD diffractograms showed a clear hexagonal hydroxyapatite structure (Figure 1). Therefore, during the experiment, the ion release from the solid hydroxyapatite crystalline materials was determined. The results showed that Ca and P ions were already present in the fluid for both samples, at a concentration of 11 ± 0.5 ppm for calcium and from 2.45 ppm to 2.84 ± 0.15 ppm for phosphorus ions (Tables 2 and 3). Their presence derives from the components of the simulated body fluid itself [50,51].
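Because the SBF already contains calcium and phosphorus, the release attributable to the apatite powders is best expressed as the measured concentration minus the SBF baseline. A small sketch of this bookkeeping, including the ppm-to-µmol/L conversion, is given below; the numbers approximate values quoted in the text and the vanadium reading is a placeholder, so this is not the authors' data processing.

```python
MOLAR_MASS = {"Ca": 40.078, "P": 30.974, "V": 50.942, "Eu": 151.964}  # g/mol

def net_release_umol_per_l(element: str, measured_ppm: float, baseline_ppm: float) -> float:
    """Net release above the SBF baseline, converted from ppm (mg/L) to umol/L."""
    net_mg_per_l = measured_ppm - baseline_ppm
    return 1000.0 * net_mg_per_l / MOLAR_MASS[element]

# (element, baseline in fresh SBF, value after incubation), in ppm
readings = [("Ca", 11.0, 13.84), ("P", 2.45, 2.8), ("V", 0.0, 0.12)]
for element, baseline, measured in readings:
    print(f"{element}: net release ~ {net_release_umol_per_l(element, measured, baseline):6.1f} umol/L")
```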
Nevertheless, a gradual increase was noticed during longer incubation: after 1440 h, the calcium concentration reached 13.84 ± 0.7 ppm for Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and 14.9 ± 0.7 ppm for Ca9.8Eu0.2(PO4)5(VO4)1(OH)2, while the phosphorus concentration was maintained at around 2.8 ± 0.15 ppm for Ca9.9Eu0.1(PO4)5(VO4)1(OH)2 and 2.9 ± 0.15 ppm for Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 (Tables 2 and 3). Slightly different behavior was observed for vanadium: for Ca9.9Eu0.1(PO4)5(VO4)1(OH)2, vanadium release was already observed after 45 min of incubation, and for Ca9.8Eu0.2(PO4)5(VO4)1(OH)2 after 30 min, with a tendency toward gradual release in both cases (Tables 2 and 3). Virtually no release of Eu ions was observed, although a slight amount was detected for the sample doped with the higher lanthanide concentration (Tables 2 and 3). For the two tested representatives, all investigated ions showed relatively slow release into the simulated body fluid, which is a promising result, and these materials can be used in our future in vitro and in vivo tests.
Conclusions
This paper presents the structural characterization, luminescence, and biological properties of vanadium hydroxyapatite doped with 1 mol% and 2 mol% of Eu3+ ions. The samples obtained via the precipitation method and thermally treated at 600 °C showed a hexagonal hydroxyapatite structure for up to three vanadate groups substituted for phosphate groups. X-ray diffractometry and FTIR spectroscopy confirmed that the gradual increase in (VO4 3−) groups in the obtained nanopowder materials led to a gradual decrease in the intensity of the signal from the (PO4 3−) groups of the hydroxyapatite framework and an increase in the intensity of the signal from the (VO4 3−) groups of calcium vanadate. The luminescence study showed the characteristic red emission of Eu3+ ions in all doped samples. Our study also presented how the number of vanadate groups in europium-doped hydroxyapatite influences the emission spectra, excitation spectra, and luminescence kinetics. Finally, the evaluation of the potential toxicity of the obtained nanomaterials confirmed hemocompatibility toward sheep red blood cells even at the highest tested concentration. Furthermore, our study confirmed the cytocompatibility of vanadium hydroxyapatite doped with Eu3+ ions, and our materials remained biocompatible even when the highest number of vanadate groups was incorporated into the hydroxyapatite. The time-dependent ion release experiment showed slow and gradual element release into the SBF solution and further supports the potential biological application of the obtained nanopowder materials.
9,651.6
2021-12-28T00:00:00.000
[ "Materials Science" ]
Selective Loss of either the Epimerase or Kinase Activity of UDP-N-acetylglucosamine 2-Epimerase/N-Acetylmannosamine Kinase due to Site-directed Mutagenesis Based on Sequence Alignments* N-Acetylneuraminic acid is the most common naturally occurring sialic acid, as well as being the biosynthetic precursor of this group of compounds. UDP-GlcNAc 2-epimerase/N-acetylmannosamine kinase has been shown to be the key enzyme of N-acetylneuraminic acid biosynthesis in rat liver, and it is a regulator of cell surface sialylation. The N-terminal region of this bifunctional enzyme displays sequence similarities with prokaryotic UDP-GlcNAc 2-epimerases, whereas the sequence of its C-terminal region is similar to sequences of members of the sugar kinase superfamily. High level overexpression of active enzyme was established by using the baculovirus/Sf9 system. For functional characterization, site-directed mutagenesis was performed on different conserved amino acid residues. The histidine mutants H45A, H110A, H132A, H155A, and H157A showed a drastic loss of epimerase activity with almost unchanged kinase activity. Conversely, the mutants D413N, D413K, and R420M in the putative kinase active site lost their kinase activity but retained their epimerase activity. To estimate the structural perturbation effect due to site-directed mutagenesis, the oligomeric state of all mutants was determined by gel filtration analysis. The mutants D413N, D413K, and R420M as well as H45A were shown to form a hexamer like the wild-type enzyme, indicating little influence of mutation on protein folding. Histidine mutants H155A and H157A formed mainly trimeric enzyme with small amounts of hexamer. Oligomerization of mutants H110A and H132A was also significantly different from that of the wild-type enzyme. Therefore the loss of epimerase activity in mutants H110A, H132A, H155A, and H157A can largely be attributed to incorrect protein folding. In contrast, the mutation site of mutant H45A seems to be involved directly in the epimerization process, and the amino acids Asp-413 and Arg-420 of UDP-GlcNAc 2-epimerase/N-acetylmannosamine kinase are essential for the phosphorylation process. The fact that either epimerase or kinase activity are lost selectively provides evidence for the existence of two active sites working quite independently. N-Acetylneuraminic acid (Neu5Ac) 1 is the precursor of sialic acids, a group of important molecules in biological communication. Sialic acids have been shown to be involved in cellular adhesion (1,2), and they are important as recognition determinants (3). Glycoproteins can be protected against degradation by sialylation (4,5), and the metastatic and invasive potential of tumor cells is often correlated with the amount of overexpressed membrane-bound sialic acids (6,7). The biosynthesis of Neu5Ac in rat liver is initiated and regulated by its key enzyme, UDP-N-acetylglucosamine 2-epimerase (EC 5.1.3.14)/N-acetylmannosamine kinase (EC 2.7.1.60) (8). Furthermore, it was shown recently that UDP-GlcNAc 2-epimerase is a regulator of cell surface sialylation (9). The bifunctional enzyme catalyzes the conversion of UDP-Glc-NAc to ManNAc and the consecutive phosphorylation to form ManNAc 6-phosphate. The homogeneous enzyme from rat liver has an apparent molecular mass of 75 kDa. It assembles as a hexamer possessing both enzyme activities. In vitro it partly decays to dimers, which possess only the kinase activity. 
CMP-Neu5Ac, the end product of sialic acid biosynthesis, has been shown to be a competitive feed-back inhibitor of UDP-GlcNAc 2-epimerase activity (10). The UDP-GlcNAc 2-epimerases/ManNAc kinases of rat (11), mouse (12), and human (13,34) have been cloned and sequenced. In all three enzymes, an open reading frame of 2166 base pairs encodes 722 amino acids. The overall amino acid identity between the enzymes from rat and mouse is 99.4%, and between rat and human the identity is 98.6% (13), showing that UDP-GlcNAc 2-epimerase/ManNAc kinase is highly conserved in mammalian organisms. Bifunctional enzymes are quite rare in mammalian metabolism. Further examples of enzymes catalyzing consecutive steps of a metabolic pathway are heparan sulfate/heparin N-deacetylase/N-sulfotransferase and 3′-phosphoadenosine 5′-phosphosulfate synthase. Sequence analysis and functional studies show that both of these enzymes might have evolved by gene fusion from two independent enzymes, which in part are still present in lower organisms. In 3′-phosphoadenosine 5′-phosphosulfate synthase the functional domains were expressed separately (14), whereas in heparan sulfate/heparin N-deacetylase/N-sulfotransferase only the sulfotransferase activity can be separately correlated to a distinct region, i.e. the carboxyl half of the enzyme (15). In the present paper we report the establishment of high level overexpression of UDP-GlcNAc 2-epimerase/ManNAc kinase, results of sequence analysis, and alignment-guided site-directed mutagenesis of the bifunctional enzyme.
Overexpression of UDP-GlcNAc 2-Epimerase/ManNAc Kinase and Mutants in Sf9 Cells
Expression Vector Construction-The UDP-GlcNAc 2-epimerase/ManNAc kinase coding cDNA (11) was amplified by polymerase chain reaction from pBluescriptII (Amersham Pharmacia Biotech) using primers designed to contain a forward XhoI site and a reverse 3′-KpnI site. About 0.5 µg of product was excised, eluted from agarose gels, and digested first with XhoI. The restricted DNA was precipitated and afterward restricted with KpnI. To inactivate the restriction enzyme, the DNA was extracted with phenol-chloroform. An aliquot of the resulting 5′-XhoI-DNA-3′-KpnI was ligated to the double-restricted (XhoI, KpnI) vector pFastBac1 (Life Technologies, Inc.). The transformed ligation product was mini-prepped and verified by sequencing (16). Production of Virus-The recombinant baculovirus containing the coding sequence of the UDP-GlcNAc 2-epimerase/ManNAc kinase was produced by using the Bac to Bac system according to the procedures supplied by the manufacturer (Life Technologies, Inc.). The system is based on transposon-mediated insertion of the foreign gene into the baculovirus genome under transcriptional regulation of the polyhedrin gene (17). Propagation of the recombinant virus as well as wild-type Autographa californica nuclear polyhedrosis virus (strain) was performed according to procedures described by O'Reilly et al. (18). Conditions for Overexpression and Cytosol Preparation-Sf9 cells were grown to a density of 2 × 10^6 cells/ml, then infected at a multiplicity of infection of 0.1. During infection, cells were grown in suspension culture in an orbital shaker at 120 rpm and at 27°C. After an optimal infection period of 60 h, the cells were pelleted and disrupted in lysis buffer (10 mM NaH2PO4, pH 7.5, 1 mM dithiothreitol, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride) by passing them up and down several times through a 1-ml cannula.
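The vector construction described above relies on the XhoI and KpnI sites engineered into the PCR primers. As a small illustration, the sketch below checks a primer for these recognition sequences (XhoI = CTCGAG, KpnI = GGTACC) and estimates a rough melting temperature with the Wallace rule; the primer sequence itself is hypothetical, since the actual oligonucleotides are not reproduced here.

```python
RECOGNITION_SITES = {"XhoI": "CTCGAG", "KpnI": "GGTACC"}

def find_sites(primer: str) -> dict:
    """Return enzyme -> 0-based position of its recognition site, or -1 if absent."""
    primer = primer.upper()
    return {enzyme: primer.find(site) for enzyme, site in RECOGNITION_SITES.items()}

def wallace_tm(primer: str) -> float:
    """Very rough Tm estimate: 2*(A+T) + 4*(G+C); only meaningful for short oligos."""
    p = primer.upper()
    return 2.0 * (p.count("A") + p.count("T")) + 4.0 * (p.count("G") + p.count("C"))

# Hypothetical forward primer: a short GC clamp, an XhoI site, then gene-specific sequence.
forward_primer = "GCGCCTCGAGATGGAGAAGAACGGCAACAAC"
print(find_sites(forward_primer))          # {'XhoI': 4, 'KpnI': -1}
print(f"Tm ~ {wallace_tm(forward_primer):.0f} C")
```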
The ratio of cells to lysis buffer volume was adjusted to 2 × 10^7 cells/ml. The crude cell lysate was clarified by ultracentrifugation (100,000 × g, 40 min). After harvesting the cells, all procedures were carried out at 4°C. Protein concentration was measured by the method of Bradford (20) using bovine serum albumin as a standard. One unit of enzyme activity was defined as the formation of 1 µmol of product/min at 37°C. Specific activity was expressed as milliunits/mg of protein.
Construction of Mutants
Mutagenic Oligonucleotides and Site-directed Mutagenesis-The mutagenic oligonucleotides used to generate the mutant constructs are shown in Table I. Site-directed mutagenesis was performed using the QuickChange site-directed mutagenesis kit (Stratagene, Heidelberg, Germany). In brief, a nonidentical duplicate of the original vector is produced by a polymerase chain reaction-like amplification using Pfu polymerase and primers containing the desired mutation. The parental template is then digested specifically by the restriction enzyme DpnI, which cuts only dam-methylated DNA (target sequence 5′-Gm6ATC-3′). The nicked vector DNA incorporating the desired mutations is transformed into Escherichia coli. Reaction parameters were chosen according to the manufacturer's recommendations. All mutant constructs were verified by sequencing with the Sanger dideoxy chain termination reaction for double-stranded DNA. Determination of Oligomeric Structure-The oligomeric structures of wild-type and mutated UDP-GlcNAc 2-epimerase/ManNAc kinase were determined with freshly prepared cytosol by gel filtration on a Superdex 200 column (Amersham Pharmacia Biotech). For elution, a buffer containing 100 mM NaCl, 10 mM NaH2PO4, pH 7.5, 1 mM dithiothreitol, and 1 mM EDTA was used. Standard proteins were ferritin (440 kDa), γ-globulin (156 kDa), ovalbumin (44 kDa), and myoglobin (17 kDa). Fractions obtained at a flow rate of 0.2 ml/min were analyzed by SDS-PAGE/Western blot analysis as described earlier (8) and assayed for enzyme activity as described above. Gel filtration was also performed with older cytosol fractions to investigate their rate of decay.
RESULTS
Sequence Alignment-guided Site-directed Mutagenesis-Sequence analysis was performed by comparing the sequence of rat UDP-GlcNAc 2-epimerase/ManNAc kinase with the nonredundant GenBank CDS using the PSI-Blast software. As reported earlier (22), we found sequence similarities with kinases and epimerases in different halves of the protein, indicating that different regions are involved in the formation of the active sites for epimerization and phosphorylation (Fig. 3 and Table II). The MultAlin software program for multiple protein alignment of related sequences gave the consensus sequences shown in Fig. 1 and Fig. 2. The N-terminal half of UDP-GlcNAc 2-epimerase/ManNAc kinase shows significant homologies to prokaryotic UDP-GlcNAc 2-epimerases and to synX, a protein involved in prokaryotic ManNAc biosynthesis. The synX protein of E. coli catalyzes either the interconversion of GlcNAc-6-phosphate to ManNAc 6-phosphate or the dephosphorylation of the latter to produce ManNAc (23). The sequence similarities suggest that synX is an epimerase rather than a kinase. In contrast to the eukaryotic epimerase, the prokaryotic epimerase inverts the stereochemistry at C-2 without release of UDP. Mechanistic similarities between the eukaryotic and the prokaryotic epimerization process can be assumed, as in both enzymatic reactions 2-acetamidoglucal has been proposed as an intermediate (24-26).
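The oligomeric states discussed throughout the paper were inferred from size-exclusion chromatography calibrated with the standard proteins listed above (ferritin 440 kDa, γ-globulin 156 kDa, ovalbumin 44 kDa, myoglobin 17 kDa). A minimal sketch of such a calibration is shown below; the elution volumes are invented for illustration, since the actual chromatograms are not reproduced here.

```python
import numpy as np

# Calibration standards: molecular mass (kDa) and hypothetical elution volume (ml)
standards = {"ferritin": (440.0, 9.8), "gamma-globulin": (156.0, 12.1),
             "ovalbumin": (44.0, 14.6), "myoglobin": (17.0, 16.5)}

masses = np.array([m for m, _ in standards.values()])
volumes = np.array([v for _, v in standards.values()])

# Size-exclusion columns are roughly linear in log10(M) vs. elution volume.
slope, intercept = np.polyfit(volumes, np.log10(masses), 1)

def apparent_mass_kda(elution_volume_ml: float) -> float:
    """Interpolate the apparent native mass from the calibration line."""
    return 10.0 ** (slope * elution_volume_ml + intercept)

# Hypothetical elution volume for the recombinant enzyme peak
peak_volume = 9.7
native_mass = apparent_mass_kda(peak_volume)
subunits = native_mass / 75.0   # 75 kDa monomer mass from SDS-PAGE
print(f"Apparent native mass ~ {native_mass:.0f} kDa (~{subunits:.1f} subunits, i.e. a hexamer)")
```

Dividing the apparent native mass by the 75 kDa subunit mass is how a ~450 kDa peak is read as a hexamer and a ~150 kDa peak as a dimer.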
This assumption implies the existence of similar protein structures in both cases. In the prokaryotic enzyme, deprotonation at C-2 is the rate-limiting step of the process (24). Because histidines are often connected with deprotonation reactions, we mutated four conserved and one semiconserved histidine (Fig. 1, 3) to determine their role in catalysis. In the C-terminal half of the UDP-GlcNAc 2-epimerase/ManNAc kinase, all five characteristic motifs described for the ATP binding domain common to functionally divergent proteins (27) were identified. Similarities stretching over amino acids 410-684 are highest for four prokaryotic hexokinases (Table II), whereas several eukaryotic hexokinases match best within the phosphate 1 motif of the ATP binding domain. Using site-directed mutagenesis, several amino acids in the conserved motif phosphate 1 of mammalian hexokinase have been shown to be essential for catalysis (28-30). Molecular modeling of ATP in the crystal structure of yeast hexokinase predicted interactions of these residues with ATP. The conserved aspartate is predicted to interact with ATP-complexed Mg2+; the conserved arginine is predicted to interact with α- and β-phosphate oxygens (30). The analogous positions in UDP-GlcNAc 2-epimerase/ManNAc kinase were mutated to investigate their involvement in the catalytic process (Figs. 2 and 3). Expression of Wild-type and Mutated UDP-GlcNAc 2-Epimerase/ManNAc Kinase in Sf9 Cells and Characterization of the Expression System-For functional characterization, high level overexpression of UDP-GlcNAc 2-epimerase/ManNAc kinase was established using the baculovirus expression system. Insect cells were infected with a recombinant baculovirus containing the cDNA of the enzyme or its respective mutants. All recombinant expressed proteins migrated at the same position (75 kDa) as the rat liver enzyme in SDS-PAGE (Fig. 4).
Legend to Figs. 1 and 2 (21): uppercase letters indicate high consensus levels (>90%), and lowercase letters indicate low consensus levels (50% < C < 90%). In UDP-GlcNAc 2-epimerase/ManNAc kinase, histidines labeled with a star were transformed to alanine by site-directed mutagenesis. ! is any one of IV; $ is any one of LM; % is any one of FY; # is any one of NDQE.
As estimated by comparing the signal intensities after staining with Coomassie Brilliant Blue, 20-30 percent of the total cytosolic protein fraction consists of recombinant enzyme. Furthermore, cytosolic extracts of all mutants and the wild-type enzyme are immunoreactive with a polyclonal antibody specific for UDP-GlcNAc 2-epimerase/ManNAc kinase (8) (data not shown). The native molecular mass of recombinant UDP-GlcNAc 2-epimerase/ManNAc kinase was estimated to be 450 kDa by size exclusion column chromatography, indicating that the expressed protein forms a hexamer. The recombinant enzyme dissociated partly into dimers, which retained only the kinase activity. This phenomenon was reported for the rat liver enzyme as well (Fig. 5) (11). To quantitate the cytosolic background activities, the UDP-GlcNAc 2-epimerase and ManNAc kinase activities of uninfected insect cells were investigated. We found that cytosolic fractions of uninfected Sf9 cells show an epimerase activity of 0.06 milliunits/mg. The fact that this activity can be inhibited by 0.1 mM CMP-Neu5Ac, the feedback inhibitor of UDP-GlcNAc 2-epimerase activity in rat liver, suggests the presence of this enzyme in insect cells (Fig. 6, panel B).
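The activity figures quoted just above and below are specific activities in milliunits per mg of total cytosolic protein, with one unit defined as 1 µmol of product per minute at 37 °C. A minimal sketch of that bookkeeping, including subtraction of the uninfected-cell epimerase background, is shown below; the raw assay readings are invented for illustration, only the unit definitions follow the text.

```python
def specific_activity_mu_per_mg(product_nmol: float, minutes: float, protein_mg: float) -> float:
    """Specific activity in milliunits/mg; 1 unit = 1 umol of product per minute at 37 C."""
    umol_per_min = (product_nmol / 1000.0) / minutes
    return 1000.0 * umol_per_min / protein_mg   # units/mg -> milliunits/mg

# Hypothetical assay: 270 nmol of product formed in 10 min by 0.03 mg of cytosolic protein
sample = specific_activity_mu_per_mg(270.0, 10.0, 0.03)

BACKGROUND_EPIMERASE = 0.06   # mU/mg, uninfected Sf9 cytosol (value from the text)
corrected = sample - BACKGROUND_EPIMERASE
print(f"Measured: {sample:.0f} mU/mg, background-corrected: {corrected:.2f} mU/mg")
```

Any residual mutant activity is judged against these uninfected-cell background values rather than against zero.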
The specific activity is about 30 times lower than that in rat liver cytosol. In contrast, the apparent average ManNAc kinase activity of insect cell cytosol is 5 milliunits/mg, about 50 times higher than the UDP-GlcNAc 2-epimerase activity. A possible explanation is that other cytosolic kinases are able to phosphorylate ManNAc, as demonstrated recently for the rat liver N-acetylglucosamine kinase (31). Catalytic Activities of Wild-type and Mutated UDP-GlcNAc 2-Epimerase/ManNAc Kinase-Infection of Sf9 cells with the wild-type enzyme virus resulted in an average specific cytosolic epimerase activity of 890 ± 310 milliunits/mg and in an average specific cytosolic kinase activity of 840 ± 276 milliunits/mg (Fig. 7, panels A and B). Compared with cytosolic extracts of rat liver, the specific activities in insect cell cytosols after infection were increased about 400-fold for both activities. Thus, the specific epimerase activity in insect cytosol after overexpression corresponds to that of the homogeneous enzyme from rat liver cytosol (8). All histidine mutants displayed kinase activities in the same order of magnitude as the wild-type enzyme. On the other hand, they all lost their epimerase activity almost completely (Fig. 7, panel A). Differences in the specific kinase activities should correlate with the differences in the expression level of the enzyme. The residual epimerase activities were not significantly higher than those assayed with uninfected cells. The mutants in the putative kinase active site showed the inverse behavior; they displayed epimerase activities comparable to the wild-type enzyme, whereas the kinase activities were reduced drastically. Compared with the mutation to lysine, the mutation of aspartate 413 to asparagine seems to result in slightly higher epimerase and residual kinase activities. The residual kinase activities of all mutants in the ATP binding motif are significantly higher than the background activities measured in the two negative controls (Fig. 7, panel B).
FIG. 4. SDS-PAGE of cytosols derived from Sf9 cells infected with different mutant viruses, UDP-GlcNAc 2-epimerase/ManNAc kinase wild-type (wt) virus, wild-type virus, and uninfected cells. Staining was performed with Coomassie Brilliant Blue; the 75-kDa band shows the overexpressed enzyme.
Inhibition of the recombinant enzyme by the native feed-back inhibitor CMP-Neu5Ac was investigated. We could show that 0.1 mM CMP-Neu5Ac inhibits the wild-type epimerase activity completely, whereas 0.1 mM N-acetylneuraminic acid does not influence the activity at all (Fig. 6, panel A). To determine whether the allosteric binding site is functionally intact in the epimerase-active mutants D413N, D413K, and R420M, these were assayed in the presence of CMP-Neu5Ac. The epimerase activity of all three mutants was inhibited almost completely by 0.1 mM CMP-Neu5Ac (Fig. 6, panel B). Size Exclusion Chromatography of Wild-type and Mutated UDP-GlcNAc 2-Epimerase/ManNAc Kinase-To determine whether the loss of activity due to site-directed mutagenesis can be attributed to a disturbed oligomerization process, we performed size exclusion chromatography with all mutant proteins. The obtained fractions were analyzed by Western blotting for the protein and were assayed for epimerase and kinase activity.
FIG. 7. Panel A, UDP-GlcNAc 2-epimerase (Epi) and ManNAc kinase activities of cytosols derived from different histidine mutants compared with wild-type (wt) enzyme activities.
Panel B, UDP-GlcNAc 2-epimerase and ManNAc kinase activities of cytosols derived from arginine and aspartate mutants compared with wild-type enzyme activities. Virus infections, cytosol preparation, and enzyme assays were performed as described under "Experimental Procedures." All values are means of at least three independent experiments; error bars are indicated. Mutated amino acid positions are given. Histidine was mutated to alanine, arginine was mutated to methionine, and aspartate was mutated to lysine or asparagine. As negative controls, insect cells transfected with wild-type virus and untransfected insect cells were assayed. mU, milliunits.
Kinase Mutants D413N, D413K, and R420M-Western blot analysis of the fractions obtained after size exclusion chromatography revealed that all three mutants are able to form a hexamer (Fig. 8A); partial dissociation of the hexamer results in a dimer, as observed for the wild-type enzyme. Only the hexamer shows epimerase activity. The distribution of residual kinase activity over the molecular weight spectrum shows a major maximum at the molecular mass of the hexamer and a minor peak at the molecular mass of the dimer, which is evidence for specific residual activity (Fig. 8B). Neither mutation of aspartate 413 to lysine (i.e., an inversion of polarity) nor neutralization of its charge by mutation to asparagine had an observable effect on oligomerization. In contrast, although the epimerase activities remain equal, the residual kinase activity of D413N after gel filtration is slightly higher than that of D413K. Taken together, these results show that site-directed mutagenesis of the positions Asp-413 and Arg-420 does not interfere with enzyme oligomerization, indicating that there is no influence on protein folding. Histidine Mutants H45A, H110A, H132A, H155A, and H157A-These mutants can be grouped according to three types of oligomerization behavior (Fig. 9). The first one, H45A, forms a kinase-active hexamer (Fig. 9, Type I). In this case, site-directed mutagenesis does not influence the oligomerization process. In cross-linking experiments both the hexameric enzyme and the dimeric enzyme were detected (data not shown). The neighboring mutants, H155A and H157A, constitute the third group (Fig. 9, Type III); they form only small amounts of hexamer, whereas the maximum protein content and maximum kinase activity are found at molecular weights corresponding to a trimeric state. Because no dissociation to dimers was observed, these mutations seem to inhibit the dimerization process and, consequently, the formation of larger stable amounts of hexameric enzyme. As shown by Western blot analysis after size exclusion chromatography, the molecular weight distribution for mutant enzymes His-110 and His-132 is wider (Fig. 9, Type II). The Western blot signals are in good agreement with overlapping trimeric and hexameric states of the enzyme. In this case, the second group would be a hybrid of groups one and three, where the dimerization process is partially inhibited.
DISCUSSION
In this study we have identified amino acids needed for the catalytic activity of the key enzyme of Neu5Ac biosynthesis, UDP-GlcNAc 2-epimerase/ManNAc kinase. The catalytically essential residues were shown to be in different regions of the enzyme for epimerase activity and kinase activity.
The data were obtained by overexpression of the enzyme in insect cells, sequence alignment-guided site-directed mutagenesis, and catalytic and structural characterization of the different mutants. Sequence analysis revealed homologies between prokaryotic epimerases and the N-terminal region of UDP-GlcNAc 2-epimerase/ManNAc kinase, whereas various kinases showed homologies to the C-terminal region of this eukaryotic bifunctional enzyme. Two strategies of enzyme evolution have been described (32). First, there are structurally related enzymes that catalyze identical reactions with possible differences in substrate specificity. For example, all members of the serine protease superfamily are known to catalyze the same chemistry, although their substrate specificities vary. Also, the sugar kinase superfamily seems to fit this type of evolution. In contrast, there are enzyme superfamilies whose members share a common structural scaffold but catalyze different overall reactions. These enzyme superfamilies probably evolved by incorporation of new catalytic groups within an active site, whereas groups necessary to catalyze the partial reactions common to all of them were retained. One might speculate that the prokaryotic and the eukaryotic UDP-GlcNAc 2-epimerases represent an example of the second evolution type. Prokaryotic and eukaryotic UDP-GlcNAc 2-epimerases do not catalyze identical reactions, but they show the same substrate specificity and share a common intermediate, so they probably employ similar mechanistic strategies. Based on the above-mentioned sequence similarities, site-directed mutagenesis was performed on five conserved histidines in the N-terminal half of the enzyme. Surprisingly, all five mutants lost their epimerase activity. By contrast, the kinase activity was retained, giving a first hint of the existence of two active sites, one for each reaction, working quite independently. To distinguish between general influences on the structural scaffold and specific involvement in the catalytic reaction, we investigated whether the histidine mutants were still able to associate as a hexamer, as observed for the wild-type enzyme. The data obtained are consistent with the model shown in Fig. 10. The histidines His-155 and His-157 might be localized within the enzyme dimerization recognition site (33). Mutation of histidine to alanine leads to a strongly reduced dimer association constant, which is why mainly trimeric protein is found in place of hexameric protein (Fig. 9, Type III). This result is in contrast to the former view that the kinase active site assembles as a dimer, since active kinase was detected as dimer, trimer, and hexamer. Therefore, dimerization is not necessary for the formation of the active site, but oligomerization in general may have a positive cooperativity effect. The previously determined Hill coefficients for ManNAc and ATP are consistent with this proposal, as they are greater for the hexameric enzyme than for the dimeric one (8). Furthermore, not only trimerization of the dimers but also dimerization itself seems to be essential for epimerase activity, because the trimers show no epimerase activity; notably, even the small amount of detected hexamer showed no epimerase activity. Thus, these mutations seem to influence both enzyme oligomerization and the epimerase activity of the hexameric protein. The mutation of His-110 and His-132 leads qualitatively to the same effect on enzyme oligomerization (Fig. 9, Type II).
As the effect is clearly less drastic, one can conclude that these residues are at a position less sensitive for structural transformation, for example at the edge of the dimerization recognition site. Determination of the oligomeric state of the mutants H155A, H157A, H110A, and H132A revealed that they are significantly structurally disturbed. Therefore the loss of epimerase activity can be attributed to this structural perturbation, whereas there is no evidence for a direct involvement in catalysis. On the other hand, the mutant H45A assembles as a hexamer with apparent structural integrity. Since the fully hexameric mutant H45A has no observable epimerase activity, one might speculate that this residue is located in the active site and might be involved in the chemical reaction. In the C-terminal half of the enzyme two amino acid residues within the phosphate 1 motif common to different families of ATP-binding proteins were transformed. The mutated residues Asp-413 and Arg-420 of UDP-GlcNAc 2-epimerase/ManNAc kinase are well conserved within sugar kinases and phosphotransferases (27). Results of molecular modeling and site-directed mutagenesis of human brain hexokinase suggested the interaction of these residues with the tripolyphosphoryl portion of ATP (30). In this enzyme, phosphorylation is promoted in the classical way by using the binding energy for the stabilization of the transition state. We could show that site-directed mutagenesis of these conserved residues in the UDP-GlcNAc 2-epimerase/ManNAc kinase results in a drastic loss of kinase activity with no significant effect on epimerase activity. All three mutants show a functionally intact allosteric binding site for CMP-Neu5Ac. Site-directed mutagenesis does not influence the oligomerization process, as all mutants form hexamers. In brief, all parameters investigated correspond to those found for the recombinant wild-type enzyme, apart from the loss of phosphorylation capacity. We conclude that also in the bifunctional enzyme UDP-GlcNAc 2-epimerase/ManNAc kinase these conserved residues play a crucial role in the phosphorylation process. Furthermore it is quite probable that UDP-GlcNAc 2-epimerase/ManNAc kinase provides a structural scaffold for phosphorylation similar to that of the related sugar kinases. Taken together the results of sequence analysis as well as those of site-directed mutagenesis suggest that the bifunctional enzyme is composed of two catalytic domains. The epimerase active site seems to be localized in the N terminus, whereas the kinase active site seems to be in the C terminus of the protein. Construction of deletion mutants would make it possible to determine whether these postulated domains can be expressed separately while retaining their respective activity.
5,504.8
1999-10-01T00:00:00.000
[ "Biology", "Chemistry", "Computer Science" ]
Temporary Migration Programmes: the Cause or Antidote of Migrant Worker Exploitation in UK Agriculture
The referendum result in Britain in 2016 and the potential loss of EU labour in the advent of a 'hard Brexit' have raised pressing questions for sectors that rely on EU labour, such as agriculture. Coupled with the closure of the long-standing Seasonal Agricultural Workers Scheme in 2013, policymakers are grappling with how to satisfy, on the one hand, employer demands for mobility schemes and, on the other, public demands for restrictive immigration policies. Labour shortages in agriculture transcend the immigration debate, raising questions for food security, the future of automation and ultimately what labour market the UK hopes to build. Temporary migration programmes have been heralded as achieving a triple win, yet they are rightly criticized for breeding bonded labour and exploitation. In lieu of a dedicated EU labour force, agricultural employers are calling for the establishment of a new seasonal scheme. In this paper, we explore whether the absence of a temporary migration programme resolves the potential exploitation of migrant workers. We argue that the absence of a temporary migration programme (TMP) is not an antidote to migrant exploitation, and that a socially just TMP which is built around migrant agency may be the most palpable solution.
Introduction
The slogan that 'there is nothing more permanent than temporary foreign workers' (Martin 2006) has been a popular phrase to express the perceived failures of temporary migration programmes (TMPs). Whilst TMPs, particularly agricultural programmes, are typically recognized as being exploitative (Lenard and Straehle 2012; Strauss and McGrath 2017), as global labour market needs have evolved there has been some resurgence of interest in TMPs and the advantages these programmes can bring (Gilbert 2014). TMPs purportedly achieve the so-called triple-win outcomes: the host country can meet labour market demands whilst appeasing electoral concerns over permanent settlement; the sending country benefits from both remittances and skills transfer/brain gain from migrants acquiring skills in the destination state and transferring these skills on return; and the migrants themselves benefit from a mechanism which provides people from low-income countries with better access to labour markets in high-income states. Ultimately, TMPs are pitched as the in-between solution which satisfies both the 'no borders' and 'no migrants' arguments (Ruhs and Martin 2008, p. 260). TMPs could then be an attractive solution for the UK government following the shock referendum result in 2016. In the face of a possible 'hard Brexit' in 2019 and the consequential end of free movement following the transition period, the political establishment is grappling with how to satisfy both public and business demands for restrictive and expansive approaches to immigration, respectively. Pressing questions remain as to how sectors, such as agriculture, that rely heavily on EU labour will survive in lieu of a dedicated labour immigration scheme. A National Farmers Union (NFU) providers survey recently revealed that 47% of providers were already unable to meet labour demands in 2017 despite free movement continuing (NFU 2017). In a BBC survey in 2017, 78% of respondents said that recruitment had been more difficult in the past year (Simpson 2017).
The anticipated labour market shortages in agriculture have been compounded by the closure of the 70-year-long Seasonal Agricultural Workers Scheme (SAWS) in 2013, a turning point in its own right. The seasonal labour shortages in agriculture have led the Environmental, Food and Rural Affairs Committee to conclude that farming and horticulture businesses 'have big problems retaining labour' and that 'the challenge will become a crisis if the government do not swiftly take measures' (House of Commons 2017a, b, 195WH). Labour shortages in the agricultural sector have consequences beyond immigration; food security could be threatened, as imports will become the increasingly dominant source of fresh food for the UK. This will in turn raise costs for producers that will be subsidized through higher food prices for consumers (Sumption 2017). As a result, the government has insinuated that it may seek to establish a new agricultural scheme for labour migrants post Brexit (Gove quoted in Horticulture Week 2017), yet this is far from a certainty. The government's dilemma, whether to re-establish an agricultural migrant scheme or to find ways around migrant labour shortages by increasing native employment, automation and possibly offshoring, raises wider questions regarding the worth of work and what kind of labour market the UK hopes to build. It is thus a fitting time to reflect on the policy evolution of the SAWS and, more importantly, the impact and repercussions of its closure both economically and socially, as UK employers anticipate the loss of EU labour in post-Brexit Britain. Using the UK SAWS as a case study, we analyse the losses and gains from closing a TMP. Based on desk research and stakeholder interviews, we assess whether the absence of a TMP acts as an antidote to exploitation or a cause of it. We argue that whilst the loss of EU labour may force the sector to improve working conditions, closing a TMP can have the adverse effect of a triple loss and exacerbate exploitation. We contend that a TMP underscored and built around migrant agency is a more socially just response than terminating a TMP. We begin the article with a brief literature review on temporary migration policies and an outline of our methodology. After summarising the policy evolution of SAWS and its termination, we assess and analyse the impacts of closing SAWS.
Temporary Migration Policies: Rights, Conditionality and Debates
Temporary labour migration policies have been criticized for a number of reasons, including the lack of rights granted, and in particular the lack of a right/route to permanency and citizenship. The research on TMPs and TMWs has focused more on the precarious status of the TMWs (Bernhard et al. 2009) than on the possible alternative pathways to TMPs (Castles 2006; Ruhs 2006). The example of the UK gives us an opportunity to evaluate the repercussions of the absence of a temporary programme, and whether this resolves the rights-based arguments against such programmes. Previous research has analysed various aspects of TMPs and SAWS specifically, including the practices of employers and their perception of temporary migrant workers (TMWs) (Findlay et al. 2013; Scott 2013), migrants' agency (Cook et al. 2010; Spencer et al. 2007) and the role of recruitment agencies and intermediaries in seasonal work (Rogaly 2008). We seek to build on these bodies of work by drawing on Carens' (2013) framework.
Carens (2013, p.121) questions whether it is acceptable for democratic states to admit people only temporarily and, if so, 'under what conditions?' Goldring (2014, p.223) claims that temporariness has been considered in terms of insufficient and inequitable rights, blocked membership and temporality, and argues that there are three possible policy solutions to overcome the exploitative potential of TMPs: '1) changing regulations to reduce exploitation and vulnerability 2) reducing the amount of time people spend in temporary situations or the number of people in them, or expanding pathways to permanence 3) eliminating temporary worker programs.' Other scholars have similarly suggested that adapting TMPs by granting further rights is the more practical solution than terminating TMPs altogether (Lenard and Straehle 2012). It is well acknowledged that TMPs are opportunistic for states to both meet labour market demands whilst limiting settlement. However, an overlooked facet of this debate is whether TMPs truly benefit the migrants or not, and whether the termination of programmes on the basis of potential exploitation achieves better outcomes for workers. On this, the research is mixed. Much research has illustrated how TMPs are not beneficial for the migrant because even the best examples fall short of international best practice standards (Hennebry and Preibisch 2012). However, Goldring (2014, p. 219) argues that temporariness is not synonymous with precariousness, conditionality of precarious status or inequality, whilst at the same time 'permanence does not always mitigate insecurity and social exclusion'. Conditionality (Landolt and Goldring 2015, p. 857) is key for building a socially just TMP; it refers to the trajectories of non-citizenship (including legal status) and how they are shaped via social learning, migrants' agency and policy enforcement at many levels, 'denoting the material and discursive conditions that must be met to acquire and exercise the formal or substantive right to remain'. Raghuram (2014) also underlines how migrants were required to inhabit temporariness and learnt to benefit from it as agents. TMPs can then bring opportunities for migrant workers, both structurally (financially) and from an agency perspective (entrepreneurship and skills gains). Gilbert (2014) and others (Bauder 2006) argue that TMWs are rendered vulnerable by their status because 'temporariness puts limits on their access to health care, social programs and labour protections' (Gilbert 2014, p.153). However, Carens (2013) posits that in the absence of TMPs, some migrants will come and work irregularly in any case and will have no rights rather than limited ones. Thus, in the absence of sponsored programmes, irregularity as a status can dominate the lives of some migrant workers. De Lange and Sarah van (2014, p.143) similarly found in their research that 'all participants in the ethnographic component of this research were of the opinion that even a circular migration regime would be preferable to the current exclusion of "low-skilled" migrant workers from regularized migration status'. Our position, therefore, is that it is not TMPs that should be abandoned; rather, temporariness should support the agency of the migrant workers (Goldring and Landolt, 2011) as much as it supports the interests of employers.
This should be based on an agreement between the host and the origin country that enables temporariness to become an opportunity and a freedom from working in precarious conditions. Methodologically, the paper is based on explorative process tracing conducted through semi-structured interviews with stakeholders between 2011 and 2016, in triangulation with document analysis. We conducted 20 interviews with stakeholders, which included employers, employer associations, trade unions, relevant government (national and local) representatives and migrant rights organisations. Our desk research, which worked to triangulate and validate findings from the interviews, involved a three-stage analysis of documents including newspaper articles, press releases, policy documents and relevant research conducted by organisations. We now move to a review summarising both the evolution and termination of SAWS, and the current climate following the referendum result in 2016.
Policy Evolution of SAWS
The SAWS was established in 1945 following the War and the resulting shortage of British manpower, as a cultural exchange scheme to encourage young, predominantly agricultural, students from across Europe to work in agriculture and horticulture during the peak seasons. However, over time, the scheme evolved into a tool to meet labour demand in the agricultural sector more generally, although 'the idea was still to develop cross cultural understanding and friendships across borders' (interview Concordia 2015). The SAWS was modified over the years, but it was in 1990 that the scheme became quota-based, beginning with an annual quota of 5500 workers. Before the closure of the scheme in 2013, SAWS had reached a quota of 21,250, a fourfold increase from the original 1990 quota. Whilst in 2004 the UK government did not initiate transitional measures on nationals of the A8 countries, thereby giving these citizens unfettered access to the UK labour market, it imposed full transitional controls (7 years) on Bulgaria and Romania following the 2007 accession, with an 'intention to phase it [SAWS] out as EU labour markets expanded' (Harper 2013). Thus, as a concession to the new accession states of Bulgaria and Romania (A2), the government stipulated that only A2 migrants could work on the SAWS from 2008, and that is why the student restriction was abandoned; at the time the industry objected to this strongly because workers would no longer necessarily come from an agricultural background, but the sector got used to that and simply made its recruitment process more rigorous (NFU policy officer, interview 2011). In 2013, when the scheme was closed, two-thirds of the seasonal labour force still came from the A2 (NFU 2017). Different actors from both sending and receiving countries were involved in the operation of SAWS. In terms of implementation, one of the most important sets of actors was the operators, characteristic of the trend towards 'outsourcing' migration controls to private actors. Operators were responsible for recruiting and processing applications, for ensuring that farmers provided suitable accommodation and adhered to regulations around work rights such as the minimum wage, and ultimately were liable for those workers who breached their visa terms and conditions. Without operators' permission, workers could not switch to another farm site, creating 'bonded labour' (Rogaly 2008). Hence, operators formed the control mechanism over the lives of the workers.
Nine operators managed the SAWS on behalf of the UK Visas and Immigration Directorate (formerly known as the UK Border Agency). However, SAWS was, in the main, an industry-run scheme with only a 'light touch' from the Home Office in terms of enforcement, because there was no right to remain. Overall, SAWS was deemed a success, being regarded as a 'well managed scheme' with a high return rate (interview HOPS solution 2016). Although SAWS was regarded as a successful scheme, the closure in 2013 was not unexpected. As the Conservative-led Coalition government entered office in 2010, one of the first orders of the day was to put measures in place to achieve the Conservative manifesto pledge of reducing net migration from the hundreds of thousands to the tens of thousands, 'so that people have confidence in the system' and to 'ensure cohesion and protect our public services' (HM Government 2010: 21). Numerous policy channels were tightened (Gower 2016), including TMPs. A position paper by the Labour Providers Association in 2012 raised a number of questions about the rationale for the SAWS: whether there was a need for the scheme, whether SAWS represented unfair competition to labour providers through tax benefits, and 'whether a scheme which is basically "bonded labour" is acceptable' (Labour Providers 2012). Later in 2012, the government called for the Migration Advisory Committee (MAC), an independent, non-departmental public body that advises the government on migration issues, to assess the impact on the agriculture and horticulture sectors if SAWS was terminated. The MAC concluded that a reduction in labour in the short term would have a modest impact, but that in the medium term farmers were likely to experience increasing difficulties in sourcing the required labour (MAC 2013, 12). Following the MAC report, in order to cut net migration drastically and 'break the link' between temporary and permanent migration, the government proceeded with the planned closure of the SAWS in December 2013 to coincide with transitional controls lapsing for A2 citizens from Bulgaria and Romania in January 2014. The government's view was that 'at a time of unemployment in the UK and the European Union there should be sufficient workers from within those labour markets to meet the needs of the horticultural industry' (Harper 2013), and moves to gear the immigration system towards exclusively high-skilled migration were supported by employer associations such as the CBI due to the 'political pressure' and heightened skills shortages (interview with CBI 2015). Drawing on the evidence that growers sourced approximately half of their seasonal labour from the A8 despite A8 citizens having unrestricted access to the UK labour market, the government's assumption was that British and EU labour would continue to fill these shortages. The sector was not quiet in its opposition to the government's decision to terminate SAWS, often deploying media-related strategies in its lobbying efforts. Meurig Raymond, deputy president of the NFU, said that their members were 'outraged' and that the decision 'completely contradicts David Cameron's belief that farmers are the backbone of Britain' (BBC 2013). In a survey of labour users conducted by the NFU, over 95% of growers who used SAWS in 2012 said that the removal of the scheme would have a negative impact on their business. From 2013 to 2016, the government and sector alike adopted a 'wait and see' approach to assess whether the closure of SAWS was affecting recruitment.
However, the referendum result and the government's preference since then for a so-called hard Brexit have brought the apparent need for a SAWS back onto the agenda. This led the Environment, Food and Rural Affairs Committee to undertake an inquiry into Brexit and the UK food industry aptly named Feeding the Nation (House of Commons 2017b). The evidence from the sector overwhelmingly indicated that farmers and growers were already struggling to fill seasonal vacancies despite the fact that free movement was still in place. The Committee put this to the Immigration Minister, who maintained 'that there is no suggestion that there is a problem …. Nothing has changed' (Goodwill in House of Commons 2017b, p.5). In turn, the Committee concluded that 'we do not share the confidence of the Government that the sector does not have a problem: on the contrary, evidence submitted to this inquiry suggests the current problem is in danger of becoming a crisis if urgent measures are not taken to fill the gaps in labour supply' (House of Commons 2017b). Whether such shortages have been a result of terminating SAWS or of a hard Brexit dissuading migrants from entering the UK labour market remains a counterfactual question. Host State: Labour Market Agricultural employers have long favoured foreign labour over the British workforce. Such preference is driven by a number of factors, but the overriding determinant is a superior work ethic and other soft skills (Ruhs 2006, 78). In a Home Office study involving 124 interviews with employers across five sectors, only in agriculture did employers unequivocally see migrant workers as 'crucial' to their businesses (Dench et al. 2006, 35). Over a decade later, researchers from Queen's University Belfast told the Lords EU Committee that 98% of the seasonal workforce were migrants from elsewhere in the EU (House of Commons 2017a, 6). It is no exaggeration to say the sector relies on foreign labour. In 2015, most agricultural employers claimed that it was too early to know the full impacts of closing SAWS on labour shortages, but that the repercussions were likely to be felt in the long term (GLA 2014). This was due to the labour supply from Bulgaria and Romania 'not immediately drying up' following the closure of SAWS, and to contingency efforts put in place to ease the transition of the closure, including making a limited number of work cards available (interview with NFU 2015). Many in the industry claimed that whilst farms had an adequate workforce for 2014 and 2015, there would be a 'workforce drought' thereafter (interview HOPS solution 2016; interview Concordia 2015). The NFU's most recent survey of labour providers showed a 17% drop in seasonal workers, with 1500 vacancies left unfilled in May 2017 alone (NFU 2017). In turn, the industry is reporting a shortfall in workers that puts crop production at risk, and is calling for a new SAWS (House of Commons 2017a, 3). The seasonal labour shortages experienced by farmers and growers have potentially large ramifications for the agricultural and horticultural sector as a whole. The value of UK agricultural production (at market prices) was £25.8 billion in 2014, and, in aggregate, the agri-food sector employs about 3.86 million people (DEFRA 2016). Whilst reported shortages are seasonal in nature, if the labour demand needed to harvest produce cannot be met, the wider food production chain is threatened, and permanent employees could in turn face job insecurity.
The repercussions of farmers not being able to source enough labour to fill seasonal vacancies extend beyond immigration debates. The implications extend to UK food security. A report from the trade organisation British Summer Fruits predicted that the cost of strawberries and raspberries could soar by 50% if Brexit makes it harder for growers to recruit (House of Commons 2017a, b). A survey from British Summer Fruits likewise predicted price rises of between 35 and 50% because of labour shortages. Furthermore, a weaker pound as a result of Brexit uncertainty in the UK economy means imports could cost more. Yet whether the reliance on EU labour can be said to constitute inevitable structural dependency is debatable. Geddes and Scott (2010) argue that such reliance on migrant workers in the low-skilled sector, such as agriculture, is rather 'constructed'. Drawing on segmented labour market theory (Piore 1979), they claim that it is possible for firms to offset the costs of an uncertain market by 'passing this uncertainty on to certain groups of workers' (Geddes and Scott 2010, 198), in this case seasonal migrant workers. Rogaly (2008) likewise argues that, through mechanisms of intensification, agricultural employers have used vulnerability to ensure compliance in the labour force. By offering poor working conditions, employers thus partly construct the dependence on foreign labour in the agricultural sector. Undoubtedly agriculture's reliance on EU workers is of its own making, but path dependency and tight margins, with consumers demanding more local produce at cheaper prices, have created an intractable structural dependency. To succeed in the long term, the sector will need to tackle the cycle of 'entrenched use of low-skill, low-paid casual workers' (Delvin 2016). Working conditions of agricultural workers have not changed in any substantial way since the closure of SAWS; thus, attracting British workers to this type of work has remained challenging. British workers are reluctant to engage in this work due to the hard working conditions, long working hours, temporary nature of the work and rural locations. Farm Minister George Eustice suggested in 2014 that UK benefit claimants should be sent to work on farms to fill any vacancies. Yet the seasonal and temporary nature of agricultural work means that there is little incentive for unemployed claimants to return to work for a short period, as the administrative process of reapplying for welfare assistance is lengthy; the DWP recognises that this process does not always support smooth transitions from receipt of out-of-work benefits into seasonal work (MAC 2013, 160). The UK welfare system operates in a way that makes it unfavourable to work for a temporary period. If British agriculture is to survive, the government and, more importantly, employers will need to attract and retain British workers as they once did. The closure of SAWS and the loss of EU labour following Brexit uncertainty may create an opportune moment for reforming the welfare system to accommodate seasonal work, and potentially for improving pay and working conditions in a sector which has long relied on cheap labour. The government pursued a pilot scheme to this end. The scheme, run by the Department for Work and Pensions working with JobCentre Plus, LANTRA (the sector skills council) and the NFU, sought to encourage unemployed UK residents into horticultural work through training and guaranteed interviews (Harper 2013).
However, these measures seem to have had little to no impact (interview DEFRA 2016). As one interviewee commented: 'it wasn't very successful and the people that were trained on a course which we worked with, they didn't last in their placements, they worked a few weeks and then they'd leave their jobs, they weren't fulfilling the 6 months. An A2 would've worked that 6 months until the end. I think that proved that the labour can't be used from that welfare to work' (interview HOPS solution 2016). British workers have evidently always been able to work in the agricultural sector, yet they make up less than 10% of the agricultural labour force, and only nine farms in 2013 had any UK employees in temporary/seasonal work, dropping to eight in 2014 (GLA 2014, 6). There is a clear imperative for the sector to shift towards a generally higher wage and higher skill food system in order to make the work attractive, and the introduction of the National Living Wage in 2016 represents a positive advance. However, the flipside of increasing wages is that they are predicted to tighten profit margins for farmers and growers further. Horticultural crops have an unusually high requirement for seasonal labour, with labour cost often accounting for 35-60% of business turnover. As a result, the profitability of growing horticultural crops is highly sensitive to changes in wage costs. A report conducted by the NFU (2016, 2) assessing the impact of the National Living Wage found that forecasted increases in the National Minimum Wage alone are equivalent to 47-58% of current business profit over the next 5 years. This, the NFU argues, 'has the capacity to make horticultural businesses unprofitable'. Closing SAWS, coupled with Brexit and the loss of EU labour, may force the sector to address its rights-based weaknesses in the form of increased wages, better working conditions, and reformed upskilling and progression opportunities for British workers. In this sense, the absence of a TMP could prove to be a remedy for the structural dependency and poor working conditions which have traditionally accompanied this sector. Yet the increasingly tight profit margins farmers and growers face, as a result of increasing consumer and retail demands for lower-cost produce, make increasing wages an unappealing solution to the labour crisis the sector faces. The future opportunity or cost of the lack of an agricultural TMP will ultimately be a shift towards mechanisation, outsourcing and offshoring. On the one hand, automation will improve working conditions for some workers, making manual jobs less labour intensive. On the other hand, technological innovation will drive the automation of all kinds of work and with it bring large-scale job losses, which UKCES predict will lead to an extensive government-led skills programme (UKCES 2017, b). Occupations which are most susceptible and likely to be automated in the future are physical (elementary) activities in highly predictable environments. Such moves in agricultural mechanisation have included table-top technology to improve the efficiency of the picking process; in salads and brassicas, picking rigs that have enabled crops to be picked, washed, processed, packaged, labelled and carted in the field; and, in top fruit and stone fruit, new dwarf varieties of trees which have greatly eased picking.
Whilst such technological developments are long-term solutions and cannot replace current labour-intensive demands, if employers face detrimental labour shortages, the mechanisation of agricultural production may be inevitable. Whilst the impact on profit margins makes it unlikely, the loss of EU labour in the context of a hard Brexit may force the industry to restructure, improve wages and adapt to the new environment (AHDB 2016, 14). Alternatively, and more viable in the long term, it may drive the sector to look increasingly towards technological solutions and become a fully automated sector to reduce labour dependency. Whilst the latter brings with it opportunities, increases productivity and represents inevitable technological progress, it also brings job displacement, including for British workers. This raises major questions for the world of work and skill forecasting, as well as utilitarian or moral questions about the purpose of work (Danaher 2017) and about whether automation can become complementary to labour. Sending States The second of the so-called triple wins generated by TMPs is the benefit they bring to the sending states. The most important of these benefits are those concerning remittances and skill transfers. The question then is whether SAWS was a positive gain in development terms for the sending countries, and whether the termination and the Brexit uncertainty around EU labour have had a negative impact on these countries. Ruhs (2006, 17) argues that the return of immigrants can influence the home society positively in two ways. Firstly, migrant workers transfer skills between states, which is made possible by the return mechanism in place. Secondly, development can occur through businesses or entrepreneurship opened with the capital of the returnees. For example, Balaz and Williams (2004) found that, in the case of Slovakian return migrants who had stayed temporarily in the UK, the level of human capital transferred was high. Migration policies including TMPs can also be seen as geopolitical capital and intertwined with trade policy (Peters 2017), as countries forge reciprocal benefits often based on geopolitical ties and/or history. These are typically demonstrated by bilateral agreements, and the majority of youth mobility schemes, including the UK's, are underpinned by reciprocal agreements. SAWS accrued positive diplomatic relations as a side benefit to the scheme: 'In terms of international relations it was really strong; the relationships set up, particularly between us and Russia, would be really useful at an informal level. On the government side there was a Polish agricultural minister who was ex-SAWS. So people from SAWS were often people who would achieve in their own country, and achieve influence' (interview Concordia 2015). The trade policy potential for the UK following Brexit could, for example, provide an opportune moment to forge new relationships with sending states, as Prime Minister May has endeavoured to do in recent visits to China and India. There is further potential for bilateral deals to be struck with other countries such as Ukraine. In terms of current sending states, emigration from central and eastern European countries has already slowed down. This could also mean that there has been saturation in the labour market and in emigration levels.
In their research, Dustmann and Weiss (2007, 253) found that those who return from the UK (on average after 4 to 5 years) are mostly from the EU countries, the Americas and Australia, whilst those who stay are from the Indian subcontinent and from Africa. In the case of return, the transferability of skills weighs even more heavily. According to Vertovec (2007), a more ideal way of transferring skills is not temporary but circular migration, if it can be realised. In terms of the transfer of skills, SAWS was considered a success. As the CBI commented (interview 2015): 'the UK has benefited from those people who are coming to us but also their home country could benefit from either from sending money home or returning…. [we need to] make sure that we have got a system that works as best as it can for everyone. This is not just dividing people into winners and losers'. A high number of SAWS workers returned the following year to generate remittances 'almost as a lifestyle choice' (interview Concordia 2015). As a GLA officer stated in an interview, agricultural work was used as a 'stepping-stone' for many migrant workers, and those who had gained the skills could go back and use them in their countries: 'Because agriculture was much seen as a stepping-stone for workers with some skills and intelligence to get away from the very basic level of work into something better. So the Polish economy, which I think had 40 per cent unemployment when they came, improved, so anybody who has got skills and ability to be mobile, went back. Not everybody did, large populations in communities of Poles were established and created. But some of them went back with certain skills; others not because people who are already in the country would use the agricultural work as a stepping-stone; the better able workers have left' (Director of Strategy at GLA, interview 2015). For repeat season workers especially, this meant that they could 'take on additional responsibilities, there are supervisor roles, team leader roles, so there's an advantage in that. These are key things for the participants, developing their experience; if they're interested in agriculture they can take their skills back with them as well' (interview Concordia 2015). Besides skill transfers, remittances are the major contribution which TMPs are said to bring to sending states. Recruiters on SAWS found that 'a lot of people [we]re buying homes in their own countries and people are coming back to make money for that, rather than staying in the UK' (interview HOPS solution 2016). Whilst remittances are an important source of development for sending states from all types of mobility, temporary migration has been found to bring even greater remittances than more permanent migration, precisely because migrant workers intend to return, and thus have high incentives for investing in their country of origin (Dustmann and Mestres 2010). Temporary emigration is also preferred by the sending states over permanent emigration because those who migrate permanently decrease their remittances over time (ibid.). When it comes to temporary migrant workers and the effect of remittances on the sending country, Datta et al. (2007) found that low-paid migrant workers in London have to sacrifice a great deal in their lives in order to send remittances, not only because they are low-paid but because, in certain cases, sending money to the home country is a strong pressure on the migrant.
They suggest that a development policy based on remittances is unethical and unacceptable, for the reason that the logic of remittances can become another justification for a system that leads to more exploitation (ibid.). However, a number of studies have found positive advantages of TMPs for sending states. Markova (2010), for example, found that Bulgarian returnees from TMPs contributed positively to the Bulgarian economy through an increase in small businesses funded by remittances. Lucas (2005) argues that consumption in sending states is increased via TMWs' families, which has revived some local economies. For some sending states, the remittances acquired from TMWs provide a main source of income. As one interviewee explained: 'The Moldovans as well were sort of saying: we're being shut out. We do what we can to keep Moldovan workers coming to the UK because it's the central income for them; UK wages are 20 to 40 times higher than theirs. So if you've got someone who's a second year student if they could spend six months in the UK then that would fund their studies, so they benefited a lot from that' (NFU policy officer, interview 2011). Skill transfers are clearly beneficial for both the migrants themselves and their country of origin, and the return clause inherent in TMPs acts as a prevention of brain drain, as the intention is for the worker to return home. Migrant Rights From an ethical perspective, the debate on temporary migrant rights has resulted in three different critical positions. Ruhs and Martin (2008) argue that there is a trade-off in practice between rights and numbers, whilst Mayer (2005) has suggested that some exploitation could be acceptable if the migrant workers are also benefiting from the schemes. In contrast, Lenard and Straehle (2012) argue that there is no need to eliminate the programmes or decrease the numbers; rather, it is possible to improve them by giving migrant workers the opportunity to gain more rights gradually and a route to permanent residency. If TMPs are to continue, what is the appropriate duration of stay after which TMWs should be granted political rights and full socio-economic rights? In terms of its contents and qualities, what does a socially just TMP look like? The key criticism of TMPs acknowledged by both the NGO and academic sectors is the inherent exploitative nature of TMPs. TMPs by their nature too often lead to exploitation for a number of reasons: since the work is temporary, the focus is on the working process, and the work comes first rather than the living conditions, creating bonded labour. Ethically, scholars question how temporary status can continue for years without workers gaining further benefits and rights (Hennebry and Preibisch 2012). Prescriptively, integration packages are key to overcoming some of the potential exploitation pitfalls; from a local authority perspective, it 'does not matter what scheme [migrants] came in on they still need to be able to translate the service information for who is coming throughout the door' (Migration Yorkshire interview 2015). SAWS was no exception and has been criticised for similar reasons (Anderson and Rogaly 2005). Whilst in law A2 citizens were well protected in contrast to third country nationals, in practice conditions were scarcely better (interview with Unite 2015). A national catastrophe in 2004 highlighted the potentially devastating effect of language barriers and the potential exploitation of migrant workers, prompting government action to regulate the low-skilled sector.
This was the Morecambe Bay cockling disaster, which resulted in the death of 23 Chinese workers. As a result, the Gangmasters Licensing Authority (GLA) (later renamed the Gangmasters and Labour Abuse Authority) was established in April 2005, with the primary purpose of preventing the exploitation of workers in the agricultural and food sector. The GLA is a non-departmental public body with a board of 30 members from the industry, unions and government. The GLAA has been heralded as a role model for other countries in preventing the exploitation of agricultural labour (interview with GLA, 2014), and there are calls for its remit to be widened (GLA 2015). Some 1201 labour providers had been licensed by the end of 2008, and during this period 78 licences were revoked for breaches discovered during inspections. However, in the absence of a TMP and of free mobility, it seems likely that irregular migration will increase. As a Unison interviewee commented: 'The other solution is that they stay below the surface and are subject to exploitation, and that's not in their interests and it's not in the interests of other workers because it leads to employers taking on people to exploit them and it leads to undercutting' (Unison interview 2011). Whilst this remains speculative, if farmers and growers cannot source their labour through legitimate channels such as SAWS (which is felt to be an inevitability across the sector; as the NFU state, 'we know that's coming' (McEwan 2015)), they may 'fall foul of unscrupulous individuals who may commit more serious offences involving illegal labour supply or other potentially more serious criminal offences, for example trafficking or forced labour of the workforce being supplied' (GLA 2014, 30). The GLA (2014, 31) also found that additional workers required by farmers or growers were now being sourced through current workers, and this presents further risk, as the opportunity is there for unscrupulous and potentially illegal gangmasters to operate within this area and exploit the workforce. The absence of a TMP in the face of a hard Brexit could be more damaging still, since an under-regulated area might become wholly unregulated, and this could have a negative effect on both employers and employees. Coupled with deregulation in the labour market, and with the UK not being obliged to abide by international conventions, the rights of migrant workers could be further at risk. From an economic liberal perspective, the lack of a TMP also removes the competition amongst operators to acquire SAWS work cards. If there is no competition to qualify for SAWS work cards, the operators may lower their standards. The lack of a holistic package for migrant workers in the absence of protected EU rights could mean that employers will assume less responsibility. One example is accommodation, which used to be provided by the employers under the scheme's accommodation clause. If employers are not responsible for accommodation, the GLA does not have to check the premises where the migrant workers are staying. In the long term, the loss of EU labour, coupled with the absence of a scheme like SAWS, could mean that the migrants who used to come under SAWS, in which certain standards were guaranteed, will be arriving and working on a more informal basis, and will therefore face a greater possibility of exploitation. This would mean that more informal employment and recruitment could take place. Where EU citizens sit within this continuum of rights and entitlements is the looming question.
Concluding remarks Throughout the paper, we have underlined that TMPs could serve a better purpose if they took into account the agency of the migrant workers. The claim that TMPs are a triple win can be questioned in many ways: remittances are not a guarantee of development; the rights of migrant workers are not always fully granted despite the international conventions on migrant workers' rights (for instance, the International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families); and host countries do not always benefit from migrant workers to the utmost level, as these migrants do not have the chance to learn the language and may even be deskilled by doing temporary jobs. Therefore, we suggest that, in the case of Brexit, the UK will face similar dilemmas over bilateral TMPs, and the conditions will be determined according to the interests of both states. The bargaining power within TMPs cannot be balanced, as these migrant workers may have to leave the country definitively under the terms of their contract, unlike EU citizen migrant workers. Temporariness will be a definite condition, as these migrant workers will not be coming from an area of freedom of movement, which means that European migrant workers will lose most of the rights they had gained in terms of visa-free travel and the opportunity to seek jobs. On the other hand, if these programmes are redesigned from a rights-based perspective, TMPs can be more beneficial than schemes such as the SAWP in Canada and the SAWS in the UK have been. Until now, TMPs have fallen short of justice considerations, but they are still the only way to manage seasonal work. Thus, we argue that they should not be abandoned, but that they should be redesigned so that unions and migrant advocacy organisations have equal footing with the states which register and implement these programmes. The GLA should assume more responsibilities in the case of a Brexit; that is the only way these programmes can gain a better reputation, if structured temporariness can ever have such a reputation. Ultimately, the question for policymakers is what a socially just TMP looks like. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
9,727.2
2018-06-01T00:00:00.000
[ "Economics" ]
The receptor tyrosine kinase EPHB6 regulates catecholamine exocytosis in adrenal gland chromaffin cells The erythropoietin-producing human hepatocellular receptor EPH receptor B6 (EPHB6) is a receptor tyrosine kinase that has been shown previously to control catecholamine synthesis in the adrenal gland chromaffin cells (AGCCs) in a testosterone-dependent fashion. EPHB6 also has a role in regulating blood pressure, but several facets of this regulation remain unclear. Using amperometry recordings, we now found that catecholamine secretion by AGCCs is compromised in the absence of EPHB6. AGCCs from male knockout (KO) mice displayed reduced cortical F-actin disassembly, accompanied by decreased catecholamine secretion through exocytosis. This phenotype was not observed in AGCCs from female KO mice, suggesting that testosterone, but not estrogen, contributes to this phenotype. Of note, reverse signaling from EPHB6 to ephrin B1 (EFNB1) and a 7-amino acid-long segment in the EFNB1 intracellular tail were essential for the regulation of catecholamine secretion. Further downstream, the Ras homolog family member A (RHOA) and FYN proto-oncogene Src family tyrosine kinase (FYN)–proto-oncogene c-ABL–microtubule-associated monooxygenase calponin and LIM domain containing 1 (MICAL-1) pathways mediated the signaling from EFNB1 to the defective F-actin disassembly. We discuss the implications of EPHB6's effect on catecholamine exocytosis and secretion for blood pressure regulation. Erythropoietin-producing human hepatocellular receptors (EPH) are the largest family of receptor tyrosine kinases. Their ligands are ephrins (EFN), which are also cell membrane molecules (1). EPHs are classified into A and B subfamilies based on their sequence homology. EFNs are also categorized into A and B subfamilies, based on the way they anchor on the cell membrane. EFNAs attach to the cell membrane by a glycosylphosphatidylinositol linkage, and they are without intracellular tails. EFNBs are transmembrane molecules. EPHs and EFNs interact promiscuously, but generally speaking, EPHAs preferentially interact with EFNAs, and EPHBs with EFNBs (2). EPHB6 is an inactive receptor tyrosine kinase due to a mutation in its kinase domain. EPH kinases and EFNs have profound and diverse functions in physiology and pathophysiology in almost all the systems in our bodies. Their functions were first reported in the nervous system (2,3). Subsequently, EPHs and EFNs were found to be essential in intestinal epithelial cell maturation (4), bone metabolism (5), angiogenesis (6), immune responses (7), insulin secretion (8), kidney glomerular filtration (9), chemotaxis (10), and homeostasis of vestibular endolymph fluid in the inner ear (11), etc. Our recent work revealed that EPHs/EFNs are critical in controlling blood pressure, according to mouse models and human genetic studies (12-21). The target tissues of such EPH/EFN functions are vascular smooth muscles and adrenal gland chromaffin cells (AGCCs). In male but not female EPHB6 gene knockout (KO) mice, 24-h urine catecholamine (CAT) levels are reduced, but castration reverts the levels to a normal range (12). Using isolated AGCCs, we have shown that in the absence of EPHB6, the acetylcholine (ACh)-triggered Ca2+ influx of the KO AGCCs is compromised. This is in part caused by an increase in big potassium channel (BK) current, causing premature closure of voltage-gated calcium channels, leading to decreased Ca2+ influx (19).
Ca2+ flux is a secondary messenger in excitable cells and influences multiple downstream events. In AGCCs, it controls long-term CAT synthesis as well as CAT exocytosis (22,23). Accordingly, AGCCs from male EPHB6 KO mice have a lower CAT content, caused by a reduced level of tyrosine hydroxylase (24), the rate-limiting enzyme of CAT synthesis. In this study, we assessed whether EPHB6 KO affected CAT exocytosis and studied the underlying mechanisms. Reduced CAT exocytosis in male KO AGCCs Our previous study has shown a reduction of 24-h urine CAT levels in EPHB6 KO male but not female mice. To prove that such reduction was due to decreased CAT secretion by AGCCs, we conducted amperometry to assess CAT release by individual AGCCs. Typical amperometry traces of AGCCs from male KO and wildtype (WT) mice are shown in Fig. 1A. Compared with WT counterparts, KO AGCCs from male mice presented a significantly reduced spike number per cell within the first 2 s of ACh stimulation (Fig. 1B) and within 60 s (Fig. 1C). The maximal spike height (Imax, Fig. 1D) and charge per peak (Q, Fig. 1E) of the KO AGCCs were also reduced, although the spike width at half-height (T1/2, Fig. 1F) and the time to reach the peak (Tpeak, Fig. 1G) were not significantly different from their WT counterparts. Analysis of the pre-spike feet (PSF) showed that the number of PSF per cell (Fig. 1H), the PSF amplitude (Fig. 1I), and the percentage of spikes with PSF (Fig. 1J) were all significantly reduced in AGCCs from male KO mice, compared with their WT counterparts. However, the PSF charge (Fig. 1K) and duration (Fig. 1L) were comparable between KO and WT AGCCs from male mice. Of note, all these amperometry parameters, including the number of spikes per cell, were not significantly different in AGCCs from female KO and WT mice (Fig. S1).
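The spike statistics reported above (spike counts within 2 s and 60 s, Imax, Q, T1/2, Tpeak) are all quantities derived from the raw amperometric current trace. As a rough, illustrative sketch only, and not the analysis pipeline used in this study, the following Python snippet shows how such per-spike parameters might be extracted from a sampled trace; the sampling rate, detection threshold and file name are hypothetical placeholders.

```python
import numpy as np

def spike_parameters(current_pA, fs_hz, threshold_pA=10.0):
    """Extract basic amperometric spike parameters from a current trace.

    current_pA   : 1-D array of amperometric current (pA)
    fs_hz        : sampling rate (Hz)
    threshold_pA : detection threshold (placeholder value)
    Returns one dict per detected spike with Imax (pA), Q (pC),
    the width at half-height (ms) and the peak time (s).
    """
    above = (current_pA > threshold_pA).astype(int)
    edges = np.diff(above)
    starts = np.where(edges == 1)[0] + 1    # first supra-threshold sample
    ends = np.where(edges == -1)[0] + 1     # first sample back below threshold
    if ends.size and starts.size and ends[0] <= starts[0]:
        ends = ends[1:]                     # trace started above threshold
    n = min(starts.size, ends.size)
    spikes = []
    for s, e in zip(starts[:n], ends[:n]):
        seg = current_pA[s:e]
        if seg.size < 3:                    # ignore very short crossings
            continue
        i_max = float(seg.max())
        q = float(np.trapz(seg, dx=1.0 / fs_hz))        # pA * s = pC
        t_half_ms = float((seg >= i_max / 2.0).sum()) / fs_hz * 1e3
        t_peak_s = (s + int(seg.argmax())) / fs_hz
        spikes.append({"Imax_pA": i_max, "Q_pC": q,
                       "Thalf_ms": t_half_ms, "t_peak_s": t_peak_s})
    return spikes

# Example: count spikes within the first 2 s of stimulation
# trace = np.loadtxt("cell01_trace.txt")   # hypothetical recording file
# early = [sp for sp in spike_parameters(trace, fs_hz=10_000)
#          if sp["t_peak_s"] <= 2.0]
# print(len(early), "spikes in the first 2 s")
```

The per-cell averages of these quantities are what would then be compared between KO and WT cells.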
Compromised cortical filamentous actin (F-actin) network disassembly in KO AGCCs upon ACh stimulation We next examined the cortical F-actin morphology in AGCCs after nicotine stimulation using confocal microscopy. Typical micrographs of continuous and disassembled cortical F-actin in resting and activated WT AGCCs (40 s after nicotine stimulation) are shown in Fig. 2A. The percentages of cells with a disassembled F-actin ring in resting KO and WT AGCCs are similar (Fig. 2B). When examined between 20 and 60 s after nicotine stimulation, KO AGCCs showed a consistently lower percentage of cells with disassembled cortical F-actin (Fig. 2B). However, after castration, the KO AGCCs reverted to the WT morphology, as the percentage of cells with disassembled F-actin increased to a level similar to that of WT AGCCs (Fig. 2B). Castration did not affect the F-actin disassembly in WT AGCCs (Fig. 2C). F-actin disassembly in AGCCs from female KO and WT mice was also similar (Fig. 2D). Altogether, these results suggest that EPHB6 deletion and testosterone are both indispensable for the compromised F-actin disassembly in AGCCs from male KO mice. We further elucidated how sex hormones in concert with EPHB6 regulated F-actin disassembly. When AGCCs from female KO mice were treated for 15 min with cell-impermeable BSA-conjugated testosterone, their F-actin disassembly was compromised (Fig. 2E). BSA-conjugated testosterone had no effect on AGCCs from WT females (Fig. 2F), excluding the possibility that testosterone alone affected the F-actin disassembly. In contrast, when estrogen was added to the culture of AGCCs from male WT (Fig. 2G) or KO (Fig. 2H) mice, it manifested no significant effect on the F-actin disassembly. To prove that the reduced F-actin disassembly caused in KO cells by cell-impermeable testosterone did have a functional consequence in terms of CAT secretion, we treated female AGCCs with BSA-conjugated testosterone. Such treatment indeed significantly reduced noradrenaline secretion by KO but not WT AGCCs (Fig. 2I). Altogether, these data indicate that the observed compromised F-actin disassembly in AGCCs from male KO mice is due to EPHB6 deletion in concert with the nongenomic effect of testosterone, whereas the absence of estrogen does not play a role in this matter. Furthermore, such F-actin disassembly appears functionally associated with CAT secretion. EPHB6 reverse signaling through EFNB1 is essential in regulating CAT exocytosis in AGCCs EPHB6 and its ligands (EFNBs) are capable of bidirectional signaling. To discern these two types of signaling related to CAT exocytosis of AGCCs, we employed solid-phase EPHB6 (EPHB6-Fc-coated wells) and solid-phase anti-EPHB6 Ab (anti-EPHB6 Ab-coated wells) to stimulate tsAM5NE cells, which are derived from normal AGCCs (25). In the adrenal gland medullae, there are two types of chromaffin cells, containing either epinephrine (EPI) or norepinephrine (NE) (26). They secrete EPI or NE, respectively (27). tsAM5NE cells are NE-secreting AGCCs (25); their NE secretion was used as a readout. As shown in Fig. 3A, solid-phase EPHB6 significantly augmented ACh-triggered NE release by AGCCs, whereas solid-phase anti-EPHB6 Ab had no such effect. Their respective protein controls, normal human IgG (the Fc part of EPHB6-Fc was of human IgG origin) or normal mouse IgG, did not impact the NE release. This result indicated that the reverse signaling from EPHB6 to EFNBs was responsible for regulating NE exocytosis in AGCCs. To identify which EFNB was mediating such reverse signaling, we placed Abs against the two major EPHB6 ligands, i.e. EFNB1 and EFNB2, on the solid phase, and used them to stimulate tsAM5NE cells. Anti-EFNB1 but not anti-EFNB2 Ab drastically augmented ACh-stimulated NE secretion (Fig. 3B), suggesting that the major reverse signaling was mediated by EFNB1. EFNBs have no enzymatic activity, and their reverse signaling depends on the association of other signaling molecules with their short intracellular sequences. In the intracellular sequence of EFNB1, there are 5 tyrosine residues and a PDZ-binding domain at its C terminus (Fig. 3C). We constructed lentiviruses expressing full-length EFNB1 and various deletion mutants of the EFNB1 intracellular sequence (Fig. 3C). tsAM5NE cells were infected with these viruses. By adjusting virus concentrations, the tsAM5NE cell-surface overexpression of different EFNB1 mutants was titrated to a similar level according to flow cytometry (Fig. S2). The flow cytometry method is described in the supplementary methods in the Supplementary Information. tsAM5NE cells overexpressing the full-length EFNB1 increased NE secretion significantly, compared with the empty virus-infected cells (both of them were cultured in anti-EFNB1 Ab-coated wells) (Fig. 3D).
The cells with full-length EFNB1 overexpression cultured in anti-EFNB1 Ab-coated wells also had significantly increased NE secretion compared with the same kind of cells cultured in wells coated with control IgG (i.e. without reverse signaling via EFNB1) (Fig. 3D). These controls validated the assay system as one able to detect reverse signaling by overexpressed EFNB1 over the endogenous EFNB1 on the tsAM5NE cell surface. EFNB1-Δ2Y (the deletion of the last 16 aa of the C terminus, including the PDZ-binding domain and two tyrosine residues, Tyr-342 and Tyr-343) was as potent as the full-length EFNB1. The further deletion of 7 aa, including tyrosine residues Tyr-323 and Tyr-328 (EFNB1-Δ4Y), significantly reduced the potency of the mutant to stimulate NE secretion. The additional deletion of 11 aa containing the remaining tyrosine residues Tyr-312 and Tyr-316 (EFNB1-Δ6Y) did not result in a further decrease of the potency. These results suggested that the critical sequence mediating EFNB1 reverse signaling in mouse AGCCs in terms of controlling CAT secretion resided within the 7-aa intracellular sequence from aa 322 to 328, containing Tyr-323 and Tyr-328. It has been established that EFNB1 reverse signaling depends on several RHO family G-proteins (13, 28-30). We found that RHOA activity after nicotine stimulation in AGCCs from male KO mice was increased compared with their WT counterparts (Fig. 3E). This is compatible with prior knowledge that heightened RHOA activity increases actin polymerization (31) and hence reduces overall F-actin disassembly. When AGCCs from male WT mice were treated with the RHOA inhibitor Rhosin, no effect on F-actin disassembly was observed (Fig. 3F). However, such inhibition reverted the decreased F-actin disassembly in the KO cells to a level similar to that of the WT counterparts (Fig. 3F). This result indicated that the increased RHOA activity in the KO AGCCs contributed to the diminished F-actin disassembly. The FYN/c-ABL/MICAL-1 pathway conducted signaling from EPHB6 to F-actin disassembly FYN was previously reported to interact with EFNB1 (32) and is therefore a possible downstream signaling molecule mediating EPHB6 reverse signaling in AGCCs concerning their function in CAT secretion. The SRC-family tyrosine kinase FYN was activated (based on its Tyr-420 phosphorylation) within 5-10 min of nicotine stimulation in AGCCs of WT male mice (Fig. 4A). Such phosphorylation was compromised in the KO AGCCs, whereas the total FYN protein remained unchanged (Fig. 4A). The tyrosine kinase c-ABL is a substrate of FYN kinase (33). Its phosphorylation, which is needed for its activation (33), was diminished, as expected, in AGCCs from male KO mice (Fig. 4B). In this experiment, total phosphoprotein was immunoprecipitated, followed by anti-c-ABL Ab blotting. We previously demonstrated that the phosphorylation of ELK1 was not influenced by EPHB6 KO in AGCCs (24). Fig. 4B shows that ELK1 phosphorylation was similar in all the precipitated samples and was used as a loading and immunoprecipitation efficiency control. Furthermore, the total protein of c-ABL in the KO and WT AGCCs was similar (Fig. 4C). MICAL-1 is a substrate of the c-ABL kinase (34) and is an F-actin monooxygenase, which oxidizes methionine residues of actin (35). It is essential in promoting the depolymerization of F-actin (35), and its phosphorylation is necessary for its activity (34).
Although total MICAL-1 protein levels in WT and KO AGCCs were similar in resting and 10-min nicotine-stimulated cells (Fig. 5A), MICAL-1 phosphorylation was significantly increased in stimulated male WT AGCCs (Fig. 5B). This augmentation was compromised in the KO AGCCs (Fig. 5B). The FYN inhibitor PP2 suppressed the up-regulation of nicotine-stimulated MICAL-1 phosphorylation in the male WT AGCCs but did not affect the KO counterparts (Fig. 5B), supporting a model in which FYN acted downstream of EPHB6, and this activity appeared upstream of MICAL-1 phosphorylation. When the male WT AGCCs were treated with imatinib, a c-ABL inhibitor, their MICAL-1 phosphorylation up-regulation upon nicotine stimulation was compromised (Fig. 5C). On the other hand, imatinib had no such effect on the KO counterparts, suggesting that c-ABL activity was downstream of EPHB6 and upstream of MICAL-1 phosphorylation in these cells. ELK1 phosphorylation was again employed as an internal control for the efficiency of immunoprecipitation and loading in this experiment. To assess the functional consequences of FYN and c-ABL inhibition concerning F-actin disassembly, we treated the WT and KO AGCCs with the FYN inhibitor PP2 (Fig. 6A) and the c-ABL inhibitor imatinib (Fig. 6B). They effectively reduced F-actin disassembly in WT but not KO AGCCs. This finding suggested that reduced FYN and c-ABL activities occurred downstream of EPHB6 and were relevant to the reduced F-actin disassembly seen in the male KO AGCCs. Figure 2. The effect of sex hormones on cortical F-actin disassembly in AGCCs from Ephb6 KO and WT mice. AGCCs isolated from KO and WT mice were cultured for 24 h and then stimulated with nicotine (50 µM) for 0, 20, 40, or 60 s. The cells were stained with rhodamine-conjugated phalloidin for F-actin and then analyzed by confocal microscopy. At least 60 AGCCs per adrenal gland per mouse were examined for F-actin disassembly, which was defined as gaps in the cortical F-actin ring that exceeded more than 5% of the circumference. Three independent experiments, each using one male KO and one WT mouse, were performed, and the data of the three experiments were analyzed by a paired two-way test; mean ± S.E. are presented. Significant p values (Student's t test after arcsine transformation) between the WT and KO AGCCs at a given time point are shown. A, representative micrographs of cortical F-actin rings in WT AGCCs before and after 40-s nicotine stimulation. The arrowheads indicate gaps in the cortical F-actin ring. Scale bar = 2 µm. B, male KO AGCCs presented reduced cortical F-actin disassembly, and castration abrogated this phenotype. C, castration did not affect cortical F-actin disassembly in male WT AGCCs. D, AGCCs of female KO and WT mice were similar in cortical F-actin disassembly. E and F, testosterone rapidly lowered cortical F-actin disassembly in female KO (E) but not female WT (F) AGCCs. AGCCs were treated with cell membrane-impermeable BSA-conjugated testosterone (1.1 µg/ml) or vehicle for 15 min at 37°C before nicotine stimulation. G and H, estrogen did not affect cortical F-actin disassembly in AGCCs from male WT (G) or KO (H) mice. AGCCs were treated with 17β-estradiol (100 pg/ml) or vehicle for 15 min at 37°C before nicotine stimulation. I, short-term testosterone treatment lowered NE released from female KO but not from WT AGCCs. AGCCs obtained from female KO and WT mice (10,000 cells per well) were cultured for 16 h, then treated with BSA-conjugated testosterone (1.1 µg/ml) or BSA in Hank's buffer at 37°C for 15 min, and stimulated with 5 mM ACh for 1 min at room temperature. NE in the supernatants was measured and normalized to baseline NE secretion by female WT AGCCs without testosterone pretreatment or ACh stimulation. Normalized fold-changes (mean ± S.D.) of NE secretion of samples with different treatments are shown. Three independent experiments were conducted. Significant p values (2-way paired Student's t test) are indicated. Additional statistical analysis for the changes between different points in time is presented in Table S1. Discussion In this work, we demonstrated that EPHB6 reverse signaling via a 7-aa intracellular sequence of EFNB1 between aa 322 and 328 was critical for regulating CAT exocytosis in AGCCs. The signaling from EFNB1, traversing through RHOA as well as through the FYN/c-ABL/MICAL-1/F-actin pathway, was necessary for EPHB6's effect on CAT secretion. In our amperometric experiments, CAT vesicles released within 2 and 60 s after ACh stimulation were recorded. A reduced number of spikes during the first 2 s in KO AGCCs was found (Fig. 1C). Such reduction reflects reduced immediately-released CAT. This could be caused by the Ca2+ influx decrease in these cells, as we reported previously (19), or by a smaller pool of immediately-releasable vesicles (IRVs), or both, in the KO AGCCs. Additional experiments will be needed to assess the pool size of IRVs. Analysis of individual spike parameters is commonly used to quantify the dynamics and size of single vesicular fusion events. We found here that the spike charge Q, reflecting the amount of CAT released per vesicle, was lower in male KO AGCCs, suggesting less catecholamine content in the vesicles in action. This observation is in agreement with our previous report that the male KO AGCCs are compromised in their CAT biogenesis (24). The maximal amplitude (Imax) of the spike is proportional to the rate of catecholamine release and is thus a function of the amount of catecholamine released and the discharge kinetics. T1/2 and Tpeak reflect the discharge kinetics. In the male KO AGCCs, the Imax but not T1/2 or Tpeak was significantly reduced, indicating that the discharge kinetics in the KO AGCCs was normal, and thus the reduced Imax is likely the consequence of a smaller amount of catecholamine released. The pre-spike feet reflect the fusion pore formation between vesicular and cell membranes, and the small amount of CAT released during this process (36). PSFs per cell, PSF amplitude, and the percentage of PSF present in all spikes were all reduced in the male KO AGCCs, suggesting that EPHB6 was involved in the processes reflected by these parameters. However, the underlying mechanisms and significance of these PSF parameters concerning CAT exocytosis remain to be elucidated. The cortical F-actin network in resting AGCCs forms a continuous net but is disassembled within seconds after ACh-triggered activation. Such a morphological change not only allows CAT-containing vesicles in the reserve pool to pass the disrupted F-actin net to replenish the IRV pool but also plays an active role in the fusion of the IRVs with the cell membrane (37). EPHB6 deletion compromised such F-actin disassembly, and this likely also contributes to the decreased CAT exocytosis observed in KO AGCCs.
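The F-actin disassembly comparisons discussed above are comparisons of proportions (the percentage of cells with a disrupted cortical ring per mouse), which the Figure 2 legend states were arcsine-transformed before a paired Student's t test. A minimal sketch of that comparison, using invented percentages for three paired KO/WT preparations at one time point, could look as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical percentages of cells with disassembled cortical F-actin
# (one value per mouse; three paired KO/WT preparations at one time point)
wt = np.array([55.0, 60.0, 58.0]) / 100.0
ko = np.array([35.0, 40.0, 38.0]) / 100.0

# The arcsine(square-root) transformation stabilises the variance of proportions
wt_t = np.arcsin(np.sqrt(wt))
ko_t = np.arcsin(np.sqrt(ko))

# Paired t test on the transformed values (WT vs KO from the same experiment)
t_stat, p_value = stats.ttest_rel(wt_t, ko_t)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```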
If the role of meshed F-actin in preventing vesicles in the reserve pool from moving to the IRV pool outweighs its role in the fusion of vesicles to the cell membrane, then the role of EPHB6 is probably more critical in the sustained CAT release by AGCCs. Our results also showed that the effect of EPHB6 KO in preventing F-actin disassembly depended on the presence of testosterone. This finding was corroborated by the decreased CAT release by female KO AGCCs in the presence of testosterone (Fig. 2I), and by our previous in vivo results showing that male but not female KO mice presented decreased 24-h urine CAT levels (12). EPHB6 can trigger signals into cells in two ways. The first is forward signaling from its ligand EFNs on neighboring cells to EPHB6 on the target cells, occurring through the intracellular sequence of EPHB6. The second is reverse signaling from EPHB6 on the neighboring cells to the EFNs on the target cells, occurring through the intracellular sequence of EFNBs. Through a series of solid-phase stimulations mimicking cell-surface EPHB6 and EFNs, we determined that reverse signaling through EFNB1, but not forward signaling through EPHB6, was essential for EPHB6's effect on F-actin disassembly. Among the two critical features of the intracellular domain of EFNB1 (i.e. the C-terminal PDZ-binding motif and 5 tyrosine residues potentially associating with SH domain-containing signaling molecules), our results showed that a 7-aa sequence between residues 322 and 328, which contains Tyr-323 and Tyr-328, was critical for CAT release. This region was previously shown to be essential in mediating T cell chemotaxis toward the chemokine CXCL12 (38). For different cell types and in response to various stimuli, different regions of the EFNB1 intracellular domain are implicated, leading to different biological outcomes. For example, in T cells, the 11-aa segment between residues 311 and 321 harboring Tyr-312 and Tyr-316 is indispensable for T cell chemotaxis to the chemokine CCL21 (39); the 16-aa sequence containing Tyr-342 and Tyr-343 negatively regulates T cell chemotaxis to CXCL12; and the 16-aa stretch from residues 331 to 345 is essential for RHOA activation and metalloproteinase secretion in gastric cancer cells (40). However, these regions did not have a perceptible effect on F-actin disassembly in AGCCs, based on our deletion study. We tried to decipher the signaling pathways from the EFNB1 intracellular sequence to CAT exocytosis. Possible pathways from EPHB6/EFNB1 to F-actin disassembly and CAT exocytosis, elucidated in this study or described in the literature, are illustrated in Fig. 7. It seems that EFNB1 has a constitutively suppressive effect on RHOA activation in AGCCs. In KO AGCCs, where this reverse signaling is absent, such a suppressive effect was released, and hence the level of GTP-bound active RHOA was elevated. Active RHOA favors the maintenance of the cortical F-actin network in AGCCs (41-43) and inhibits AGCC CAT secretion (31). Figure 4 (legend, in part). B and C, decreased c-ABL phosphorylation in the KO medullae. Lysate proteins from WT and KO medullae after 0- or 10-min nicotine stimulation were immunoprecipitated with anti-phosphoprotein Ab. The precipitated proteins were immunoblotted with anti-c-ABL or anti-ELK1 Ab. Representative immunoblots on the left show c-ABL and ELK1 phosphorylation (B) and total c-ABL and β-actin (C). The densitometry signal ratios of phospho-c-ABL versus phospho-ELK1, and of total c-ABL versus β-actin, of WT medullae at 0 min were used to normalize the data from four independent experiments. The normalized fold-changes (mean ± S.D.) are presented on the right. The significant p values are indicated (2-way paired Student's t test). Currently, the intermediate molecule linking the EFNB1 peptide sequence aa 323-328 to RHOA remains to be identified. This molecule might be a negative regulator of RHOA activity, such as a GTPase-activating protein or a guanine nucleotide dissociation inhibitor (44). Alternatively, it might be an adaptor protein binding to RHOA regulators (40). Several pathways can link RHOA activity to actin polymerization (45). We investigated one of them and found that the RHOA/ROCK/LIMK/COFILIN/F-actin pathway was not implicated (data not shown). We cannot exclude the possibility that the effect of EPHB6/EFNB1 reverse signaling on regulating RHOA activity is via the initially reduced Ca2+ influx (19). However, this possibility is incompatible with some literature and with our findings. For example, the RHOA/ROCK/Ezrin pathway, which promotes F-actin stabilization, is positively regulated by Ca2+ influx and CaMKII (46,47), as illustrated in Fig. 7. This implies that the reduced Ca2+ influx caused by EPHB6 KO would reduce RHOA activity and consequently decrease F-actin stabilization, which would favor, not reduce, CAT secretion. Additional investigation is needed to firmly establish whether there is Ca2+-independent regulation of RHOA activity by the EPHB6/EFNB1 reverse signaling. The other pathway from EPHB6/EFNB1 to F-actin involves FYN/c-ABL/MICAL/F-actin. Using the FYN inhibitor PP2 and the c-ABL inhibitor imatinib, we showed that these inhibitors affected F-actin disassembly in WT but not in KO AGCCs, indicating that this pathway leading to F-actin is relevant to the EPHB6/EFNB1 reverse signaling and that it is not functional when EPHB6 is absent. Figure 5 (legend, in part). A, unchanged total MICAL-1 levels in male KO medullae. B, compromised MICAL-1 phosphorylation in KO medullae after nicotine stimulation, or in WT medullae treated with FYN inhibitor. C, decreased MICAL-1 phosphorylation in WT medullae treated with c-ABL inhibitor. The same membranes were sequentially blotted with anti-c-ABL (Fig. 4C), MICAL-1 and β-actin Abs, with a stripping process between these different immunoblottings. The same β-actin immunoblotting was used as loading control for both Fig. 4C and panel A. Figure 7 (legend, in part). … (74), subsequently amplified by voltage-gated calcium channels (VGCC). The depolarization and the increased Ca2+ concentration cause the opening of the BK, which repolarizes the cells and shuts down VGCC (75,76). The nongenomic effect of testosterone positively regulates BK opening, whereas EPHB6-to-EFNB1 reverse signaling negatively impacts on such an effect of testosterone (24,77). Increased Ca2+ levels activate CaMKII (78), leading to activation of many downstream signaling events that enhance both CAT biogenesis (79) and exocytosis (this study). The FYN/c-ABL/MICAL-1 pathway that promotes F-actin depolymerization is downstream of and positively regulated by CaMKII (33-35,80). The RHOA/ROCK/Ezrin pathway, which promotes F-actin stabilization, is also positively regulated by CaMKII (46,47). Although the two pathways have opposite effects on F-actin disassembly after the Ca2+ influx, under normal circumstances the balance is in favor of F-actin disassembly. The EPHB6-to-EFNB1 reverse signaling by itself has no effect on CAT exocytosis. The effect of the defective reverse signaling on all the downstream events could be due to the initial compromised Ca2+ influx. It is also possible that such reverse signaling might modify signaling events directly. The reverse signaling might have a default direct positive effect on the FYN/c-ABL/MICAL-1/F-actin pathway. It might also have a direct default negative effect on RHOA activation, which typically promotes F-actin stabilization. ROCK/Ezrin (49) and ROCK/LIMK/Cofilin (50) pathways are known to be downstream of RHOA, although the latter is not activated in ACh-stimulated AGCCs (data not shown). F-actin depolymerization is not only essential in moving the slow-release CAT-containing vesicles to the docking position (51) but is also critical for optimal vesicular fusion and sewage of CAT content from the vesicles in the IRV pool during rapid exocytosis. In the absence of EPHB6/EFNB1 reverse signaling, as is the case in EPHB6 KO, the signaling strength of the FYN/c-ABL/MICAL pathway is compromised, whereas the signaling strength of RHOA/ROCK/Ezrin is relatively increased. These changes eventually lead to reduced F-actin disassembly and CAT exocytosis in the male KO AGCCs. FYN is likely immediately downstream of EFNB1, as the SRC family kinases are recruited to the lipid domain where EFNB1 is localized after the reverse signaling is triggered, and FYN is activated (45), although direct evidence of EFNB1 and FYN co-localization or interaction is still lacking. MICAL-1 is likely to be at the other end of this pathway, and its redox enzymatic activity may specifically destabilize F-actin (48). Again, it is possible that the observed effects of EPHB6/EFNB1 reverse signaling on the FYN/c-ABL/MICAL/F-actin pathway are due to its initial impact on the Ca2+ influx (19). The end results remain the same: in the absence of EPHB6 reverse signaling via EFNB1, MICAL-1 activity was reduced, leading to an increase in F-actin stability. Although the above-mentioned two pathways have opposite effects on F-actin disassembly after the Ca2+ influx, under normal circumstances the balance is in favor of F-actin disassembly. The EPHB6-to-EFNB1 reverse signaling by itself has no effect on CAT exocytosis. The effect of the defective reverse signaling on all the downstream events could be due to the initial compromised Ca2+ influx, as mentioned above. It is also possible that such reverse signaling might modify signaling events directly, in addition to its effect on Ca2+ influx. The reverse signaling might have a default direct positive effect on the FYN/c-ABL/MICAL-1/F-actin pathway. On the other hand, it might have a direct default negative effect on RHOA activation, which typically promotes F-actin stabilization. ROCK/Ezrin (49) and ROCK/LIMK/Cofilin (50) pathways are known to be downstream of RHOA, although the latter is not activated in ACh-stimulated AGCCs (data not shown). F-actin depolymerization is not only essential in moving the slow-release CAT-containing vesicles to the docking position (51) but is also critical for optimal vesicular fusion and sewage of CAT content from the vesicles in the IRV pool during rapid exocytosis.
In the absence of EPHB6/EFNB1 reverse signaling, as is the case in EPHB6 KO, the signaling strength of the FYN/c-ABL/MICAL pathway is compromised, whereas the signaling strength of RHOA/ROCK/Ezrin is relatively increased. These changes eventually lead to reduced F-actin disassembly and CAT exocytosis in the male KO AGCCs. We previously demonstrated that the male but not female EPHB6 KO AGCCs presented a lower Ca 2ϩ influx after ACh stimulation (19). The difference in the Ca 2ϩ influx between male and female KO AGCCs was due to the absence of testosterone but not the presence of estrogen in females (19). The ACh-triggered Ca 2ϩ influx is the first and most important event leading to the activation of many downstream events. It is possible that some abnormal manifestations of these events in the KO cells are the consequence of the initially reduced Ca 2ϩ influx. However, it is also possible that EPHB6/EFNB1 has Ca 2ϩ -independent regulation of FYN and RHOA activation, as illustrated in Fig. 7 by faint dotted lines between EFNB1 and FYN, and between EFNB1 and RHOA. We could induce a maximal Ca 2ϩ influx in AGCCs by ionomycin. If under such a circumstance, the activation of the FYN and/or RHOA pathways is still different in the WT and KO AGCCs, then we could conclude that indeed Ca 2ϩ influx-independent regulation of FYN and RHOA activation by EPHB6/EFNB1 reverse signaling does exist. Such experiments will be performed shortly. Although F-actin disassembly became obvious only 20 s after nicotine stimulation, it is conceivable that some more subtle actin disassembly, not measurable with our experimental approach, had already occurred in the first several seconds after stimulation. This rapid kinetics is compatible with that of acute CAT secretion, which happens at a similar time scale and is consistent with the more recently described positive role of F-actin disassembly in vesicle fusion (52). Of course, the disassembly is also essential in mobilizing the vesicles from the reserve pool for chronic CAT secretion (51), which is probably more relevant to hypertension caused by chronic stress and chronic sterile inflammation (53-55). In our study, the FYN/c-ABL/MICAL-1 pathway activation was demonstrated to occur 5-10 min after nicotine stimulation. Such slow kinetics is probably pertinent to the chronic CAT secretion, which requires F-actin disassembly and slow-releasing vesicle mobilization at such a time scale. However, FYN and c-ABL inhibitors effectively suppressed F-actin disassembly in 20 s after nicotine stimulation (Fig. 6, A and B), proving that this pathway is also essential in acute CAT secretion. Likely, immunoblotting of FYN, c-ABl, and MICAL-1 phosphorylation is not sensitive to detect earlier activation of this pathway within seconds of nicotine stimulation. The major sources of blood EPI and NE are AGCCs and the nervous system, respectively, whereas the blood dopamine level is very low. Blood pressure increase during acute stress is associated with EPI released from AGCCs. There are different opinions regarding whether the blood EPI level is associated with blood pressure under nonstress conditions. Early publications showed that ambient EPI levels are associated with blood pressure in humans and animals (56 -61). In recent years, chronic stress (53, 54) and systemic sterile low-level chronic inflammation are found to be significant contributing factors of primary hypertension (55). 
These conditions are associated with elevated blood NE levels derived from the nervous system, but increased EPI levels from AGCCs are also often observed (62-67). In a mouse model, augmented catecholamine release from AGCCs caused by TRPM4 deletion has been shown to cause hypertension (68). These data suggest that excessive CAT release from the adrenal glands is a contributing factor to hypertension. In the case of EPHB6 mutation, hypogonadism causes an increase of CAT secretion from a depressed to a normal level. Based on our previous and current findings, along with the existing literature, it is postulated that EPHB6 acts in concert with testosterone to regulate blood pressure. EPHB6/EFNB1 reverse signaling has a positive constitutive effect on CAT biosynthesis (24) and exocytosis, but such effects depend on the presence of testosterone. In the absence of this reverse signaling due to EPHB6 deletion in individuals with a normal level of testosterone, the ambient CAT biosynthesis and secretion are reduced. Such a reduction tends to lower blood pressure, but EPHB6 KO/loss-of-function mutations also cause increased resistance-artery constriction due to their other function on vascular smooth muscle cells (12), which inclines toward an increase in blood pressure. These two opposite effects on blood pressure cancel each other out, and the final outcome is that in EPHB6 KO/mutation males with sufficient testosterone, blood pressure remains normal. However, if patients with EPHB6 loss-of-function mutations or defective EPHB6-EFNB1 reverse signaling become hypogonadic (castration in the case of mice), their CAT biosynthesis and exocytosis return to the normal level. At the same time, due to the vasoconstrictive effect of EPHB6 KO/mutation, the resistance to blood flow is increased, and hence their blood pressure is augmented. For these patients, testosterone could be a personalized medication for hypertension.

Ephb6 gene KO mice Ephb6 KO mice were generated in our laboratory, as described previously (69). They were backcrossed to the C57BL/6J genetic background for more than 15 generations. Age- and gender-matched WT littermates served as controls. Some male mice underwent castration, and they were used 3 weeks after the surgery.

Amperometry Chromaffin cells from Ephb6 WT and KO mice were washed with Locke's solution and processed for CAT release measurements by amperometry, which was conducted at room temperature. A carbon fiber electrode of 5-µm diameter (ALA Scientific, New York) was held at a potential of 650 mV versus the reference electrode (Ag/AgCl) and brought close to the cells. The secretion of CAT was induced by 10-s pressure ejection of 100 µM nicotine in Locke's solution from a micropipette positioned 10 µm from the cell and was recorded over 60 s. Amperometric recordings were performed with an AMU130 amplifier (Radiometer Analytical, Loveland, CO), sampled at 5 kHz, and digitally low-pass filtered at 1 kHz. The analysis of amperometric recordings was carried out as previously described (71) with a macro (obtained from the laboratory of Dr. R. Borges) written for Igor software (WaveMetrics, Portland, OR), allowing automatic spike detection and extraction of spike parameters. The number of amperometric spikes with an amplitude >5 pA within 60 s after ACh stimulation was registered. The spike parameter analysis was restricted to spikes with amplitudes >5 pA.
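To make the spike thresholding and quantal-size integration concrete, here is a minimal Python sketch of this kind of automated analysis; the actual study used an Igor macro from the laboratory of Dr. R. Borges, so the function names, the fixed zero baseline, and the synthetic trace below are illustrative assumptions rather than the published procedure.

```python
import numpy as np

def detect_spikes(current_pA, threshold_pA=5.0, baseline_pA=0.0):
    """Return (start, end) sample indices of events exceeding threshold_pA above baseline."""
    above = (current_pA - baseline_pA) > threshold_pA
    edges = np.diff(above.astype(int))            # +1 at rising edges, -1 at falling edges
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return list(zip(starts, ends))

def quantal_size_pC(current_pA, start, end, fs_hz, baseline_pA=0.0):
    """Quantal size: area of the spike above baseline (pA * s = pC)."""
    return np.trapz(current_pA[start:end] - baseline_pA, dx=1.0 / fs_hz)

# Illustrative use on a synthetic 60-s trace sampled at 5 kHz, as in the recordings.
fs = 5000.0
trace = np.random.normal(0.0, 0.5, int(60 * fs))   # baseline noise, in pA
trace[10000:10050] += 20.0                          # one mock exocytotic spike
spikes = detect_spikes(trace)
print("spikes >5 pA in 60 s:", len(spikes))          # cells with <5 spikes would be excluded
for s, e in spikes:
    print("quantal size (pC):", quantal_size_pC(trace, s, e, fs))
```

The sketch only mirrors the thresholds quoted in the text (>5 pA spikes, >2 pA pre-spike feet); overlapping-spike rejection and the remaining spike-shape parameters are left out.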
The quantal size of individual spikes was measured by calculating the spike area above the baseline (72). For PSF signals, the analysis was restricted to spikes with foot amplitudes of >2 pA. Cells responding with fewer than 5 spikes during the 60 s were excluded from the analysis, as were spikes that partially overlapped with another spike. The numbers of spikes and PSFs were averaged per cell over 2 or 60 s after ACh stimulation. All the other amperometric parameters were calculated from all the events in all the cells tested during the 60-s recording period.

Primary AGCC culture for biochemical and confocal microscopic analyses Mouse AGCCs were isolated as described by Kolski-Andreaco et al. (73), with modifications. The adrenal glands were obtained from 8- to 10-week-old mice. The fat and cortex were removed from the medullae, which were then digested with activated papain (P4762, Sigma-Aldrich, Oakville, Ontario, Canada) in Hank's buffer (2 medullae/100 µl of Hank's buffer containing 4 units of activated papain) at 37°C for 25 min. The digested medullae were washed twice with Hank's buffer and triturated by pipetting in 300 µl of Hank's buffer until they became feather-like. Cells were pelleted at 3,700 × g for 3 min and re-suspended in DMEM containing 15% FCS for culture.

Confocal microscopy of F-actin AGCCs were cultured in 6-well plates with cover glass placed at the bottom of the wells. After 1 day, the cells were washed once with pre-warmed PBS and stimulated with nicotine (50 µM) for 0, 20, 40, and 60 s. The cells were then washed once with PBS and fixed with paraformaldehyde (4%) in PBS for 10 min at room temperature. They were permeabilized with 0.1% Triton X-100 in PBS for 3 min. The cells were labeled with rhodamine-conjugated phalloidin (1 unit/ml; R415, Thermo Fisher Scientific, Burlington, Ontario, Canada) at room temperature for 20 min and mounted with ProLong Gold antifade reagent (Invitrogen, Burlington, Ontario, Canada). All labeled cells were viewed under a confocal microscope (Leica TCS SP5 MP, Allendale, NJ). The integrity of the cortical polymerized F-actin ring of each cell was assessed. At least 60 randomly selected AGCCs per treatment per mouse were examined. If gaps in the cortical F-actin ring exceeded 5% of the length of the circumference, the ring was considered disassembled.

G-LISA assays for activated RHOA Adrenal medullae were isolated from Ephb6 KO and WT male mice and cultured in Opti-MEM™ Reduced Serum Medium at 37°C for 2 h. Nicotine (20 µM) was used to stimulate the adrenal medulla for 2.5 min, which was determined as the peak activation time in pilot studies. Proteins were extracted from the tissues on ice for 5 min in the G-LISA cell lysis buffer containing protease inhibitor cocktails (Cytoskeleton, Inc., Denver, CO; BK128). The cleared supernatants were snap-frozen in liquid nitrogen and stored at −80°C until the assay. Activated RHOA G-protein within samples (25 µg/sample) was determined by the G-LISA assay (Cytoskeleton, Inc.) according to the manufacturer's instructions. Samples were assayed in duplicate.

NE measurements tsAM5NE AGCCs were cultured for 24 h. The cells were washed once with pre-warmed Hank's buffer and placed in Hank's buffer at 37°C for 15 min. These cells were then stimulated with 5 mM ACh chloride (A2661, Sigma-Aldrich) in Hank's buffer for 1 min.
NE levels in the supernatants were measured with the NE Research ELISA kit (BA E-5200, Rocky Mountain Diagnostics, Colorado Springs, CO) according to the manufacturer's instructions. Each sample was tested in duplicate. In some experiments, 10,000 AGCCs from WT or Ephb6 KO female mice were cultured in collagen IV-coated 24-well plates in DMEM with 15% FCS for 16 h at 37°C. These cells were treated with BSA-conjugated testosterone (1.1 µg/ml, testosterone-3-(O-carboxymethyl)-oxime-BSA; Aviva Systems Biology, San Diego, CA) or BSA in Hank's buffer at 37°C for 15 min after washing once with pre-warmed Hank's buffer. The cells were then stimulated with ACh (5 mM) for 1 min. The supernatants were harvested and tested for NE levels by ELISA.

Lentivirus preparation and infection Polymerase chain reaction (PCR)-based deletion mutations of Efnb1 intracellular tails were generated and cloned into the pLentiviral CMV/TO PGK GFP Destination vector (Addgene), as previously described (38). The expression plasmid, control plasmid, and packaging constructs plp1, plp2, and plpSV were transfected into HEK 293T cells. The viruses were harvested by collecting the supernatant 72 h later and were concentrated by ultracentrifugation. tsAM5NE cells (1.2 × 10^5 cells/well in 24-well plates) were transfected with lentiviruses in the presence of 10 µg/ml Polybrene (sc-134220, Santa Cruz Biotechnology) immediately after passage. After 72 h, the transfected cells were re-plated (1.2 × 10^5 cells/well) in 24-well plates coated with collagen IV plus goat anti-EFNB1 Ab (AF473, R&D Systems) or normal goat IgG (sc-2028, Santa Cruz Biotechnology) for 24 h before the measurement of ACh-stimulated NE secretion.

Immunoprecipitation and immunoblotting Adrenal gland medullae were collected from 8- to 10-week-old WT and Ephb6 KO male mice and rested in DMEM at 37°C for 2 h. They were stimulated with nicotine (20 µM) for 5 or 10 min at 37°C and then lysed in radioimmunoprecipitation assay (RIPA) buffer containing PhosSTOP and a protease inhibitor mixture (Roche Applied Science, Meylan Cedex, France). Sixty µg of lysate protein per sample was resolved on 7.5% SDS-PAGE and transferred to polyvinylidene difluoride membranes (Invitrogen). In some experiments, the lysates were precleared with protein G magnetic beads (1614023, Bio-Rad Laboratories) and then precipitated with anti-phosphotyrosine Ab 4G10 (05321, Sigma) plus protein G magnetic beads at 4°C overnight with gentle rotation. The precipitated proteins were resolved on 7.5% SDS-PAGE and transferred to polyvinylidene difluoride membranes. The membranes were blotted with mouse anti-phospho-FYN (Y420) Ab (STJ110851, St. John's Laboratory, London, UK), mouse anti-FYN-59 Ab (626502, BioLegend, San Diego, CA), rabbit anti-c-ABL Ab (2862, Cell Signaling Technology, Danvers, MA), rabbit anti-MICAL1 Ab (ab181145, Abcam), rabbit anti-ELK1 Ab (9182, Cell Signaling Technology), or rabbit anti-β-actin Ab (4967, Cell Signaling Technology). All the Abs were used at the manufacturers' recommended dilutions. Signals were visualized with SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific).

Ethics statement All the animal studies were approved by the Animal Protection Committee (Comité institutionnel d'intégration de la protection des animaux) of the CRCHUM or were conducted according to European Council Directive 86/609/EEC.

Data availability All the data supporting our conclusions are presented in this article.
9,604
2020-04-22T00:00:00.000
[ "Biology" ]
Crystal structure of tribenzylbis(tetrahydrofuran-κO)lutetium(III) In the compound [Lu(C7H7)3(C4H8O)2], the Lu ion is coordinated by three benzyl and two tetrahydrofuran ligands. Two of the benzyl groups are bonded in a classical η1-fashion through the methylene C atom, whereas the third adopts an η2-mode, bonding via the ipso-carbon atom of the benzyl ligand in addition to the methylene C atom, resulting in a modified trigonal–bipyramidal coordination geometry about the Lu center. The mixed modes of benzyl coordination in the title compound are in contrast to the structure of the related hexacoordinate tris-THF compound, [Lu(CH2Ph)3(THF)3], in which all of the benzyl ligands are η1-coordinated (Meyer et al., 2008, 2013). The structural results provide yet another example of the importance of the metal size in the series of homologous [RE(CH2Ph)3(THF)2] (RE = Sc, Er, Lu) compounds: the complex featuring the small scandium center shows all three benzyl ligands adopting the η1-bonding mode (Meyer et al., 2008), whereas the larger lutetium can allow one of the three benzyl ligands to adopt the more sterically demanding η2-bonding mode; indeed, the Lu compound is isomorphous with the similarly sized erbium complex, [Er(η2-CH2Ph)(η1-CH2Ph)2(THF)2] (Huang et al., 2013), with metrical parameters reflecting the small decrease in ionic radius from erbium to lutetium (Shannon, 1976).

Supramolecular features The closest intermolecular contacts are between benzyl carbons C11 and C12 and the THF methylene-group hydrogen H1B (at x − 1, y, z), at 2.80 and 2.89 Å, respectively, and between the benzyl carbon C16 and the phenyl-group hydrogen H22 (at −x, −y, 1 − z), at 2.86 Å. These interactions connect the complexes in a supramolecular ribbon running along the a-axis direction. X-ray quality crystals of compound 1 were obtained by cooling a dilute toluene solution of the compound to 243 K for several days.

[Figure caption (residue): Molecular structure of 1 in the crystal. Displacement ellipsoids are shown at the 50% probability level. Hydrogen atoms are shown with arbitrarily small displacement parameters.]

Refinement Crystal data, data collection and structure refinement details are summarized in Table 1. Hydrogen atoms were generated in idealized positions according to the sp2 or sp3 geometries of their attached carbon atoms and given isotropic displacement parameters Uiso(H) = 1.2Ueq(parent atom). C–H distances in the CH2 groups were constrained to 0.99 Å and those in the phenyl-ring C–H groups to 0.95 Å.

Special details Geometry. All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.
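For readers less familiar with symmetry codes such as (x − 1, y, z) or (−x, −y, 1 − z), the following hedged Python sketch shows how such an operation is applied to fractional coordinates and how the resulting intermolecular contact distance is evaluated in Cartesian space; the cell parameters and atomic coordinates below are placeholders, not the refined values from Table 1.

```python
import numpy as np

def frac_to_cart_matrix(a, b, c, alpha, beta, gamma):
    """Standard crystallographic fractional -> Cartesian conversion matrix (angles in degrees)."""
    al, be, ga = np.radians([alpha, beta, gamma])
    v = np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                + 2 * np.cos(al) * np.cos(be) * np.cos(ga))
    return np.array([
        [a, b * np.cos(ga), c * np.cos(be)],
        [0, b * np.sin(ga), c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
        [0, 0, c * v / np.sin(ga)],
    ])

def contact_distance(frac1, frac2, symop, M):
    """Distance between atom 1 and the symmetry image of atom 2 (symop applied to frac2)."""
    image = symop(np.asarray(frac2, dtype=float))
    return float(np.linalg.norm(M @ (np.asarray(frac1, dtype=float) - image)))

# Placeholder triclinic cell and coordinates (NOT the refined values of the title compound).
M = frac_to_cart_matrix(10.0, 11.0, 12.0, 95.0, 100.0, 105.0)
C11 = [0.12, 0.34, 0.56]   # hypothetical fractional coordinates
H1B = [0.98, 0.31, 0.58]
print(contact_distance(C11, H1B, lambda xyz: xyz + np.array([-1, 0, 0]), M))   # 'x-1, y, z'
print(contact_distance(C11, H1B, lambda xyz: -xyz + np.array([0, 0, 1]), M))   # '-x, -y, 1-z'
```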
677.6
2018-01-09T00:00:00.000
[ "Chemistry" ]
Subdomain Separability in Global Optimization We propose a generalization of separability in the context of global optimization. Our results apply to objective functions implemented as differentiable computer programs. They are presented in the context of a simple branch and bound method. The often significant search-space reduction can be expected to yield an acceleration of any global optimization method. We show how to utilize interval derivatives calculated by adjoint algorithmic differentiation to examine the monotonicity of the objective with respect to so-called structural separators and how to verify the latter automatically.

Introduction In contrast to local optimization methods, deterministic global optimization methods, e.g. interval-based branch and bound (b&b) algorithms [1], guarantee finding the global solution to within a predefined optimality tolerance in finite time [2]. These methods are more expensive in terms of computational effort than their local counterparts. An important property that should be exploited during optimization is separability of the objective function. A function f : R^n → R is called partially separable (also: decomposable) if it is of the form

f(x) = f_1((x_i)_{i∈X1}) + f_2((x_j)_{j∈X2})    (1)

with a given partitioning of the index set of the independent variables into two disjoint subsets X1, X2 and functions f_1 : R^|X1| → R and f_2 : R^|X2| → R. The function is called (fully) separable if the separation can be applied recursively until all disjoint subsets contain only a single element [3,4]. For a global optimization problem with a partially separable objective function f as in (1), it is well known [5] that the global minimum can be obtained by decomposing the problem into the smaller subproblems

y* = min f_1((x_i)_{i∈X1}) + min f_2((x_j)_{j∈X2}),

which can be solved in parallel. In the context of b&b algorithms that divide every dimension into k parts, every non-leaf node generates k^n children. The decomposition reduces the number of generated nodes to O(k^max(|X1|,|X2|)) for this particular problem and thus results in a potentially significant reduction of the corresponding search space. Separable functions have been extensively researched in the context of optimization. In [6] a quasi-Newton method is introduced that exploits the structure of partially separable functions when computing secant updates for the Hessian matrix. A parallel b&b approach was used in [7] to find optima of non-convex problems with partially separable functions over a bounded polyhedral set. In [8] a derivative-free method for exploiting partial separability in unconstrained optimization was proposed. The automatic detection of partial separability as in (1) by algorithmic differentiation was proposed in [9]. In [10] a class of problems was introduced which is called "as easy to optimize as decomposable functions" and which is related to the present work. Such functions satisfy

∂f/∂x_i(x) = h(x) · g(x_i),    (2)

such that the first-order optimality condition can be transformed to g(x_i) = 0. The resulting equation depends only on a single variable. Optima for which h(x) = 0 and optima at the boundary are not taken into consideration by this approach. In this paper we aim to generalize the concept of separability in order to make previously non-separable functions also benefit from decomposition of the optimization problem on subdomains. To this end, the function must have a special structure that is less restrictive than (1), but is a variation of (2), and it additionally needs to fulfill a monotonicity condition on the separator.
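As a minimal illustration of the decomposition behind (1), the Python sketch below minimizes a partially separable toy function by solving the two lower-dimensional subproblems independently; the objective, the box, and the brute-force grid search are illustrative stand-ins, not the interval b&b machinery discussed later.

```python
import numpy as np

# Partially separable objective: f(x) = f1(x0, x1) + f2(x2, x3), X1 = {0, 1}, X2 = {2, 3}.
f1 = lambda u: (u[0] - 1.0) ** 2 + np.sin(u[1])
f2 = lambda v: np.cosh(v[0]) + (v[1] + 2.0) ** 2
f  = lambda x: f1(x[:2]) + f2(x[2:])

def brute_min(g, dim, lo=-3.0, hi=3.0, n=21):
    """Crude grid minimization of g over [lo, hi]^dim (stand-in for any global solver)."""
    grids = np.meshgrid(*[np.linspace(lo, hi, n)] * dim, indexing="ij")
    pts = np.stack([g_.ravel() for g_ in grids], axis=1)
    vals = np.array([g(p) for p in pts])
    k = int(np.argmin(vals))
    return vals[k], pts[k]

# Solving the full 4-D problem costs O(n^4) grid evaluations ...
y_full, x_full = brute_min(f, 4)
# ... whereas the decomposed problems cost only O(n^2) each and may run in parallel.
y1, u = brute_min(f1, 2)
y2, v = brute_min(f2, 2)
print(y_full, y1 + y2)          # the two minima agree (up to grid resolution)
print(x_full, np.r_[u, v])
```

The point of the exercise is the node-count argument made above: the full problem needs on the order of n^4 evaluations on a grid with n points per dimension, while each subproblem needs only on the order of n^2.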
The monotonicity condition guarantees that the decomposition still takes all possible optima into consideration which is crucial for the integration into deterministic global optimization algorithms. We use interval adjoints as a combination of reliable interval computations [11,12] and adjoint algorithmic differentiation [13,14] to obtain an enclosure of all adjoints over a given subdomain. In [15] we used this information for significance based approximate computing. In [16] we discussed significance analysis in the context of neural networks. Deterministic global optimization through a check for first-order optimality is described in [17]. In the following we show how to use interval adjoints for a monotonicity check of structural separators and for the verification of these separators. The paper is organized as follows: In Section 2 we define structural separability and we formulate the necessary monotonicity condition for the decomposition of the optimization problem. Examples for functions that are non-separable by (1) but fulfill the new definition such that their corresponding optimization problem can still be decomposed are given. Section 3 explains how to implement the presented work and how to integrate it into a b&b algorithm for deterministic global optimization. Therefore, interval adjoints are utilized for the examination of the monotonicity condition and for automatic detection of separators. In Section 4 we show results from a proof of concept implementation for the examples from Section 2 followed by conclusion and outlook in Section 5. Subdomain Separability We introduce subdomain separability and we show how to exploit this property in global optimization. with disjoint and non-empty index sets X 1 and X 2 . The scalar function s (x i ) i∈X1 is called structural separator. Conventionally separable functions as in (1) are covered by Definition 1 with structural separators Application of the chain rule of differentiation to differentiable structurally separable functions yields the gradient If X 1 only contains a single element, then the structurally separable function f also satisfies (2) with g(x i ) = ds dxi (x i ) and h(x) = df ds (x). Theorem 1. Consider the global optimization problem with structurally separable, non-convex and differentiable objective function f and separator s (x i ) i∈X1 . If the objective function is monotonic w.r.t. the separator on the domain, that is, and then the optimization problem in (3) can be decomposed into Proof. From (4) we know that the objective function is either monotonically increasing or decreasing w.r.t. the separator. In case it is monotonically increasing, that is df ds (x) ≥ 0, over the subdomain D, we have for s − ≤ s + . As to ∂fs ∂xi (s, (x j ) j∈X2 ) = 0 for i ∈ X 1 , and due to monotonicity ∂fs ∂s (s, (x j ) j∈X2 ) > 0 the global minimum of f requires the separator s to be minimal on the domain. The monotonic decrease scenario is handled analogously. Remark 1. The dimension of the inner optimization problem as in (6) is |X 1 | while the dimension of the outer optimization problem in (5) is |X 2 | + 1. Remark 2. If s (x i ) i∈X1 is also structurally separable, then the separation approach can be applied recursively and the original optimization problem decomposes into even smaller disjoint optimization problems. Remark 3. If two structural separators s 1 (x i ) i∈X1 and s 2 (x i ) i∈X2 fulfilling (4) are independent of each other, i.e. X 1 ∩X 2 = ∅, the decomposed optimization problems can be solved in parallel. 
Otherwise, either separator s 1 or s 2 needs to be optimized first if X 1 ⊂ X 2 or X 2 ⊂ X 1 , respectively. Remark 4. If the monotonicity condition in (4) holds for separator s = x i , i ∈ {0, . . . , n−1} then the minimum is located at the boundary either at min xi∈Di Remark 5. A degenerate solution is implied if df ds (x) = 0 for all x ∈ D and D contains more than one element. Remark 6. If the monotonicity condition is violated, then the structural separability can still be exploited similar to [10] by solving ds dxi (x i ) i∈X1 = 0 for finding stationary points. As already proposed in Section 1 this approach does not necessarily compute all stationary points. Examples Five test problems are investigated in the light of subdomain separability. They illustrate different aspects of the general approach. Besides the partially separable function in Example 1, there is the exponential function which is solvable in parallel and globally monotonic in Example 2, a recursive exponential function which is still globally monotonic but cannot be solved in parallel in Example 3 and the Shubert function in Example 4 that is not globally monotonic but solvable in parallel. Example 5 can neither be solved in parallel nor is it globally monotonic but it could still benefit from subdomain separability. Example 1 (Styblinski-Tang function [18]). Partially separable functions as in (1) are structurally separable and always fulfill the monotonicity condition in (4) with on any domain which yields the well-known fact that the corresponding optimization problem can be decomposed and solved in parallel. For example, the Styblinski-Tang function is as in (1) except for the factor in front of the sum. In [3] it is marked as nonseparable. Still, the problem can be decomposed into for any x ∈ R n . Example 2 (Exponential function [19]). For the exponential function we choose s i = x 2 i to be the separators and the derivative of the objective w.r.t. these separators is equal to The exponential function is globally monotonically increasing. Theorem 1 becomes applicable to all separators. The resulting subproblems can be solved in parallel. Example 3 (Recursive exponential function). To demonstrate the usefulness of structural separability we consider the optimization problem in (3) which is non-separable in a conventional manner, but fulfills Definition 1 with separators y i , i = 1, . . . , n−1. To decompose the optimization problem it remains to be shown that the derivatives of the objective with respect to the separators df dyi (x) for i = 0, . . . , n are positive (or negative) on any subdomain. From By mathematical induction we show that y i ≥ 1 for i = 0, . . . , n. The basis y 0 = 1 obviously fulfills the statement. The assumption y i ≥ 1 yields due to monotonicity of the exponential function. Thus, y i ≥ 1 and df dyi (x) ≥ 1 for i = 0, . . . , n. Furthermore, we know that the global minimum is located at As a consequence of Theorem 1 the optimization problem can be reformulated as Note, that this function is globally monotonic w.r.t. the separator which does not necessarily hold in general. Since the separators are partially dependent on each other the corresponding optimization problems need to be solved sequentially beginning with y * 1 . Example 4 (Shubert function [20]). The Shubert function is given by Each factor of the multiplication can be considered as a structural separator with s i = 5 j=1 cos((j + 1)x i + j). Derivatives of the function value w.r.t. 
the separators are derived as If any s i is either positive or negative, then the corresponding optimization problem can be decomposed by Theorem 1. Example 5 (Salomon function [10]). We show that the Salomon function is separable only on selected subdomains. The differentiable program is given by Introduction of an intermediate result As dS dsi (x) is always positive it remains to be shown that df dS (x) is either positive or negative. The roots of df dS (x) are The function is monotonic between those roots. Thus, Theorem 1 can be applied to the Salomon function on the (sub-)domain x ∈ Sz √ n , Sz+1 √ n n for all z ∈ N + . If z is even, the minimum of the separator is required for a minimum of the objective function. Otherwise, if z is odd the separator needs to be maximized to obtain a minimum of the objective function. Next, we show how to compute interval adjoints and how they can be used to apply Theorem 1 to a differentiable program implementing a function f . Furthermore, we use interval adjoints to verify structural separators. Implementation Let f : R n → R be implemented as a differentiable program y = f (x) with independent variables x and dependent variable y. Following [13], we assume that at a particular argument x the implementation of f can be expressed by a finite sequence of elemental function evaluations as where v j for j = n, . . . , n + p − 1 are referred to as intermediate variables. The precedence relation i ≺ j indicates a direct dependency of v j on v i . Furthermore, the transitive closure ≺ * of ≺ induces a partial ordering of all indices j = 0, . . . , n + p. Equation (7) is also referred to as the single assignment code (SAC) of f . The SAC may not be unique due to commutativity, associativity and distributivity. We assume a SAC to be given. Interval Arithmetic Interval arithmetic (IA) is a concept that enables the computation of bounds of a function evaluation on a given interval. A closed interval of a variable x with lower bound x and upper bound x is denoted as If there is only a single element in [x], i.e, the endpoints are equal x = x, then the square brackets [·] are dropped and x is called a degenerate interval. In that sense IA represents an extension of the real/floating-point number system. Interval vectors are denoted by bold letters and have endpoints for each component When evaluating a function y = f (x) in IA on [x] we are interested in the information The asterisk denotes the united extension which computes the true range of values on the given domain. United extensions for all unary and binary elementary functions and arithmetic operations are known and endpoint formulas can be looked up e.g. in [12]. Unfortunately, the derivation of endpoint formulas for the united extensions of composed functions might be expensive or even impossible. Hence, we will compute corresponding estimates by natural interval extensions. A natural interval extension can be obtained by replacing all elemental functions ϕ j in (7) with their corresponding united extensions as The computation of the interval function value by the natural interval extension from (8) results in The superset relation states that the interval [y] can be an overestimation of all possible values over the given domain, but it guarantees enclosure. Furthermore, the natural interval extension of Lipschitz continuous functions converges linearly to the united extension with decreasing domain size. The reader is referred to [11,12,21,22] for more information on the topic. 
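To make the natural interval extension concrete, here is a small self-contained Python sketch of interval addition, multiplication and squaring, evaluated on a toy expression to show the enclosure (and possible overestimation) property; production implementations, such as the Boost interval type used later, additionally apply outward rounding, which is omitted here.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = _as_interval(other)
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def sqr(self):
        # Tighter than self * self because both factors are the same variable.
        if self.lo >= 0: return Interval(self.lo ** 2, self.hi ** 2)
        if self.hi <= 0: return Interval(self.hi ** 2, self.lo ** 2)
        return Interval(0.0, max(self.lo ** 2, self.hi ** 2))

def _as_interval(x):
    return x if isinstance(x, Interval) else Interval(x, x)

# Natural interval extension of f(x0, x1) = x0^2 + x0 * x1 over the box [-1, 2] x [0, 1].
x0, x1 = Interval(-1.0, 2.0), Interval(0.0, 1.0)
print(x0.sqr() + x0 * x1)   # encloses (and possibly overestimates) the true range of f
```

Replacing x0 * x0 by the dedicated squaring rule illustrates why the natural extension, and hence its overestimation, depends on how the single assignment code is written.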
Adjoint Algorithmic Differentiation Algorithmic differentiation (AD) techniques [13,14] use the chain rule to compute in addition to the function value of a primal implementation its derivatives with respect to independent variables at a specified point. The adjoint or backward mode of AD propagates derivatives of the function w.r.t. independent and intermediate variables in reverse relative to the order of their computation in the primal SAC. The computationally intractable combinatorial optimization problem known as DAG Reversal [23] is implied. Following [14], first-order adjoints are marked with a subscript (1) . They are defined as A single adjoint computation with seed y (1) = 1 results in the gradient df dx (x) stored in x (1) . The adjoint of (7) can be implemented by (7) itself followed by The evaluation of the adjoint yields the adjoints of all intermediate variables v j v (1),j = y (1) · df dv j (x) , j = n + p, . . . , n . Interval Adjoints The natural interval extension of (7) and (9) Compared to the traditional approach of AD in which the derivatives are only computed at specified points, we now get globalized derivatives that contain all possible values of the derivative over the specified domain. The interval adjoints in (9) might be overestimated compared to the united extension as it is already stated for the interval values in Section 3.1. The natural interval extension of the adjoint converges linearly for continuously differentiable functions [24]. Higherorder converging interval extensions of adjoints can be derived, e.g. by centered forms. Monotonicity Check A single evaluation of the interval adjoint for y (1) = 1 suffices to verify monotonicity as in (4) for all independent and intermediate variables. If the separation approach is embedded into a b&b solver that involves verification of the first-order optimality condition by interval adjoints, then the monotonicity check is for free, assuming that the separators are known apriori. Verification of Separators Interval adjoints can be used to detect if an intermediate variable s is a separator. Note that df ds ([x]) as well as df dxi ([x]) are assumed to be available from the adjoint evaluation required for the monotonicity check. An additional evaluation of (10) is required with the adjoint of the intermediate variable set to s (1) = df ds ([x]). The resulting adjoints of the independent variables become equal to ) . If f is structurally separable and fulfills Definition 1 with separator s, then needs to hold over the entire domain, which can be verified by and since all other independent variables need to satisfy If any x (1),i fulfills neither (11) nor (12), then s is not a separator. Consequently, in addition to the interval adjoint evaluation for the monotonicity check another interval adjoint evaluation is required for the verification of each separator candidate. An exhaustive search for separators should be avoided, due to the potentially high number of intermediate variables and the associated number of separator candidates. Separators given by expert users can be verified efficiently. Since structural separability as given in Definition 1 is domain-independent and thus is a global property, it is sufficient to identify the separators once before performing the global search. Case Study The general idea of b&b algorithms [21] used for global optimization problems as given in (3) is to remove all parts of the domain that cannot contain a global minimum. 
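Before turning to the solver, the monotonicity check described above can be pictured with a tiny hand-coded interval-adjoint sweep. The Python sketch below runs a forward interval evaluation of one small SAC and a reverse sweep seeded with y_(1) = 1, then inspects whether the enclosure of df/ds excludes zero; it is a schematic stand-in for the dco/c++-over-Boost-intervals implementation mentioned below, and the toy objective f = exp(x0^2 + x1^2) with separator s = x0^2 + x1^2 is an assumption chosen for illustration.

```python
import math

# Minimal interval helpers over (lo, hi) tuples; outward rounding is omitted.
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))
def isqr(a):
    if a[0] >= 0: return (a[0] ** 2, a[1] ** 2)
    if a[1] <= 0: return (a[1] ** 2, a[0] ** 2)
    return (0.0, max(a[0] ** 2, a[1] ** 2))
def iexp(a): return (math.exp(a[0]), math.exp(a[1]))

def monotonicity_wrt_separator(x0, x1):
    """Interval-adjoint sweep for the toy SAC: s = x0^2 + x1^2, y = exp(s)."""
    # Forward sweep: natural interval extension of the SAC.
    s = iadd(isqr(x0), isqr(x1))
    y = iexp(s)
    # Reverse sweep with seed y_(1) = 1: s_(1) = exp([s]); x_(1),i = s_(1) * 2 * [x_i].
    s_adj = iexp(s)
    x_adj = (imul(s_adj, imul((2.0, 2.0), x0)), imul(s_adj, imul((2.0, 2.0), x1)))
    # Monotonicity condition: the enclosure of df/ds does not contain zero.
    monotone = s_adj[0] > 0.0 or s_adj[1] < 0.0
    return y, s_adj, x_adj, monotone

print(monotonicity_wrt_separator((0.5, 2.0), (-1.0, 1.0)))
# df/ds > 0 over the whole box, so the inner subproblem is "minimize s = x0^2 + x1^2".
```

A single such sweep bounds the derivatives with respect to all intermediates at once, which is why the monotonicity check comes essentially for free once interval adjoints are already computed for the first-order optimality check.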
The implementation used for this case study is a variation of the one presented in [17] implementing Theorem 1. The user needs to specify at least one separator. The algorithm performs the following steps: of the subdomain to find a better bound y * ; separator check : Check monotonicity condition for apriori known separators and generate a subproblem if Theorem 1 is applicable. Obviously, the improvement of the upper bound of the global minimum can be enhanced by local searches instead of evaluation of the objective function at the midpoint of the current subdomain. Recursive separation is not supported by the current version of the solver. It is the subject of ongoing development efforts. The software implements the required interval adjoints by using the interval type from the Boost library [25] as a base type of the first-order adjoint type provided by dco/c++ 1 [26]. Both template libraries make use of the concept of operator overloading as supported e.g. by C++. On the left side of Fig. 1 isolines of the two-dimensional Shubert function over the domain [0, 2π] are shown with green lines around (local) minima and red lines around local maxima. The two global minima are marked by green crosses. The right side of Fig. 1 shows the subdomains that are considered by the b&b algorithm. For visualization the branching is set up to stop when the subdomain is smaller than 0.1 in any direction. Non-square domains result from the separation approach and only occur in regions that are proven to be monotonic by the interval adjoints. Green boxes are active domains that could contain the global minimum. White boxes are discarded by the value check. Orange boxes violate the first-order optimality condition. Our solver is used to find the global minima of the examples from Section 2. The algorithm is performed with and without separation. Structural separators are marked manually. The results are summarized in Table 1. Most of the presented examples benefit from the domain-dependent separation approach and have less subdomains generated by b&b if separation is enabled. The benefit increases with growing dimensionality due to the exponential complexity of the bisection. The Salomon function does not benefit from the domain-dependent separation since the relevant domains are already discarded by the value or firstorder optimality checks. We only measure runtimes for the Styblinski-Tang example with n = 8 with and without exploiting subdomain separability. Since the derivative information is already available for all separators after the first-order optimality check, the monotonicity check only iterates over the separators defined by the user. The number of subdomains considered by the b&b algorithm without separation is 8820 times higher than with separation. The corresponding runtime without separation is only 7673 times higher than with separation. This observation correlates with the fact that the computations of subdomains that do not pass the value check are terminated immediately. The percentage of subdomains that are eliminated due to the value check is 30.2% for the case without separation and 2.8% with separation approach. The runtime estimates are averaged over 100 calls of the solver for both cases. Our in-house solver has been designed as a playground for novel algorithms. Neither is it optimized for speed, nor does it feature state-of-the-art non-convex optimization methodology beyond the previously described b&b algorithm. 
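A skeletal version of the bisection loop with the value check reads as follows in Python; the first-order optimality and separator checks listed above are omitted for brevity, and the one-dimensional objective with its hand-written natural interval extension is an illustrative stand-in rather than one of the test problems from Section 2.

```python
def f(x):                        # point evaluation, used to improve the incumbent
    return x ** 3 - 3.0 * x      # global minimum value on [-2, 2] is -2 (at x = 1 and x = -2)

def f_interval(lo, hi):          # natural interval extension of x^3 - 3x on [lo, hi]
    cube = sorted((lo ** 3, hi ** 3))
    lin = sorted((-3.0 * lo, -3.0 * hi))
    return cube[0] + lin[0], cube[1] + lin[1]

def branch_and_bound(lo, hi, tol=1e-4):
    incumbent = f((lo + hi) / 2.0)          # upper bound on the global minimum
    work = [(lo, hi)]
    while work:
        a, b = work.pop()
        lower, _ = f_interval(a, b)
        if lower > incumbent:               # value check: box cannot contain the minimum
            continue
        m = (a + b) / 2.0
        incumbent = min(incumbent, f(m))    # midpoint evaluation improves the bound
        if b - a > tol:                     # otherwise the box is small enough; stop branching
            work += [(a, m), (m, b)]
    return incumbent

print(branch_and_bound(-2.0, 2.0))          # close to -2.0
```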
Ultimately, we aim for integration of our ideas into modern software solutions for global optimization, e.g. [27,28]. Conclusion and Outlook Our notion of separability combined with checks for monotonicity allows us to decompose an optimization problem into smaller optimization problems. It extends the verification of the first-order optimality condition as it was proposed in [10]. This also enables implementation of the proposed work as an add-on to deterministic global optimization algorithms by considering all possible optima instead of some candidates fulfilling first-order optimality condition. We explained how to utilize interval adjoints to verify monotonicity of the objective function w.r.t. all structural separators at the cost of a single adjoint evaluation. As a first result, we revisited examples from the literature that benefit from the domain-dependent separability approach. Furthermore, we showed how to verify the separation property of a variable in a given computer program at the cost of only two adjoint evaluations. The verification of separators can be used as a starting point for research into heuristics for automatically detecting separators in a computer program. Further work in progress includes enabling recursive separation. Moreover, interval arithmetic can result in a significant overestimation of the true value range, e.g. due to the wrapping effect or the dependency problem. The replacement of interval adjoints by an adjoint version of affine arithmetic [29] or by McCormick relaxations [30,31,32] of adjoints is expected to yield tighter enclosures.
5,215
2020-10-19T00:00:00.000
[ "Mathematics", "Computer Science" ]
Asymmetric supercapacitors: optical and thermal effects when active carbon electrodes are embedded with nano-scale semiconductor dots Optical and thermal effects in asymmetric supercapacitors, whose active-carbon (AC) electrodes were embedded with nano-Si (n-Si) quantum dots (QD), are reported. We describe two structures: (1) p-n like, obtained by using a polyethylimine (PEI) binder for the n-like electrode and a polyvinylpyrrolidone (PVP) binder for the p-like electrode; (2) a single component binder, poly(methyl methacrylate) (PMMA). In general, AC appears black to the naked eye and one may assume that it acts as a black body absorber, namely, indiscriminately absorbing all light spectra. Yet, on top of a flat lossy spectra, AC (from two manufacturers) exhibited two distinct absorption bands: one in the blue (~ 400 nm) and the other one in the near IR (~ 840 nm). The n-Si material accentuated the absorption in the blue and bleached the IR absorption. Both bands contributed to capacitance increase: (a) when using aqueous solution and a PMMA binder, the optical related increased capacitance was 20% at low n-Si concentration and more than 100% for a high concentration dose; (b) when using IL electrolyte, the large, thermal capacitance increase (of ca 40%) was comparable to the optical effect (of ca 42%) and hence was assigned as an optically-induced thermal effect. The experimental data point to an optically induced capacitance increase even in the absence of the n-Si dots; this could be attributed to the absorption of AC in the blue. I. Introduction Supercapacitors, S-C, [1][2][3] -capacitors that take advantage of the capacitance at the interface between an electrode and an electrolyte -have been found a large range of energy applications, least of all, optical modulators [4], and as buffering elements to subdue demand fluctuations in digital power networks [5][6]. A subclass of these are asymmetric S-C -capacitors made with two types of electrodes [7][8][9]. Asymmetric S-C are mostly fabricated as pseudo-supercapacitors -S-C whose capacitance is associated with a chemical reaction at the electrolyte/electrode interface. Pseudo-supercapacitors are aimed at increasing the operation potential range [10][11][12][13][14] via electrochemical means. To these, one ought to add dye sensitized solar cells [15][16], a special class of optically powered electrochemical energy sources with two distinct electrodes and an electrolyte that mediates the reacting ion species. Here, we describe carbon based, optically controlled asymmetric S-C that do not exhibit a chemical reaction at their electrolyte/electrode interfaces; namely, we focus on a basic, electrochemical double layer (EDL) supercapacitors. Our intent is to gain basic understanding of the optical and related thermal effects when incorporating n-Si dots in AC based electrodes. We describe two S-C types, which are both embedded with n-Si dots: polymeric doped AC electrodes via PVP and PEI binders, and nondoped AC electrodes with a PMMA binder. II. Methods and Experiments The basic cell is composed of two transparent electrodes, either indium tin oxide (ITO, sheet resistance Rsqr=20 Ohms) or fluorinated doped tin oxide (FTO, sheet resistance Rsqr=30 Ohm) films on glass substrates. The electrodes were facing each other to make a parallel plate capacitor. The electrodes were coated with active carbon (AC) film (either produced by American Hardwood, AH, or by General Carbon Company, GCC). 
In the case of p-n cells, the AC was functionalized with binders using the same techniques used to functionalize carbon nanotubes [17][18][19][20][21][22]. While the AC is not a semiconductor material, nevertheless, it was hypothesized that the small AC domains would make it susceptible to polymeric doping. PVP and PEI binders, p-n like cells: In the case of a p-n like cell, the AC (various concentrations in the range 100-200 mg/mL) in methanol was mixed using a sonicator with a horn antenna. The n-Si dots, of size less that 100 nanometers and at various concentrations in the range of 1-10 mg/mL were mixed in mostly the PVP 'p-type' material. The concentration range of the PVP was 20-40 mg/mL but larger than 10% by weight. Two molecular weights have been considered for the polymers: low m.w of 25 kD and large m.w of 630 kD. Two molecular weights were also considered for PEI: low m.w of 25 kD and 50% water diluted 600 kD. The low m.w polymers worked the best. The irradiation on n-Si embedded PEI resulted in a reaction. The slurries were drop-casted on the transparent electrodes and were dried out at 90 o C for 30 min. The entire structure was held by a strong clip, leaving an exposed surface for light illumination. The illuminated (exposed) area was smaller than the entire area of the S-C. The films thickness varied but were of the order of a few hundreds of microns. As a result, capacitance of a given sample was assessed as a relative measurement: under light irradiation and without it. PMMA binder: The adhesion of PEI and PVP to the conductive glass with IL was good but was sometimes compromised in aqueous cells. PMMM binder worked better under these circumstances with both IL and Na2SO4 electrolytes. The n-Si were typically incorporated within the electrode facing the light source. Typically, a 200 mg/mL AC produced by GCC with a 20 mg/mL PMMA binder on FTO was used. The 2 mg/mL n-Si was mixed with the other components in Anisole and were dried out in an oven at 90 o C for 30 min. Electrochemical Measurements: Measurements were carried out with a Potentiostat/Galvanostat (Metrohm). The samples were irradiated with a 75 W incandescent light bulb situated 30 cm above the samples. The light intensity of the entire radiation spectra (from the visible to the IR) was measured with a bolometer and was assessed as 30 mW/cm 2 . A calibrated homemade hot plate, which was interfaced with a thermocouple was used for the thermal experiments. A second thermocouple assessed the temperature right at the sample surface. Electrical measurements: Current-voltage plots (I-V curves) on dry films were obtained with a sensitive, 100 fA, computer controlled dedicated system (Keithley). Optical transmission measurements: A computer controlled monochromator (SPEX), which was interfaced with a white light source, a chopper and a Si detector was used to assess the optical transmission of each film on a glass substrate. The transmission value was assessed as the signal obtained with the film on the glass slide divided by the signal obtained with only the glass slide. III.a. Film characterizations: Asymmetric Cells: The asymmetry of the capacitive element is demonstrated in Fig. 1. Shown are cyclic voltammetry (CV) curves at a scan rate of 0.5 V/sec for a cell made with concentrations of AC, PVP, PEI, as follows: 100 mg/mL, 20 mg/mL and 20 mg/mL with lower m.w polymers. The 'p'-like electrode was made of AC/PVP on Al and the 'n'-like electrode was made of AC/PEI also on Al. 
The AC was made by AH and the electrolyte was IL soaking a tissue separator. The curves are mirror image of each other when the positive and negative leads were switched, namely, the effect may not be attributed to Schottky barrier at the contacts. Fig. 1a was obtained when the positive lead of the potentiostat was connect to AC/PVP electrode (denoted as p (+) ) and the negative lead of the potentiostat was connected to the AC/PEI side (denoted as n (-) ). Fig. 1b was obtained when lead connections were switched -namely, n (+) -p (-) . Charge-discharge (CD) curves ( Fig. 1c) conveyed the same message: the rise time is shorter and the discharge time is longer for p (+) -n (-) , whereas the reverse is true when the lead connections are switched. Granted that the S-C is less than perfect; it was of the order of tens of micro-F for this 0.5 cm 2 capacitor. Fig. 1d exhibits the redox reaction when the n-Si incorporated in the AC/PEI electrode. Optical Transmission and I-V curves: The optical transmission through the various material components is shown in Fig. 2a. Since the films' thickness varied and we are only interested in the spectral shape of the curves, each transmission curve was normalized to its peak transmission. The glass slide signal was referenced to the transmission through air; all other curves were referenced to the signal of their substrate -a glass slide. The transmission of a glass slide is fairly constant throughout the spectral range between 400 to 900 nm. The transmission The yellowish n-Si powder absorbs heavily in the blue green rang as typical for these nano-scale dots. The indirect bandgap of Si at 1.1 microns turns into a direct bandgap and its absorption is blue shifted when the dot size becomes smaller. The AC film exhibited two distinct absorption bands regardless of maker (the one depicted in the picture has been produced by American Hardwood (AH)). One absorption band is in the blue, centered at 460 nm and the other is in the near IR, centered at 840 nm. PVP absorbs in the deep blue and portrays flat absorption for wavelengths between 500 to 900 nm. The absorption of AC/PVP followed that of only AC. Most notably is the transmission of PVP/n-Si (not shown) and AC/PVP/n-Si. The absorption peak at 840 nm disappeared, leaving only an absorption peak in the blue. Also noted is the blue shift for the 460 nm absorption line to below 400 nm with a combined effect that is larger than for either components. The behavior at 840 nm may be explained if attribute the absorption line to impurity doping or surface states. Electrons were transferred from the AC (donors) to the n-Si (acceptors) and the transition was bleached. The behavior near 400 nm is more complex and could involve dipole coupling between the AC and n-Si species. In Fig. 2b we show the I-V curve of illuminated and non-illuminated dry AC based films on ITO. A 1-mm scratch was made in the ITO layer, preventing electrical conduction across it (Fig. 2c). The film was bridging the gap and enabled conduction. There are two takeaways from the experiments: (1) The ITO/AC films exhibited Ohmic contacts; and (2) the film conductivity has increased under white light illumination. The films' thicknesses and widths were not the same, which explains the difference in the curves slopes. Nevertheless, we can assess the relative conductivity (=I/V) change under illumination: it was (illuminated-dark)/dark=5.5x10 -2 =5.5% for the AC/PVP/n-Si and 3x10 -2 =3% for the AC/PVP film. 
As expected, the n-Si has improved the film conductivity under light, even though its concentration was of the order of a few percent, compared with the typical 20% conductive-additive loading in commercial S-C materials [25]. The effect is small but nonetheless measurable. [Fig. 2c caption (residue): measuring conductivity with and without illumination (light impinging the film from above).]

III.b. Supercapacitors under light ON and light OFF conditions In the remainder of this paper we will sort out the optical and related thermal effects in these films. In Fig. 3 we show CV and CD curves for a p-n cell, p(AC/PVP/n-Si)-n(AC/PEI), under light ON and light OFF. The CV curves are asymmetric both when light is OFF (room light) and when it is ON. The sample was deposited on ITO, with lens tissue serving as a separator and soaked with IL. A capacitance increase can be observed in Fig. 3a. The relative capacitance increase, normalized by the illuminated area, was [(C/A)illum − (C/A)dark]/(C/A)dark = 0.42, i.e., a 42% increase under illumination. Upon illumination, one may observe a tilt of the plot axis towards larger current values. Such a tilt may be attributed to a larger film conductivity and a lower ESR. The latter is corroborated by the CD data (Fig. 3b). Capacitance here is calculated for the decay branch as Ceff ≈ I0/(V0/τ) = I0·τ/V0, which replaces the usual linear expression, C = I0/(dV/dt). Here I0 is the (constant) discharge current, 1/τ is the single decay rate that approximates the discharge branch, V0 is the voltage difference (V0 = 1 V in our case), and Ceff is the effective capacitance across the various film regions as the device is gradually discharged. The relative capacitance change under illumination is ~37%. Overall, the ESR is quite large and may be attributed to the large impedance of the current collector (ITO; Rsqr = 20 Ohms, compared to 2.7 mOhms for Al), the use of IL, and a non-optimized binder-to-AC ratio.

III.c1: PMMA binder with aqueous solution Let us start with the simpler system using a single electrode binder (PMMA), where the electrode facing the light source is embedded with n-Si dots. In Fig. 4a-b we present two CV plots: Fig. 4a was obtained while heating the sample from 23 °C to 35 °C at a rate of 0.1 °C/sec and collecting the CV data continuously. The light was OFF. The AC manufacturer was GCC and the electrolyte was 1 M Na2SO4. The loading of the n-Si was 2 mg/mL. Note the rotation of the curve as the sample heats up; its waist at zero potential changed only a little. This is our reference. The relative change from the first scan to the last was <2%. Fig. 4b exhibits two CV curves for light OFF and light ON. Upon illumination, the temperature at the sample surface rose to 27 °C. The relative capacitance change (accounting for the fact that the exposed illuminated area is smaller than the area of the entire S-C) for the ON/OFF case was 20%, clearly larger than the thermal reference. While the curve has rotated, its waist has increased too. Finally, CV curves were obtained with AC electrodes and only PMMA as a binder (namely, without n-Si). As seen from Fig. 4c, there is a small optical effect when considering the smaller illuminated area of the S-C, while the temperature at the sample surface reached 30 °C under optical illumination. Heating of the AC electrode even without n-Si is attributed to overall AC scattering/absorption. Thus, while the optically related heating is substantial, the increase in local polarization is small.
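The two capacitance estimates used above can be written out explicitly: the linear form C = I0/(dV/dt), and the single-exponential form Ceff ≈ I0·τ/V0 for a discharge branch approximated by V(t) = V0·exp(−t/τ). The short Python sketch below applies both to a synthetic constant-current discharge trace; the trace and parameter values are illustrative assumptions, not measured data from these cells.

```python
import numpy as np

def linear_capacitance(t, v, i0):
    """C = I0 / |dV/dt| from a straight-line fit to (part of) the discharge branch."""
    slope = np.polyfit(t, v, 1)[0]                 # dV/dt, negative during discharge
    return -i0 / slope

def effective_capacitance(t, v, i0, v0):
    """Ceff ~ I0 * tau / V0 for a branch approximated by V(t) = V0 * exp(-t / tau)."""
    tau = -1.0 / np.polyfit(t, np.log(v), 1)[0]    # fit log V = log V0 - t / tau
    return i0 * tau / v0

# Synthetic discharge: V0 = 1 V, tau = 2 s, constant discharge current I0 = 10 uA.
i0, v0, tau = 10e-6, 1.0, 2.0
t = np.linspace(0.0, 4.0, 400)
v = v0 * np.exp(-t / tau)
print(effective_capacitance(t, v, i0, v0))         # ~ I0 * tau / V0 = 20 uF
print(linear_capacitance(t[:40], v[:40], i0))      # early-time linear estimate, similar size
```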
(Figure 4 caption: (a) CV curves during heating — the curve rotates as the sample heats up, while its waist at zero potential barely changes; (b) CV curves for light OFF (room lighting) and light ON — the curve rotates and its waist increases; (c) AC with a PMMA binder but without n-Si exhibits only a small optical effect once the smaller illuminated area is taken into account.) An easy calibration can be made with a single S-C cell. In that case one compares the effect of illumination for two cases: when the electrode containing the n-Si dots faces the light source, and when the other electrode, the one without the dots, faces the light source. This experiment is presented in Fig. 5. Both electrodes were deposited on ITO with Rsqr = 5 Ohms (Huanyu). The electrolyte was Na2SO4 and the cell reached 26 °C in both cases during illumination. The concentration of the n-Si dots was 10 mg/mL, substantially larger than in all the other cases described before. Specifically, when the electrode without n-Si was facing the light source, the relative OFF/ON change was ca. 11% (22% when considering the smaller light-exposed area); it was 56% (112% when considering the smaller light-exposed area) for the electrode containing the n-Si dots and facing the light source. Based on Figs. 4c and 5a, one may postulate that there could be an optical effect even in the absence of n-Si dots, due to the absorption of AC in the blue. III.c2: PMMA binder with ionic liquid electrolyte In order to sort out whether the capacitance increase is due to heating of the electrolyte through light absorption, we repeated the previous measurements and replaced the aqueous solution with an IL electrolyte. The AC electrode here was embedded with a higher n-Si concentration, 10 mg/mL. Fig. 6a presents the capacitance increase upon heating from 23 to 45 °C at a rate of 0.1 °C/s. The CV data were collected continuously throughout the heating process. Fig. 6b shows the related CV curves for light ON and light OFF. A large capacitance increase is noted: the relative capacitance increased by 50% when the sample heated up from 23 °C to 37 °C (Fig. 6a). Similarly, a ca. 50% capacitance increase and a related temperature increase are noted in Fig. 6b. Thus, unlike Fig. 4, here we cannot separate the optical effect from the related thermal effect. Similar conclusions may be drawn when a lower concentration of n-Si was used. III.c3: p-n binders with ionic liquid electrolyte CV curves were obtained for a p-n-like cell (n-Si in AC/PVP, and AC/PEI, respectively). The electrolyte was IL. In Fig. 7a we present a CV curve obtained while heating the sample from 23 °C to 35 °C and collecting the CV data continuously, all the while with the light source OFF. The heating rate was 0.1 °C/s. The relative change from the first scan to the last was ca. 40%. Fig. 7b exhibits two CV curves for light OFF and light ON. Upon illumination, the temperature at the sample surface rose to 30 °C. The relative capacitance change for the ON/OFF case was 35%. Using the calibration curve of Fig. 7a, the thermal effect between 23-30 °C may account for only 23% of the capacitance increase, while the remaining 13% could be attributed to the optical effect. This last value is similar to the value obtained for the aqueous system. Nonetheless, since the p-n cell is a more complex system than the one involving PMMA, and recognizing that the conductivity of PVP changes upon light illumination (Fig.
2b), more data are needed to affirmatively determine the portion of the optical effect with IL electrolytes. We point to recent studies showing that the thermal sensitivity of ILs can be substantially decreased by using proper mixtures [24]; this could be used here to accentuate the optical effect, but such a study is beyond the scope of the current manuscript. III.c4: p-n binders with aqueous solution The thermal response of p-n-like cells may be corroborated by using Al electrodes, a 1 M Na2SO4 electrolyte, and eliminating the n-Si QDs from the electrode composition. Fig. 8 shows CD curves for a p(AC/PVP/n-Si)-n(AC/PEI) sample. The sample was heated by a 75 W lamp (although the Al current collector prevented any light penetration) and the temperature was recorded at the sample surface using a thermocouple point probe. The AC was manufactured by GCC. The <2% difference in the discharge time may have more to do with the instability of the binder in the aqueous electrolyte than with the heat. Therefore, we reiterate that aqueous solutions over the small temperature range studied have little thermal effect on the cell capacitance. III.d. The effect of the IR absorption band at 840 nm As discussed earlier, incorporation of n-Si dots bleached the absorption band at 840 nm. Further corroboration may be obtained by measuring the capacitance while using a yellow optical filter. The filter transmits wavelengths longer than the cut-off wavelength of 550 nm, thus eliminating the blue band from the white-light spectrum. One has to factor in the reduced overall light intensity when the light source is interfaced with such a filter. The intensity of the light source interfaced with the filter was 70% of the total white-light output (both visible and IR), as measured with a bolometer. Thus, if the intensity-related effects are linear, the capacitance increase due to the optical and/or thermal effect would be 70% of the effect without the filter. Fig. 9a shows that a p(AC/PVP/n-Si)-n(AC/PEI) cell on FTO with IL electrolyte, illuminated with a filter-interfaced white-light source, exhibited the same characteristics as the non-illuminated S-C. On the other hand, a p(AC/PVP)-n(AC/PEI) cell on FTO (without n-Si) in Fig. 9b showed that the characteristics for light ON with and without the filter are the same; both differed from the light OFF situation. When using the filter we eliminated heating through blue-light absorption, but absorption was still present through the IR band. All of the above may be summed up as follows: the IL was heated only through the blue-band absorption when the electrode contained n-Si; it was heated through both the blue band and the IR band in the absence of n-Si. Conclusions Asymmetric S-Cs embedded with n-Si QDs have shown a large capacitance increase. For aqueous cells it was due to local polarizations, even in the absence of n-Si dots. For cells interfaced with IL electrolytes the capacitance increase was mostly due to optically induced thermal effects in the electrolyte. Such elements exhibit promise for future novel, optically controlled supercapacitors.
4,782.4
2020-12-15T00:00:00.000
[ "Physics" ]
A New Spin on the Weak Gravity Conjecture The mild form of the Weak Gravity Conjecture states that quantum or higher-derivative corrections should decrease the mass of large extremal charged black holes at fixed charge. This allows extremal black holes to decay, unless protected by a symmetry (such as supersymmetry). We reformulate this conjecture as an integrated condition on the effective stress tensor capturing the effect of quantum or higher-derivative corrections. In addition to charged black holes, we also consider rotating BTZ black holes and show that this condition is satisfied as a consequence of the $c$-theorem, proving a spinning version of the Weak Gravity Conjecture. We also apply our results to a five-dimensional boosted black string with higher-derivative corrections. The boosted black string has a $\text{BTZ}\times S^2$ near-horizon geometry and, after Kaluza-Klein reduction, describes a four-dimensional charged black hole. Combining the spinning and charged Weak Gravity Conjecture we obtain positivity bounds on the five-dimensional Wilson coefficients that are stronger than those obtained from charged black holes alone. Introduction The effective field theory (EFT) approach allows one to systematically compute UV effects on the IR dynamics of a physical system. In an EFT, UV effects are encoded in an infinite series of higher-dimensional operators and corresponding Wilson coefficients. At first sight, it seems that without performing any measurements a low-energy observer cannot know the value of any Wilson coefficients. However, it has been known for a long time that not every EFT is "healthy" in the sense that it enjoys an embedding in a UV-complete theory free of pathologies. For example, unitarity and causality can constrain certain (combinations of) Wilson coefficients to be positive [1]. In the context of quantum gravity the criteria that distinguish healthy EFTs from sick ones are known as swampland conjectures (see [2,3] for reviews). Healthy EFTs that enjoy an embedding in a consistent theory of quantum gravity are said to reside in the landscape, while EFTs that cannot be embedded in quantum gravity belong to the swampland. In the absence of experimental data sufficiently sensitive to directly probe quantum gravity, swampland criteria are helpful in constraining the space of EFTs that arise in its low-energy limits. The swampland conjecture that is the focus of this paper is the Weak Gravity Conjecture (WGC) [4], which in its original form states that any theory with a U (1) gauge field must include at least one state whose charge-to-mass ratio exceeds that of extremal black holes in that theory. This allows extremal black holes to decay, unless protected by a symmetry (such as supersymmetry). Further refinements of the WGC specify the energy scales at which these states should appear. Strong forms of the conjecture require the states in question to be light or part of a tower [5] or charge sublattice [6,7] of (super)extremal states. Milder forms of the WGC allow the states to be heavy or even given by black holes with an extremality bound that is corrected by quantum or higher-derivative corrections. 1 This latter version is referred to as the "mild form" of the WGC and requires that corrections increase the charge-to-mass ratio of extremal black holes in a canonical ensemble (fixed charge and temperature). Because the sign of the corrections to the extremality bound depends on the sign of the Wilson coefficients, unitarity and causality play a crucial role. 
In fact, several proofs applying in different restricted settings and making use of thermodynamics [9,10] or unitarity and causality [10][11][12] have been given by now, but it has become clear that generically one needs additional UV information and that the WGC cannot follow solely from IR consistency. In the presence of a massless graviton, positivity bounds cannot completely constrain the correction to the extremality bound due to a singularity in the forward limit of graviton exchange in the t-channel. 2 It is thus of interest to identify the minimal set of assumptions needed to prove the WGC. To manage expectations, we will not identify this minimal set of assumptions in this paper. Instead, we will reinterpret the mild form of the WGC as a criterion on matter that generates corrections to the extremality bound. In a way, this is similar to using energy conditions to exclude pathological matter contributions (see [14] for example). This results in a condition on the stress tensor that is equivalent to the WGC. For a d-dimensional black hole this condition, Eq. (1.1), states that the effective stress tensor, contracted with n^a and ξ^b and integrated over a Cauchy slice Σ, must be non-positive. Here Σ is a Cauchy slice with normal vector n^a and ξ^a is a Killing vector for which the horizon is a Killing horizon; δT^eff_ab is an effective stress tensor whose definition will be given in the main body of this article. This condition has several attractive features. Just as in the thermodynamic approach described in [17], it is only necessary to know the uncorrected black hole metric to derive corrections to the extremality bound. This has the technical advantage that one does not need to solve the (possibly complicated) corrected Einstein equations in order to evaluate the WGC. In addition, this condition is valid for any correction that generates an effective stress tensor, not just higher-derivative corrections, so it can be applied to a wide range of scenarios. It is therefore natural to view (1.1) as a condition for matter on an extremal black hole background to be "healthy". We motivate this point of view by applying our condition to extremal rotating BTZ black holes and showing that (1.1) is satisfied as a consequence of the Null Energy Condition (NEC). This follows from perturbing a BTZ black hole with NEC-satisfying matter holographically dual to a relevant deformation in the CFT. This relevant operator triggers a Renormalization Group (RG) flow along which the central charge monotonically decreases as a consequence of the c-theorem. On the black hole side, a decrease in the central charge increases the extremal angular momentum-to-mass ratio, so that a "spinning" version of the WGC is satisfied. Although the WGC is normally phrased in terms of black holes charged under U(1) gauge fields, 3 we believe this spinning version of the WGC to also be of interest. First, our results suggest a generalization of the Repulsive Force Conjecture (RFC) proposed in [18]. If we consider the gravitational and centrifugal forces of a single rotating black hole, we note that a co-rotating object of negligible mass hovering at the event horizon of an extremal rotating black hole experiences an attractive gravitational force that precisely cancels the repulsive centrifugal force. When the spinning WGC is satisfied (but not saturated), the angular momentum-to-mass ratio increases, causing the gravitational force to become weaker than the centrifugal force. A similar condition for higher-spin states was studied in [19].
Second, corrections to the BTZ extremality bound also play an important role in determining the consistency of pure three-dimensional gravity. In [20], it was observed that the partition function constructed in [21,22] contains a negative density of states in the regime where BTZ black holes are near extremality. One way of curing this pathology is to modify the theory by including additional matter (which could be very heavy) that modifies the BTZ extremality bound and corrects the density of states in a way that guarantees positivity. An alternative resolution has been proposed in [23]. Third, in string theory BTZ black holes can appear as the near-horizon limit of a black string. Upon compactifying the black string, a charged black hole appears. 4 If we consider higher-derivative corrections to the black string, one can get constraints on the black hole solution by imposing the spinning WGC in the near-horizon geometry of the black string. We test this idea by studying a five-dimensional boosted black string. We show that the extremal entropy of the BTZ still matches the four-dimensional entropy after including higher-derivative corrections, but the corrections to their respective extremality bounds do not coincide. 5 (Footnote 3: One conceptual difference between black holes carrying electric charge versus angular momentum is that spinning black holes are naturally unstable via the Penrose process. On the other hand, extremal electrically charged black holes can provide a large family of stable non-supersymmetric states unless the spectrum is modified from that of pure Einstein-Maxwell theory. Footnote 4: Alternatively, one can use U-duality to map the charged black hole to a BTZ black hole [24].) Instead, they contain complementary information, and by imposing both the spinning and charged WGC we obtain positivity bounds on the five-dimensional Wilson coefficients that are stronger than those obtained from the charged WGC alone. The fact that the three-dimensional spinning WGC does not imply the four-dimensional charged WGC, but offers complementary information, agrees with the phenomenon that in theories of gravity with spacetime dimension d ≥ 4, IR consistency cannot completely constrain the sign of corrections to the extremality bound [11]. Our finding that positivity bounds can be strengthened by dimensional reduction is also supported by [26], who, independently of us, study higher-derivative corrections to the extremality bound of five-dimensional black objects and their four-dimensional Kaluza-Klein reductions. The rest of this paper is organized as follows. In Sec. 2 we first loosely motivate why a condition of the form (1.1) should be true. We then formalize this idea by explicitly deriving the condition using the Iyer-Wald formalism and, as an example, apply our relation to Reissner-Nordström and BTZ black holes. In Sec. 3 we focus on BTZ black holes and show that our derived relation, and therefore a spinning form of the WGC, is satisfied as a direct consequence of the holographic c-theorem. Finally, in Sec. 4 we study higher-derivative corrections to a five-dimensional black string and compute corrections to the extremality bounds of the near-horizon BTZ × S^2 geometry and the four-dimensional charged black hole that arises after a Kaluza-Klein compactification. Some technical details regarding the Iyer-Wald formalism are reviewed in App. A and the explicit form of the higher-derivative corrected black string solution is described in App. B.
Loose motivation Given an extremal charged black hole perturbed by quantum or higher-derivative corrections, it is of interest to understand under what conditions the charge-to-mass ratio increases in a canonical ensemble (fixed temperature and charge), such that the mild form of the WGC is satisfied. In this section, we will rewrite the shift to the extremality bound as a condition on the stress tensor that captures these corrections. In [10,11], it was explained in great detail (see also [25]) that corrections to the extremality bound in a canonical ensemble and corrections to the entropy in a microcanonical ensemble (fixed mass and charge) of an extremal black hole are directly related. At least for stationary black holes both corrections are determined by a modification of the same metric function f(r), whose roots give the location of the horizon of a black hole. To have a concrete example in mind, we can think of f(r) as the rr component of the inverse metric in Schwarzschild gauge of a Reissner-Nordström black hole perturbed by higher-derivative corrections. Schematically, such a black hole is described by the following action (2.1). (Footnote 5: Note that this is not in conflict with the relation between microcanonical entropy (which is not evaluated at zero temperature) and extremality [11,25].) Here α_i are Wilson coefficients and I^(i)_hd are higher-derivative terms. Fixing mass and charge, we can write the corrections to f(r) as f(r) = f_0(r) + δf(r) and the corrected horizon as r_+ = r_0 + δr. The location of the corrected horizon is now found by solving, where we treated the correction as a small perturbation. For an extremal black hole f_0(r_0) = f_0'(r_0) = 0 and the shift in the horizon is given by, when δf(r_0) and f_0''(r_0) are both non-vanishing. Because f_0''(r_0) > 0, the singularity of the solution is only cloaked behind a horizon when δf(r_0) ≤ 0, which shifts the outer horizon positively (or leaves it uncorrected). The Wald entropy of the corrected black hole is now given by, where the first term is the Bekenstein-Hawking entropy and the second term contains a modification to the Bekenstein-Hawking formula due to higher-derivative corrections. Because the horizon shift of an extremal black hole scales as δr ∼ O(√α_i), we find that the leading piece of the black hole entropy is simply given by the Bekenstein-Hawking formula evaluated on the corrected horizon. Equivalently, we can also consider the corrections in a canonical ensemble. In that case, a microcanonical increase in the horizon manifests itself as a decrease of the ADM mass, increasing the charge-to-mass ratio [10,11]. Thus, when working in a microcanonical ensemble the WGC can be understood as the statement that singularities present in the uncorrected spectrum should be cloaked behind a horizon after including corrections. However, we should stress that this does not imply that the mild form of the WGC follows from the Weak Cosmic Censorship Conjecture. Here we are comparing two different black holes (one with and one without higher-derivative corrections) to each other, and not having a positive real shift of the outer horizon, i.e. δr < 0, at odds with the WGC, only implies that an extremal black hole in the uncorrected theory is not a regular solution in the corrected theory.
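To make the perturbative statement above concrete, the following sympy sketch (not part of the original derivation) solves f_0(r_0 + δr) + δf(r_0 + δr) = 0 to leading order for an extremal toy metric function f_0(r) = (1 - r_0/r)^2 (the extremal Reissner-Nordström form) and an assumed correction δf(r) = -ε r_0^4/r^4; the specific form of δf is illustrative only.

```python
import sympy as sp

r, r0, eps = sp.symbols('r r_0 epsilon', positive=True)

# Toy extremal metric function and an assumed small correction.
f0 = (1 - r0/r)**2
df = -eps * r0**4 / r**4          # illustrative choice, delta f(r0) = -eps

# Extremality of the uncorrected solution: f0(r0) = f0'(r0) = 0, f0''(r0) > 0.
assert f0.subs(r, r0) == 0
assert sp.diff(f0, r).subs(r, r0) == 0
f0pp = sp.diff(f0, r, 2).subs(r, r0)      # = 2/r0**2 > 0

# Leading-order shift: 0 = (1/2) f0''(r0) dr**2 + df(r0)  =>  dr = sqrt(-2 df(r0)/f0''(r0)),
# which is real (a genuine horizon) only when df(r0) <= 0 and scales as sqrt(eps).
dr_sol = sp.simplify(sp.sqrt(-2*df.subs(r, r0)/f0pp))
print(dr_sol)                              # r_0*sqrt(epsilon)

# Numerical cross-check: the outer root of f0 + df sits at r0 + dr up to O(eps).
vals = {r0: 1, eps: 1e-4}
root = sp.nsolve((f0 + df).subs(vals), r, 1.05)
print(root, (r0 + dr_sol).subs(vals))      # both ~ 1.01
```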
When this correction to the extremality bound is induced by additional matter (for example heavy matter that is integrated out, generating higher-derivative terms), it is natural to expect that whenever that matter is "healthy" it leads to a correction compatible with the WGC. Indeed, in the set-up considered in [10,11] this is precisely what happens; unitarity and causality imply WGC-compatible signs of the Wilson coefficients. We will now phrase this healthiness in terms of the condition on the stress tensor to which we alluded earlier. To first gain some intuition, we imagine that the effect of the additional matter is to introduce a shell outside the horizon of an extremal charged black hole. For simplicity, we suppose the shell is uncharged and has a mass m: see Fig. 1. Before introducing the shell, the extremal black hole has a charge-to-mass ratio of Q/M = 1 (in appropriate units), where M is the ADM mass. Keeping the ADM mass and charge fixed, we now introduce the shell. The charge-to-mass ratio is now given by As explained, the WGC now dictates that the resulting state does not contain a naked singularity, which means that the Q/M ≤ 1. Since the ADM mass and charge are held fixed this requires m ≤ 0. In terms of the matter stress tensor this condition reads Here Σ is a Cauchy slice of constant Killing time t with an induced metric h ab and a unit normal vector n a . We are interested in stationary black holes, so ξ a is a timelike Killing vector. So we see that, at least in this example, we can rephrase the correction to the extremality bound as a covariant condition on the stress tensor. While this simple setup gives some useful intuition, it also has its shortcomings. In particular, if the two-derivative action includes a graviton and gauge field as massless degrees of freedom, additional matter fields also backreact on the gauge field and must be taken into account. We now show that the correct condition (1.1) also takes into account a correction to the stress tensor of the gauge field. Deriving the general relation Now that we have motivated the WGC as a condition on a stress tensor, we make this intuition precise by rewriting the corrections to the horizon of an extremal black hole as an integral of the stress tensor. To do so, it will be useful to employ the covariant phase space formalism of Iyer and Wald, which we review in App. A. Viewing the Lagrangian as a d-form, we consider Einstein-Maxwell theory, possibly with a cosmological constant: Here κ 2 = 8πG d and is the volume form on the d-dimensional background. As explained in the appendix, for any infinitesimal diffeomorphism parametrized by ξ or gauge transformation parametrized by λ we can construct a Hamiltonian that obeys a conservation equation that is satisfied on-shell. We now consider an off-shell variation of the Hamiltonian. We then find (see (A.26)) the following conservation equation: The left-hand side is a variation of the exterior derivative of the Hamiltonian and the righthand side contains a term E g that captures the gravitational equations of motion and a second term that arises from a variation with respect to the gauge field A. Because the background satisfied Einstein's equations, we can rewrite the first term as (2.10) Now let us consider a black hole (not necessarily in asymptotically flat space). 
Integrating (2.9) over a Cauchy slice Σ of constant Killing time t located somewhere between the (outer) horizon and spatial infinity, we can use Stokes' theorem to write the result as follows. To arrive at this form, we picked a gauge in which ι_ξ A + λ vanishes at the horizon and we assumed that δF_ab dies off sufficiently fast at infinity. Here h_ab is the induced metric on Σ and n^a its unit normal vector. The second integral on the right-hand side consists of two terms, which arise from varying both the metric and the gauge field. The sum of both contributions can be thought of as an effective stress tensor and we arrive at the following relation. As we will see next, when we specify a black hole background the first integral on the left-hand side becomes proportional to the asymptotic charges of the black hole and the second integral gives the correction to the horizon. At fixed asymptotic charges, we then find an identity relating the shift of the black hole horizon to the stress tensor, reformulating the WGC as a condition on the stress tensor. A non-covariant version of this relation already appeared in [27] in the context of the four-dimensional Reissner-Nordström black hole. Here we considered Einstein-Maxwell theory, but a generalization to a more general theory with stationary black hole solutions is straightforward. BTZ black hole The first example we look at is pure three-dimensional gravity described by the following Lagrangian. On a constant negative-curvature background, Λ = −1/ℓ², a particular solution of Einstein's equations is given by the BTZ black hole which has the metric shown above. The inner and outer horizons are given by r_± and the mass M_3 and angular momentum J_3 can be written as given. To construct the Hamiltonian associated with an infinitesimal diffeomorphism, we first have to compute the symplectic potentials and Noether charges (see (A.20) and (A.24)). For the Killing vector ξ = ∂_t we then find the following expressions. We only displayed the terms proportional to dφ because the other terms will drop out of the integral of interest. Taking variations with respect to the metric functions and using (A.17) we find the variation of the Hamiltonian for the timelike Killing vector. The conserved charge associated with this Hamiltonian is of course the mass. Similarly, the Noether charge and symplectic potential for the Killing vector ∂_φ are given, and the associated conserved charge is the angular momentum. In [28] it was observed that the integral of the Hamiltonian variation associated to the Killing vector K = ∂_t − Ω∂_φ (where Ω = r_−/(ℓ r_+) is the angular potential) over the horizon is directly proportional to the variation of N(r_+)², where the roots of N(r)² determine the location of the horizon. Notice that for this Killing vector, the horizon is a Killing horizon. We can now relate the shift in the horizon to the stress tensor and the conserved charges by making use of (2.13). Hence, the horizon shift for an extremal black hole at fixed charges (δM_3 = δJ_3 = 0) follows directly. As explained, a positive (or absent) shift of the horizon requires δ(N(r_+)²) ≤ 0. Given this condition it is now straightforward to determine whether a particular correction to the BTZ background increases the horizon in a microcanonical ensemble (fixed M_3 and J_3), which determines the extremality bound in a canonical ensemble (fixed temperature and J_3). In [28] for example, this relation has been employed to compute the correction to the extremality bound induced by the one-loop stress tensor of a massive scalar field.
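As a consistency check of the BTZ quantities used here (not part of the original text), the following sympy sketch encodes the commonly used normalizations M_3 = (r_+² + r_-²)/(8G_3ℓ²), J_3 = r_+ r_-/(4G_3ℓ), S = π r_+/(2G_3), T = (r_+² − r_-²)/(2πℓ² r_+) and Ω = r_-/(ℓ r_+) — conventions differ by overall factors between references, so treat these normalizations as assumptions — and verifies the first law and the extremality bound J_3 ≤ ℓ M_3.

```python
import sympy as sp

rp, rm, ell, G3 = sp.symbols('r_+ r_- ell G_3', positive=True)

# BTZ charges and horizon quantities (normalizations are convention-dependent).
M = (rp**2 + rm**2) / (8*G3*ell**2)        # mass
J = rp*rm / (4*G3*ell)                     # angular momentum
S = sp.pi*rp / (2*G3)                      # Bekenstein-Hawking entropy
T = (rp**2 - rm**2) / (2*sp.pi*ell**2*rp)  # Hawking temperature
Omega = rm / (ell*rp)                      # angular potential

# First law dM = T dS + Omega dJ, checked component-wise in (r_+, r_-).
for x in (rp, rm):
    assert sp.simplify(sp.diff(M, x) - T*sp.diff(S, x) - Omega*sp.diff(J, x)) == 0

# Extremality: at r_+ = r_- the temperature vanishes and J_3 = ell*M_3,
# so regular (non-naked) BTZ black holes satisfy J_3 <= ell*M_3.
print(sp.simplify((ell*M - J).subs(rm, rp)))   # 0
print(sp.simplify(T.subs(rm, rp)))             # 0
```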
Here, we are interested in computing higher-derivative corrections that are generated upon integrating out heavy matter and we will perturb the BTZ black hole by the leading gravitational corrections in a derivative expansion, The mass scale m in front, which obeys m 1, is chosen such as to make the coefficients α 1 , α 2 dimensionless. Varying the higher-derivative Lagrangian with respect to the metric, we find that the stress tensor is given by Evaluated on the BTZ background the stress tensor is Plugging this into (2.26), we find a divergent integral. This divergence can be blamed on the fact that higher-derivative corrections do not fall off going to the boundary in threedimensional gravity. To regulate this divergence, we subtract the following contribution ab is the metric of empty AdS. We can now perform the integral to find the finite result The horizon is shifted positively (or receives no corrections) when 3α 1 + α 2 ≥ 0. Reissner-Nordström black hole We now repeat the above calculation for electrically charged Reissner-Nordström black holes in asymptotically flat four-dimensional space. These black holes are solutions to the following Lagrangian: The line element of the Reissner-Nordström black hole solution is given by where it is useful to choose a gauge in which A t vanishes at the horizon: Φ + gives the difference in electric potential between the horizon and infinity. The mass and electric charge are given by We now need to consider the Hamiltonian that generates the flow of the timelike Killing vector ∂ t , for which the black hole horizon is a Killing horizon, and the Hamiltonian for the gauge transformation A → A + dλ. Proceeding as before, we obtain explicit expressions for the Noether charge and symplectic potential (using (A.21) and (A.24)). Similarly, the charge associated with gauge transformations is The variation of the Hamiltonians are (using (A.17) and (A.25)) and the corresponding conserved charges are We now use (2.13) to find the shift in the horizon due to additional contributions to action (such as higher-derivative terms): Thus, at fixed charges (δM 4 = δQ = 0) we obtain where the effective stress tensor is defined as Because a positive shift (or no corrections) of the horizon requires δf (r + ) ≤ 0 we find that the mild form of the WGC can be rewritten as a condition on the effective stress tensor. We will now show how this relation can be employed by perturbing the Reissner-Nordström black hole with the following higher-derivative corrections: Dimensionless coefficients are defined by b 1 = a 1 /κ 4 and b 2 = a 2 /κ 2 . For an electrically charged Reissner-Nordström these are the most general higher-derivative corrections up to four derivatives. As we saw, the effective stress tensor δT eff ab contains explicit terms δT ab that arise from varying ∆L with respect to the metric as well as implicit corrections F ac δF c b that capture a modification of Maxwell's equations. The corrected Maxwell equations are given by This is solved by The explicit corrections are given by [27] δT (2.48) Adding both corrections and performing the integral over Σ and using (2.42) we find in the extremal limit A positive (or absent) shift of the horizon requires 2b 1 − b 2 ≥ 0, which matches [11,27]. 
Spinning WGC from Holographic RG In the previous section, we rephrased WGC-satisfying corrections to the extremality bound as an integrated condition on an effective stress tensor and gave two examples where the corrections to the stress tensor arose from higher-derivative terms, but we could also have considered quantum corrections. In [28], this relation was used to relate the one-loop stress tensor of a massive scalar field to a correction of the BTZ extremality bound. To compute the effect of quantum corrections, one simply replaces the classical value of the stress tensor by its expectation value. This correctly gives the shift in the horizon radius as long as the semi-classical approximation is valid. In light of a spinning WGC we would like to understand whether there is a general principle behind positivity of the horizon shift. In this section, we show that this is the case for a particular class of corrections to the BTZ black hole. In particular, when a BTZ black hole is perturbed by a relevant deformation, this triggers a holographic RG flow. When the NEC is satisfied along the flow, the central charge of the dual CFT decreases by virtue of the c-theorem [29]. This implies that when we reach a fixed point in the IR, that theory includes BTZ black holes with angular momentum-to-mass ratios exceeding those of unperturbed black holes, so that a spinning version of the WGC is satisfied. The holographic RG [30] provides a systematic way of computing CFT correlation functions from the on-shell gravitational action at fixed radial coordinate in the context of AdS/CFT. The radial coordinate in the bulk is identified as an energy scale in the CFT, and moving from the boundary of AdS into the bulk describes an RG flow from the UV to the IR in the boundary theory. For our purposes, we will consider the following three-dimensional action on an AdS background perturbed by purely gravitational four-derivative operators. For now we focus on these particular higher-derivative corrections, but our method can be easily generalized to additional terms as well. In the context of the holographic RG we can think of this action as an effective field theory in the IR whose higher-derivative corrections parametrize the effect of modes that have been integrated out along the flow. In three dimensions, the Ricci tensor of the BTZ solution is proportional to the metric (the geometry is locally AdS_3), which implies that the higher-derivative corrected action still has a BTZ solution described by the metric (2.15). The effect of adding higher-derivative corrections is to shift the central charge of the dual CFT_2 from its Brown-Henneaux [31] value. The corrected central charge can easily be determined using c-extremization [32]. One defines a c-function and extremizes it with respect to ℓ to find the central charge. Here L_3 is the Euclidean Lagrangian. In our case, extremizing the c-function, we obtain the corrected value. From Einstein's equations one finds that the higher-derivative terms are proportional to the cosmological constant, so we can also absorb them into the AdS length. Explicitly, we can write (3.1) equivalently as an action with a shifted cosmological constant. Since we removed the higher-derivative corrections, this action has BTZ solutions with an AdS length L and the central charge of the dual CFT is now just given by the Brown-Henneaux value c = 3L/2G_3. Indeed, using (3.7) the central charge is still given by the same expression. We therefore see that on-shell, higher-derivative corrections in three dimensions can equivalently be understood as an uncorrected theory with a modified AdS length [33].
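The claim that the BTZ geometry is locally AdS_3, i.e. R_ab = -(2/ℓ²) g_ab (so that purely gravitational higher-derivative terms do not spoil the BTZ form of the solution), can be verified directly; the sketch below does so with a hand-rolled curvature computation in sympy and is offered only as an illustrative check, not as part of the original analysis.

```python
import sympy as sp

t, r, phi, rp, rm, ell = sp.symbols('t r phi r_+ r_- ell', positive=True)
x = [t, r, phi]

N2   = (r**2 - rp**2)*(r**2 - rm**2)/(ell**2 * r**2)   # lapse squared
Nphi = -rp*rm/(ell*r**2)                                # shift

# Rotating BTZ metric: ds^2 = -N^2 dt^2 + dr^2/N^2 + r^2 (dphi + Nphi dt)^2
g = sp.Matrix([[-N2 + r**2*Nphi**2, 0, r**2*Nphi],
               [0, 1/N2, 0],
               [r**2*Nphi, 0, r**2]])
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d])) for d in range(3))/2)

Gamma = [[[christoffel(a, b, c) for c in range(3)] for b in range(3)] for a in range(3)]

def ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c]) for a in range(3))
    expr += sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                for a in range(3) for d in range(3))
    return sp.simplify(expr)

Ric = sp.Matrix(3, 3, lambda b, c: ricci(b, c))

print(sp.simplify(Ric + 2*g/ell**2))     # zero matrix: R_ab = -(2/ell^2) g_ab
print(sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(3) for b in range(3))))  # R = -6/ell^2
```

On such a background the four-derivative scalars R² and R_ab R^ab are constants, which is why the corrections can be absorbed into a shift of the effective AdS length, as stated above.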
The correction to the central charge modifies the entropies and extremality bound of black holes in the theory. The mass and angular momentum given in (2.17) are related to the excitation levels of the dual CFT by the standard relations [34]. In terms of the excitation levels we can write the extremality bound as h ≥ c/24. (3.10) To derive the change in the extremality bound in a canonical ensemble and the change in entropy in a microcanonical ensemble we find it useful to use a thermodynamic approach. The Euclidean action is given by (3.11), where we supplemented the bulk action by a Gibbons-Hawking-York boundary term, defined at the boundary at r → ∞, and a counterterm K_0 to make the on-shell action finite. The counterterm that removes the divergence of I_E when r → ∞ is given by (3.12). Using this, the on-shell action is given by (3.13). Here β is the inverse temperature of the black hole. It is well known that, even in the presence of higher-derivative corrections, the Euclidean action can be written in terms of the Gibbs free energy G as in [17] (3.14). Here Ω = r_−/(ℓ r_+) denotes the angular potential and S is the entropy. We can now evaluate the Euclidean action in a grand canonical ensemble. The different thermodynamic quantities are given by (3.15), as is the mass. We start by computing the extremal entropy by expressing S(Ω, T) as S(T, M_3) and taking T → 0. We then evaluate the mass in a canonical ensemble by expressing it in terms of T and J_3; in the limit T → 0 we obtain the corrected value. Thus, the extremality bound is modified accordingly. Finally, we are also interested in the microcanonical entropy (fixed M_3 and J_3). Expanding the canonical expression of the mass for small temperature, we find that the z = J_3/(ℓM_3) = 1 state has a non-zero temperature of the form (3.20). At fixed M_3 and J_3 the correction to the extremal black hole entropy follows. We see that a positive shift of the angular momentum-to-mass ratio of an extremal BTZ black hole increases the microcanonical entropy and corresponds to a decrease of the central charge. When the correction to the central charge is generated by the higher-derivative corrections in (3.1), a negative (or absent) shift in the central charge requires 3α_1 + α_2 ≥ 0. It is now straightforward to argue that when we perturb a BTZ black hole by a relevant perturbation, the central charge decreases (or is uncorrected) along the flow, such that 3α_1 + α_2 ≥ 0 and a spinning WGC is obeyed. Our starting point is a purely three-dimensional gravity theory with a Brown-Henneaux central charge. Then, we perturb this theory by some matter field that is holographically dual to a relevant operator. This will trigger a holographic RG flow until we reach a fixed point in the IR, which corresponds to a CFT perturbed by an irrelevant deformation. The gravitational dual of this theory has BTZ solutions with an AdS length (and central charge) that is smaller than the one in the unperturbed theory by virtue of the c-theorem: see Fig. 2. So whenever the higher-derivative corrections in (3.1) arise in the IR as a consequence of a relevant perturbation, 3α_1 + α_2 ≥ 0. Next, we will illustrate this behaviour when the relevant perturbation is a scalar field. Although it is convenient to assume that the UV CFT is dual to pure Einstein gravity, such that its central charge takes the Brown-Henneaux form, this is strictly speaking not necessary.
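As a small consistency check of the Cardy-type relations invoked above (added here for illustration, not part of the original text), the sketch below uses the common identifications h = (ℓM_3 + J_3)/2, h̄ = (ℓM_3 − J_3)/2 and c = 3ℓ/2G_3 — conventions for the c/24 shift vary between references, so these identifications are an assumption — and verifies that the Cardy entropy reproduces S = π r_+/2G_3, with extremality corresponding to h̄ → 0, i.e. J_3 → ℓM_3.

```python
import sympy as sp

# Parametrize the horizons by u = r_+ + r_- and v = r_+ - r_- (both non-negative).
u, v, ell, G3 = sp.symbols('u v ell G_3', positive=True)
rp, rm = (u + v)/2, (u - v)/2

M = (rp**2 + rm**2) / (8*G3*ell**2)
J = rp*rm / (4*G3*ell)
c = 3*ell / (2*G3)

# Assumed identification of the CFT levels (the c/24 shift is convention-dependent).
h    = sp.simplify((ell*M + J) / 2)     # = u**2/(16*G3*ell)
hbar = sp.simplify((ell*M - J) / 2)     # = v**2/(16*G3*ell)

S_cardy = 2*sp.pi*sp.sqrt(c*h/6) + 2*sp.pi*sp.sqrt(c*hbar/6)
S_BH    = sp.pi*rp / (2*G3)

print(sp.simplify(S_cardy - S_BH))          # 0: Cardy reproduces Bekenstein-Hawking
print(sp.simplify((ell*M - J).subs(v, 0)))  # 0: the extremal state has hbar = 0, J_3 = ell*M_3
```

A smaller central charge at fixed M_3 and J_3 then raises the extremal angular momentum-to-mass ratio, which is the statement of the spinning WGC used in the argument above.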
As long as the c-theorem is obeyed, it is guaranteed that the IR central charge is smaller than the central charge of the UV fixed point. In this sense, the three-dimensional spinning form of the WGC is insensitive to the UV, as long as there exists a black hole with which to compare the extremality bound. Example: Scalar perturbation We now give an explicit example of a holographic RG flow where the relevant perturbation is a scalar field. Because BTZ black holes are related to empty AdS by a modular transformation, we find it convenient to describe a flow between two AdS spaces. A modular transformation does not modify the central charge and the AdS flow is therefore sufficient to show that the central charge decreases. Of course, one could also consider a direct flow between two BTZ black holes as in [35], which is technically more involved. As expected, those results are also in agreement with the c-theorem. To describe the flow, it will be useful to take the following domain-wall ansatz for the metric. where we take x ∼ x+2π. Empty AdS space (with a compactified x-coordinate) corresponds to A(ρ) = ρ/ . We now perturb the Einstein-Hilbert action by a scalar field φ, writing Taking the scalar field to depend only on the radial coordinate ρ, we can write the action in the following form: Here V 2 = dtdx. The dot denotes a derivative with respect to ρ, the prime a derivative with respect to φ and W (φ) is any function that solves The function W (φ) has dimensions of length −2 . The equations of motion can now be easily obtained by setting the squares to zero, The perfect square structure of the action leading to these equations of motion can also be derived using the Hamilton-Jacobi formalism and is reminiscent of BPS equations [36]. One can check that the equations of motion solve the full non-linear Einstein equations and the Klein-Gordon equation. Let us now consider a holographic RG flow between two AdS spaces connected via a domain wall. From the equations of motion, we see that a stationary point φ of W (φ) corresponds to a solution with AdS length = (κ 2 W (φ )) −1 . Although it is not necessary for our argument to give an explicit form of the potential, we find it illustrative to work through an explicit example so we pick the simple potential Here m is a positive constant with dimensions of length −1 which we choose to obey 0 < m 1/κ 2 . Focussing on the region φ ≥ 0, this function has two critical points, Solving the first-order equations of motion we obtain the following solution Here c 1 , c 2 are integration constants. We find that the critical points of W (φ(r)) are reached asymptotically, If we consider fluctuations δϕ = φ − φ around the critical points, we find that V (δϕ) < 0 around φ = 0 and V (δϕ) > 0 around φ = m/2. Because a scalar field with mass M in the bulk is dual to a CFT operator with conformal dimension ∆ given by M 2 2 = ∆(∆−2), we identify φ = 0 as the UV CFT perturbed by a relevant deformation (∆ < 2) and φ = m/2 as the IR CFT perturbed by an irrelevant deformation (∆ > 2). The c-function that gives the central charge at the critical points is given by [35] c(ρ) = 3 2G 3Ȧ . We see that the IR central charge is smaller than the UV central charge in accord with the c-theorem. More generally, the change in the c-function along the radial direction iṡ The requirement that the central charge monotonically decreases (or is uncorrected) along the flow from UV to IR isÄ ≤ 0. From Einstein's equations, this condition follows naturally. 
Contracting the Einstein tensor with a null vector ζ a = ( −g tt , √ g ρρ , 0) we find The conditionÄ ≤ 0 follows directly from the NEC (T ab ζ a ζ b ≥ 0). So, as long as the NEC is obeyed, the central charge of the IR theory is smaller than the central charge of the unperturbed UV theory, so that the spinning WGC for BTZ black holes is satisfied. This is the bulk dual of the c-theorem in the CFT. Five-dimensional black string Now that we have seen that perturbing a BTZ black hole by a relevant deformation leads to a spinning version of the WGC, one might wonder whether the spinning WGC has any application in constraining the extremality bound of charged extremal solutions that have near-horizon limits with BTZ factors (the particular example that we consider is a boosted five-dimensional black string). This idea is quite natural as it is well known that the entropy of such charged extremal solutions can be easily determined using Cardy's formula in the near-horizon BTZ geometry (see [37] for example). Because of the close connection between entropy and extremality [11,25] one might naively think that both extremality bounds should coincide and that the spinning WGC implies the charged WGC. One should keep in mind however that this relation only holds for the microcanonical entropy (which is not evaluated at zero temperature). By computing higher-derivative corrections to a five-dimensional boosted black string, we show that while the entropy of the BTZ and four-dimensional black hole agree at zero temperature, their extremality bounds do not. To determine the corrections to the near-horizon BTZ×S 2 geometry we find it convenient to use c-extremization again and corrections to the four-dimensional black hole are computed employing a thermodynamic approach. As an additional check of our results, we also calculate corrections to the fourdimensional extremality bound using (1.1) and on top of that explicitly construct the fivedimensional higher-derivative corrected black string. In App. B we show that the explicit solution is in perfect agreement with our results from c-extremization, our integrated condition, and the thermodynamic approach. The precise system we will study is a compactification of M-theory with three M5branes that intersect along a string on T 6 × S 1 . This particular setup has been studied in [38]. For simplicity, we take the three magnetic charges of the M5-branes to be equal. Before adding higher-derivative corrections the relevant action that describes them is given by 1) and the line element reads [38] The factor of three in front of the Maxwell term in the action signifies that there are three magnetic charges. The magnetic field strength F and harmonic function H(r) are defined as The physical charge of the solution is given by We can now perform a boost along the x-direction, which transforms t → cosh δ 0 t + sinh δ 0 x , (4.5) x → sinh δ 0 t + cosh δ 0 x . (4.6) The metric now becomes As we will now show, this geometry describes a four-dimensional charged black hole after a Kaluza-Klein reduction and has a BTZ×S 2 near-horizon limit: see Fig. 3 for a sketch. Four-dimensional black hole To obtain a charged four-dimensional black hole solution, we take x = x + 2πR to be compact and perform a Kaluza-Klein reduction. We take the standard ansatz where ϕ is a scalar field and A 0 a gauge field corresponding to the electric field F 0 = dA 0 . Defining q 0 = r 0 sinh 2 δ 0 we read off Figure 3. Sketch of the five-dimensional black string geometry. 
The coordinate x is periodic, shown with identifications on the left, while on the right t is suppressed. In the coordinates of (4.7) the ring singularity is at r = −q and the two horizons (red), located at r = 0 and r = r 0 , have topology S 1 × S 2 at fixed t. The four-dimensional black hole is found by reducing over x (leading to a horizon of spacial topology S 2 and a point-like singularity). The BTZ × S 2 geometry is found in the near-horizon limit. with H 0 (r) = 1 + q 0 r . (4.10) After performing the reduction and going to Einstein frame, we find that the metric (4.7) becomes the following four-dimensional black hole, It will be useful to combine the electric and magnetic field in a single field strength defined by The physical electric and three magnetic charge of the black holes are given by For simplicity, we now also set the electric charge equal to the magnetic charge, i.e. q 0 = q, such that ϕ = 1. The four-dimensional action then takes the standard form Having obtained the black hole solution, it is easy to find the extremality bound and entropy of this black hole. Solving f (r) = 0 we find that the horizons are located at r + = r 0 , r − = 0 , (4. 16) and the extremal limit corresponds to r 0 → 0. From the asymptotic form of f (r) we find that the ADM mass is given by In terms of the physical charges, the extremality bound is given by Finally, the Bekenstein-Hawking entropy of the extremal black hole is given by (4.20) BTZ × S 2 solution We now consider the near-horizon limit of the boosted black string solution (4.7) by taking the limit q r, which sends H(r) → q r . (4.21) After performing the coordinate transformation we obtain a BTZ × S 2 solution. Explicitly, the lapse and shift functions are given by The BTZ black hole has an AdS length = 2q and the S 2 has radius S 2 = q. The mass and angular momentum are given by The extremal limit corresponds to r 0 = 0 and the extremality bound is given by Instead of explicitly performing the near-horizon limit an alternative way of obtaining the same BTZ × S 2 solution is to use c-extremization [32]. This will be especially useful when we include higher-derivative corrections. Going to Euclidean signature and evaluating the five-dimensional action, we find that the c-function takes the form Extremizing with respect to and S 2 we find the following lengths and central charge where we used G 5 = 4π 2 S 2 G 3 . This coincides with the solution found by taking the nearhorizon limit. The entropy can be found using Cardy's formula [34] After using (3.9), we find that in the extremal limit where we wrote G 3 = RG 4 2Q 2 . The extremal entropy of the BTZ matches that of the extremal four-dimensional black hole. Including higher-derivative corrections We are now ready to include higher-derivative corrections to the five-dimensional action and see how the BTZ near-horizon geometry and four-dimensional black hole are modified. The most general four-derivative corrections to five-dimensional Einstein-Maxwell theory are Here W M N OP is the Weyl tensor and is the five-dimensional Euler density. We normalize the higher-derivative operators with Q so that the α i are dimensionless. Readers uninterested in technical details can skip the next two subsections and instead directly look at Table 1, where we give an overview of the corrections to the entropy and extremality bounds. Take notice that the BTZ extremality bound does not coincide with the four-dimensional extremality bound. Table 1. 
Overview of the corrections to the extremality bounds, entropies and temperatures. z = J3 2QM3 for the BTZ black hole and z = Q G4M4 for the four-dimensional black hole. It is clear that thermodynamics at z = 1 only makes sense if the WGC is satisfied (else z = 1 is a naked singularity with no horizon). Results for BTZ × S 2 are presented also in terms of "four-dimensional quantities" via the correspondence G 3 = 2πR 4πQ 2 G 4 . The relation between a i s and α i s are given in (4.34). Four-dimensional black hole solution To obtain the four-dimensional action with higher-derivative corrections we perform a Kaluza-Klein reduction along the x-direction, just as before, using the ansatz (4.8). Taking magnetic and electric charges equal the reduced action takes the following form. The Wilson coefficients are related to the coefficients appearing in the five-dimensional action as Because the four-dimensional Euler density E 4 is topological, it will affect neither the equations of motion nor the extremality bound. To determine the corrections to the extremality bound and entropy, we will make use of a thermodynamic approach, which has the advantage that we don't need to explicitly know the corrected metric. Instead, we can evaluate the corrected Euclidean action on the uncorrected solution [10,17]. As an additional check, we show in App. B that this approach agrees with a direct computation of the corrected metric. The Euclidean action is given by where we supplemented the bulk action by a Gibbons-Hawking-York boundary term. Here K is the extrinsic curvature of the induced metric at the boundary r → ∞ and K 0 a counterterm constructed by embedding the boundary metric in flat space. This is required to obtain a finite on-shell action. Before we take into account the higher-derivative corrections, we first focus on the uncorrected action. By explicitly evaluating the uncorrected Euclidean action on the uncorrected black hole background (4.11) one finds The factor of three indicates there are three magnetic charges. Here β = T −1 is the inverse temperature,Q = Q/4G 4 andQ 0 = Q 0 /4G 4 are rescaled charges and Φ and Ψ are the electric and magnetic potentials given by Using the Smarr formula M 4 = 2T S +ΦQ 0 +3ΨQ, the Euclidean action can now be written as with G the Gibbs free energy. This relation still holds in the presence of higher-derivative corrections as long as we interpret S as the Wald entropy [17]. To derive the various thermodynamic quantities of the black hole, we express the Gibbs free energy as G = G(T, Φ,Q), appropriate for a grand canonical ensemble. The entropy and mass of the black hole are given by After having obtained the entropy, the mass is given by We now include higher-derivative corrections. Evaluating the corrected Euclidean action on the uncorrected black hole background and using the thermodynamic identities, we find that the T = 0 entropy is given by where we took equal electric and magnetic charges. At zero temperature the correction to the mass is The extremality bound is therefore modified as Finally, we are also interested in the microcanonical entropy of the z = 1 black hole in the corrected theory. Expanding the mass at small temperature we find that this black hole has a non-zero temperature The entropy becomes The WGC is satisfied when the extremality bound is corrected positively, which requires The same bound can be derived using (1.1). 
Because the Euler density does not contribute to the extremality bound in four dimensions, we can use the previously derived stress tensor and corrections to Maxwell's equations (see (2.48) and (2.46), but note the different normalization of the action (4.33)). We then find Imposing the integrated stress tensor to be smaller than or equal to zero, we again find 2a 1 + a 2 ≥ 0. BTZ × S 2 solution To find the BTZ × S 2 near-horizon geometry in the corrected theory, it is very convenient to use c-extremization instead of computing the corrected metric. Taking the same ansatz for the metric as before, we find that the c-function takes the form Extremizing the c-function for and S 2 we find the following lengths: Because the S 2 length is also corrected by higher-derivative terms, this modifies the relationship between the five-dimensional and three-dimensional Newton constant. We are interested in computing corrections keeping the Newton constant fixed, so we either have to express corrections to the central charge in terms of G 5 or rescale G 3 to keep the ratio G 5 /G 3 uncorrected. We choose the latter option and therefore rescale and express all correction to the BTZ geometry with respect to this rescaled Newton constant. The relationship between the different Newton constants is then still given by Using the values for the AdS and S 2 length we found the central charge is given by To compute the entropry, we use Cardy's formula and find that the entropy at zero temperature is given by Using (4.34) this entropy equals (4.41). To derive the correction to the extremality bound in a canonical ensemble and the entropy in a microcanonical ensemble we again use a thermodynamic approach, just as in Sec. 3. The five-dimensional Euclidean action supplemented by a Gibbons-Hawking-York boundary term and counterterm is given by We now evaluate this on a BTZ × S 2 background. To make the on-shell action finite, the counterterm is chosen to be (4.55) Using this the complete on-shell action evaluates to Just as before, we can write and the thermodynamic quantities are given by (3.15) and (3.16). We find that the extremality bound in a canonical ensemble is corrected as so that this spinning WGC is satisfied when Notably, this combination of Wilson coefficients does not coincide with the combination appearing in the extremality bound of the four-dimensional black hole. The state z = J 3 /(2QM 3 ) = 1 has a temperature At this temperature, the microcanonical entropy is given by A summary of all corrections to the extremality bounds and the entropy are displayed in Table 1. WGC bounds In Sec. 4.3 we demonstrated that the WGC as phrased in terms of a corrected extremality bound differs for two distinct limits of the five-dimensional black string. The corrected angular momentum-to-mass ratio of the near-horizon BTZ × S 2 black hole and corrected charge-to-mass ratio of the four-charge black hole after a Kaluza-Klein reduction to four dimensions depend on different combinations of the five-dimensional Wilson coefficients. Alternatively, we may impose that the mild form of the WGC holds for each of these independently, allowing us to more strongly constrain the α i appearing in five dimensions. This is similar in spirit to the works of [5,39] where the lattice and tower WGC respectively were argued for based on robustness under toroidal compactifications. In fact, we can go further than only combining the bounds of Eqs. 
(4.46) and (4.59) by asking that the mild WGC be satisfied also for electric black holes in five dimensions. Such bounds for charged black holes are known, appearing for example in [27]. With the normalizations of Eq. (4.31), these three bounds read 5D boosted black string These conditions are compatible with one another, as shown in Fig. 4, and together provide more stringent bounds on the allowed values of the α i . One could also ask what bounds arise for more general charged black holes in four dimensions after a Kaluza-Klein reduction, but since both the radion and axion can be sourced we do not consider such backgrounds here. Discussion Understanding precisely the neccesary and sufficient conditions for proving the mild form of the WGC is an interesting question that can shed light on the boundary between those effective theories which are consistent with quantum gravity (the landscape) and those that are pathological (the swampland). In particular, one may wonder what sorts of matter configurations correct the extremality bound in a manner consistent with the WGC. To understand this better, we reformulated the shift in the extremality bound of a black hole in terms of an integrated condition on the stress tensor. When this integral of the stress tensor is negative, the horizon is shifted positively in a microcanonical ensemble. As a particular application we evaluated this condition for four-dimensional Reissner-Nördstrom and rotating BTZ black holes perturbed by higher-derivative corrections, but it can be applied to any stationary black hole and more general corrections. Applying this condition to extremal rotating BTZ black holes suggests a spinning version of the WGC that posits that corrections to the extremality bound should increase the extremal angular momentum-to-mass ratio. Although the spinning WGC does not follow from standard arguments of black hole decay, we showed that when a BTZ black hole is perturbed by a relevant operator it obeys the spinning WGC as a consequence of the c-theorem in the dual two-dimensional CFT. We then studied the spinning WGC in the context of a five-dimensional boosted black string with higher-derivative corrections. The string has a near-horizon BTZ × S 2 geometry and describes a four-dimensional charged black hole upon a Kaluza-Klein reduction. While the entropy of the four-dimensional black hole at zero temperature agrees with the entropy computed from the BTZ geometry, their extremality bounds do not coincide. By applying both the spinning and charged WGC to the black string we derived positivity conditions on the five-dimensional Wilson coefficients that are stronger than those obtained by applying the charged WGC alone. Because the three-dimensional spinning WGC does not directly imply the four-dimensional charged WGC, our findings agree with the phenomenon that IR consistency is not sufficient to prove the charged WGC in d ≥ 4. While the c-theorem can be used to prove the spinning WGC in three dimensions, still more UV information is needed to prove the charged WGC in higher dimensions. In future work, it would be interesting to consider the relationship between holographic RG flow and the WGC in more detail in higher dimensions. While higher-derivative corrections have constant magnitude in a BTZ background, in higher-dimensional theories these terms vary as one moves inward from the boundary, so that perturbed geometries are more directly related to holographic RG flows. 
At least for a subclass of higher-derivative corrections, holographic c-theorems have been studied in detail [40]. As a particular example, we could perturb an AdS_5 background and use the Hamilton-Jacobi formalism to derive a function that monotonically decreases along the holographic RG flow, which would be the dual of the a-theorem [41] in the CFT_4. An important difference with three dimensions, however, is that the extremality bound is now not just determined by one anomaly coefficient; in five dimensions there are four independent four-derivative operators that contribute to an Einstein-Maxwell theory [11]. Thus, to constrain the extremality bound one would have to consider a subclass of theories for which effectively only the a-anomaly coefficient contributes. In four-dimensional flat space, a similar strategy was employed in [42] by considering the deep IR where only the c-anomaly contributes to the extremal charge-to-mass ratio. Furthermore, because of the close connection between holographic c-theorems and entanglement entropy [40] it would be interesting to understand better if and how quantum corrections to the entanglement entropy are related to the WGC. In fact, in the holographic proof of the WGC presented in [43], entanglement entropy played a crucial role. Because logarithmic quantum corrections to the (von Neumann) entropy of black holes are universal and determined by anomaly coefficients [44], one might also hope to extract similar general lessons about corrections to the extremality bound. In this appendix we follow Appendix C of [28] and adapt that derivation to include a Maxwell term. Another, more extensive review can be found in [47]. A.1 Notation and conventions Because this appendix heavily relies on the use of differential forms, we briefly list our conventions. A p-form α is defined as For integration over a d-dimensional space, we use the following volume form where ε_{12...d} = 1 denotes the Levi-Civita symbol. The Hodge star operator acts on p-forms as The Levi-Civita symbol is ε_{12...d} = 1. The exterior derivative acts as Taking a p-form α and a q-form β, we can write When p = q this simplifies. The interior product ι_X is defined as A.2 Iyer-Wald formalism We start by writing the Lagrangian for a d-dimensional gravitational theory with arbitrary matter fields φ as a d-form L. Varying with respect to a matter field results in Here E collectively denotes the equations of motion and Θ is the so-called symplectic potential. An anti-symmetric variation of the symplectic potential yields the symplectic current ω(δ_1 φ, δ_2 φ) = δ_1 Θ(δ_2 φ) − δ_2 Θ(δ_1 φ) . (A.10) Now consider an infinitesimal diffeomorphism labeled by a vector field ξ, which acts as δ_ξ φ = L_ξ φ. Integrating the symplectic current over a Cauchy surface Σ gives the symplectic form, which with foresight we will write as the variation of a Hamiltonian that generates the flow of ξ. For any ξ we can construct a Noether current which is conserved on-shell. The fact that J_ξ is closed (on-shell) and only depends linearly on ξ implies that we can write it as where Q_ξ is the Noether charge. To extract conserved quantities from the Noether current, we consider a variation On-shell, the symplectic current can be written as (A.16) and the variation of the Hamiltonian is When ξ is a symmetry, i.e. L_ξ φ = 0, the Hamiltonian is conserved We therefore see that (A.11) indeed gives the conserved quantities. A.3 Einstein-Maxwell gravity Let us now restrict to Einstein-Maxwell gravity.
The Lagrangian is given by Here κ^2 = 8πG_d, with G_d the d-dimensional Newton constant. We can now perform variations with respect to the metric δg_{ab} = h_{ab} and the gauge field δA_a. Also, in addition to diffeomorphisms we can perform gauge transformations on the gauge field, δ_λ A = dλ. Varying with respect to the metric we find E_g(h) = −E^{ab} h_{ab} and Θ_g(h) = ι_X … (see page 21 of [47] for the proof), with E^{ab}_g = (1/(2κ^2)) R^{ab} − … Varying with respect to the gauge field we find From (A.12) we can now construct the Noether current. The Noether current is now given by The expressions for the Lie derivatives are L_ξ g_{ab} = 2∇_{(a} ξ_{b)} and L_ξ A_a = ξ^b F_{ba} + ∂_a(ξ^b A_b). Using these we find After some algebra, the Noether current can be written as We see that the Noether charges for ξ and λ are given by. Using (A.17), the variation of the Hamiltonian is given by Taking an exterior derivative of the variation of the Hamiltonian we obtain dδH = −2δ(E_g · ξ) − (ι_ξ A + λ) dδF , (A.26) which vanishes on-shell. In these expressions the superscript denotes the one-form dual, ξ = g_{ab} ξ^b dx^a, and E_g · ξ = E_{ab} ξ^b dx^a. B Five-dimensional black string with higher-derivative corrections In this appendix we provide some of the details of the α-corrected black string in five dimensions. This brute-force solving of the equations of motion reproduces the thermodynamic and c-extremization arguments as presented in the main text. Begin by writing the five-dimensional action with higher-derivative terms as where E_5 = R_{µνρσ}R^{µνρσ} − 4R_{µν}R^{µν} + R^2 is the Euler density. As in Sec. 4.3 we normalize the Wilson coefficients with Q so that the α_i are dimensionless. Take for an ansatz ds^2 = H^{-1}(−f dt^2 + h dx^2) + H^2(f^{-1} dr^2 + r^2 dΩ_2^2) , F = q(q + r_0) sin θ dθ ∧ dφ . Before turning to the O(α) corrections to these functions, let us discuss how these functions appear in the near-horizon limit and 4D reduction. In boosting the string along the x-direction we make the replacements t → cosh δ_0 t + sinh δ_0 x and x → sinh δ_0 t + cosh δ_0 x. Near-horizon limit The near-horizon geometry is found by taking q ≫ r, r_0. In this limit the metric splits into a (locally) AdS_3 space and a constant-radius S^2. Boundary conditions In solving for the corrected five-dimensional solution we should keep the asymptotic form of the solution fixed, namely H, h, f = 1 + O(1/r). To work with fixed charges, we should also impose that the 1/r term in Equation (B.8) is uncorrected. It is also convenient to choose coordinates (equivalently, integration constants) so that the outer horizon remains at r = r_0 and extremality is still r_0 → 0. With these choices the α-corrected solutions are uniquely determined. The full expressions for the corrected H, h, f are quite cumbersome, so we present here only their form in some relevant limits, as needed. Corrected reduction to four dimensions In the asymptotic region, r ≫ q, r_0, the corrected solution reads The function G(α_i; z) has the following small-z limit, relevant for the extremal limit r_0 → 0: Choosing q_0 = q for simplicity, the ADM mass can be read off from the 1/r coefficient of (H^3 H_0)^{1/2} f^{-1}: 2G_4 M_4 = 2q + r_0 − (3/4)(q + r_0) G(α_i; r_0/q) . (B.14) Taking r_0 → 0, we find that the four-dimensional extremality bound is corrected to Q/(G_4 M_4) ≤ 1 + (3/8) G(α_i; 0) = 1 + (8α_1 + 7α_2 + 6α_3)/40 . (B.15)
Corrected near-horizon limit With q ≫ r ≫ r_0, the corrected solution behaves as H(r) = 1 + Q/r − Q(8α_1 + 3α_2 − 2α_3)/(4r) + ⋯ , (B.16) so that we may read off that the extremal S^2 radius has been corrected to (B.17). (The inequality r ≫ r_0 may seem odd for a near-horizon limit, but we are ultimately interested in the extremal limit, r_0 → 0, so that r ≫ r_0 is satisfied for any finite r. Note also that the singularity is at r = −q.)
14,229.6
2020-11-10T00:00:00.000
[ "Physics" ]
Staying True to Your Word: (How) Can Attention Become Explanation? The attention mechanism has quickly become ubiquitous in NLP. In addition to improving performance of models, attention has been widely used as a glimpse into the inner workings of NLP models. The latter aspect has in the recent years become a common topic of discussion, most notably in recent work of Jain and Wallace; Wiegreffe and Pinter. With the shortcomings of using attention weights as a tool of transparency revealed, the attention mechanism has been stuck in a limbo without concrete proof when and whether it can be used as an explanation. In this paper, we provide an explanation as to why attention has seen rightful critique when used with recurrent networks in sequence classification tasks. We propose a remedy to these issues in the form of a word level objective and our findings give credibility for attention to provide faithful interpretations of recurrent models. Introduction Not long since its introduction, the attention mechanism (Bahdanau et al., 2014) has become a staple of many NLP models. Apart from enhancing prediction performance of models and starting the trend of fully attentional networks (Vaswani et al., 2017), attention weights have been widely used as a method for interpreting decisions of neural models. Recently, the validity of interpreting the decision making process of a model through its attention weights came under question. Jain and Wallace (2019) introduced a set of experiments on English language sequence classification tasks which demonstrated that attention weights do not correlate with feature importance measures, and that attention weights generated by a trained model can be substituted and modified without detriment to model performance. While it is natural to assume that multiple plausible explanations for a model's decision can coexist, the authors show the existence of attention distributions that assign most of their mass to words seemingly irrelevant to the task, while still not affecting neither the decision nor the confidence of the model. In the follow-up work, Wiegreffe and Pinter (2019) find that, while such adversarial attention distributions do exist, they are seldom converged to in the training process, even when one introduces a training signal with the sole purpose of guiding the model to such distributions. In this paper, we aim to tackle the difficult question of the relationship between attention and explanation from a different angle -is there any modification we can make to the existing models so that attention could be reliably used as a tool of model transparency? For the sake of consistency, we follow previous work (Jain and Wallace, 2019;Wiegreffe and Pinter, 2019) and limit our scope to single-sequence binary classification tasks, where we consider models from the RNN + self-attention family. Concretely, we analyse single-layer bidirectional LSTM-s (Hochreiter and Schmidhuber, 1997) equipped with the additive (Bahdanau et al., 2014) and dot-product (Vaswani et al., 2017) selfattention mechanisms. Inspired by the recent results (Voita et al., 2019), which show that optimizing the masked language modelling (MLM) (Devlin et al., 2019) objective results in high mutual information between the input and output layers of models, we ask ourselves whether such a trait is beneficial for interpretability. 
The task of sequence classification in no way incentivizes a model to retain information from the input, and the model is likely to filter out information irrelevant to the task. 1 We believe this lack of enforced information retention causes a discrepancy between the input and hidden vectors, which results in reduced model interpretability. To enforce information retention, we propose a number of techniques to keep the hidden representations closer to their input representations, improving the faithfulness of interpreting models through inspecting their attention weights. The contributions of this paper are as follows: we (1) investigate whether the lack of a word-level objective causes attention not to be a faithful interpretation, (2) propose various regularization methods in order to improve interpretability through inspecting attention weights, and (3) quantitatively and qualitatively evaluate whether and how these methods help model interpretability. The rest of the paper is organized as follows. Firstly (§2), we position ourselves within current work and discuss the use of attention as interpretation in NLP. We then (§3) present our experimental setup, introduce various regularization methods, and briefly describe the experiments we use to evaluate our regularized models. In §4, we offer a quantitative evaluation of the effect of regularizers on the trained models across a number of datasets. We then (§5) qualitatively and quantitatively inspect the effect of regularization on a trained model, identifying what we believe to be the cause of negative results reported in previous work. Finally (§6), we summarize our findings and propose possible lines of future work. Attention and Interpretability in NLP Preliminaries: Let the input sequence of word embeddings be denoted as {w_t}_{t=1}^T, where T is the length of the sequence. The sequence of hidden states produced by the encoder is then {h_t}_{t=1}^T, where each h_t = rnn(x_t, h_{t−1}). The RNN used is a bidirectional LSTM. When discussing a hidden state h_t, we refer only to x_t as its input for convenience. The attention mechanism produces a probability distribution over the hidden states, the elements of which we denote {α_t}_{t=1}^T, and refer to as attention weights. Attention as Interpretation When interpreting models through the attention mechanism, we assume that the attention weight on the t-th word, α_t, is a faithful measure of importance of the input word x_t for the classifier decision. This assumption allows us to interpret the decision of the classifier by retrieving the highest attention weights assigned by the model, and then identifying the input words at these timesteps. Thus, in the terminology of Doshi-Velez and Kim (2017), our cognitive chunk (a basic unit of explanation) is a single word. However, we are using a BiLSTM as an encoder, and every hidden state is contextualized by virtue of observing the entire input sequence, so the attention weights actually pertain to the input word in context. A faithful measure of importance should by definition accurately represent the true reasoning behind the final decision of the model. 2 So, if attention weights are a faithful measure of importance of word inputs, they will assign large weights to words relevant for the classifier decision.
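For concreteness, the model family sketched in the Preliminaries above can be written out in a few lines of PyTorch. The listing below is a minimal, hypothetical illustration of a BiLSTM encoder with additive self-attention and a linear decoder; the class name, dimensions and layer choices are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    """Minimal sketch: BiLSTM encoder + additive self-attention + linear decoder."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=150):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        # additive (Bahdanau-style) attention over the hidden states
        self.attn_proj = nn.Linear(2 * hidden_dim, hidden_dim)
        self.attn_score = nn.Linear(hidden_dim, 1, bias=False)
        self.decoder = nn.Linear(2 * hidden_dim, 1)  # binary classification head

    def forward(self, tokens):
        x = self.embedding(tokens)                                            # (B, T, E)
        h, _ = self.encoder(x)                                                # (B, T, 2H)
        scores = self.attn_score(torch.tanh(self.attn_proj(h))).squeeze(-1)   # (B, T)
        alpha = torch.softmax(scores, dim=-1)                                 # attention weights α_t
        s = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)                       # sequence representation
        return self.decoder(s), alpha
```

The weights alpha returned here are exactly the quantities whose faithfulness as an explanation is examined in the remainder of the paper.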
To define faithfulness more clearly, we can assume the existence of an oracle method which can partition each input sequence of words 3 into decision-relevant and decision-irrelevant words, where relevance is defined by the judgment of a human reading the text with respect to a task. By this definition, a faithful attention distribution would consistently attribute all or at least most of its probability mass to the decision-relevant words, making it a plausible explanation for humans. In contrast, a counterfactual attention distribution (Jain and Wallace, 2019) attributes most (or a significant amount) of its probability mass to task-irrelevant words. Obviously, infinitely many plausible and counterfactual explanations exist for a given input instance -merely by redistributing the original attention mass within the same set of words we can obtain infinitely many alternative interpretations that are still either plausible or counterfactual. Jain and Wallace (2019) and Vashishth et al. (2019) demonstrate that, if we permute or substitute the weights of a learned attention distribution, our model can still retain high (and in some cases, unchanged) classification performance and prediction confidence. Even more worryingly, some of the modified attention distributions assign high attention weights to task-irrelevant words while not affecting the instance classification. The existence of such counterfactual attention distributions raises doubts whether inspecting attention weights can be used as a faithful interpretation of the model's decision making process at all. Wiegreffe and Pinter (2019) provide two counterarguments -(1) Existence does not entail exclusivity, suggesting that, just because our model has converged to an attention distribution (a base attention distribution), that distribution is not necessarily unique, and alternative attention distributions can still be faithful; (2) while models which produce counterfactual distributions do exist and can be found by post-hoc modifications, these distributions are difficult to converge to naturally through the optimization process of a neural network. This is demonstrated by the authors in experiments where they specifically optimize for a distribution significantly different from the base one. In contrast, Rudin (2019) states that even if a small fraction of explanations produced by the model is counterfactual, one cannot trust other explanations produced by the same model. Lipton (2016) is more forgiving, and allows that models can still be trusted if they make mistakes, provided humans would also make mistakes on the same instances. The work of Pruthi et al. (2019) emphasizes the threat of interpreting models through attention weights, as they show a regularization term can be introduced to guide the attention weights away from focusing on subsets of words while retaining model accuracy, implying that models which exploit bias in data can be trained to hide the true reasoning behind their decisions. Among other work, Serrano and Smith (2019) apply an array of tests to analyse whether attention weights correlate with impact on model prediction, concluding again that attention is not a fail-safe (faithful) indicator of importance. The experiments of Vashishth et al. (2019) show that for single-sequence classification, learned attention distributions can be replaced without affecting performance -indicating that attention might not be all we need, after all. 
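The post-hoc manipulations discussed above (permuting or substituting a trained model's attention weights and checking whether its predictions change) are simple to state operationally. The following is a hedged sketch of such a permutation probe, reusing the hypothetical model class from the earlier listing; it is illustrative only and is not the code of Jain and Wallace (2019) or Vashishth et al. (2019).

```python
import torch

@torch.no_grad()
def permutation_probe(model, tokens):
    """Compare predictions made with the learned attention weights against predictions
    made with randomly permuted weights (same hidden states, shuffled alpha)."""
    h, _ = model.encoder(model.embedding(tokens))                         # (B, T, 2H)
    scores = model.attn_score(torch.tanh(model.attn_proj(h))).squeeze(-1)
    alpha = torch.softmax(scores, dim=-1)                                 # learned weights
    alpha_perm = alpha[:, torch.randperm(alpha.size(1))]                  # permuted weights
    p_orig = torch.sigmoid(model.decoder(torch.bmm(alpha.unsqueeze(1), h).squeeze(1)))
    p_perm = torch.sigmoid(model.decoder(torch.bmm(alpha_perm.unsqueeze(1), h).squeeze(1)))
    # for binary outputs, |p_orig - p_perm| is the per-instance total variation distance
    return (p_orig - p_perm).abs().mean()
```

A small value returned by such a probe for the unregularized base model is precisely the robustness that the cited work reports and that this paper treats as a symptom of unfaithful attention.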
Experimental Setup The base model used in (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) is a single-layer bidirectional LSTM augmented with either a dot-product or an additive attention mechanism, the output of which is then fed into a linear classifier (decoder). We use the same base model as a baseline throughout our experiments. Regularizing Models As mentioned before, we suspect that the lack of a word-level objective weakens the relationship between h_t and x_t, and, consequently, the faithfulness of interpreting attention weights α_t as an explanation of the decision making process of the model diminishes. We will now present a number of methods constructed with the goal of improving information retention between the inputs and hidden states (a code sketch of the main regularizers follows at the end of this subsection). Our self-attention augmented LSTM encoder with inputs x_t is defined as: where attn is either the dot-product or additive attention mechanism. The sequence representation s is then fed into a linear decoder. The simplest way to retain information from the input is to include it explicitly in the hidden representations. This can be done by concatenating the embeddings to the hidden representation: where [·; ·] is the concatenation operator. Another method is to incorporate a residual connection: We use these two methods as our regularized baselines (concat, residual), along with the unregularized base model. Our next proposed method is to add a regularization term constraining the L2 norm of the difference between a word embedding and its corresponding hidden representation. As we suspect that the base model discards a lot of word information it deems task-irrelevant, we wish to penalize it for doing so where this information filtering is not crucial. The last model we propose is inspired by results in (Voita et al., 2019), where we introduce the masked language modelling objective (Devlin et al., 2019), in which input tokens from a sequence are masked at random. 4 The task of the model is then to correctly predict the masked tokens based on contextual cues from the unmasked tokens in the sequence. In addition to the standard model in (1), the MLM model also performs the following: The hidden states ĥ_t for the corresponding masked tokens are then fed into a linear decoder which predicts the masked word. The encoder and embedding matrix are shared between the MLM and classification tasks. The MLM linear decoder also introduces no new parameters as we tie the weights (Inan et al., 2016) of the MLM decoder and the input embedding matrix and keep them frozen during training. Both of these choices are motivated by the fact that the model might converge to a solution which does not require retention of information from inputs. In order to apply weight tying, we have to ensure that the dimension of the BiLSTM hidden state is equal to that of the input embedding, and therefore we increase the LSTM hidden state size to 150, compared to 128 in (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019). We also use the new hidden state size for all experiments with the base model. The MLM setup introduces two hyperparameters: p_mlm, denoting the probability of masking a token in a sequence, and η, denoting the weight of the MLM loss. We keep p_mlm fixed at 0.15 throughout the experiments, as in (Devlin et al., 2019), and adjust η with respect to the average sequence length in various datasets so that the MLM loss would not dominate the optimization process.
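Because the display equations for these regularizers did not survive extraction, the sketch below shows how the L2 retention penalty and the MLM auxiliary loss described above could look in PyTorch. It is a hypothetical rendering consistent with the description; in particular the mask token index, the δ weighting and the use of a detached (frozen) embedding matrix as the tied decoder are assumptions.

```python
import torch
import torch.nn.functional as F

def l2_retention_penalty(x, h, delta=0.001):
    """Penalize the L2 distance between each input embedding x_t and its hidden state h_t.
    Assumes h and x have the same dimensionality (2 * hidden_dim == emb_dim, as in the paper);
    delta is an assumed weighting hyperparameter."""
    return delta * (h - x).pow(2).sum(-1).mean()

def mlm_auxiliary_loss(model, tokens, p_mlm=0.15, mask_id=0):
    """MLM auxiliary objective: mask random tokens, encode with the shared BiLSTM, and
    predict the originals with a decoder tied to the frozen embedding matrix."""
    mask = torch.rand(tokens.shape, device=tokens.device) < p_mlm
    corrupted = tokens.masked_fill(mask, mask_id)              # mask_id: assumed [MASK] index
    h, _ = model.encoder(model.embedding(corrupted))           # (B, T, 2H), 2H == emb_dim
    logits = h @ model.embedding.weight.detach().T             # tied, frozen decoder weights
    return F.cross_entropy(logits[mask], tokens[mask])
```

The concat and residual baselines mentioned above correspond, in the same notation, to replacing each h_t by torch.cat([h_t, x_t]) or by h_t + x_t, respectively.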
5 Post-hoc Modification of Attention Distributions As suggested by Jain and Wallace (2019), robustness of classifier confidence with respect to atten-tion weight modifications is not a desirable property of interpretable models. Ideally, if a model produces the same decision for an alternative set of attention weights, we would like to be sure that the alternative explanation is faithful. This is not the case in practice as Jain and Wallace (2019) and Vashishth et al. (2019) show that a trained network is surprisingly robust to changes to the attention weights and produces nearly unchanged classification scores even for adversarial distributions. So, while attention is an integral part of training the network, the weights it produces do not greatly affect the classifier decision once trained. While we agree with the observation of Wiegreffe and Pinter (2019) that robustness of model decisions with respect to attention weights is not necessarily bad as the model is unlikely to naturally converge to such a solution, we believe that fragility of model decisions is an argument in favor of interpretability as it indicates that the number of explanations plausible to the model has been reduced, and we perform experiments with that in mind. Training an Adversary In the experiment introduced by Jain and Wallace (2019), for a trained model we attempt to find an adversarial attention distribution which maximizes the Jensen-Shannon divergence (JSD) from the base distribution produced by the trained model, while at the same time minimizing the total variation distance (TVD) from the confidence of the predictions of the base model. The authors demonstrate that it is possible to find an attention distribution that obtains a high JSD while still producing the same prediction confidence consistently across multiple tasks. As these adversarial distributions were found in an artificial setting, Wiegreffe and Pinter (2019) explore a more realistic scenario and construct an optimization task where, given a fixed (original) model, they train an adversary to minimize TVD from per-instance prediction confidences, while maximizing JSD between per-instance attention distributions of the original model and the adversary. The optimization objective for our adversarial model a given a base model b is defined as follows: This training setup introduces another hyperparameter λ, which weighs the JSD component of the optimization objective. TVD and JSD are defined as follows: Initially, we were enthusiastic about this setup and conducted the same experiments with our model variants, but drawing any conclusions from the analysis proved to be hard. Firstly, by optimizing for TVD from a trained model instead of on the raw labels, we bias our new model to make the exact same mistakes as the trained model. We believe this severely limits the search space of the adversarial model, as repeating the same mistakes will also bias the model towards exploiting similar patterns in data and, consequently, a similar attention distribution. Secondly, without knowing what the plausible explanations are for the dataset, it is impossible to determine whether a high JSD is a symptom of the model finding an alternative or adversarial explanation. Thus, we do not attempt to draw many conclusions from this experiment, but we reproduce it for completeness with previous work. 
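For reference, the quantities entering this adversarial setup can be written down directly. The sketch below defines TVD, JSD and the combined objective as described above; it is a hedged illustration in which the exact averaging, treatment of padded positions and sign conventions are assumptions rather than the original experimental code.

```python
import torch

def tvd(p, q):
    """Total variation distance between two prediction distributions."""
    return 0.5 * (p - q).abs().sum(-1)

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two attention distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps) / (b + eps)).log()).sum(-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def adversary_objective(pred_adv, pred_base, alpha_adv, alpha_base, lam=1.0):
    """Minimize TVD from the base model's predictions while maximizing JSD between the
    attention distributions; lam weighs the JSD term, as in the text."""
    return tvd(pred_adv, pred_base).mean() - lam * jsd(alpha_adv, alpha_base).mean()
```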
Mutual Information To quantitatively evaluate whether the regularization has strengthened the relationship between the hidden states and input representations of our model, we look into a recent method of Voita et al. (2019) inspired by the "Information Bottleneck" (IB) theory (Tishby, 1999), where the authors measure an estimate of mutual information (MI). Originally applied to transformers (Devlin et al., 2018), this method is straightforward to adapt to the bidirectional LSTM. Similarly to our point of view, the IB theory states that neural networks, in general, aim to extract a compressed representation of input in which information relevant for the output is retained while irrelevant is discarded. Mutual information is used as a method of measuring how much information is lost between the input and hidden representation of a certain network. Voita et al. (2019) show transformer networks discard progressively more information in deeper layers. This phenomenon is different for the case of MLM in transformers, where MI is higher in the uppermost layers, likely due to the task of reconstructing corresponding input tokens. The strength of the relationship between e t and h t can be quantified by estimating MI. As MI is intractable to compute in the continuous form, we first discretize the vector representations and estimate MI in the discrete form. Following Voita et al. (2019) and Sajjadi et al. (2018), we perform this discretization by clustering the embedding and hidden representations to a large number of clusters and using the obtained cluster labels in place of the continuous vectors to estimate MI. Concretely, we select a subset of 1000 words from the vocabulary and gather at most 1M representations of these tokens at input and hidden level. We then cluster the obtained representations into k = 1000 clusters with mini-batch k−means with batch size of 100. We obtain the vocabulary sample in two ways: as the top 1k most frequent words (MF), as in (Voita et al., 2019), but also as a random sample (RS) of from the scaled unigram distribution. 6 Datasets We experiment on the following English language datasets for binary classification tasks, which were either originally built for this task or were adapted for it by Jain and Wallace (2019): (1) The Stanford Sentiment Treebank (SST) (Socher et al., 2013), a collection of sentences tagged with sentiment on a discrete scale from 1 to 5, where 1 is the most negative and 5 the most positive. We omit the neutral class (3) and conflate scores 1 and 2 as well as 4 and 5 into negative and positive class, respectively; (2) IMDB Large Movie Reviews Corpus (IMDB) (Maas et al., 2011), a binary sentiment classification dataset of movie reviews; (3) AG News Corpus, a categorized set of news articles from various sources. We limit ourselves to binary classification between articles labelled as world (0) and business (1); (4) 20 Newsgroups similarly, we consider the task of discriminating between baseball (0) and hockey (1) in this dataset of newsgroup correspondences labelled with 20 categories; (5,6) MIMIC ICD9 (Johnson et al., 2016), a dataset of patient discharge summaries from a database of electronic health records. Here, we 6 The sample is drawn from the unigram distribution raised to the power of 3 4 . 
analyse two classification tasks on different subsets of the data: whether a summary is labelled with the ICD9 code for diabetes (1) or not (0) (henceforth Diabetes), and whether a summary corresponds to a patient with acute (0) or chronic anemia (henceforth Anemia). For consistency, we use the train/test/dev splits produced by Jain and Wallace (2019). 7 Attention is Fragile We report the average F1-scores of five runs for the base model and the following regularization variants: concat, tying, and MLM. We omit results on residual due to space, but they are consistently comparable to concat due to their similar nature. For each model variant we report results of experiments with the dot-product (•) and additive (+) attention mechanism. Due to space constraints, we omit the full results and refer the reader to the Appendix for more details. We report the performance of each model in scenarios where we use trained attention (Tr.), a random permutation of the trained attention (Pm.), or substitute the attention distribution with the uniform one (Un.). For the uniform and permutation settings, we report the drop in F1-score when compared to trained attention performance. We omit the results on the Diabetes dataset, as every modification of attention weights on this dataset results in an F1-score of 0, due to a very small number of tokens being a high-precision indicator of the positive class, as noted by Jain and Wallace (2019). As shown in Table 1, regularization setups increase the fragility of model performance with respect to modifications of the attention distribution, while retaining similar classification scores to the base model. These results indicate that we have successfully reduced the space of possible alternative explanations for the model by tying the input and hidden representations closer together. By doing this, we show that lateral information leakage (between hidden states) is reduced when proper regularization is applied, and that, as a consequence, alternative explanations are also plausible. Having shown this, we still need to determine whether a high attention weight on a hidden state is a faithful measure of importance of the corresponding input.
7 https://github.com/successar/AttentionExplanation
Figure 1: Averaged per-instance test set JSD (x-axis) and TVD (y-axis).
Mutual Information is Higher In Table 3 we report mutual information scores across datasets for the most frequent words (MF) and a random sample drawn from the scaled unigram distribution of the vocabulary (RS). The increase in mutual information scores between inputs x_t and hidden states h_t implies that more information from the inputs is retained during encoding. While retention of input information is not a desirable trait of a model performing pure sequence classification, as the only goal the model optimizes is producing the correct class label with high confidence, it is beneficial for interpretability. If we wish to interpret classifier decisions through inspecting attention weights on hidden states, we have to ensure that a hidden state preserves a significant degree of information from the input. A significant increase in mutual information suggests that the base model was filtering or overwriting a large amount of information from the input, making attention inspection less credible. It is not possible to report mutual information for the concat setup as the dimensionality of the hidden vector is larger than the input embedding, so we report the results for Residual.
The results for the Residual setup can be considered close to the best realistically obtainable MI score as the model explicitly includes the input embedding in the hidden state. Adversarial Attention Distributions are Harder to Find In Fig. 1 we report results where for a fixed oracle model we train an adversary with the objective of minimizing the TVD between the predictions of the model and, at the same time, maximizing JSD between per-instance averaged attention distributions. Due to space limitations, we only report results for the MLM regularised model, while the others fare comparably. The red dotted line indicates the imitation setup of the base model, and the green dotted line indicates imitation setup for the MLM model. Consistently, except for an outlier point in the Diabetes dataset, the imitation setup of the MLM model produces larger drops of TVD in order to increase the JSD between attention distributions, corroborating the claim that attention distribution of the MLM model is more fragile. Understanding the Effect of Model Regularization To visually demonstrate the undesired effect of attention mechanisms when trained in the base setting, as well as to illustrate the effect of regularizations we applied, we first analyse how we obtain the classifier prediction. The output of the classifier is an affine transformation of the attention output: We can reformulate this as a convex attentionweighted sum of logits (p t ) obtained by running each individual hidden state through the decoder. Once we scale the logits for individual timesteps, we obtain the prediction probability as if the whole attention mass was on that hidden representation. For attention weights to be a faithful measure of interpretability, this probability should be high only on tokens which are decision-relevant. In Fig. 2, we plot these token-level probabilities for a single example to demonstrate that in the base model, this is not the case. We can see that for the base model, the probabilities for most tokens have nearly the same probability as the final prediction, while the regularization keeps the representations for neutral words grounded closer to the decision boundary. As a direct result of this, the model predictions are much more fragile to change of attention weights, as only a small number of hidden states are far enough from the decision boundary to produce an equally confident classification. We now quantitatively formulate and measure this criterion -if the accuracy of a regularized classifier isn't hurt by the regularization, when optimizing for interpretability we should prefer models that have a lower per-token average prediction probability (given that the prediction for that instance is correct). Table 3: Average per-token prediction probability across models and tasks. From the perspective of interpretability, lower is better, given the classifier performance is not significantly affected. Conclusion We have identified the lack of a word-level objective as the likely cause of attention weights not being a faithful tool of interpretability in the case of sequence classification with attention mechanism augmented recurrent networks. We experimentally establish that we can add regularization methods to sequence classification which strengthen the relationship between the input and hidden states while not being a detriment to classification performance. 
If one wishes to interpret classifier decisions through inspecting attention weights, we strongly suggest inclusion of a technique such as weight tying or adding masked language modelling as an auxiliary objective. Adding such methods causes the model to become more susceptible to attacks modifying the attention weights of a trained model, and increases the faithfulness of explanations produced by attention weights. While we believe our work is a step forward towards using attention weights as a faithful explanation, by no means do we claim that the modification is sufficient. As was our primary concern, the risk with using attention weights as a tool of interpretability is that a single bad explanation could have consequences in decision-making scenarios, and while our methods improve the faithfulness of such interpretability, they are by no means foolproof. We have only scratched the surface of faithful interpretability, and most of the datasets in our and previous work do not have human-annotated rationales. In order to fully understand the cases in which attention provides a reliable explanation, we believe that datasets with annotated rationales or decision-relevant tokens should be used. This analysis should also be extended to more complex models which better capture the nuances of language. We believe that the experiments we presented demonstrate the shortcomings of interpreting model decisions through inspecting attention weights; however, we acknowledge that this branch of research sorely lacks evaluation methods that include humans in the loop. A Model Hyperparameters Since we analyse a number of models and regularization techniques, we naturally also have a large number of hyperparameters. We do not tune any of them except for the regularization-specific ones, and we inherit the others from previous work (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019). A notable change is the dimension of the hidden state, which we increase from 128 to 150 due to the nature of the MLM regularization. We, however, repeat the experiments for the base model with this increased dimensionality. We report our parameters in Table 4. We considered other values in a brief search for η and δ, but we only ablated over the mentioned ones as they proved to be (locally) optimal. We also report the statistics of the datasets used in the experiments in Table 5. The average instance length had a significant impact on the experiments, as datasets with longer instances were naturally more fragile to attention distribution modifications. B Experiments on Multilayer LSTMs All of the experiments performed in the paper used single-layer LSTMs. Even though the considered binary classification tasks could be considered some of the simplest NLP problems, one still wonders what the effect would be if a more complex encoder was used. To this end, we perform a preliminary set of experiments where we use the best hyperparameters found for training the single-layer networks and increase the number of layers of the LSTM network. The results in Table 6, while far from conclusive, show that (1) among all tasks, the base model consistently becomes more robust to attention perturbation the more layers we add. Inconsistently, we further observe (2) a diminishing return of regularization techniques among tasks as the number of layers increases. In some cases, the 3-layer results do not follow this trend (but, curiously, the regularization seems to have a stronger effect).
We believe that these results should be taken with a grain of salt prior to a careful ablation study, but they might still interest the reader. C Importance of Initialisation in Dot-Product Attention Initially, the experiments we conducted worked well for additive attention but not for scaled dot-product attention. While the various regularization techniques produced significant changes in F1-scores when the additive attention distribution was modified post-hoc, this was not the case for dot-product attention, and the F1-scores remained constant no matter the modification. This was caused by the fact that the attention distribution of the model consistently converged to a uniform one. After exhaustive experimenting, the only change that fixed this behavior was changing the default initialization scheme for the query parameter. The dot-product self-attention mechanism for a single instance (for illustrative purposes) is generally defined as follows: where q is the query vector, while K and V are stacked representations for each timestep. In practice, when using self-attention for single-sequence classification, the query is a model parameter, 8 while the keys and values are functions of RNN hidden states. In our case concretely (following Jain and Wallace (2019); Wiegreffe and Pinter (2019)), the keys and values are the hidden states themselves.
Table 6: % F1-scores for trained models (higher is better) and drops in performance (∆ F1) for LSTM models with multiple layers. The number of layers is indicated in the second column.
With this in mind, Eq. 10 can be written as follows: where L_q is the trainable query parameter. In our PyTorch implementation, L_q is a Linear layer, which is initialised from the Kaiming uniform 9 distribution with the scale parameter √5. With this initialisation, the dot-product attention distribution in our experiments always converged to a uniform one. When we changed the initialisation to instead sample from a standard normal distribution, the dot-product attention converged to a sensible distribution. We suspect this problem occurs because the small initial weights of the linear transform scale down the differences between the attention probabilities too much for them to be distinguished from the uniform distribution. D Additional Visualisations of Regularization Effects To expand on Fig. 2, we now plot per-token prediction probabilities for multiple models. We sometimes omit the model classification probabilities so as not to clutter the plots too much. We select diverse examples (Figs. 3-7) from the first three batches of the SST validation split.
9 https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py#L79
Figure 3: A negative example: perhaps the analysed single-layer LSTM is unable to understand even the simple nuances of language. Here the instance is classified as negative across all models only due to the presence of the word "difficult". Note that these models obtain a near 0.9 F1-score on this dataset.
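To make the initialisation issue described in Appendix C concrete, the following is a hedged PyTorch sketch of the dot-product self-attention module with the query layer re-initialised from a standard normal distribution instead of the default Kaiming uniform scheme. The module layout and names are illustrative and not identical to the original implementation.

```python
import torch
import torch.nn as nn

class DotProductSelfAttention(nn.Module):
    """Scaled dot-product self-attention where keys and values are the hidden states and
    the query is a trainable parameter (a Linear layer), as in Appendix C."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, 1, bias=False)
        nn.init.normal_(self.query.weight)       # replaces the default Kaiming uniform init
        self.scale = hidden_dim ** 0.5

    def forward(self, h):                         # h: (B, T, H) hidden states
        scores = self.query(h).squeeze(-1) / self.scale
        alpha = torch.softmax(scores, dim=-1)
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)
        return context, alpha
```

With the default initialisation the pre-softmax scores start out very close to one another, which is consistent with the collapse to a uniform attention distribution reported above.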
7,382.2
2020-05-19T00:00:00.000
[ "Computer Science", "Linguistics", "Psychology" ]
Applicability of different energy efficiency calculation methods of residential buildings in severe cold and cold zones of China With the accelerated development of dynamic energy consumption simulation software, the accuracy and feasibility of steady-state calculation method for energy efficiency designs of residential buildings in severe cold and cold zones should be investigated. A six-story residential building model as a case study is introduced to be calculated and simulated by steady-state calculation method elaborated in ‘Design standard for energy efficiency of residential buildings in severe cold and cold zones’ and dynamic energy consumption simulation of EnergyPlus software respectively, in order to compare and analyze the differences between these two methods in five typical cities of China (Xi’an, Lhasa, Xining, Harbin and Hailar). The findings indicate that the index of heat loss of building obtained from both methods is different to typical cities with varied difference ratios. Especially in cities of high altitude, strong radiation and greater diurnal range, namely Lhasa and Xining, the difference ratio is as high as 43.83% and 16.63%. Thus, dynamic energy consumption simulation should be used for counting residential building energy efficiency instead of steady-state calculation method in above mentioned zones by analyzing main factors concerning the differences. Introduction The accelerated development of the building sector consuming a large amount of energy and natural resources leads to the energy crisis and climate change [1]. It has become the focus on different countries in the world, also China. At present, in order to control the excess heating energy consumption of residential buildings and its negative impact in northern heating zones, the Ministry of construction in China in 2010 promulgated the design code JGJ 26-2010, namely 'Design standard for energy efficiency of residential buildings in severe cold and cold zones', which proposes steady-state calculation method for guiding architects to consider building energy efficiency in the initial design phase [2]. However, complex upgrades of buildings, progress of energy-saving technologies and climate change are gradually restricting the application of steady-state methods in the building sector which was proved [3]. Meanwhile, another dynamic energy consumption simulation by computer software is being widely adopted in central and southern China because of its high precision and good applicability to complex buildings. Few researchers compare the difference between steady-state method and dynamic simulation in the application of building energy efficiency. Therefore, to further investigate the differences between above two methods, this paper utilizes EnergyPlus software to simulate a typical building in five typical cities for comparing with the steadystate method to identify the optimal method for sever cold and cold zones to improve the accuracy and efficiency of energy efficiency calculation. Research methods At present, there are two main calculation approaches used for analyzing and calculating the main energy consumption of residential buildings as follows. Steady-state calculation method At present, the effective heat transfer coefficient method is one of the steady-state calculation ways commonly used in China. 
The main building thermal energy efficiency design reference standard states that the effective heat transfer coefficient method is used to calculate IOHLOB based on the steady heat transfer theory; the specific formula is as follows: q_H = q_HT + q_INF − q_IH (1) In equation (1), q_H is the index of heat loss of the building. q_HT and q_INF are the heat transfer through the building envelope and the building infiltration heat loss, respectively, per unit building area per unit of time. q_IH is the interior heat gain per unit building area per unit of time and is usually taken as 3.8 W·m⁻². Dynamic energy consumption simulation Based on the unsteady heat transfer theory, dynamic simulation as an advanced method, including the transfer function method, the harmonic reaction method, and the finite element difference method, uses computer simulation software to establish building models by introducing hourly meteorological parameters, so that it can simulate and analyze the dynamic change of building load under constantly changing outdoor meteorological conditions [4]. At present, there are various kinds of software with excellent performance developed by different countries to provide users with convenience. In addition, some researchers have verified the accuracy of the EnergyPlus software and the high precision of the dynamic simulation method [5]. Hence, EnergyPlus 8.6 is used in this study to simulate the typical buildings because of its mature system and wide application. It simulates the dynamic loads of buildings by the heat balance method and simulates the transient heat transfer of building envelopes based on the internal surface temperature of the wall by the conduction transfer function (CTF) algorithm [6]. Case study In order to investigate the applicability of and difference between the steady-state calculation and dynamic simulation methods in the energy efficiency design of residential buildings in severe cold and cold zones, five typical cities, namely Xi'an, Lhasa, Xining, Harbin, and Hailar, are selected in this analysis. The selection principle is that these five cities not only belong to different climate sub-zones of residential building energy efficiency design, but are also located in different provinces and at different altitudes. In addition, for further quantitative and qualitative analysis of the two methods, each typical city's heating period for calculation (HPFC) and mean outdoor temperature during heating period (MOTDHP) in the dynamic simulation need to be the same as in the steady-state calculation from the building specification. Hence, the relevant information and comparison group of each city are shown in Table 1. Steady-state calculation I (SCI) is defined as the case where all HPFC, MOTDHP and relevant climate data used in the steady-state calculation are derived from the design standard JGJ 26-2010. Steady-state calculation II (SCII) is defined as the case where HPFC is identified in accordance with the design standard, and MOTDHP is identified as a constant value from the CSWD hourly meteorological data. Dynamic simulation II (DSII) is defined as the case where HPFC used in the dynamic simulation is identified in accordance with the above standard, the dynamically changing outdoor temperature is obtained from the CSWD hourly meteorological data, and the calculated MOTDHP is consistent with SCII. Steady-state calculation and dynamic simulation process According to the parameter settings in Table 1 and Table 2, the building model is calculated and simulated by the steady-state calculation and dynamic simulation methods mentioned above.
To be specific, based on JGJ 26-2010, the building adopts a continuous heating system, and the indoor heating air temperature in common rooms is required to reach 18 ºC, while in the stairwell it is required to be 12 ºC. Meanwhile, the total internal heat gains from people, lights, electrical appliances, etc. are set to 3.8 W·m⁻², and the average air change rate equals 0.5 ACH (air changes per hour). In addition, the Ideal Loads Air System is used in the dynamic simulation to calculate the ideal load and the heat gains of the sample building [7]. Moreover, the dynamic simulations are carried out based on hourly data of the typical meteorological year (TMY) of each typical city in the Chinese Standard Weather Data (CSWD) format downloaded from the EnergyPlus official website [8]. Table 2. Thermal performance values of the building model envelope. Meteorological data of typical cities Air temperature and solar radiation are the primary meteorological parameters influencing heating energy consumption, as has been shown [9]. Therefore, the daily range and global horizontal radiation of the five typical cities in the main heating months are shown in Figure 3 in order to assist in the subsequent comparative analysis. Figure 3 illustrates that both the daily temperature ranges and the global horizontal radiation in Lhasa and Xining are obviously higher than in the other typical cities, followed by Harbin and Hailar, and finally Xi'an. Particularly in Lhasa, the highest monthly hourly global horizontal radiation in the heating period can reach 226.88 Wh·m⁻². In addition, comparing with Table 1, it is an interesting finding that those cities with strong radiation and a greater daily range are at higher altitudes than the others. The reason why this phenomenon happens is that high-altitude zones, such as Lhasa and Xining, with thin air and low cloud amounts, receive enhanced solar radiation, which results in an increase in daytime temperature in these zones. This leads to a large temperature difference between day and night compared with low-radiation zones, and to an outdoor air temperature that changes dynamically over time. In contrast, the cities with low solar radiation and a low daily range, such as Hailar, Harbin, and Xi'an, are also relatively low in altitude. Comparative analysis of the methods After the above processes, the building model was used to study the differences between the steady-state calculation method and the dynamic energy consumption simulation in the value of IOHLOB by manual computation and software simulation. The results of IOHLOB of residential buildings in the above five typical cities obtained by SCI, SCII, and DSII are shown in Figure 4. Firstly, all of the results for each city obtained from either the steady-state method or the dynamic simulation meet the requirements of building energy efficiency design for IOHLOB as specified in the standard JGJ 26-2010. Figure 4 shows that the results vary greatly from city to city, but the trend is basically similar when comparing the IOHLOB of different cities, no matter which method is used. To be specific, the value of IOHLOB in Hailar is the highest, at about 19 W·m⁻². Second, the value in Harbin ranges from 15.63 W·m⁻² to 18.22 W·m⁻², followed by Xi'an and Xining, and finally Lhasa, which has the lowest value, less than half that of Hailar. The reason for these results can be found by combining these values with Table 1: there is a linear relationship between IOHLOB and the heating period and outdoor temperature for most cities, except for Lhasa and Xining with their high altitude and strong radiation.
However, Lhasa and Xining break this linear relationship, which indicates that altitude and the intensity of solar radiation have a strong impact on IOHLOB. Furthermore, SCI and SCII select different climate data and different specific dates for the heating period, which leads to different MOTDHP values being involved in the calculation, although both methods use the same theoretical algorithm and the same HPFC from the design standard. On the other hand, comparing DSII with SCI or SCII, Figure 4 also shows that the values of IOHLOB calculated by the steady-state calculation method are higher than those obtained by dynamic simulation in all the cities except Harbin, especially when DSII and SCII adopt the same parameters but produce different results. The main reason is that the theoretical algorithms of the two methods are different. For example, the conventional steady-state method only considers steady-state heat exchange and ignores the transient characteristics of building materials, such as thermal mass, which are taken into account by dynamic simulation. Furthermore, regarding the special result for Harbin, the reason may be an excessive consideration of extreme conditions, which leads to an unreasonable choice of run time during the dynamic simulation. Difference ratios. In order to clearly compare the steady-state calculation and dynamic simulation methods used in energy efficiency design in severe cold and cold zones, this research introduces the difference ratio, which is the ratio of the difference between the results of the two methods to the dynamic simulation result, usually expressed as a percentage. The difference ratios of IOHLOB between SCII and DSII for the five typical cities are shown in Figure 5. Apparently, the difference ratios of IOHLOB between SCII and DSII differ greatly from city to city. More specifically, the difference ratio in Lhasa is up to 43.83%, which is the highest among all typical cities.
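As a simple illustration of how the difference ratio defined above is computed, the following minimal Python sketch mirrors the verbal definition (the difference between the steady-state and dynamic results, relative to the dynamic result, expressed as a percentage). The numerical IOHLOB inputs would be the values read from Figure 4, which are not reproduced here; whether an absolute value is taken is an assumption.

```python
def difference_ratio(q_steady: float, q_dynamic: float) -> float:
    """Difference ratio (%) of IOHLOB between a steady-state result (e.g. SCII) and the
    corresponding dynamic-simulation result (e.g. DSII), relative to the dynamic result."""
    return 100.0 * abs(q_steady - q_dynamic) / q_dynamic
```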
2,612.4
2019-03-04T00:00:00.000
[ "Engineering" ]
Killing in the name of: authors and authority in CSS ABSTRACT Thirty-two years after the publication of Ashley and Walker’s (1990) article, ‘Speaking the Language of Exile: Dissident Thought in International Studies’, critical IR still fails to de-centre structures of white, male authority. This essay will consider the charge of patricide (and related imputations) directed at those who have arguably done precisely this – insofar as they have explicitly, and without apology, illuminated the racist underpinnings of Foucauldian and Copenhagen School ontologies and, hence, the very foundations of a great deal of scholarship in Critical Security Studies (CSS). Far from just another barb in a fractious debate, this essay will argue that the charge of patricide deserves our attention. It reveals a great deal about what is at stake – not only in terms of what can be said, what can be heard, and who can speak, but also in terms of what drives these delimitations: our emotional attachments to authors in general and white, male authority structures in particular. Introduction Thirty-two years after the publication of Richard Ashley and Rob Walker's (1990) article, 'Speaking the Language of Exile: Dissident Thought in International Studies', critical IR still fails to de-centre structures of white, male authority. This essay will consider the charge of patricide (and related imputations) directed at those who have arguably done precisely this -insofar as they have explicitly, and without apology, illuminated the racist underpinnings of Foucauldian and Copenhagen School ontologies and, hence, the very foundations of a great deal of scholarship in Critical Security Studies (CSS). Far from just another barb in a fractious debate, this essay will argue that the charge of patricide deserves our attention. It reveals a great deal about what is at stake -not only in terms of what can be said, what can be heard, and who can speak, but also in terms of what drives these delimitations: our emotional attachments to authors in general and white, male authority structures in particular. Moving forward demands we retrace CSS' beginnings not to the work of the forefathers (however we might define these), but to the call that proceeded and invigorated them and a host of others -igniting the flame, so to speak, to take inspiration from elsewhere and to think anew. This was the call to liberate our desire and, as will be demonstrated, it is inherent to the promise of CSS. Current debates have drawn our attention to the fact that contrary to the call by Ashley and Walker (1990, 261) to heed the voices of dissidents in International Studies, the 'malemarked figure of "man" [as] the sovereign subject of knowledge' continues to loom large in International Studies and, even more remarkably, in CSS. The latter is of note precisely because the intellectual, political and ethical commitments to heed dissident thought, to expose the interplay of power/knowledge, to excavate subjugated knowledges, and to foreground the voices of the marginalised may all be said to lie at heart of the discipline of CSS. Not only do these commitments mark the discipline as unique, but, in disciplinary terms, are conceived of as integral to (rather than separate from) a broader commitment to academic integrity and rigour. Charges of patricide, cast as aspersions, sit uneasily here. And yet . . . As I write, I am cognisant of my own citational practices. I am cognisant that, thus far, I have traced the lineage of CSS to two white men. 
I am cognisant of the space from where I (as someone who considers herself a member of the CSS community) speak and of the fact that this speech has very much been granted - enabled by those who came before me and the men and women who encouraged me. I am cognisant of my emotional and intellectual debts. Hence, as this essay explores the ongoing attachments to white, male authority that persist in CSS in spite of the countervailing intellectual, political and ethical commitments, it will do so in terms of questions of fidelity: To whom and/or what do we owe? Do some of us owe more than others? What is and/or ought to be the referent object of our protection? And what might this tell us about the limits and/or potential of CSS? This essay will proceed in three parts: Section One will consider the nature of our debts, for better and worse, to the authors who have authored us into existence - interpellating us into their various worlds and expanding ours in turn as well as, via their authority, enabling our ascension into the academic world. It will highlight the unequal terms upon which our ascension into the world at large and the academic world takes place and will consider the implications of this in terms of recent charges of patricide as well as practices of resistance in CSS. It will situate resistance, understood in terms of intellectual dissidence, as central to the birth of CSS and Ashley and Walker's (1990, 262) call to break free from the figure of 'man' as the sovereign subject of reason. Section Two will borrow insights from critical psychoanalytic approaches to ask why, in spite of this and various other critical turns in IR, this figure of 'man' very much persists, looming large over current debates in the discipline. The Conclusion will explore where we might go from here, in terms of both limits and opportunities, and will argue that, contrary to the views of some, it may not be time to abolish CSS yet. Section one: the nature of our debts to authors and authority What can be said? Who can say it? How many times have we been relieved that someone else has said 'it' - the very thing we were thinking, but could not (yet) say, or the very thought that until then had remained nameless and formless? Audre Lorde (2017, 1) describes this as the idea '[not yet] birthed, but already felt'. How many times have we felt gratitude because now we can speak it too? Authors allow us to name what has not been named. Authors extend and/or alter the world as we know (or knew) it. They give voice to our thoughts without form, our disillusionment and our unease - sometimes our rage. By speaking authors, we extend into the world and potential fields of belonging - through practices of publication and citation, of course, but, not least importantly, through practices of identification. Authors enable us to speak and, via an extension of their authority, to be heard. But authors, as authority, can also circumscribe, fix, and delimit. In 'What is an Author?', Michel Foucault (1984, 118-19) usefully reminded us that the author serves as an 'ideological figure' insofar as it is via reference to 'the author' that the proliferation of meanings inherent to an author's work, inherent to language, stops. The function of an author within an 'oeuvre' is to provide coherence, to individualise and to neutralise contradictions and slippages within and between texts.
Likewise, the function of an author within a canon is, if not to tell us what we like, to tell us what is important, to tell us where and when to stop and pay attention (Battersby 1989, 124), to tell us what to cite and to inscribe us into the practices of a discipline - such that we too may one day be authors with authority. This essay is certainly not a call to abandon authors or even authority - a call that would make no sense beyond the most simplistic liberal ontology of the autonomous, rational human. The assumption herein is that we are all already implicated - that we are all the products not merely of rational choices and autonomous free will, but of the discourses, social structures, places, loved ones, and various others who, for better or worse, called us into social existence, hailed us such that we could be (Butler 1997b, 21). The world is not our own. At each and every moment of interpellation we are indebted - even if not on terms of our choosing and even if not, so to speak, equally (Butler 1997a, b). When we criticise the authors who preceded us (and even perhaps 'schooled' us), the charge of patricide is essentially a reminder of this. As will be illustrated (and as suggested by the gendered nature of the term, patricide) it is also a reminder that the world essentially belongs to some of us, more than others. Borrowing from and extrapolating on the work of Sara Ahmed (2007, 153), it is a reminder that as a result of legacies of colonialism, the world has been 'made white' and, hence, bodies most 'at home' in this world are white. Due to the legacies of patriarchy, it is also fair to say (as others have variously said before me) that the world has also been made male (see, for example, Puwar 2004; Hooks 1997, 7; Collins 2009, 269-71; Särmä 2016). Amongst other things, this is to say that to inhabit whiteness and masculinity (to be seen and to recognise oneself in these terms) is to have certain things put within reach - not just physical objects, but also 'styles, capacities, aspirations, techniques and habits' (Ahmed 2007, 154). To this list we should add speech. Some of us are clearly better placed to speak and write the world as a result of these inheritances. As Patricia Hill Collins (2009, 269) elucidated in her groundbreaking book, Black Feminist Thought: Knowledge, Consciousness and the Politics of Empowerment, '[b]ecause elite white men control Western structures of knowledge validation, their interests pervade the themes, paradigms and epistemologies of traditional scholarship'. Bell hooks (2015, 23-31), amongst others, has extended this analysis to include the 'critical' postmodern/post-structural turn in many disciplines by highlighting the exclusionary nature of postmodern discourses even while they invoke the experiences of difference and 'Otherness' to ground their claims to legitimacy and political relevance. As she explains, one might be forgiven for thinking black women do not exist - or, at the very least, have very little to say that is worth hearing or of an intellectual calibre - upon introduction to the classic texts associated with critical postmodern scholarship. Arguably, this is changing. The ascendance of post- and decolonial approaches within CSS (in ways that can both productively complement and call into question postmodern/post-structural scholarship) is clearly remarkable. But what nevertheless remains (as made abundantly evident by recent debates within CSS) is the persistence of white, male authority structures at the discipline's heart.
Within these authority structures and the constraints they generate comes the differential nature of belonging to the discipline and the differential nature of our ability to speak and be heard - and, more specifically, to be heard on our own terms. The result is that while some of us might be expected to follow in the footsteps of our forefathers - to take up the mantle, if you will - others might be perceived to have been granted speech at others' behest. In short, it is still the case, as Robin Diangelo (2018, 136-37) states, that '[w]hite men occupy the highest positions in the race and gender hierarchy' and thus 'they have the power to define their own reality and that of others. This reality includes not only whose experiences are valid, but who is fundamentally valid . . . what constitutes pain and whose pain is legitimate'. Accusations of patricide remind us of this as well as of our differing legacies of debt. As indicated in the introduction, what inspired this essay was the very choice of the term patricide - or, to be more precise, 'almost patricide' - to describe the work of two critical security studies scholars, Alison Howell and Melanie Richter-Montpetit. This charge of 'almost patricide' (qualified, presumably, because no one was killed) was made by student journalists in the University of Copenhagen's independently run University Post in the wake of Howell and Richter-Montpetit's (2020) controversial decision to use 'the R-word' (Rutazibwa 2016) to describe the ontological underpinnings of the Copenhagen School in addition to their (and others') related critiques of the racist underpinnings of the work of Foucault (see, for example, Howell and Richter-Montpetit 2019; Almond 2007; Afary and Anderson 2005; Managhan 2020). Whilst such phrasing might be considered hyperbolic, it could equally be considered unusually candid - bringing to light precisely what is at stake, but so rarely acknowledged: the (continued) symbolic authority of white men. For the purposes of illustration, it is worth considering the quote in its full context: . . . it's not just the Copenhagen School that is denounced by the two researchers. In fact, according to a post on Alison Howell's website, they are in the process of writing a book called 'Race and Security Studies', where they attempt to criticize security theories in a more general sense. This also applies to so-called Foucauldian security theory which, according to Alison Howell and Melanie Richter-Montpetit, repeats the philosopher Michel Foucault's 'whitewashing of raciality and coloniality of modern power and violence'. This is almost patricide. The thought of Foucault is the theoretical basis of some of the most recognized feminist and postcolonial theories of the modern era, including those of Judith Butler (Rasmussen, Drude, and Pryner 2020, bold and italics added). In a later article, published by the University Post, another student journalist, making reference to the prior article, added the following: We wrote a story about what it has been like for the prominent peace researcher Ole Waever to have his research dissected in a particular type of anti-racist academic laboratory [further defined as 'a type of . . . laboratory' where the question is posed as to whether 'things are done fairly'] . . . Waever is respected in his field, a heavyweight from that part of the academic left that also fostered researchers like Howell and Richter-Montpetit (Zieler 2020, bold and italics added).
Both quotes fulfil a similar function, offering analogous frameworks within which we can view the current controversies - frameworks that remind us that Howell and Richter-Montpetit (in addition to other critical, postcolonial and feminist scholars) were fostered by the likes of Waever and Foucault and, hence, granted their very speech by the grace of those whose work they now criticise. What should also be evident is that these statements do much more than offer frameworks; they are performative. Far from simply providing reminders, these statements reinscribe the work of Howell, Richter-Montpetit, Butler and 'some of the most recognised feminist and postcolonial theories of the modern era' (Rasmussen, Drude, and Pryner 2020) within particular intellectual traditions and legacies and not others. These statements (re)inscribe the intellectual birth of these authors (and the many others who locate themselves within critical international studies traditions) squarely within the symbolic order of white men, while at the same time casting particular white men in a transcendent relationship to the order they seemingly founded. Unlike the rest of us, that is, the forefathers appear self-generated and indebted to no one. At this point in the argument, in light of the power disparities in play, it is worth acknowledging the fact that these statements were made by students. So, let me be very clear about both my intentions and my own view as to how these statements should be read. My aim is not to castigate student journalists who have neither been afforded the life nor academic experience to be expected to have considered all the potential issues raised by their choice of words. The aim is simply to explore the assumptions and insinuations that underpin their statements - and to do so to the very extent that these betray the unspoken assumptions that demarcate the terrain upon which current debates about authors and authority in CSS are being waged. Indeed, what is most remarkable about these statements is what they reveal about the enduring legacy of modern, Western culture with the figure of 'man' at its centre (as the transcendent and sovereign subject of reason) and, notably, even within the discipline of CSS. Indebted to countless feminist scholars before them, Ashley and Walker (1990, 262) elaborate: In modern culture, it is the male-marked figure of 'man' who is understood as the sovereign subject of knowledge. It is the figure of 'man' who is understood to be the origin of language, the condition of all knowledge, the maker of history, and the source of truth and meaning in the world. It was in response to this figure and the place it continued to occupy within social scientific traditions that Ashley and Walker produced their seminal 1990 paper, by way of introduction, for the special edition of the journal International Studies Quarterly that they co-edited. In it, they urged international studies scholars to resist 'knowing' (at least 'in the sense celebrated in modern culture', wherein to know 'is to construct a coherent representation that excludes contesting interpretations') and to instead adopt the language of 'exile' and concomitant praxis of intellectual dissidence advanced by Julia Kristeva (Ashley and Walker 1990, 261 and 262). Dissidence, Kristeva (1986, 299) described as a commitment to 'the ruthless and irreverent dismantling of the workings of discourse, thought and existence'.
Exile, she described as inherent to the practice of writing - at least, I will add, to the extent that writing requires us to take the jump and to let go of secure sources of meaning guaranteed by 'a dead father' (Kristeva 1986, 298; see also Alcorn 2002). This work may be described as a rallying call for a new intellectual praxis and new 'traditions' of dissidence as well as a call to recognise and celebrate 'the increasing volume and variety of work' being done in this vein (Ashley and Walker 1990, 263). The intention of the special edition was to 'provide an opportunity to publicly celebrate what . . . dissident works of thought [were already celebrating]': difference, not identity; the questioning and transgression of limits, not the assertion of boundaries and frameworks; a readiness to question how meaning and order are imposed, not the search for a source of meaning and order already in place; the unrelenting and meticulous analysis of the workings of power in modern global life, not the longing for a sovereign figure (be it man, God, nation, state, paradigm, or research program) that promises a deliverance from power . . . (Ashley and Walker 1990, 264) The emergence of Critical Security Studies (CSS) can be understood as a response to this call - not in the promise of a new theoretical approach, but as a response to this articulation of desire. This was a desire, described by David Mutimer (2007, 54), 'to move beyond the structures of security as it was studied and practiced in the Cold War and in particular a desire to make that move in terms of some form of critique'. It was a desire to 'open up hitherto closed off connections and enable the construction and circulation of new ways of knowing and doing politics' (Ashley and Walker 1990, 263). However, it was also a desire whose realisation, as expressed by Kristeva (1986, 299), would require 'ceaseless analysis, vigilance and [the] will to subversion'. What the nature of contemporary debates within and about CSS makes abundantly clear is that, in spite of this call and in spite of the rise of various associated post-traditions (post-positivism, postmodernism and poststructuralism, and postcolonialism, for example) within International Studies and CSS, the figure of man, as the sovereign subject of reason, persists (Ashley and Walker 1990, 262). While various critical turns in IR may have made us all sceptics of the grand narratives and truth claims of the discipline's forefathers, in ways that have decentred (if not displaced) the authority of particular authors, this has clearly not 'absolved us of our need for authority' (Alcorn 2002, 7) or decentred the structures of white male authority within the discipline. The next section will draw upon insights from critical psychoanalytic approaches to begin to address the question of 'why?' Section two: making sense of the persistence of the figure of 'Man' and the implications for CSS In a sense, perhaps, the answer is simple. As Diangelo (2018, 136) noted, despite the extension of legal equality to women and ethnic and racial minorities, white men continue to hold the reins of various positions of authority (see also Crenshaw 2017). White men still have a unique power to write the world and, with it, to influence whose voices are recognised, whose pain is recognised, whose lives matter, and, even, who we can mourn (Diangelo 2018; Butler 2004; Hoffman 2017; Razack 2004).
The latter is important and not just politically; it reveals a great deal about 'the psychic life of power' (Butler 1997b) - both in the way it shapes our most intimate selves (our symbolic and personal attachments) and in the power of our drives and psychic life. This is where it gets a bit more 'complicated', or at least where insights from critical psychoanalytic theory can be helpful. Critical psychoanalytic theory can help us understand that it is not simply that power determines desire (even as understood within a Foucauldian frame), but that desire begets power (see, for example, Alcorn 2002; Žižek 2008; Butler 1997b; Managhan 2020). That is to say that our interpellation by and through dominant discourses and others is the product of a more primary desire to relate, to love, and, in turn, to be seen and to be loved (Alcorn 2002, 66; Butler 1997a, b). Not surprisingly, as Marshall Alcorn (2002, 66) explains, 'these desires have the potential to create just and equitable communities'. These desires enable our entry into the world of language, meaning and community - even communities (in the plural). But, as we all know, the expansive potential of these desires can also be circumvented. The desires generated by Masters (i.e. by those who command authority), for example, require the repression of other desires, and this is where an academic community like CSS can run into problems - in the desires generated by Masters and in our desire for Masters. To understand the acrimony of current debates, accusations of patricide, and related accusations of ingratitude, impertinence and the like (whether in the academic world or beyond - when, for example, black sportspeople take the knee), we have to understand power and the various ways it operates through entitlement, 'white fragility', etc. (see Diangelo 2018). Power gives shape to our desires. But, if we want to understand our attachment to power, our sense of indebtedness to figures who embody power/authority and the phenomenon wherein the powerful become the referent objects of our protection, we also need to understand how the free flow of desire can become blocked in ways that constrain who/what can be heard. A psychoanalytic reading of the infamous Milgram experiment is quite insightful in this regard. The experiment has been summarised by Alcorn (2002, 42) as follows: In Stanley Milgram's classic experiment on obedience, average people are told by an authority figure to administer an electrical shock to another person, a learner [or 'victim'] . . . The victim's task is to remember proper responses to verbal cues. The person chosen as the subject of the experiment must administer a shock whenever the victim makes a mistake. The authority figure, whom I term a teacher, and the victim are actually both actors, and the experiment is designed to see if the experimental subject (whom I term a student) will deliver a lethal dose of electricity purely upon request. The majority of students willingly applied the lethal dose. Despite the evident stress created by the conflicting demands of the situation (on the one hand, the continued reassurance of the teacher that the experiment must go on and, on the other hand, the pain of the victim alongside the danger warnings provided by the machine), most students put aside their own misgivings and administered the shock (Alcorn 2002, 42-49). From a Lacanian perspective, this experiment can be read as a testing ground for what can happen in the face of competing demands. 1
It reveals the propensity to short-circuit our own desire in the face of command (even if this requires us to ignore another person's suffering). It explains why, at the moment of decision, we may feel compelled to symbolically identify with 'the place of authority' - i.e. from the perspective of the Big Other (i.e. Reason, Science, Administration, State, God) - and emotionally identify with the figure who represents that authority (Alcorn 2002, 42-49 and 72; see also Žižek 1992). It also reveals the ways we reduce the anxiety these situations create by generating meanings that resolve the conflict - meanings that support the master's command and our self-understanding as active agents. In later interviews, students who exhibited considerable stress in the moment reported they were doing what was necessary for the experiment, for science - and/or that they believed the authority figure knew what he was doing in spite of a great deal of evidence to the contrary (Alcorn 2002, 42-49). What this experiment demonstrates, according to Alcorn (2002, 49), is that [i]n contrast to the humanist claim that each individual has an autonomous desire that responds freely to truth claims, the Milgram experiment suggests that most people operate not on their own desires but on the desires of particular others in authority. The problem, Alcorn (2002, 51) explains, is our own primary socialisation. The demand 'replays our most primary relations between meaning and pleasure' from the affirmation (a smile or encouraging nod) that greeted (or did not) our earliest attempts to symbolise, to all the subsequent moments that have marked our ascension into the world of language, culture, and law (Alcorn 2002, 51). That is, it replays our entry into a world that is not only not our own, but symbolically and otherwise the order of white men. This can be described as 'the phallic order', both to denote its patriarchal character and what Žižek (1992, 76) describes as the groundless, contingent and senseless injunction upon which our ascension into this order is based (see also Hook 2006; Neill 2020; Gunn 2008; McGowan 2020). In Lacan's telling of the Oedipal complex, the 'figure of the father . . . is not necessarily the actual literal father of the child' (Hook 2006, 63). The father figure signifies, rather, the first imposition of law - the sovereign source of reason. More accurately, it serves as a metaphor to mark our entry into the world of the Symbolic and the point where meaning, reason and law are installed from nowhere (see, for example, Hook 2006; Neill 2020; Gunn 2008; McGowan 2020). In answer to the question of 'Why?', the sovereign authority (the father figure, the source of law) answers, 'Because I said so'. In the face of hesitations by the subjects in the Milgram experiment, all the teacher had to say was 'The experiment must continue' and it did (Alcorn 2002, 44). What is noteworthy, from this perspective, about the Milgram experiment is that the signifier 'science' was meaningless. If anything, the ultimate signified was authority itself. Herein lay the force of the command. Moreover, it was he who embodied that authority who became the referent object of protection of those charged with responding to two competing desires. Empathy with the victims alone was insufficient to stop the experiment by refusing the command (Alcorn 2002, 50-51).
While some clearly wanted the experiment to stop, '[t]heir need to please and be loved by an authority [was] greater than [their] desire to avoid harming another person' (Alcorn 2002, 51). Arguably, we can trace a similar phenomenon at play in contemporary debates within CSS, whereby ultimate authority is still grounded in the figure of rational, sovereign man and, in the face of competing truth claims, those who embody it become the referent objects of our protection. We can see this, for instance, in response to contentious accusations of paedophilia against Foucault (see Guesmi 2021; Sormon 2021; Campbell 2021). Consider the following excerpt from an interview with one of Foucault's biographers, James Miller (quoted in Kelly 2021), in response to the issue in dispute: 'Should we cancel Foucault?': Foucault directly challenges where a civilization chooses to draw the line between reason and madness, between the normal and abnormal, between good and evil. That challenge is at the heart of his work; it is what makes Foucault a truly radical thinker. His lifework remains deeply disquieting - as he meant it to be. Here and elsewhere in the interview, the scandal serves as a sign of Foucault's intellectual fervour - a testament to the dialectic he employed through his engagement with 'limit experiences' in both his academic and private life (Miller quoted in Kelly 2021). The answers Miller provides (both in terms of what he does and does not say) are suggestive: even if Foucault did pay children for sex, their potential suffering is deemed another order of concern. 2 In the context of Foucault's lifework, his possible victims are a footnote, if not a testament to the genius of the man. We can also see this phenomenon at play in the accusations of patricide laid against Howell and Richter-Montpetit. In fact, in this case it was not just students who came to the defence of highly esteemed men within their institutions. An entire campaign to mobilise support for these senior academics and to discredit Howell and Richter-Montpetit, as well as the journal that published their work and its editorial leadership, was unleashed via email and traditional and social media. What raised the eyebrows of many was that this campaign was launched by the senior academics whose work was being critiqued, Ole Waever and Barry Buzan. Although these academics were given an opportunity to defend their work by the same journal, and did so in the typical form of a scholarly rejoinder, they deemed this insufficient. Waever explained his intention was not to silence or bully younger female academics (as some accused him of doing), but to 'save [the] field' (quoted in Friis and Morthorst Rasmussen 2020). Disregarding the fact that Howell and Richter-Montpetit's (2020) article was published in a peer-reviewed journal, Waever and Buzan held it up as an example of faulty science and even fuel for the Far Right in terms of the 'mockery' it made of critical research (Rasmussen, Drude, and Pryner 2020; see also Waever and Buzan 2020; Friis and Morthorst Rasmussen 2020; Zieler 2020). Strikingly, throughout this campaign, Waever and Buzan set themselves up as Masters and/or 'ideal observers' (Battersby 1989, 124-25) - i.e. as uniquely placed to ascertain the merits of work in International Studies and to 'school' others in 'how to "responsibly" engage with questions of race and racism' (in spite of their lack of research expertise in the area) (Enloe et al. 2020).
As expressed in an online letter, titled 'Security Studies Backlash - A Feminist Response', signed by 87 academics (at the time of counting), in so doing, Waever and Buzan not only dismissed alternative voices and alternative forms of critique, but also evaded accountability for the blind spots in their own scholarship (Enloe et al. 2020). Effectively, they also discounted the broader effects of these blind spots in terms of the simultaneous erasure of race and 'methodological whiteness' in the discipline(s) of International Studies (see Bhambra 2017). It was their pain and virtue, rather, that was made front and centre in interviews: 'Dammit', Waever said, ' . . . I was active in the peace movement in the 1980s. It was in this context that we got the ideas . . . It was to intervene in these political debates' (quoted in Rasmussen, Drude, and Pryner 2020). The point, of course, is not that we should feel no empathy for a senior scholar or scholars whose life work has just come under sustained critique. We all get it wrong from time to time; we could all use some grace. Instead, the point is to be wary of the entitlement that accompanied Waever's campaign. It is also to be wary of the demand to fall in line and cede our desire to Masters - especially if we know (or even just suspect) that in Howell and Richter-Montpetit's work and in other postcolonial and decolonial critiques of IR, there are points that, at the very least, deserve a hearing. Perhaps, echoing the times and spirit in which Ashley and Walker (1990, 262) wrote, this is another exciting (if very uncertain) time in which alternative voices and perspectives are being heard and the pain of others is beginning to register. The questions, perhaps, to return to the beginning of this essay, are as follows: To whom and/or what do we owe our voice, our 'authority', and/or authorisation to speak (especially if and when we are speaking otherwise)? Do some of us owe more than others? What is and/or ought to be the referent object of our protection? And what might this tell us about the limits and/or potential of CSS? Before concluding, I will share the argument put forward by David Chandler and Farai Chipato (2021) that indirectly addresses these very questions. Chandler and Chipato's (2021) argument is multifaceted, but essentially boils down to the following: (1) They suggest 'it is necessary to explore and address the problem that race poses for the discipline not just at the level of overtly discriminatory and hierarchical strategies of power and control, but also at a deeper, ontological level' (61). (2) At a deeper, ontological level, the 'recent controversy surrounding a critique of securitisation studies' has revealed '[t]he difficulty . . . of critiquing anti-Blackness without offending or bringing into question the "critical" credentials of the scholars involved [in] developing a leading approach' (61). (3) Considering this, in terms of our understandings of 'security' in an anti-Black world, we have to consider that there may be no 'reparative ethico-political openings' that can be made from 'within the subject position of critical security studies' (66). And (4), on this basis, 'the only possibility of a truly novel and ethical future [may lie in the] abolition of the entire intellectual, institutional, ontological edifice that critical security studies is embedded in' (66). Perhaps. I will grant them that. 'Perhaps' is also the word that prefaces their otherwise forceful conclusion (66).
Their 'Perhaps' rests on the possibility of reparation within the discipline - the precondition for which, they say, lies in overcoming the various disavowals that lie at its heart. These include the assumption that the problems of race and racism can be located in the past and, like sexism, are legacies that must and can be overcome - rather than seeing these legacies as constitutive of the discipline, contemporary knowledge production and ways of knowing (63). The ultimate disavowal, according to them, would be to put the future of critical security studies at the forefront of concern - to invert the problematic such that CSS 'is now the solution rather than the problem' (63). On this, I agree. And yet, the recognition that race and racism are intimately entangled with CSS is already underway, even if via small, uneven and imperfect steps - without any security of getting it right, much less a conceivable end. The debate is happening and it is happening here and now. So, with that in mind, perhaps it is not the time to cede the ground to what Chandler and Chipato (2021, 65) describe as 'the hegemonic imaginaries of the discipline' or to the forefathers who insist we cannot relinquish them. My conclusion will offer a more detailed rejoinder. Conclusion Academia depends upon the circulation of desire - not in terms of the pursuit of pleasure, but in the opportunity and ability to respond to multiple desires in the social field such that we can retain an openness to competing truth claims and the lived experiences that ground them (Alcorn 2002; Sabaratnam 2011). If we accept that the 'normal' discursive practices of any culture or academic discipline always have a non-symbolised remainder - such as the repressed, yet formative, role of race in the creation of the discipline of International Relations - then, as Alcorn (2002, 56) concludes, 'efforts must be made to recover for signification what has been excluded, repressed and foreclosed' (also see Butler, Laclau, and Žižek 2000; Rancière 2009). This is nowhere truer than in the subfield of security studies that marks itself as critical. It is in this sense that current events in CSS are exciting - that 'things of uncertain consequence', but with great significance, are beginning to take place (Ashley and Walker 1990, 262). Many of these events are also decentring the white, male symbolic order and the authority of some within it - particularly those who consciously or otherwise identify with the figure of the father. Naturally, healthy academic debate will ensue, as well as clashes and resentments borne of the different and competing libidinal attachments and investments that come to the fore in these debates. Not all will be regressive, but some may be. Lacanian theory explains the ways 'desire can operate in terms of pathological attachments that restrict the free flow of desire and thus constrain both discourse and the recognition of desires in others and in oneself' (Alcorn 2002, 66; see also Žižek 1992). It also suggests that, despite what they may claim, even the most 'objective' scholars serve invisible masters - insisting 'like kings claiming divine right that their truth is true and that of another is false' and using their rationality to find ways, when necessary, to support the truth claims of their masters (Alcorn 2002, 84). A Lacanian reading of the Milgram experiment, in particular, shows that we have the propensity to short-circuit our desire in the face of demand. It also, however, shows that there is an alternative.
At some point in the experiment, one-third of the experimental subjects became disobedient (Alcorn 2002, 48). These subjects were those who, according to Alcorn (2002, 48-49 and 110), 'knew their own desire' and acted on it. This is not to suggest that they acted on some a priori desire; it suggests they were able to respond freely to conflicting expressions of desire in the social field and overcome the constraints of an authority figure who sought to suppress these. We can refuse the sign of the father and the terms of the symbolic order that grants his authority. For Chandler and Chipato (2021), to refuse the sign of the father is to refuse the sign of a discipline rooted in Masters and, barring the possibility of reparation (particularly, it would seem, with the Masters), they urge us to consider abandoning CSS. This essay will suggest an alternate way forward, and it will do so for two reasons. First, to leave now would be to repeat the initial cut that separated the study of race from International Relations, resulting in the (re)erasure of race from international and security studies (Vitalis 2015; Adamson 2020). Second, and relatedly, it would be to cede the ground to Masters yet again, relinquishing our right to speak the world (and at the very moment that cracks are appearing in reigning paradigms, allowing new openings). Returning to the initial question (To whom and/or what do we owe?), I stake my own fidelity not to 'the founding fathers' (however identified), but to the call outlined earlier in this paper that celebrates the circulation of desire and asks that we not cede our own. In this, I share with Chandler and Chipato (2021, 65) the aspiration to 'embrace the refusal of the settled order of academia [and] the flight from the institutional demands of disciplinarity'. 'Theory is always for someone and some purpose', Robert Cox (1981, 128) said. To Cox we listened; Cox we cite. 3 But who were the countless others who made this point before him or revealed it to us in their actions, songs and protests? Who were the original authors? Who, by way of just one example, were the un-cited members of the Black Panther Party who, as Brady Heiner (2007) illustrates, inspired and informed many of Foucault's key concepts as well as his conceptualisation of modern power? Perhaps Lorde (2017, 19) had a point when she said 'the master's tools will never dismantle the master's house'. Admittedly, when I first came across this statement as a graduate student (reading her, alongside Cox, the Copenhagen School, and Foucault), I did not understand this. Learning how to work the master's tools, albeit with critical aims, was what I was in graduate school to do and what I continued to work towards afterwards. But maybe I am beginning to understand. 4 The point is not to stop reading the Masters or to cancel authors - far from it. The point, rather, is to remember that for those of us who define our work in the tradition of intellectual dissidence, it is not to any Master that we owe our speech - at least not entirely. The argument of this paper is that it is in this gap (in the 'not entirely' and in the refusal to cede our desire) that the work of CSS began - and it is here, in tandem with other critical intellectual traditions and political communities and all variety of exiles, that CSS can continue to inspire. CSS is not a panacea, nor was it ever intended to be (Mutimer 2007; Williams and Krause 1997). And it is certainly not without its problems.
But, if we can keep alive the flame that sparked its beginnings and the inherent promise within that spark - to make more space in global political studies for the circulation of desire and the concomitant interrogation of the grounds upon which we know, speak and write the world - then perhaps, just perhaps, it is not time to abandon it yet. Notes 1. Milgram made sense of his own experiment in terms 'of the considerable power of what he termed obedience' (Alcorn 2002, 42). For more on the initial experiment and Milgram's analysis, see Milgram (1975). 2. For a discussion of Foucault's failure to consider the unequal power relations between adult and child in his intellectual engagement with the matter of paedophilia, see Alcoff (1996). 3. For an interesting discussion of the Eurocentrism and problem-solving tendencies in Cox's (1981) seminal text, known for introducing the oft-repeated distinction between problem-solving and critical theory, see Hobson (2012, 252-53). 4. For excellent discussions pertaining to the creative and dangerous potential of alternative epistemologies, ontologies and insights gleaned from spaces of marginality, see Collins (2009) (especially Chapter 11) and Hooks (2015) (especially Chapter 15). Disclosure statement No potential conflict of interest was reported by the author. Notes on contributor Tina Managhan is a Senior Lecturer in International Relations at Oxford Brookes University. She has research specializations in the cultural politics of identity and emotion.
9,893.8
2022-09-02T00:00:00.000
[ "Sociology", "Philosophy" ]
Quantifying Chaos by Various Computational Methods. Part 1: Simple Systems The aim of the paper was to analyze the given nonlinear problems by different methods of computation of the Lyapunov exponents (Wolf method, Rosenstein method, Kantz method, the method based on the modification of a neural network, and the synchronization method) for the classical problems governed by difference and differential equations (Hénon map, hyperchaotic Hénon map, logistic map, Rössler attractor, Lorenz attractor) and with the use of both Fourier spectra and Gauss wavelets. It has been shown that a modification of the neural network method makes it possible to compute a spectrum of Lyapunov exponents, and then to detect a transition of the system regular dynamics into chaos, hyperchaos, and others. The aim of the comparison was to evaluate the considered algorithms, study their convergence, and also identify the most suitable algorithms for specific system types and objectives. Moreover, an algorithm of calculation of the spectrum of Lyapunov exponents based on a trained neural network has been proposed. It has been proven that the developed method yields good results for different types of systems and does not require a priori knowledge of the system equations. Introduction The first part of the present work is focused on the numerical investigation of classical dynamical systems to estimate the velocity of divergence of neighbouring trajectories with the help of a measure coupled with the Kolmogorov entropy [1] (or metrics). In reference [1], based on the mathematical results of Oseledec [2] and Pesin [3], it has been shown that the numerically estimated quantities can be treated as exact/true values. The method proposed by Wolf [1] is most widely used to verify and study chaotic dynamics. However, also the Rosenstein [4] and Kantz [5] methods are often employed to estimate the largest Lyapunov exponents. A state-of-the-art review of papers devoted to the theoretical background of the Lyapunov exponents and methods of their computation has been carried out by Awrejcewicz et al. [6]. In particular, the method of the choice of an embedding dimension has been described. The correlation dimension method, the false nearest neighbor method and the gamma-test method have been presented based on the Hénon and Lorenz attractors. In particular, high computational difficulties have been observed in the case of the Wolf method, together with its only marginally successful application to small data sets. To avoid the abovementioned drawbacks, a novel neural network-based algorithm to estimate the largest Lyapunov exponents by considering only one coordinate has been proposed. Reference [6] has also reported on the neural network algorithm of Golovko for computation of a full spectrum of Lyapunov exponents. A comparison of the results obtained by Golovko with the exact values of the Lyapunov exponents of the Lorenz and Hénon systems has exhibited small errors. In References [7,8], the method of largest Lyapunov exponent computation using the synchronization phenomena of identical systems has been proposed. A few types of coupling have been studied, depending on the type of the considered system. It has been pointed out that a large computation time is required to achieve full synchronization. The method proposed in References [9,10] is particularly suitable to study chaotic dynamics of continuous mechanical systems.
It should be emphasized that, based on the results published by the authors of the present paper, the analysis of nonlinear dynamics relying on the estimation of the Lyapunov exponents leads to the conclusion that the mentioned problems have not been satisfactorily solved yet [1,4,5,9,10]. More recently, Vallejo and Sanjuan [11,12] have studied the predictability of orbits in coupled systems by means of finite-time Lyapunov exponents. This approach has allowed them to estimate how close the computed chaotic orbits are to the real/true orbits, as characterized by the system's shadowing properties. It is known that the fundamental property of chaos is the existence of strong sensitivity to a change of the initial conditions. The definition of chaos, given first by Devaney in 1989 [22], includes three fundamental parts. In addition to sensitivity to the variation of the initial conditions, a condition of mixing (known also as the transitivity condition) and a regularity condition, measured by the density of the periodic points or the classical notion of periodicity, are also included. In 1992, Banks et al. [23] proved that the condition of sensitivity to the initial conditions can be neglected, i.e., the conditions of transitivity and periodicity imply the sensitivity condition. Knudsen [24] has defined chaos as a function given on a bounded metric space which has a dense orbit and essentially depends on the initial conditions. According to the definition proposed by Gulick [25], chaos exists when either there is essential dependence on the initial conditions or a chaotic function has positive Lyapunov exponents at each point of the space and does not eventually tend to a periodic orbit. This definition has also been employed in the present work. The Largest Lyapunov Exponent The following dynamical system was considered: dx/dt = f(x), (1) where x stands for the N-dimensional state vector. Two close phase points x1 and x2 were chosen (in the phase space). They stand for the origins of the trajectories x1(t) and x2(t). The change in the distance d between the corresponding points of these trajectories under the evolution of system (1) can be monitored as d(t) = ||x1(t) − x2(t)||. (2) If the dynamics of system (1) is chaotic, d(t) increases exponentially in time, i.e., d(t) ≈ d(0) e^(ht). (3) This yields the average velocity of the exponential divergence of the trajectories, h = (1/t) ln[d(t)/d(0)], (4) or, more precisely, h = lim(t→∞) lim(d(0)→0) (1/t) ln[d(t)/d(0)]. (5) The quantity h is known as the Kolmogorov-Sinai entropy (KS-entropy). Employing the KS-entropy, one can characterize the studied process, i.e., quantify whether the process is regular or chaotic. In particular, if the system dynamics is periodic or quasi-periodic, the distance d(t) does not grow in time and the KS-entropy is equal to zero (h = 0). If the system dynamics tends to a stable fixed point, d(t) → 0 and h < 0. Contrarily, the KS-entropy is positive (h > 0) if one deals with chaotic dynamics. The KS-entropy is the maximum characteristic Lyapunov exponent; it quantifies the rate at which information about the initial system state is lost. Results The spectrum of Lyapunov exponents makes it possible to quantify the local stability properties of an attractor. Consider a phase trajectory x(t) of the dynamical system (1), starting from the point x(0), as well as its neighbouring trajectory x1(t) = x(t) + ε(t). (6) The following function can be constructed: λ(ε(0)) = lim(t→∞) (1/t) ln[||ε(t)||/||ε(0)||], (7) which is defined on the vector of initial displacement ε(0) such that ||ε(0)|| = ε, where ε → 0.
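To make the definition of h concrete, the following short sketch (not part of the original paper; the logistic map, the parameter values and the iteration counts are illustrative choices) estimates the divergence rate of Equation (4) by directly following two initially close trajectories:

```python
import numpy as np

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1 - x)."""
    return r * x * (1.0 - x)

def divergence_rate(x0, d0=1e-9, steps=25, r=4.0):
    """Finite-time estimate of h = (1/t) * ln(d(t)/d(0)) for two trajectories
    started a distance d0 apart (cf. Equation (4))."""
    x, y = x0, x0 + d0
    for _ in range(steps):
        x, y = logistic(x, r), logistic(y, r)
    return np.log(abs(y - x) / d0) / steps

# Average over many starting points on the attractor; for r = 4 the exact
# largest Lyapunov exponent of the logistic map is ln(2) ~ 0.693.
rng = np.random.default_rng(0)
estimates = [divergence_rate(x0) for x0 in rng.uniform(0.1, 0.9, 200)]
print(f"h ~ {np.mean(estimates):.3f} (expected ~ {np.log(2):.3f})")
```

Because the separation saturates once it reaches the size of the attractor, the number of iterations has to stay small enough for the exponential-growth regime of Equation (3) to apply.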
As the initial displacement vector is rotated over the n directions of the N-dimensional phase space, Function (7) changes in a jump-like manner through a finite series of values λ1, λ2, λ3, . . . , λn. These values of the function λ are called Lyapunov exponents (LEs). Positive/negative values of LEs can be viewed as a measure of the averaged exponential divergence/convergence of neighbouring trajectories. The sum of LEs stands for the averaged divergence of the phase trajectory flow. In the case of a dissipative system, i.e., a system possessing an attractor, this sum is always negative. As numerical case studies show, in some dissipative systems the LEs are invariant with respect to the chosen initial conditions. Hence, a spectrum of LEs can be understood as a property of an attractor. Usually, LEs are presented as a sequence of values in decreasing order. For instance, the symbols (+, 0, −) mean that for the analyzed attractor there is one direction in the 3D space where exponential stretching is exhibited, a second direction indicating neutral stability, and a third one exhibiting exponential compression. It should be noted that all attractors different from stable stationary points always have at least one LE equal to zero (in the average sense, all points of a trajectory are bounded by a compact manifold and they cannot exhibit divergence or convergence). In what follows, relationships between the Lyapunov exponents and the properties and types of attractors are illustrated and discussed: (1) n = 1. In this case only a stable fixed point can be an attractor (node or focus). There exists one negative LE, denoted by λ1 = (−). (2) n = 2. In 2D systems, there are two types of attractors: stable fixed points, with the signature (−, −), and limit cycles, with the signature (0, −). (3) n = 3. In 3D phase space, there exist four types of attractors: stable points, limit cycles, 2D tori and strange attractors. The following sets of LEs characterize the possible dynamical situations to be met: (−, −, −) for a stable point, (0, −, −) for a limit cycle, (0, 0, −) for a 2D torus, and (+, 0, −) for a strange (chaotic) attractor. In the majority of the studied problems, it is impossible to give an analytical definition of LEs, since the analytical solution to the governing differential equations would have to be known. However, there exist reliable algorithms to find all Lyapunov exponents numerically. Benettin Method We began with the numerical investigation of the Kolmogorov entropy of the Hénon-Heiles model. Numerical computations were carried out with accuracy up to 14 digits by means of employing the so-called method of central points. Observe that a similar method has been used in reference [26]. Based on the Lyapunov exponents, the ergodic properties of dissipative dynamical systems with a few degrees of freedom were numerically studied with the Lorenz system. The system exhibited the exponent spectrum of the (+, 0, −) type, and the exponents had the same values for the orbits beginning from an arbitrary point on the attractor. It means that the ergodic property of a general dynamical system can be quantified by a spectrum of the characteristic Lyapunov exponents. Below, a brief description of the used method is presented. Let a point x0 belong to the attractor A of a dynamical system. The evolution trajectory of the point x0 is referred to as the real/true trajectory. A positive quantity ε, significantly smaller than the linear size of the attractor, is chosen. Furthermore, an arbitrary perturbed point x̃0 is chosen in such a way as to satisfy ||x̃0 − x0|| = ε.
The evolution of the points x0 and x̃0 is considered over a short time interval T, and the new points are denoted by x1 and x̃1, respectively. A vector ∆x1 = x̃1 − x1 is called the perturbation vector. The first estimate of the exponent is found with the use of the following formula: λ̃1 = (1/T) ln(||∆x1||/ε). (8) The time interval T is chosen in such a way as to keep the amplitude of the perturbation smaller than the linear dimensions of the phase space nonhomogeneity and the attractor size. The renormalized perturbation vector ∆x̃1 = ε ∆x1/||∆x1|| is taken, and a new perturbed point x̃1 = x1 + ∆x̃1 is defined. Finally, the so far described procedure is repeated taking into account x1 and x̃1 instead of x0 and x̃0, respectively. After repeating this procedure M times, λ is defined as the arithmetic average of the estimates λ̃l obtained on each computational step: λ = (1/M) Σ_{l=1..M} λ̃l. (9) In order to achieve a more accurate estimate, one can take a large M and carry out computations for different initial points x0. This method can be used when the equations governing the system evolution are known. It should be noted, however, that these equations are usually unknown for experimental data. To compute the Lyapunov spectrum numerically, one can use another approach generalizing Benettin's algorithm. In general, it is necessary to follow a few trajectories of perturbed points instead of only one, fundamental trajectory (the number of perturbed trajectories is equal to the dimension of the phase space). For this purpose, a numerical approach based on the derivation of the dynamic equations in variations can be used [27]. Since the largest LE plays a crucial role in the evolution of all perturbed trajectories, it is necessary to carry out orthogonalization of the perturbation vectors on each step of the algorithm. In what follows, a procedure of numerical estimation of the Lyapunov spectrum of a dynamical system is briefly described. To simplify, the considerations are limited to 3D systems. Let r0 stand for a point of the chaotic attractor and ε be a fixed positive number, small in comparison to the linear dimensions of the attractor. The points x0, y0 and z0 are chosen so that the perturbation vectors ∆x0 = x0 − r0, ∆y0 = y0 − r0, ∆z0 = z0 − r0 have the length ε and are mutually orthogonal. After a certain small time interval T, the points r0, x0, y0 and z0 are transformed into the points r1, x1, y1 and z1, respectively. Then, new perturbation vectors ∆x1 = x1 − r1, ∆y1 = y1 − r1, ∆z1 = z1 − r1 are considered. The orthogonalization of these vectors using the well-known (in linear algebra) Gram-Schmidt method is carried out. After this step, the obtained perturbation vectors ∆x′1, ∆y′1, ∆z′1 are mutually orthogonal; their lengths, compared with ε, carry the information about the local stretching rates. Then, the renormalization of the perturbation vectors is carried out to bring their lengths back to the magnitude ε: ∆x̃1 = ε ∆x′1/||∆x′1||, ∆ỹ1 = ε ∆y′1/||∆y′1||, ∆z̃1 = ε ∆z′1/||∆z′1||. (10) We take the following perturbed points: x̃1 = r1 + ∆x̃1, ỹ1 = r1 + ∆ỹ1, z̃1 = r1 + ∆z̃1. (11) Next, the process is repeated, i.e., instead of the points r0, x0, y0 and z0, the points r1, x̃1, ỹ1 and z̃1 are taken into account, respectively. Repeating the so far described procedure M times and denoting by ∆x′i, ∆y′i, ∆z′i the orthogonalized (but not yet renormalized) perturbation vectors obtained at the i-th step, one finds the sums S1 = Σ_{i=1..M} ln(||∆x′i||/ε), S2 = Σ_{i=1..M} ln(||∆y′i||/ε), S3 = Σ_{i=1..M} ln(||∆z′i||/ε). (12) Then, the spectrum Λ = {λ1, λ2, λ3} of LEs can be found by the following formulas: λ1 = S1/(MT), λ2 = S2/(MT), λ3 = S3/(MT). (13) In this method, the choice of the time interval T is crucial. If one takes a too large time interval T, then all perturbed trajectories become inclined in the direction corresponding to the maximum LE, and hence the obtained results are not reliable.
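A compact numerical sketch of this procedure is given below (it is not taken from the paper; the Hénon map, the value of ε and the iteration counts are illustrative assumptions, and the map itself plays the role of the time-T evolution, so T equals one iteration). It evolves a reference point together with two perturbed points, re-orthogonalizes the perturbation vectors by Gram-Schmidt after every step, and averages the logarithmic growth factors:

```python
import numpy as np

def henon(v, a=1.4, b=0.3):
    """One iteration of the Henon map."""
    x, y = v
    return np.array([1.0 - a * x**2 + y, b * x])

def benettin_spectrum(steps=20000, eps=1e-7, transient=1000):
    """Benettin-style estimate of the two Lyapunov exponents of the Henon map:
    evolve a reference point and two perturbed points, re-orthogonalise the
    perturbation vectors (Gram-Schmidt) after every iteration, and average the
    logarithmic growth factors."""
    r = np.array([0.1, 0.1])
    for _ in range(transient):                       # relax onto the attractor
        r = henon(r)
    d = [np.array([eps, 0.0]), np.array([0.0, eps])] # orthogonal perturbations
    sums = np.zeros(2)
    for _ in range(steps):
        r_new = henon(r)
        d_new = [henon(r + dk) - r_new for dk in d]  # evolved perturbations
        # Gram-Schmidt orthogonalisation of the evolved perturbation vectors
        u0 = d_new[0]
        u1 = d_new[1] - (d_new[1] @ u0) / (u0 @ u0) * u0
        for k, u in enumerate((u0, u1)):
            sums[k] += np.log(np.linalg.norm(u) / eps)
        # renormalise back to length eps for the next step
        d = [eps * u0 / np.linalg.norm(u0), eps * u1 / np.linalg.norm(u1)]
        r = r_new
    return sums / steps                              # T = 1 iteration per step

print(benettin_spectrum())
```

For the standard parameters a = 1.4 and b = 0.3, the two exponents should come out close to 0.42 and −1.62; their sum equals ln b, as expected for this dissipative map.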
Wolf Method In Reference [1], a novel algorithm to find non-negative Lyapunov exponents by using a time series has been proposed. It has been illustrated that the Lyapunov exponents are associated with either the exponential divergence or convergence of neighbouring orbits in the considered phase space. In its full-spectrum form, the method is applicable only when the analytical governing equations are known, and it is based on tracing the long-time evolution of a small volume element of the attractor. We defined a Lyapunov exponent and a spectrum of Lyapunov exponents, and then illustrated how the system dynamics depends on the number of exponents with different signs in the spectrum. Our approach included the reconstruction of an attractor and the investigation of orbital divergence on the smallest possible length scales using the approximate Gram-Schmidt orthogonalization procedure in the reconstructed phase space. In order to estimate the largest Lyapunov exponent, a long trace of the time evolution of a chosen pair of neighbouring orbits is carried out. Note that particular attention should be paid here, since the reconstructed attractor may contain points belonging to different attractors. Two versions of the method are proposed. The first one uses the so-called fixed evolution time, where the time interval associated with the replacement of the points is fixed. The main idea of the proposed method is that the largest Lyapunov exponent is computed based on one time series; it is used when the equations describing the system evolution are unknown and when it is impossible to measure all the remaining phase coordinates. Consider a time series x(t), t = 1, . . . , N of one coordinate of a chaotic process measured at equal time intervals. The method of mutual information allows one to define the time delay τ, whereas the method of false neighbors yields the dimension of the embedding space m. As a result of the reconstruction, one gets a set of points of the space R^m, x_i = (x(i), x(i − τ), . . . , x(i − (m − 1)τ)), (14) where i = ((m − 1)τ + 1), . . . , N. We take a point from the sequence (14) and denote it by x0. In the sequence (14), one can find a point x̃0 for which the relation ||x̃0 − x0|| = ε0 < ε holds, where ε is a fixed quantity, essentially smaller than the size of the reconstructed attractor. It is required that the points x0 and x̃0 are separated in time. Then, the time evolution of these points is observed on the reconstructed attractor until the distance between them reaches ε_max. The new points are denoted by x1 and x̃1, the distance between them by ε′0, and the associated interval of time evolution by T1. After that, we again search the sequence (14) to find a point x̂1 located close to x1, such that ||x̂1 − x1|| = ε1 < ε holds. The vectors x̂1 − x1 and x̃1 − x1 should have, as far as possible, the same direction. Then, the procedure is repeated for the points x1 and x̂1. By repeating the above procedure M times, the largest Lyapunov exponent is estimated as λ1 = [Σ_{k=1..M} ln(ε′_{k−1}/ε_{k−1})] / [Σ_{k=1..M} T_k]. (15) This method has been employed in the present research to test the accuracy of results by using the classical and known spectra of the Lyapunov exponents of the Hénon map, the Rössler equations, the chaos and hyperchaos exhibited by the Lorenz system, and the Mackey-Glass equation [28]. In addition, it has also been employed to study the Belousov-Zhabotinsky reaction [29] and the Couette-Taylor flow [30].
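The following sketch (not from the paper) implements a deliberately simplified variant of this time-series procedure: whenever the separation exceeds ε_max, the neighbour is simply replaced by the nearest available point, without the orientation-preserving replacement step described above, and the embedding parameters, thresholds and the logistic-map test signal are illustrative assumptions only. The resulting number should therefore be treated as a rough estimate:

```python
import numpy as np

def embed(x, m, tau):
    """Delay embedding of a scalar series into R^m (cf. Equation (14))."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def wolf_lle_simplified(series, m=2, tau=1, eps_max=0.05, theiler=20, dt=1.0):
    """Largest-LE estimate from one time series: follow a reference point and a
    close, temporally separated neighbour until their separation exceeds
    eps_max, then pick a new neighbour and repeat; lambda_1 is the accumulated
    log-growth divided by the total evolution time."""
    pts = embed(series, m, tau)
    n = len(pts)
    log_growth, total_time, i = 0.0, 0.0, 0
    while i < n - 2:
        d = np.linalg.norm(pts - pts[i], axis=1)
        d[max(0, i - theiler):i + theiler + 1] = np.inf  # temporal exclusion
        d[-1] = np.inf                        # the neighbour needs a successor
        j = int(np.argmin(d))
        d0 = d[j]
        steps = 0
        while i + steps < n - 1 and j + steps < n - 1:
            steps += 1
            if np.linalg.norm(pts[i + steps] - pts[j + steps]) > eps_max:
                break
        d1 = np.linalg.norm(pts[i + steps] - pts[j + steps])
        if d0 > 0 and d1 > 0:
            log_growth += np.log(d1 / d0)
            total_time += steps * dt
        i += steps
    return log_growth / total_time if total_time > 0 else float("nan")

# Test signal: the logistic map at r = 4, for which lambda_1 = ln 2 ~ 0.693.
x = np.empty(20000)
x[0] = 0.3141
for k in range(1, len(x)):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
print(wolf_lle_simplified(x))
```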
Wolf et al. [1] have pointed out certain restrictions on the choice of the embedding dimension and of the time required for the attractor reconstruction in order to achieve the most accurate estimates of the Lyapunov exponents. Using the Rössler attractor [16] and the Belousov-Zhabotinsky reaction [29], the authors have demonstrated the effects of the time delay used during the attractor reconstruction, of the evolution time of the system between replacement steps, and of the maximum length of the replacement vector and the minimum length of the exchange vector on the values of the estimated largest Lyapunov exponent. Furthermore, it has been shown that variation (between 0.5 and 1.5) of the system evolution time leads to reliable estimates for the three studied chaotic attractors. Also, some data requirements that make it possible to obtain the most accurate estimate of the Lyapunov exponent, such as the use of small length-scale data, as well as some restrictions on the presence of noisy perturbations (static and dynamic) in the data, have been discussed. The proposed algorithms can be used to detect chaos as well as to compute its parameters, also for experimental data with a few positive exponents. Furthermore, numerical studies have presented the topological complexity of chaos (the Lorenz attractor) and have shown that deterministic chaos can be distinguished from white noise (the Belousov-Zhabotinsky reaction).

Rosenstein Method Although this method is simple to realize in comparison to the previous ones and is characterized by high computational speed, it does not directly yield λ_1, but rather the function y(i) = (1/Δt) ⟨ln d_j(i)⟩, where d_j(i) denotes the distance between the j-th pair of nearest neighbors after i time steps, x_j is a given point, x̃_j denotes its neighbor, and ⟨·⟩ is the average over all pairs j. The algorithm is based on the relationship between d_j and the Lyapunov exponents: d_j(i) ≈ e^{λ_1 (iΔt)}. The largest Lyapunov exponent is computed by estimating the slope of the most linear part of this function. It should be mentioned, however, that finding this linear part is not an easy task.

Kantz Method The algorithm proposed by Kantz [5] computes the LLE by searching for all neighbors in the vicinity of the reference trajectory and estimates the average distance between the neighbors and the reference trajectory as a function of time (or of a relative time multiplied by the data sampling frequency). The algorithm is based on the following formula: S(τ) = (1/T) Σ_{t=1}^{T} ln( (1/|U_t|) Σ_{x_i ∈ U_t} |x_{t+τ} − x_{i+τ}| ), where x_t stands for an arbitrary signal point; U_t is a neighborhood of x_t; x_i is a neighbor of x_t; τ is the relative time multiplied by the sampling frequency; T is the sample size; and S(τ) is the stretching factor, which in the region of linear growth indicates a curve whose slope is equal to the LE, i.e., e^{S(τ)} ∝ e^{λτ}. However, the assumption of linear growth introduces new errors. Although the method is useful and accurate for systems with known LEs, the choice of the parameters and of the region where the mentioned linear growth occurs is, in practice, arbitrary. The method yields correct results if the value of the Lyapunov exponent is known a priori, so that the region with slope equal to that value can be chosen.
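The Rosenstein-type curve y(i) can be produced in a few lines; the slope of its early, roughly linear part is the LLE estimate. The routine below is a minimal illustration on a logistic-map series; the embedding parameters, the Theiler-type exclusion window, and the fitting range are illustrative assumptions. The Kantz stretching factor S(τ) differs mainly in averaging over a whole neighborhood U_t instead of a single nearest neighbor.

import numpy as np

def delay_embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def divergence_curve(x, m=2, tau=1, dt=1.0, max_steps=10, theiler=50):
    Y = delay_embed(x, m, tau)
    n = len(Y) - max_steps
    logs = np.zeros(max_steps)
    counts = np.zeros(max_steps)
    for j in range(n):
        d = np.linalg.norm(Y[:n] - Y[j], axis=1)
        d[max(0, j - theiler): j + theiler] = np.inf   # exclude temporally close points
        nb = np.argmin(d)                              # nearest neighbor of point j
        for i in range(max_steps):
            dist = np.linalg.norm(Y[j + i] - Y[nb + i])
            if dist > 0:
                logs[i] += np.log(dist)
                counts[i] += 1
    return logs / counts / dt                          # y(i) = <ln d_j(i)> / dt

if __name__ == "__main__":
    x = np.empty(4000); x[0] = 0.4
    for k in range(1, len(x)):
        x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
    y = divergence_curve(x)
    # slope of the early, roughly linear part approximates the LLE (ln 2 for this map)
    slope = np.polyfit(np.arange(3), y[:3], 1)[0]
    print(slope)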
Computation of LLE Based on Synchronization of Nonnegative Feedback In reference [7], a method of LLE computation based on the synchronization of coupled identical systems has been proposed. A k-dimensional discrete system (18) has been considered, where y ∈ R^k, i ∈ (1, 2, . . . , k), and a supplemental (coupled) system (19) has been proposed, where x, y, Δy ∈ R^k. The evolution of the k-dimensional system is governed by k LEs. Consequently, synchronization of the perturbed and nonperturbed systems (19) is guaranteed by the inequality (20), where λ_max stands for the LLE of the studied system (18). Figure 1 shows the synchronization between the perturbed (first equation of (19)) and nonperturbed (second equation of (19)) systems for a logistic map. The synchronization starts at a coupling strength p equal to λ, and this value represents the largest Lyapunov exponent of the system. In reference [8], systems with excitations have been studied, and the authors have proposed another way of coupling identical systems. The application of this approach is limited to systems with known equations of evolution, and the way of introducing the coupling of two identical systems depends on the type of the considered system.
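As a minimal illustration of the idea, the sketch below couples two identical Lorenz systems through a diagonal negative feedback, dy/dt = f(y) + p (x − y). For this particular coupling, every conditional exponent of the response is shifted by −p, so the transverse exponent equals λ_max − p and the smallest p at which the response synchronizes approximates the LLE. The coupling form, the test system, and all numerical parameters are illustrative assumptions and are not taken from references [7,8].

import numpy as np

def lorenz(v, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def rk4(f, v, dt):
    k1 = f(v); k2 = f(v + 0.5 * dt * k1); k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def attractor_point(dt=1e-3, n_transient=20000):
    x = np.array([1.0, 1.0, 1.0])
    for _ in range(n_transient):
        x = rk4(lorenz, x, dt)
    return x

def synchronizes(p, x0, dt=1e-3, t_total=100.0, tol=1e-6):
    x = x0.copy()
    y = x + 1e-3                                   # small initial mismatch of the response
    n = int(t_total / dt)
    tail = []
    for k in range(n):
        x_new = rk4(lorenz, x, dt)
        y = rk4(lambda v: lorenz(v) + p * (x - v), y, dt)
        x = x_new
        if k > int(0.9 * n):                       # judge synchronization from the final 10%
            tail.append(np.linalg.norm(x - y))
    return np.mean(tail) < tol

if __name__ == "__main__":
    x0 = attractor_point()
    lo, hi = 0.0, 3.0
    for _ in range(12):                            # bisection over the coupling strength p
        mid = 0.5 * (lo + hi)
        if synchronizes(mid, x0):
            hi = mid
        else:
            lo = mid
    print(hi)   # expected to lie near or slightly above the Lorenz LLE (about 0.9)

A bisection over p is used because synchronization versus non-synchronization is monotone in the coupling strength for this scheme; note also that the simple "threshold equals λ_max" reading holds only for this full-state diagonal coupling, and other coupling types change the relationship, which is the limitation noted later in the concluding remarks.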
Jacobi Method This method has been proposed in references [31,32]. The main idea is to use an algorithm, the scheme of which is illustrated in Figure 2. A sphere of small radius ε is taken. After a few iterations m, a certain operator T_m transforms this sphere into an ellipsoid having half-axes a_1, . . . , a_p. The sphere is stretched along the axes a_1, . . . , a_s > ε, where s is the number of positive LEs. For sufficiently small ε, the operator T_m is close to the sum of the shear operator and the linear operator A. The LEs are computed as averaged eigenvalues of the operator A over the whole attractor. A vector ς_j is chosen, and a set ς_{k_i} (i = 1, . . . , N) of its neighborhood vectors is found. The following set of vectors is taken: y_i ≡ ς_{k_i} − ς_j, where ‖y_i‖ ≤ ε. After m successive iterations, the operator T_m transforms the vector ς_j into ς_{j+m} and the vector ς_{k_i} into ς_{k_i+m}. Eventually, the vectors y_i are transformed into the evolved vectors y_i^(m) ≡ ς_{k_i+m} − ς_{j+m}. Assuming that the radius ε is sufficiently small, one can introduce the operator A_j as the linear map satisfying y_i^(m) ≈ A_j y_i. The operator A_j describes the system in variations. To estimate the operator A_j, the least-squares method can be employed, i.e., A_j minimizes Σ_i ‖A_j y_i − y_i^(m)‖². This yields a system of equations of dimension n × n, V = A_j C, where V and C are matrices of dimension n × n built from the components of the vectors: y_i^k stands for the k-th component of the vector y_i, and y_i^(m)k is the k-th component of the evolved vector y_i^(m). If A_j is a solution of these equations, then the LEs can be found by averaging, over the attractor, the logarithms of the growth factors ‖A_j e_s‖ of a set of basis vectors {e_s} in the tangent space at ς_j. The algorithm can be realized in a way similar to the computation of LEs of ODEs given analytically. Let us choose an arbitrary basis {e_s} and then follow the changes in the lengths of the vectors A_j e_s. As the vectors A_j e_s grow and their orientations change, it is necessary to perform their orthogonalization and normalization by using, for example, the Gram-Schmidt procedure. The procedure is then repeated for the new basis. The mentioned method allows one to estimate a spectrum of nonnegative LEs. However, it has a serious disadvantage: it is highly sensitive to noise and errors.

Modification of the Neural Network Method We have proposed a novel counterpart method to compute LEs based on a modification of the neural network method (see Figure 3). The single-layer feed-forward neural network presented in Figure 3 has multiple input neurons, a layer of hidden neurons, and one output neuron. The following notation is employed: a_ij is the weight of the connection between the i-th input neuron and the j-th hidden neuron; b_i is the weight of the connection between the i-th hidden neuron and the output neuron. To realize the neural network algorithm, the following criteria were taken into account: (i) the network is sensitive to the input information (the information is given in the form of real numbers); (ii) the network is self-organizing, i.e., it yields the output space of solutions based only on the inputs; (iii) the neural network is a feed-forward network (all connections are directed from input neurons to output neurons); (iv) owing to the tuning of the synapses, the network exhibits dynamic couplings (in the learning process, the tuning of the synaptic couplings takes place, dW/dt ≠ 0, where W stands for the weighting coefficients of the network). The hidden layer of neurons contains the hyperbolic tangent, which plays the role of an activation function (Figure 4). The derivative of the hyperbolic tangent is described by a quadratic function, as in the case of a logistic function. However, contrary to the logistic function, the space of values of the hyperbolic tangent falls within the interval (−1, 1). This results in faster convergence in comparison to the standard logistic function.
The prognosis x̂_k of a scalar time series x_k is made by employing the following formula: x̂_k = Σ_{i=1}^{n} b_i tanh( a_{i0} + Σ_{j=1}^{d} a_{ij} x_{k−j} ), (22) where n stands for the number of neurons, d is the number of searched LEs (the embedding dimension), a_ij is the n × (d + 1) matrix of coefficients, and b_i is a vector of length n. The matrix a_ij contains the coupling forces with respect to the network input, the vector b_i is used to control the contribution of each neuron to the network output, whereas the column a_i0 is used for relatively simple learning based on data with a nonzero averaged value. The weights a and b are chosen in a probabilistic way, and the dimension of the searched solution is decreased in the process of learning. The associated Gaussian is chosen so as to have the initial standard deviation 2^{−j}, centered with respect to zero, in order to promote the most recent time delays (small values of j) in the phase space. The coupling forces are chosen so as to minimize the averaged one-step mean square error of the forecast, ⟨(x_k − x̂_k)²⟩. During the training of the network, the sensitivity Ŝ(j) of the output is defined by computing the partial derivatives of the output with respect to each time step x_{k−j}, averaged over the points of the time series. In the case of the network given by (22), the partial derivatives have the following form: ∂x̂_k/∂x_{k−j} = Σ_{i=1}^{n} b_i a_{ij} (1 − tanh²( a_{i0} + Σ_{j'=1}^{d} a_{ij'} x_{k−j'} )). The largest relevant value of j yields the optimal embedding dimension, and the key role here is played by Ŝ(j), as in the false nearest neighbors method. The individual values of Ŝ(j) yield a quantitative estimate of the importance of each time step, analogous to the terms of the autocorrelation function or to the coefficients of the associated linear model. The weights of the trained neural network are substituted into the matrix of solutions, and the input data are used to define the initial state. The computation of the spectrum is realized by employing the generalized Benettin algorithm based on the obtained system of equations.
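A compact way to see the mechanics of Equation (22) is to fit such a network to a scalar series and read the exponent off the Jacobian of the fitted map. The sketch below is a deliberately simplified illustration for the logistic map with d = 1: the input weights a_ij are drawn once at random and only the output weights b_i are obtained by linear least squares, which replaces the probabilistic learning described above; all sizes and seeds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# scalar time series from the logistic map (known LLE = ln 2)
N = 5000
x = np.empty(N); x[0] = 0.4
for k in range(1, N):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])

n, d = 20, 1                                   # hidden neurons, embedding dimension
a = rng.normal(scale=2.0, size=(n, d + 1))     # a[:, 0] is the bias a_i0

def hidden(u):                                 # u: samples of x_{k-1}
    return np.tanh(a[:, 0][None, :] + u[:, None] * a[:, 1][None, :])

# one-step least-squares fit of the output weights b:
#   x_k  ~  sum_i b_i tanh(a_i0 + a_i1 x_{k-1})
H = hidden(x[:-1])
b, *_ = np.linalg.lstsq(H, x[1:], rcond=None)

# Jacobian of the fitted one-dimensional map and the resulting LE estimate
def fitted_derivative(u):
    phi = np.tanh(a[:, 0][None, :] + u[:, None] * a[:, 1][None, :])
    return ((1.0 - phi ** 2) * a[:, 1][None, :]) @ b

lle = np.mean(np.log(np.abs(fitted_derivative(x[:-1]))))
print(lle)   # expected to be close to ln 2 (about 0.693) if the fit is accurate

For d > 1 the same idea applies, but the Jacobian becomes a d x d companion-type matrix and the spectrum is then obtained with the generalized Benettin/QR procedure, as stated in the text.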
Gauss Wavelets In the majority of engineering problems, Fourier analysis is insufficient, since it deals with the averaged spectrum of the whole studied vibration signal and presents only a general picture of the signal. On the contrary, wavelets play the role of a "microscope" which allows one to observe the spectrum at each time instant and to detect births/deaths of frequencies in time. A wavelet transform of a 1D signal is realized with respect to a basis, usually a soliton-like function with given properties. The basis is obtained by displacement and tension/compression of a function called a wavelet. In the present work, the Gauss wavelets, defined as derivatives of the Gauss function, were used. Higher-order derivatives have many zero moments, and hence they allow one to obtain information about higher-order features hidden in the investigated signal. The 8th-order Gauss wavelets, i.e., wavelets proportional to the eighth derivative of the Gauss function, ψ_8(t) ∝ d^8/dt^8 e^{−t²/2}, were employed.

Analysis of Classical Dynamical Systems by LEs and Gauss Wavelets In this section, simple classical systems (Figures 5-9) have been studied with emphasis put on a comparison of the LEs (Tables 1-5) obtained using the Wolf, Rosenstein, Kantz and neural network methods. The convergence of the mentioned methods, depending on the number of iteration steps, has been illustrated and discussed (Tables 6-10). The Benettin method has been used as a reference because for most systems there are no analytically calculated spectra of Lyapunov exponents; moreover, the Benettin method calculates the Lyapunov exponents based on the system equations.

Logistic Map A logistic map describes how the population changes with respect to time: X_{n+1} = R X_n (1 − X_n). Here, X_n takes values from 0 to 1 and represents the population in the n-th year, whereas X_0 denotes the initial population (in year 0); R is a positive parameter characterizing the increase in the population (the computations were carried out for R = 4). The first Lyapunov exponent and the Kaplan-Yorke dimension have been estimated by Sprott [33,34], who obtained λ_1 = 0.693147181 and the Kaplan-Yorke dimension 1.0. The power spectrum is noisy and it is not possible to distinguish a dominating frequency. A similar situation is exhibited by the Gauss wavelet spectrum, where a large set of frequencies is visible. The variability of the LLE increases for R > 3. As can be seen in Table 1, all computational methods were compared with the reference results of the Benettin method. Good coincidence was exhibited by the neural network method, the Rosenstein method, the Kantz method, and the method of synchronization. The Wolf method gave a decreased/increased value of the LLE in comparison to the reference value.
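The two spectral views referred to throughout this section, the time-averaged Fourier power spectrum and the time-frequency Gauss wavelet spectrum, can be produced as sketched below. The example assumes the PyWavelets package for the "gaus8" continuous wavelet; the test signal, the scales, and the plotting choices are illustrative assumptions.

import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 100.0                                    # sampling frequency
t = np.arange(0, 40, 1.0 / fs)
# test signal: a steady 1 Hz component plus a 3 Hz component that appears at t = 20 s
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t) * (t > 20)

# Fourier power spectrum (time-averaged view)
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Gauss wavelet spectrum (time-frequency view): births/deaths of frequencies in time
scales = np.arange(1, 128)
coef, wav_freqs = pywt.cwt(signal, scales, "gaus8", sampling_period=1.0 / fs)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.semilogy(freqs, power)
ax1.set(xlabel="frequency [Hz]", ylabel="power", xlim=(0, 10))
ax2.pcolormesh(t, wav_freqs, np.abs(coef), shading="auto")
ax2.set(xlabel="time [s]", ylabel="frequency [Hz]", ylim=(0, 10))
plt.tight_layout()
plt.show()

For a chaotic series such as the logistic map, the power spectrum appears broadband and noisy, and the wavelet map shows energy spread over many scales, in line with the observations quoted above.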
Hénon Map The Hénon map takes a point (X_n, Y_n) and maps it into a new point according to the following formulas: X_{n+1} = 1 − a X_n² + Y_n, Y_{n+1} = b X_n. (28) The following parameters are fixed for the numerical experiments: a = 1.4, b = 0.3. Since Equations (28) do not correspond to a real object, the parameters are simply replaced with these fixed values. Sprott [34] has computed the Lyapunov spectrum and the Kaplan-Yorke dimension of the map using the Benettin method [27] by solving (28). He has obtained the following LEs: λ_1 = 0.419217, λ_2 = −1.623190, and the Kaplan-Yorke dimension 1.258267. Similarly to the logistic map, the power spectrum exhibits a uniform, noisy shape; however, one can distinguish a dominating frequency (ω_1 ≈ 0.45). This frequency is also visible on the wavelet spectrum as the region of the largest amplitudes along the whole signal (brighter regions in the graph). Plots of the change in the LLE correlate with the bifurcation diagrams for the same intervals of changes in the parameters a and b. The variability of the LLE increases with the increase in both control parameters. According to the graphs of LEs for the given set of control parameters, the system mainly remains in a periodic regime, but it exhibits chaotic dynamics for large values of the control parameters. As can be seen from the results shown in Table 2, the majority of the employed computational methods yielded good results. However, the most accurate results were obtained by the neural network method (for the whole spectrum of LEs) and by the Rosenstein method, the Kantz method, and the method of synchronization (in the case of the LLE). The Wolf method gave decreased estimated values of the LLEs. Table 4. Lyapunov exponent spectra and LLEs computed by different methods (Rössler attractor). Table 6. Fourier power spectra (a) and Gauss wavelet spectra (b) obtained for Δt = 1, 2 and the LLEs computed by different methods (logistic map). Table 7. Fourier power spectra (a) and Gauss wavelet spectra (b) obtained for Δt = 1, 2 and the LLEs computed by different methods (Hénon map).

Hyperchaotic Generalised Hénon Map To obtain the hyperchaotic generalised Hénon map, one takes a point (X_n, Y_n, Z_n) and maps it into a new point according to the generalised Hénon equations. The computations were carried out for the following fixed parameters: a = 3.4, b = 0.1. The Lyapunov spectrum reported in reference [14] is: 0.276; 0.257; 4.040. One can distinguish a large number of frequencies in the power spectrum. The frequencies with the largest amplitudes are located in the interval [0.15; 0.3] (frequencies ω_1 − ω_4), but the remaining part of the spectrum is noisy. This interval corresponds to the brightest region of the Gauss wavelet spectrum, which is correlated with the values of the power spectrum. The changes in the LLE coincide with the bifurcation diagrams constructed for the same intervals of changes in the control parameters a and b. The variability of the LLE increases with the increase in the control parameters.
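Since the Hénon Jacobian is available in closed form, the spectrum quoted above can be reproduced with a few lines of the Benettin/QR procedure applied to the tangent map. The sketch below uses the parameters a = 1.4 and b = 0.3 from the text; the iteration counts and the initial point are illustrative assumptions.

import numpy as np

def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

def henon_jacobian(x, y, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def henon_spectrum(n_iter=100000, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    for _ in range(1000):                     # discard a transient
        x, y = henon(x, y, a, b)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        J = henon_jacobian(x, y, a, b)
        Q, R = np.linalg.qr(J @ Q)            # Gram-Schmidt of the evolved tangent basis
        sums += np.log(np.abs(np.diag(R)))
        x, y = henon(x, y, a, b)
    return sums / n_iter

if __name__ == "__main__":
    print(henon_spectrum())   # close to (0.419, -1.623), as quoted in the text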
As in the case of the Hénon map, the chart of LEs for the selected control parameters exhibits, for the majority of the studied parameters, periodic dynamics. It transits into chaos for a ≈ 1.4 and is almost suddenly shifted into hyperchaos (two positive LEs). Good results were obtained by the Benettin, Rosenstein and synchronization methods (differences from the third decimal place on). The neural network method yielded slightly increased estimates of the first two LEs, whereas the third LE was estimated almost exactly. The Kantz method gave a decreased result in comparison to the reference data. The Wolf method resulted in the largest error.

Rössler Attractor The following Rössler system of ODEs was investigated: dx/dt = −y − z, dy/dt = x + a y, dz/dt = b + z (x − c), and the computations were carried out for the following fixed parameters: a = b = 0.2 and c = 5.7. The power spectrum contains the fundamental frequency ω_1, which is accompanied by damped bursts (frequencies ω_2 − ω_10). Over the whole time interval, the Gauss wavelet spectrum exhibits the brightest region at the fundamental frequency, with darker peaks going to zero; thus, the picture is analogous to the power spectrum. Contrary to the studied maps, the bifurcation diagrams have a more complex structure; however, there is still a correlation with the changes in the LLE for the corresponding control parameters. The parameter b has the smallest influence on the change in the LLE. The graphs of the LEs also exhibit a more complex structure. The borders between different kinds of vibrations have complex forms, which illustrates the increase in the system complexity. Aside from the chaos and hyperchaos zones, there are drops indicating three positive LEs. Amabili et al. [35] have suggested calling all chaotic oscillations for which at least two positive Lyapunov exponents exist hyperchaotic. As far as Table 4 is concerned, the best results were yielded by the Benettin and Rosenstein methods. The method of neural networks gave very good results in the case of the estimates of the first two LEs, but underestimated the third exponent. The Wolf method yielded a smaller value of the first exponent compared to the reference data. The most underestimated results were given by the Kantz method. The numerical experiments carried out showed that changing the sampling frequency did not affect the power spectrum or the wavelet spectrum. This was also validated by the results obtained by the Benettin, neural network, and Rosenstein methods, which yielded results very close to the original ones. The Kantz method gave underestimated results for a different sampling frequency, correlating with the results obtained for the standard sample size.
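For ODE systems such as the Rössler attractor, the spectrum can be computed by integrating the equations in variations alongside the trajectory and periodically re-orthonormalizing the tangent vectors, in the spirit of the generalized Benettin approach described earlier. The sketch below uses the parameters a = b = 0.2 and c = 5.7 from the text; the integrator, step sizes, and iteration counts are illustrative assumptions.

import numpy as np

A, B, C = 0.2, 0.2, 5.7

def rossler(v):
    x, y, z = v
    return np.array([-y - z, x + A * y, B + z * (x - C)])

def jacobian(v):
    x, y, z = v
    return np.array([[0.0, -1.0, -1.0],
                     [1.0, A, 0.0],
                     [z, 0.0, x - C]])

def rhs(state):
    # state packs the trajectory point v and three tangent vectors (columns of Q)
    v, Q = state[:3], state[3:].reshape(3, 3)
    return np.concatenate([rossler(v), (jacobian(v) @ Q).ravel()])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1); k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rossler_spectrum(dt=0.01, renorm_every=100, n_renorm=5000):
    v = np.array([1.0, 1.0, 1.0])
    for _ in range(100000):                       # discard a transient
        v = rk4(rossler, v, dt)
    state = np.concatenate([v, np.eye(3).ravel()])
    sums = np.zeros(3)
    for _ in range(n_renorm):
        for _ in range(renorm_every):
            state = rk4(rhs, state, dt)
        Q, R = np.linalg.qr(state[3:].reshape(3, 3))   # re-orthonormalize tangent vectors
        sums += np.log(np.abs(np.diag(R)))
        state[3:] = Q.ravel()
    return sums / (n_renorm * renorm_every * dt)

if __name__ == "__main__":
    print(rossler_spectrum())   # roughly (0.07, 0.0, -5.4) for these parameters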
Lorenz Attractor The system is described by the following ODEs: dx/dt = σ (y − x), dy/dt = x (r − z) − y, dz/dt = x y − b z, where r stands for the normalized Rayleigh number (a nondimensional number characterizing the behavior of the fluid under a temperature gradient), r = Ra/Ra_c with Ra = g β ΔT L³ / (ν χ) and Ra_c its critical value. In the above, the following notation is used: g is the gravity of Earth; L is the characteristic dimension of the fluid space; ΔT is the temperature difference between the fluid walls; ν is the kinematic viscosity of the fluid; χ is the thermal conductivity of the fluid; β is the coefficient of thermal expansion of the fluid; σ is the Prandtl number (taking into account the heat source properties), governed by the equation σ = ν/α, where ν = η/ρ is the kinematic viscosity, η is the dynamic viscosity, ρ is the density, α = ℵ/(ρ C_p) is the temperature transfer coefficient, ℵ is the heat transfer coefficient, and C_p is the specific heat capacity under constant pressure; finally, b carries information about the geometry of the convective cell. The power spectrum of the attractor decreases uniformly when approaching a finite frequency, and there are no frequencies with a strongly dominating amplitude. The latter observation has also been verified by the Gauss wavelet spectrum. The bifurcation diagrams, similar to those for the Rössler system, exhibit a complex structure, but the correlation with the changes in the LLE is conserved. The richest/poorest dynamics of the LLE is obtained for the changing parameter r/σ, respectively. Based on the reported graphs of the LEs, one can conclude that the system dynamics is fully chaotic; there are also narrow windows of hyperchaotic dynamics. A comparison of the results reported in Table 5 with the original results exhibits an excellent coincidence of the Benettin method (original results) and the neural network method (+4.79%). The Wolf and Rosenstein methods yielded underestimated results for the LLE value. The worst estimation has been obtained by the Kantz method. Changing the sampling frequency did not change the Fourier and wavelet power spectra. This was also validated by the Benettin and Rosenstein methods, which yield results very close to the original values in spite of the arbitrary choice of the sampling frequency.

Concluding Remarks The analysis of the dynamics of the studied classical systems by different methods leads to the conclusion that the most promising and useful is the modified method of neural networks [9,10]. It gives excellent convergence to the original results and, as the only one (besides the Benettin method), allows one to compute the spectrum of all Lyapunov exponents. In addition, very good results were obtained by the Rosenstein and Kantz methods for all studied systems; however, these methods can be used to estimate only the largest Lyapunov exponent. As far as convergence is considered, the Wolf method yielded either over- or underestimated values of the LEs. The method of synchronization worked reasonably well for the maps, but it was not useful in studying the differential equations (the Rössler or Lorenz systems); these systems require the use of another type of coupling, which is a drawback of the method. It should be emphasized that this part of the paper serves as a preliminary study of a more complicated nonlinear continuous structural system, which is studied in Part 2. The analysis carried out of the works devoted to feasible methods for the computation of Lyapunov exponents shows that there is no universal, verified, and general method to compute the exact (in the sense of numerics) values of the Lyapunov exponents. This observation leads to the conclusion that there is a need to employ qualitatively different methods to check the reliability of "true chaotic results". Furthermore, the analysis carried out here is a helpful tool for studying systems of infinite dimension; such an analysis is the subject of the second part of the paper.
11,278.4
2018-03-01T00:00:00.000
[ "Mathematics", "Physics" ]
Focal areas of increased lipid concentration on the coating of microbubbles during short tone-burst ultrasound insonification Acoustic behavior of lipid-coated microbubbles has been widely studied, which has led to several numerical microbubble dynamics models that incorporate lipid coating behavior, such as buckling and rupture. In this study we investigated the relationship between microbubble acoustic and lipid coating behavior on a nanosecond scale by using fluorescently labeled lipids. It is hypothesized that a local increased concentration of lipids, appearing as a focal area of increased fluorescence intensity (hot spot) in the fluorescence image, is related to buckling and folding of the lipid layer thereby highly influencing the microbubble acoustic behavior. To test this hypothesis, the lipid microbubble coating was fluorescently labeled. The vibration of the microbubble (n = 177; 2.3–10.3 μm in diameter) upon insonification at an ultrasound frequency of 0.5 or 1 MHz at 25 or 50 kPa acoustic pressure was recorded with the UPMC Cam, an ultra-high-speed fluorescence camera, operated at ~4–5 million frames per second. During short tone-burst excitation, hot spots on the microbubble coating occurred at relative vibration amplitudes > 0.3 irrespective of frequency and acoustic pressure. Around resonance, the majority of the microbubbles formed hot spots. When the microbubble also deflated acoustically, hot spot formation was likely irreversible. Although compression-only behavior (defined as substantially more microbubble compression than expansion) and subharmonic responses were observed in those microbubbles that formed hot spots, both phenomena were also found in microbubbles that did not form hot spots during insonification. In conclusion, this study reveals hot spot formation of the lipid monolayer in the microbubble’s compression phase. However, our experimental results show that there is no direct relationship between hot spot formation of the lipid coating and microbubble acoustic behaviors such as compression-only and the generation of a subharmonic response. Hence, our hypothesis that hot spots are related to acoustic buckling could not be verified. Introduction Ultrasound contrast agents (UCAs) consist of coated gas microbubbles (1-10 μm in diameter) dispersed in an aqueous suspension. These blood pool agents aid in the diagnosis of for example liver [1] and kidney lesions [2] and in left ventricular visualization [3]. In the blood pool, uncoated microbubbles would dissolve in less than 0.3 s [4] which is too short a lifetime for diagnostic imaging; a coating is therefore essential for increased stability and thus longevity of the microbubbles. The coating reduces the surface tension and the corresponding capillary pressure that drives the gas out of the microbubble core into the surrounding fluid. In addition, it forms a barrier that reduces gas diffusion [5][6][7]. For medical purposes, ultrasound frequencies ! 0.5 MHz are typically used at varying acoustic pressures [8]. When exposed to an ultrasound wave, the gas core of the microbubble responds to the pressure change of the ultrasound by compression and expansion, which results in the vibration of the microbubble [5,7,9]. The vibration provides microbubble-specific nonlinear acoustic signals for contrast-enhanced ultrasound imaging (CEUS) [7,10] and can induce bio effects such as microbubble-mediated drug uptake [5,9,11]. 
In clinically used UCAs, the microbubble coating consists of albumin or lipids; the most prevalent coating consists of lipids [12]. Lipid-coated microbubbles can show various vibration regimes in an ultrasound field, which are characterized by the volumetric vibration dynamics and shape oscillations of the microbubble. These vibrations vary from gentle for imaging and drug uptake applications to violent for drug uptake and cell killing applications [5,9]. Microbubble behavior has been widely studied to improve CEUS and drug uptake, either by acoustic scattering and attenuation measurements [13][14][15][16] or by optical observations using ultra-high-speed cameras [17][18][19][20][21][22]. The optical ultra-high-speed cameras typically operate in a bright field imaging mode, thereby visualizing the diffraction shadow image of the gas core, but not the lipid coating, since the coating cannot be resolved at optical resolution. At large radial excursions the inertial oscillation of the gas core dominates the microbubble vibration, and therefore bright field imaging is sufficient [5]. However, at small radial excursions the behavior of the coating may dominate the vibration of the microbubble [23], which requires a more direct visualization of the coating. For lipid-coated microbubbles, the coating can be visualized by incorporating lipid dyes or fluorescent lipids in the microbubble coating. Borden et al. [24] incorporated the lipid dye DiI in their in-house produced microbubbles, which they insonified with repeated one-cycle pulses at a frequency of 2.25 MHz at 400 kPa peak negative pressure (P_). The fluorescence recordings (30 frames per second (fps)) in between the ultrasound pulses revealed bud formation on the lipid coating, characterized by a higher intensity spot of fluorescence signal, as well as formation of lipid strings and globular aggregates, all of which are considered to be a result of the collapse of the lipid monolayer coating [25][26][27]. These alterations of the lipid coating are likely related to the acoustically-induced microbubble deflation, as Borden et al. [24] measured a decrease in microbubble diameter. Microbubble deflation in the absence of ultrasound at a time scale of seconds has been shown to induce the formation of buckles, folds, and vesicles on the microbubble coating, which were visualized by means of the lipid dye DiI [28,29]. Luan et al. [30] observed bud formation on their in-house produced microbubbles (1 MHz, 255 kPa P_, 500 cycles) after incorporating the lipid dye DiI in the coating and subsequent fluorescence recordings at a frame rate of 150 kfps. This frame rate allowed visualization of bud formation as well as of movements of buds/lipid clusters along the coating interface during insonification. The temporal resolution of the recording, however, was still insufficient to completely resolve the fast microbubble dynamics. Although these studies have led to improved understanding of the lipid coating on microbubbles, real-time visualization of the coating during insonification is still lacking. At present it is unknown at which vibration amplitude the lipid monolayer collapses and whether this occurs during the compression or expansion phase. So far, only irreversible collapse of the monolayer has been observed for the microbubble coating [24,30], while numerical simulations of lipid monolayer dynamics also predict reversible collapse [26].
The relationship between specific microbubble behavior and lipid coating behavior on a molecular scale has also not been studied experimentally before. For example, compression-only behavior, i.e. when a microbubble compresses substantially more than it expands, as observed in optical bright field studies with a fast framing camera [31,32], can be simulated by the models of Marmottant [33], Doinikov [34], and Paul [35] and is hypothesized to be related to buckling of the lipid coating by the Marmottant model [31]. Buckling of the lipid coating, however, has not yet been verified and observed in optical studies with a fast framing camera. Real-time visualization of the lipid coating on a molecular scale during insonification requires a camera capable of recording fluorescence movies at ultra-high speed (~5 Mfps to record microbubble vibration in a 1 MHz ultrasound field) with high sensitivity and spatial resolution to capture enough signals from fluorophores on a nanoseconds time scale. The recently developed UPMC Cam, an ultra-high-speed camera capable of recording bright field and fluorescence movies at 25 Mfps, meets these requirements [36]. In this study we investigated the relationship between the collapse of the lipid coating of the microbubble and the phenomenon of buckling and compression-only behavior at a tens of nanoseconds time interval. For this purpose, lipid-coated microbubbles were fluorescently labeled by chemically conjugating fluorescent Oregon Green 488 dye to the lipids. We recorded the behavior of the fluorescently labeled microbubble coating while being insonified at an ultrasound frequency of 0.5 and 1 MHz with the UPMC Cam ultra-high-speed camera, operated at~4-5 Mfps. The behavior of the coating during insonification was then correlated to the acoustic response of the microbubble. It is hypothesized that a local increased concentration of lipids, appearing as a focal area of increased fluorescence intensity (hot spot) in the fluorescence image, is related to acoustic buckling. Type 2 fluorescent microbubbles were made by coupling succinimidyl ester of Oregon Green 488 carboxylic acid (ThermoFisher Scientific Inc.) to 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[amino(polyethylene glycol)-2000] (DSPE-PEG(2000)-NH 2 ; Avanti Polar Lipids). DSPE-PEG2000-NH 2 reacted with the 20% molar excess of the ester of Oregon Green 488 carboxylic acid in chloroform:DMSO solvent mixture (2:1 volume ratio) in the presence of 20% molar excess of diisopropyl ethylamine base (Sigma-Aldrich). Excess of nonbound dye and base were removed from the conjugate by repeated extractions with aqueous buffered saline, followed by extractions with water, to remove salt. Resulting conjugate was lyophilized for storage and reconstituted in chloroform. Fluorescent microbubbles were made by evaporating the chloroform under Argon gas, drying the lipid film under vacuum for 5 min, and adding DSPC in PBS, glycerol and propylene glycol as described above. The lipid solution in the 2 mL glass vial was sonicated for 5 min in a sonicator bath (20 kHz; Model 75D, VWR International, Radnor, PA, USA) before adding C 4 F 10 . After making the microbubbles in the Vial Shaker, the microbubbles were incubated at room temperature for 1 h in MilliQ, washed, incubated for 2 h in C 4 F 10 -saturated PBS, and washed as for the other microbubbles. All incubation steps after making the microbubbles were thus kept the same. 
Microbubble size distributions were measured on a Multisizer 3 Coulter Counter (Beckman Coulter, Indianapolis, IN, USA). A 50-μm aperture tube was used, allowing quantification of microbubble diameters between 0.8 and 18 μm using 300 channels. Measurements were performed in PBS as diluent (20 mL diluent volume; 100 μL analytic volume) and repeated three times to obtain the mean microbubble diameter, size distribution, and concentration. Ultra-high-speed fluorescence recordings After an OptiCell (Nunc, ThermoFisher Scientific; note that this product has been discontinued) was pretreated with 2% bovine serum albumin (Sigma) in PBS for 1 h at room temperature and washed three times with PBS, 5 μL of fluorescent microbubbles were injected in the OptiCell filled with PBS. This resulted in a final concentration of~1×10 5 microbubbles/mL in the OptiCell. During ultra-high-speed imaging, single microbubbles were insonified at room temperature at a frequency of 0.5 MHz (V318-SU, Olympus Scientific Solutions Americas, Waltham, Massachusetts, USA; microbubble type 2) or 1 MHz (A302S-SU-F1.63IN-PTF, Olympus; microbubble type 1 and 2) with a single tapered 10-cycle sine burst unless mentioned otherwise. The ultrasound signal was generated by an arbitrary waveform generator (Agilent 33210A; Keysight Technologies, Santa Rosa, California, USA) and amplified by an amplifier (model 100A250A; Amplifier Research, Souderton, PA, USA). The P_ varied between 25 and 300 kPa, as verified with a 200-μm capsule hydrophone (HGL-0200, Onda Corp, Sunnyvale, CA, USA). Fluorescence microscopic recordings were obtained with the UPMC Cam [36], an ultra-high-speed imaging camera at the University of Pittsburgh Medical Center, equipped with a 60× magnification lens (LUMPLFLN 60X/W, NA 1.0, depth of field 0.61 μm, Olympus Corp, Tokyo, Japan), and pulsed 488 nm laser system (Genesis MX488-5000, Coherent, Santa Clara, CA, USA) driven by a pulsed current power supply (AV-106B-B, Avtech Electrosystems, Ogdensbugh, NY, USA). The 5 W laser source was triggered from the camera system such that the single laser pulse covered all 128 frames, starting 5 μs before the first frame and ending 5 μs after the last frame. The laser beam size covered the whole field of view. Frame rates were~4-5 Mfps to study the acoustic behavior of the microbubble coating. The frame rate at 0.5 MHz insonification was lower (~4 Mfps) than that at 1.0 MHz insonification (~5 Mfps) to ensure that 10 acoustic cycles at both insonification frequencies were recorded. The resulting spatial resolution of the overall optical system was 0.173 μm/pixel. Analysis Microbubbles were initially included in the study if they had not been insonified previously and if they were 'single' microbubbles, i.e. the neighboring microbubbles were at least two microbubble diameters away. Recordings of microbubbles were excluded if experimental errors had occurred, such as the microbubble was out of focus, or if the recording did not cover the full 10 cycles of ultrasound. Custom-designed image analysis software [37] was used to obtain D-t curves from all fluorescence microbubble recordings. The original software was designed for bright field images and tracked the inner inflection point of the fluorescent coating, underestimating the size of the microbubble. By inverting the minimum cost algorithm of the analysis software, the outer inflection point of the fluorescent coating could also be tracked. 
The mean between the inner and outer contour of the coating was then taken to obtain the size of the fluorescence microbubble during the recording. To improve the sensitivity, the D-t curves were resampled digitally at a 10 times higher sampling rate, resulting in 1280 points per curve instead of the original 128 data points. The initial microbubble diameter, D 0 , was determined from the average signal before the microbubble started to vibrate. The average diameter after the ultrasound burst was used as the final diameter, D end . The ratio between the D end and D 0 was used to quantify the diameter change after a single ultrasound burst. The D-t curves derived from the ultra-high-speed fluorescence movies were analyzed for the degree of microbubble expansion and compression. The maximum diameter (D max ) and the minimum diameter (D min ) were determined from the part of the D-t curve during which the microbubble was vibrating. The oversampling enabled a more accurate determination of D max and D min , which would otherwise only be based on a few data points and therefore sensitive for outliers. Symmetric behavior was defined elsewhere [31] for E/C = 0.5-2, compression-only behavior for E/C < 0.5, and expansion-only behavior for E/C > 2, where E = (D max −D 0 )/ D 0 is the relative expansion, and C = (D 0 -D min )/ D 0 is the relative compression. An additional parameter that was calculated based on E and C is the relative vibration amplitude, defined as (E+C)/2. The D-t curves were transformed to the frequency domain using a Fast Fourier Transformation (FFT), since analysis in this domain provides specific information on the frequency content of the recorded signal. The maximum vibration amplitude in a frequency band of 300 kHz centered around the transmit frequency (f T ) was determined. Likewise, the maximum amplitudes of the FFT were determined in a 300 kHz frequency band centered at 1 2 f T for the subharmonic frequency, 2f T for the second harmonic frequency, and 3f T for the third harmonic frequency (the sampling rate was sufficiently high to determine 3f T only for the 0.5 MHz insonification) [19]. When the amplitude at these frequencies was at least 6 dB above the noise level, these microbubbles were classified as responsive at the respective harmonic frequency [19]. The noise level was estimated from the average of the FFTs of microbubble recordings without the application of ultrasound (n = 38 for 0.5 MHz; n = 18 for 1 MHz). As a more robust measure for the asymmetry of a D-t curve we used the method introduced by Sijl et al. [20]. First, the D 0 was subtracted from the original signal and the FFT was recalculated. Next, the low frequency component A 0 was extracted from the time signal, expressing the offset of the D-t curve, using a low-pass Butterworth filter with a cut-off frequency at 125 kHz for f T = 0.5 MHz and at 250 kHz for f T = 1 MHz. The maximum negative amplitude of A 0 is a measure for the compression and the uncertainty in the determination was estimated from the maximum of A 0 . Before, during, and after insonification, the presence or formation of hot spots, defined as a focal area of increased fluorescence intensity, was determined manually in each recording. Statistics The diameter change results were expressed in a Tukey box and whisker plot. Comparisons were performed using a one-way ANOVA in GraphPad InStat version 5.04 (GraphPad Software). Differences were considered significant if p < 0.05. 
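The quantities defined above (E, C, the E/C ratio, the relative vibration amplitude, the amplitudes of the FFT in bands around the harmonic frequencies, and the low-frequency offset A_0) translate into a short analysis routine. The sketch below is an illustration on a synthetic D-t curve, not the authors' software; the variable names, the synthetic signal, and the omission of the 6 dB noise-level classification are assumptions and simplifications.

import numpy as np
from scipy.signal import butter, filtfilt

def analyze_dt(t, D, f_T, band=300e3):
    D0 = D[t < 0].mean() if np.any(t < 0) else D[:10].mean()   # diameter before vibration
    E = (D.max() - D0) / D0                                     # relative expansion
    C = (D0 - D.min()) / D0                                     # relative compression
    ratio = E / C if C > 0 else np.inf                          # E/C < 0.5: compression-only
    rel_amp = (E + C) / 2.0                                     # relative vibration amplitude
    # spectrum of the excursion; maximum amplitude in a band around each harmonic
    dt = t[1] - t[0]
    spec = np.abs(np.fft.rfft(D - D0)) / len(D)
    freqs = np.fft.rfftfreq(len(D), dt)
    def band_max(fc):
        sel = (freqs > fc - band / 2) & (freqs < fc + band / 2)
        return spec[sel].max() if np.any(sel) else 0.0
    response = {name: band_max(fc) for name, fc in
                [("sub", 0.5 * f_T), ("fund", f_T), ("second", 2 * f_T)]}
    # low-frequency offset A0 via a low-pass Butterworth filter (cut-off f_T/4, as in the text)
    b, a = butter(4, (f_T / 4) / (0.5 / dt), btype="low")
    A0_min = filtfilt(b, a, D - D0).min()
    return dict(D0=D0, E=E, C=C, EC=ratio, rel_amp=rel_amp, A0_min=A0_min, **response)

if __name__ == "__main__":
    # synthetic compression-only-like D-t curve (negative offset, strong second harmonic),
    # sampled at 5 Mfps for a 1 MHz drive
    fs, f_T = 5e6, 1e6
    t = np.arange(128) / fs - 2e-6
    D = 4.0 - 0.4 * np.sin(2 * np.pi * f_T * np.clip(t, 0, None)) ** 2
    print(analyze_dt(t, D, f_T))

In the study itself, a harmonic response would additionally be accepted only when the band amplitude exceeds the measured noise level by at least 6 dB, which requires the no-ultrasound reference recordings described above.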
Results The number-weighted mean microbubble diameter as determined from the Coulter Counter measurements was 3.7 μm with a standard deviation of 2.6 μm. In total, 137 randomly selected microbubbles that met the inclusion and exclusion criteria were studied optically, where the smallest microbubble had a diameter of 2.3 μm and the largest microbubble was 10.3 μm in diameter. About half of the microbubbles (51.8%; n = 71) showed inhomogeneities in the fluorescent coating before insonification. These were defined as focal areas of increased fluorescence, hereafter referred to as "hot spots". Interestingly, the occurrence of hot spots was higher in smaller microbubbles (D 0 < 6 μm), as shown in Fig 1A. On average 2.2 hot spots per microbubble coating were observed, with a range of one to five, as shown in Fig 1B. Typical examples of these hot spots before ultrasound application are given in Fig 2B2. New hot spots (see Table 1) were formed during insonification, irrespective of whether they were reversible or irreversible, and an average of 1.7 new hot spots (range 1-6, see Table 1) persisted when the ultrasound was turned off. Overall, the fluorescence behavior was divided into six categories, depending on the presence of either a homogeneous distribution of fluorescence or hot spots in the microbubble monolayer coating before ultrasound, and on whether, and when, these hot spots were observed during insonification, as summarized in Table 1. At a frequency of 0.5 MHz, the microbubbles were insonified below their resonance frequency, as indicated by the increase in the maximum vibration amplitude at the transmit frequency, f T, for larger microbubbles (Fig 4A and 4B). At 25 kPa, only two out of the 60 microbubbles (3%) showed reversible hot spots during insonification (red squares in Fig 4A); the other 58 microbubbles showed no change in fluorescence (97%, black spheres in Fig 4A). The half open symbols indicate microbubbles that already had hot spots before insonification. When these microbubbles were insonified again at 0.5 MHz and 50 kPa (Fig 4B), only 6 out of 40 microbubbles (15%) showed no change in fluorescence. The other microbubbles (85%) either formed reversible or irreversible (blue triangles in Fig 4) hot spots during insonification. At a frequency of 1 MHz and 50 kPa (Fig 4C), the fundamental amplitude was maximal for microbubbles with a D 0 of 4.5-5 μm, indicating these microbubbles were insonified near their resonance frequency. Around resonance, the majority of the microbubbles formed hot spots, while most of those above resonance showed no change in fluorescence. Microbubbles insonified at their resonance frequency vibrate at their largest vibration amplitude, and the resonance size of the microbubble type used in our study was identified from Fig 5C and found to be 4.5-5 μm. The microbubbles insonified at a frequency of 0.5 MHz and a P_ of 50 kPa (Fig 5B) had a larger relative vibration amplitude than those insonified at a frequency of 0.5 MHz and a P_ of 25 kPa (Fig 5A). Above a relative vibration amplitude of 0.3, all microbubbles formed hot spots, irrespective of the insonification frequency. Likewise, above a relative compression amplitude of 0.2, all microbubbles formed hot spots (see Fig 6). No clear distinction between reversible and irreversible hot spots could be observed, also not in the relative expansion amplitude of the microbubbles. Next, we assessed whether a relationship between hot spots and compression-only behavior was present.
Several microbubbles showed compression-only behavior, indicated by E/ C < 0.5, predominantly when insonified at a frequency of 0.5 MHz (Fig 7A and 7B). Those microbubbles either had no change in fluorescence (mainly at a frequency of 0.5 MHz and P_ of 25 kPa; Fig 7A) or formed reversible or irreversible hot spots. The formation of hot spots in the coating also occurred when microbubbles vibrated symmetrically, E/C~1, most profoundly for 1 MHz insonification frequency and P_ of 50 kPa (Fig 7C). The maximum negative amplitude of A 0 is another measure for the compression of the microbubble and we observed that microbubbles were more likely to form hot spots for larger A 0 (Fig 8). We also assessed whether a relationship between hot spots and the response at the harmonic frequencies was present. At a frequency of 0.5 MHz and P_ of 25 or 50 kPa, five of the microbubbles (8% at P_ of 25 kPa; 13% at P_ of 50 kPa) had a measurable response at the subharmonic frequency; at a P_ of 25 kPa one out of these five (20%) formed a hot spot while this was four out of five (80%) at a P_ of 50 kPa (Fig 9). From the 77 microbubbles insonified at 1 MHz and 50 kPa, the response at the subharmonic frequency was present in ten microbubbles (13%); six of these (60%) formed a hot spot (Fig 9C). No differences were observed between the presence of a response at the subharmonic frequency and microbubbles that did or did not have hot spots before insonification. A response at the second harmonic frequency was present in the majority of the microbubbles that were insonified at a frequency of 0.5 MHz (64% at 25 kPa and 73% at 50 kPa). At a frequency of 1 MHz and a P_ of 50 kPa, 12% of the microbubbles had a response at the second harmonic frequency. All three types of lipid coating behavior, i.e. no change in fluorescence, formation of reversible, and irreversible hot spots, were amongst the microbubbles that had a response at the second harmonic frequency, see Fig 10. For the microbubbles insonified at 0.5 MHz, we could also assess the presence of a response at the third harmonic frequency. Three out of the 60 microbubbles (5%) insonified at 25 kPa had a response at the third harmonic frequency; none showed a change in fluorescence. At 50 kPa, 16 out of the 40 microbubbles (40%) had a response at the third harmonic frequency which contained a mix of all three types of lipid coating behavior. Fig 11 shows the change in the diameter of the microbubble between D 0 and D end , the final diameter after insonification. Microbubbles that had formed an irreversible hot spot were significantly smaller than microbubbles that showed no change in fluorescence or that formed a reversible hot spot. This was irrespective of insonification frequency and applied P_. On a subset of microbubbles that showed no change in fluorescence at a frequency of 1 MHz and P_ of 50 kPa (n = 41), the P_ was increased to 300 kPa in steps of 50 kPa. As shown in Fig 12, the majority of microbubbles formed hot spots at higher P_, i.e. at higher relative vibration amplitudes. For 12 out of these 41 microbubbles (30%), the formation of hot spots could not be determined because insonification was not assessed for all higher P_ (n = 7), or due to technical errors (n = 5). An example of a microbubble insonified at 300 kPa is shown in Fig 13A. In the expansion phases, a non-continuous lipid coating was observed, most evident during expansion later in the 10 cycles, suggesting that the coating had ruptured. 
In that same expansion phase, a bright fluorescence intensity spot was observed in the center of the microbubble. This signal could originate from a smaller microbubble that has temporarily pinched off from the original microbubble in the lateral direction, or from a jetting phenomenon, as both have been observed before with bright-field ultra-high-speed imaging in side view [38]. After insonification, two more hot spots were present, indicated by the arrows in the rightmost column of Fig 13A. The diameter of this microbubble was 30% smaller after insonification. For one microbubble insonified at a P_ of 150 kPa and 15 cycles, we observed the formation of a 5 μm long ligament in the fluorescence signal (Fig 13B) that was not evident in bright field. This microbubble had a 10% smaller diameter after insonification. Discussion To the best of our knowledge, this is the first study that investigates on a nanoseconds time scale the dynamic lipid motion in the microbubble coating during insonification. We observed three different types of behavior of the fluorescently labeled lipid coating: (a) no change in fluorescence; (b) reversible hot spot formation during insonification (only in the compression phase); (c) irreversible hot spot formation during insonification (in the compression and expansion phases) that persisted after the ultrasound was turned off. Hot spots were first formed in the compression phase when the relative vibration of the microbubble was > 0.3, irrespective of the insonification frequency (0.5 or 1 MHz) and P_ (25 or 50 kPa). Hot spots Although observations of ultrasound-induced hot spot formation on the microbubble coating were made in earlier work by others [24,30], our new ultra-high-speed recordings show that these hot spots form on a nanosecond time scale. In our study, the formation of hot spots on the microbubble coating was first observed in the compression phase of the microbubble vibration. From lipid monolayer studies it is known that lipids condense upon lateral compression. If further compression is applied, the 2D lipid monolayer collapses into a 3D structure. Upon further compression, the following 3D structures can be formed: buckles, bilayer folds, vesicles, tubes, or micelles [25][26][27]. Although the resolution of our microscopic systems does not allow us to observe the structure of the hot spots in detail, the hot spots are more likely to be the result of lipid monolayer collapse through buckles, folds or vesicles than of microstructures, even though it is known that DSPC-based microbubbles have microstructures in their coating [28,39-41]. Others have shown that folds in fluorescently labeled lipid monolayer films have a higher fluorescence intensity than microstructures [42,43], irrespective of the type of lipids in the monolayer and the fluorescent dye. In addition, folds and vesicles at the interface of DSPC-based microbubble coatings were also observed by Owen et al. [44] and Kim et al. [41] for their in-house produced microbubbles, albeit with electron microscopy (coating composition DSPC:PEG-40 stearate in molar ratio 9:1 and 10:1, respectively). Folds and vesicles were also observed on deflated in-house produced microbubbles by Longo et al. [28] (coating composition phosphatidylcholines of various acyl chain lengths including DSPC:DSPE-PEG(2000) in molar ratio 9:1) and by Pu et al.
[29] (coating composition phosphatidylcholines of various acyl chain lengths including DSPC:PEG-40 stearate in molar ratio 10:1) by fluorescence microscopy. On average, 1.9 reversible or irreversible new hot spots were observed on the microbubble coating during insonification. The difference between reversible and irreversible lipid monolayer collapse is governed by the molecular composition of the monolayer and the temperature, i.e. the monolayer's morphology and material properties [25][26][27]. DSPC, the main lipid (79 mol%) in our microbubble coating formulation, is always in the condensed phase, while DSPE-PEG2000 can be in the expanded or condensed phase depending on the surface pressure [45]. Lipids in a condensed phase are semi-crystalline and therefore too brittle to bend, so upon compression a formed bilayer fold breaks at the point of attachment to the monolayer and deposits an independent fragment on top of the monolayer. This results in irreversible collapse because the collapsed material cannot reincorporate into the monolayer when the surface pressure decreases [25,27]. This suggests that the reversible or irreversible collapse of microbubbles depends on the initial phase of the lipids and thus on the initial surface pressure/lipid density of the microbubble coating. It may also explain why the microbubbles that formed irreversible hot spots deflated, since it is likely that gas can more easily escape through fractures in a monolayer, and/or the increased surface tension leads to a higher gas pressure and thus a smaller gas volume. Although different lipid densities and surface pressures of the microbubble coating are predicted [20,23] and incorporated in, for example, the Marmottant [33] and Paul [35] models for microbubble behavior, future experimental studies are needed to verify this. In our study, 50.4% of the microbubbles already had hot spots before insonification. Our observation that significantly more microbubbles with diameters < 6 μm had hot spots than microbubbles > 6 μm may suggest that the microbubbles < 6 μm originated from larger microbubbles that deflated after preparation. Microbubble deflation following preparation has been postulated [46] and reported when microbubbles are produced using flow-focusing techniques [47][48][49]. The deflation is driven by the Laplace overpressure inside the microbubble due to a difference in surface tension at the gas-surrounding fluid interface. The pressure-driven gas diffusion stabilizes when the internal and external gas pressures equalize, which occurs when the lipid domains reform and reach a packing density that eliminates the surface tension and the resulting Laplace capillary pressure [46,49,50]. As mentioned before, microbubble deflation may result in the formation of buckles, folds, and vesicles on the microbubble coating [28,29], which is in line with the higher number of hot spots we observed for the smaller microbubbles (D0 < 6 μm).

Resonance of microbubbles

The in-house produced microbubbles with a coating of DSPC (79 mol%) and DSPE-PEG(2000) (21 mol%) insonified at a frequency of 1 MHz and a P_ of 50 kPa were resonant for a diameter of ~4.5-5 μm (Fig 4C).
This resonance size is lower than what has been reported for other in-house produced microbubbles with a coating of DSPC (59.4 mol%), polyethyleneglycol-40-stearate (PEG-40 stearate) (35.7 mol%), and DSPE-PEG(2000) (4.9 mol%) at 1 MHz, namely ~7.5 μm in diameter (50 kPa insonification) [51]. For another type of DSPC-containing microbubble, 10 μm diameter BR14 microbubbles were resonant at 1 MHz (< 40 kPa insonification) [37]. Although it is known that several factors have an influence on the resonance frequency, such as the applied P_ [15,52] and the composition of the lipid coating [19], our finding that smaller microbubbles were at resonance at 1 MHz could also be explained by the difference in diameter as measured with fluorescence and bright field. Microbubbles appear larger in bright field than in fluorescence, because complex images are obtained in bright field due to refraction, diffraction, and scattering of the incident light, while the fluorescence signal directly originates from the fluorescent molecules in the microbubble coating [53].

Acoustic behavior of microbubble in relation to hot spots

In our study, a relative vibration > 0.3 induced the formation of reversible or irreversible hot spots on the coating of our in-house produced DSPC-based microbubbles. Luan et al. [30] also found a relative vibration threshold of 0.3 for a different lipid behavior of their DPPC-based microbubble coating, namely shedding of lipids, defined as the detachment of lipid material from the coating and subsequent transport of the shed material away from the microbubble. Possible explanations why Luan et al. [30] observed shedding and we did not at the identical 1 MHz insonification frequency could be the higher P_ applied in their study and the different main coating lipid (DPPC rather than DSPC, the main lipid in our study). The influence of the type of microbubble coating is further supported by the findings of Borden et al. [24], who reported that changing the type of lipid coating resulted in differences in the mechanism of lipid shedding for microbubbles that acoustically deflated. Luan et al. [30] reported that shedding occurred at the observed relative vibration of 0.3, indicating an approximate ~50% surface reduction of the microbubble during the compression phase [30], which is close to the 41% surface reduction postulated for the collapse of a microbubble lipid monolayer [54]. Assuming that the microbubbles vibrated spherically, collapse of the microbubble lipid monolayer, as assessed in our study by the formation of hot spots, was observed for a 55% median surface area reduction during the compression phase (interquartile range 47-75%; n = 71, i.e. all microbubbles that formed hot spots in compression), which is indeed close to the postulated 41%. De Jong et al. [31] hypothesized that microbubble shell buckling is related to compression-only behavior, defined as E/C < 0.5. In addition, it is generally thought that buckling highly influences the vibration of the microbubble in terms of compression-only behavior and the response at the subharmonic frequency [20,31,33,55,56]. At the start of our study, it was hypothesized that a locally increased concentration of lipids, appearing as a focal area of increased fluorescence intensity (hot spot) in the fluorescence image, is related to buckling and thus influences the microbubble vibration. However, this hypothesis could not be experimentally confirmed because none of the acoustic behaviors of the microbubble that we observed were related to the formed hot spots.
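For reference, the vibration metrics that appear in this discussion (E, C, their ratio E/C, the offset A0, the relative vibration amplitude, and the compression-phase surface reduction) can all be obtained from a diameter-time (D-t) trace. The sketch below uses one common set of definitions, which may differ in detail from those used in this study, so it should be read as an illustrative assumption rather than the authors' exact analysis.

```python
import numpy as np

def vibration_metrics(D, D0):
    """Hedged sketch: common definitions for microbubble vibration metrics
    from a diameter-time trace D (one tone burst); they may differ from the
    exact definitions used in this study."""
    Dmax, Dmin = D.max(), D.min()
    E = (Dmax - D0) / D0                    # relative expansion
    C = (D0 - Dmin) / D0                    # relative compression
    EC = E / C if C > 0 else np.inf         # E/C < 0.5 ~ compression-only
    rel_vib = np.max(np.abs(D - D0)) / D0   # relative vibration amplitude
    A0 = np.mean(D - D0) / D0               # offset (trend) over the burst
    surf_red = 1.0 - (Dmin / D0) ** 2       # surface-area reduction at Dmin
    return E, C, EC, rel_vib, A0, surf_red

# Toy compression-dominated trace: 10-cycle burst at 1 MHz around D0 = 4 um
D0, f = 4.0, 1e6
t = np.linspace(0.0, 10.0 / f, 2000)
D = D0 * (1.0 - 0.3 * np.clip(np.sin(2 * np.pi * f * t), 0.0, None))
E, C, EC, rel_vib, A0, surf_red = vibration_metrics(D, D0)
print(f"E/C = {EC:.2f}, relative vibration = {rel_vib:.2f}, "
      f"A0 = {A0:.2f}, surface reduction = {surf_red:.0%}")
```

With these definitions, a trace that compresses to about 0.7 D0 (a relative vibration of 0.3) gives a surface reduction of 1 − 0.7² ≈ 51%, in line with the ~50% quoted above for the shedding threshold.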
For instance, hot spot formation was observed for microbubbles with E/C < 0.5, but hot spots were also formed for symmetrically vibrating microbubbles (E/C ≈ 1.0). On the other hand, there were also many microbubbles, especially at a frequency of 0.5 MHz and a P_ of 25 kPa, that had an E/C < 0.5 without forming a hot spot. Our findings do suggest that the relation between compression-only behavior and hot spot formation only exists when the relative microbubble vibration is > 0.3. In line with previous observations that compression-only behavior is more pronounced in smaller microbubbles [20,31], we also found that compression-only behavior was most present in microbubbles < 4 μm (Fig 6). Whereas E and C are based on two points in the D-t curve, Dmax and Dmin, respectively, A0 is determined from the trend of the whole 10-cycle sine burst in the D-t curve. Although there is a relationship between A0 and A1 and thus E/C, no threshold value for compression-only behavior based on A0 has been determined before. We did observe most irreversible hot spot formation when A0 was < -0.2 (Fig 7); we therefore suggest -0.2 as the threshold for irreversible hot spot formation. Subharmonic responses are theoretically related to non-linear behavior of coated and non-coated microbubbles [57]. On that basis, experimentally derived acoustic responses of microbubbles at the subharmonic frequency have been theoretically linked to compression-only behavior [58], but this link has only been observed experimentally in a few BR14 microbubbles using ultra-high-speed imaging by Sijl et al. [32,55]. On the contrary, van Rooij et al. [19] observed subharmonic responses only in the absence of compression-only behavior (defined as E/C < 0.5) in DSPC- and DPPC-based microbubbles, under conditions similar to those in our current study. In the present study, 20 out of the 177 microbubbles (11%) had a response at the subharmonic frequency. Only two out of those 20 microbubbles also had an E/C < 0.5. Both of these microbubbles were insonified at 0.5 MHz, but at different pressures. The microbubble insonified at a P_ of 25 kPa had an A0 of -0.13 (with E/C = 0.43; no change in fluorescence during insonification), while this was -0.27 for the microbubble insonified at a P_ of 50 kPa (with E/C = 0.38; formation of an irreversible hot spot during insonification). We also did not observe a clear relationship between the presence of a subharmonic response and a hot spot prior to insonification, since 60% of the microbubbles already had a hot spot before insonification. In addition, a similar number of subharmonic responses was detected in microbubbles that did show hot spot formation during insonification (55%, the sum of 20% hot spots only in compression and 35% hot spots in compression that persisted in expansion and after the ultrasound was off) and in those that did not (45%). Therefore, our study did not show any evidence of a relation between subharmonic response and hot spots.

Experimental considerations

To record lipid microbubble coating behavior on a nanosecond scale during insonification, we used the UPMC Cam and recorded the behavior in top view. A feature of this camera is that it records in 2D, which does not allow for discrimination between the different types of monolayer collapse (i.e., a buckle, fold, vesicle, etc.) or the extension of the hot spot out of the focal plane.
Simulations [26] have shown that the pathway of monolayer collapse from buckling to folding to vesicle formation is independent of monolayer composition, compression method, and compression rate, so assessing monolayer collapse by hot spot formation in 2D is already sufficient. On the other hand, lateral non-spherical microbubble shapes could be influenced by hot spots prior to insonification or have an influence on the formation of hot spots during insonification, which can be observed with side view recordings [38]. Because of the use of 2D recordings, the observed formed hot spots could have moved out of the focal plane during vibration. Consequently, they may appear to be no longer present. This has been observed by Luan et al. [30] for longer pulses, but only at a framerate of 150 kfps. On the one hand this may explain why we observed reversible and irreversible hot spots. On the other hand, hot spots present before insonification remained visible during the microbubble vibration (see Fig 2B2, 2B4 and 2B6) which suggests that movement or rotation of hot spots out of the recording plane is negligible for short tone-burst insonification. A microbubble with a diameter of 4 μm is expected to have 20.9×10 6 molecules of Oregon Green 488 dye on its coating. This number is based on previous work where a number of 2×10 6 lipids per μm 2 were reported for microbubbles [59]. The additional mass of the Oregon Green 488 dye (molecular weight (MW) 463 g/mol, Thermo Scientific Inc.) on the acoustic behavior of the microbubble can be neglected. First, Oregon Green 488 is only 14% of the total mass of the DSPE-PEG(2000)-Oregon Green 488 lipid (MW 3,343 g/mol). Second, the effective mass of the microbubble harmonic oscillator is 4πρ w R 3 , where ρ w is the density of water (998 kg/m 3 ), which for a 4 μm microbubble is 1×10 −13 kg, i.e. more than 1000 times the mass of the~2 nm shell. Moreover, we have reported in another study that the addition of streptavidin, which has a mass 130 times larger than Oregon Green 488, to the DSPE-PEG(2000)-biotin lipid does not change the resonance frequency of DSPC-microbubbles [60]. Implications Although we observed focal areas of increased fluorescence intensity (hot spots), hypothesized to be buckles as a result of a local increase in phospholipid concentration, we could not relate these to acoustic behavior of the microbubble such as compression-only and subharmonic responses as others did experimentally or theoretically [32,55,58]. Because we could not relate the presence of hot spots to the acoustic behavior of the microbubble, this could imply that the hot spots we observed were not buckles. However, we found significant evidence to assume the hot spots are in fact buckles or folds on a nanometer or sub micrometer scale. On the other hand, in the microbubble field the term 'buckling' has always been associated with bright field visualization of a larger part of the microbubble (macroscopic) and has been modelled as such. Therefore the definition of buckling may need refinement in terms of the formation of localized buckles on one hand, and the appearance of global buckling which affects the acoustic behavior on the other hand. Our ultra-high-speed fluorescence observations of the dynamic behavior of the lipid monolayer on coated microbubbles during insonification can be extrapolated to microbubble vibration studies in which the lipid monolayer coating cannot be visualized. If the microbubble has a relative vibration > 0.3, the lipid monolayer will collapse. 
Combined with acoustic deflation of the microbubble, the monolayer collapse is likely to be irreversible. Our findings may also hold for other lipid coatings and insonification frequencies, but this requires experimental verification. Conclusion Using ultra-high-speed fluorescence recordings, we observed the formation of focal areas of increased fluorescence or hot spots, on the lipid monolayer microbubble coating at relative vibrations > 0.3 at a frequency of 0.5 and 1 MHz at a P_ of 25 and 50 kPa. Around resonance, the majority of the microbubbles formed hot spots. Formation of hot spots was always observed in the compression phase and in 68% of the cases they also persisted in the expansion phase and after the ultrasound was turned off. If the microbubble also acoustically deflated, hot spot formation was likely irreversible. While we have observed that acoustic vibration leads to the formation of hot spots, we did not find a correlation of hot spot formation with nonlinear acoustic behavior of the microbubble. Therefore, we could not verify the previous hypothesis that monolayer collapse by buckling or folding of the lipid coating of the microbubble on a molecular scale leads to nonlinear acoustic behavior of the microbubble.
The CARMENES search for exoplanets around M dwarfs

Context. The CARMENES instrument, installed at the 3.5 m telescope of the Calar Alto Observatory in Almería, Spain, was conceived to deliver high-accuracy radial velocity (RV) measurements with long-term stability to search for temperate rocky planets around a sample of nearby cool stars. Moreover, the broad wavelength coverage was designed to provide a range of stellar activity indicators to assess the nature of potential RV signals and to provide valuable spectral information to help characterise the stellar targets. Aims. We describe the CARMENES guaranteed time observations (GTO), spanning from 2016 to 2020, during which 19 633 spectra for a sample of 362 targets were collected. We present the CARMENES Data Release 1 (DR1), which makes public all observations obtained during the GTO of the CARMENES survey. Methods. The CARMENES survey target selection was aimed at minimising biases, and about 70% of all known M dwarfs within 10 pc and accessible from Calar Alto were included. The data were pipeline-processed, and high-level data products, including 18 642 precise RVs for 345 targets, were derived. Time series data of spectroscopic activity indicators were also obtained. Results. We discuss the characteristics of the CARMENES data, the statistical properties of the stellar sample, and the spectroscopic measurements. We show examples of the use of CARMENES data and provide a contextual view of the exoplanet population revealed by the survey, including 33 new planets, 17 re-analysed planets, and 26 confirmed planets from transiting candidate follow-up. A subsample of 238 targets was used to derive updated planet occurrence rates, yielding an overall average of 1.44 ± 0.20 planets with 1 M⊕ < Mpl sin i < 1000 M⊕ and 1 day < Porb < 1000 days per star, and indicating that nearly every M dwarf hosts at least one planet. All the DR1 raw data, pipeline-processed data, and high-level data products are publicly available online. Conclusions. CARMENES data have proven very useful for identifying and measuring planetary companions. They are also suitable for a variety of additional applications, such as the determination of stellar fundamental and atmospheric properties, the characterisation of stellar activity, and the study of exoplanet atmospheres.

(Full Tables 1 and 2 are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/.)

Introduction

M-type dwarfs provide some advantages with respect to Sun-like stars in the search for exoplanets, particularly those with low masses. Their relatively small sizes and masses result in stronger planetary signals. Furthermore, their low intrinsic luminosities imply that temperate planets orbiting within their liquid-water habitable zone have shorter orbital periods, of the order of tens of days (Kopparapu et al. 2013). In addition, they constitute an abundant stellar population, comprising the majority of stars (78.5 %) in the solar neighbourhood (Reylé et al. 2021). The main drawbacks of M dwarfs as targets for exoplanet searches are their intrinsic faintness and the fact that a relatively large fraction of them show magnetic activity phenomena, especially the later spectral types (Reiners et al. 2012).
A number of efforts have successfully exploited the so-called M-dwarf opportunity for planet detection over the past few decades (e.g. Delfosse et al. 1999;Endl et al. 2003;Wright et al. 2004;Bonfils et al. 2005;Nutzman & Charbonneau 2008;Johnson et al. 2010;Ricker et al. 2015;Affer et al. 2016;Seifahrt et al. 2018;Bayliss et al. 2018). The CARMENES 1 instrument and survey were specifically conceived to search for temperate rocky planets around a sample of nearby cool stars (Quirrenbach et al. 2014). The spectrograph was designed to provide high-accuracy radial velocity (RV) measurements with long-term stability in a broad wavelength interval where M-dwarf stars have the peak of their spectral energy distribution. Moreover, such wide coverage provides a range of stellar activity indicators to assess the nature of potential RV signals as well as valuable spectral information that can be used to characterise the stellar targets. The CARMENES instrument is installed at the 3.5 m telescope of the Calar Alto Observatory in Almería, Spain (37 • 13 25 N, 2 • 32 46 W). It provides nearly continuous wavelength coverage from 520 nm to 1710 nm from its two channels: the visual channel (VIS), with a spectral resolution of R = 94 600, covers the range λ = 520-960 nm, while the nearinfrared channel (NIR) yields a resolution of R = 80 400 within a wavelength interval λ = 960-1710 nm (Quirrenbach et al. 2016). Both channels are coupled to the telescope by optical fibres, with a projection of 1 . 5 on the sky. A sample of about 350 M dwarfs across all M spectral subtypes comprises the targets of the main survey. A total of 750 useful nights were reserved as guaranteed time observations (GTO) for the CARMENES consortium, and these ran for five years, from 1 January 2016 to 31 December 2020. The present publication accompanies the release of the observations acquired with the CARMENES VIS channel over the course of the RV survey within the GTO programme, which we have dubbed the CARMENES Data Release 1 (DR1). This includes raw data, calibrated spectra, and high-level data products, such as RVs and spectroscopic indicators. The paper is structured as follows. Section 2 describes the design and execution of the CARMENES survey. In Sect. 3 we present the CARMENES GTO target sample and provide a description of its statistical distribution. Section 4 describes the observations collected within the GTO and the processing data flow from raw frames to calibrated RVs and ancillary data products. In Sect. 5 we discuss the properties of the CARMENES DR1 regarding internal and external precision, we provide information regarding the presence of periodic signals in the data, and we present and discuss the sample of exoplanets in the surveyed targets. Furthermore, we present revised planet occurrence rates considering all publicly released data. Finally, Sect. 6 provides the summary and conclusions of the work. The CARMENES survey The initial goal of the GTO survey was to collect approximately 70 spectra for each of the foreseen 300 targets (Garcia-Piquer et al. 2017), which would have yielded a grand total of ∼21 000 spectra. During the survey, we identified a number of targets with high-amplitude RV variations (RV scatter >10 m s −1 and v sin i > 2 km s −1 ), which we classified as RV-loud (Tal-Or et al. 2018). For each of them, we obtained about 11 observations and monitoring was subsequently discontinued. 
A similar approach was followed for spectroscopic binaries, for which we acquired a number of measurements just enough to derive reliable orbital solutions (Baroch et al. 2018(Baroch et al. , 2021. For some of the binaries with the longest periods, however, monitoring at very low cadence has been extended over time to constrain better the orbital and physical parameters of the components. Despite the discontinued targets, some time into the survey it was realised that reaching 70 observations per star would not be possible, mostly because of the large number of measurements needed to characterise newly discovered exoplanets as a consequence of the measured astrophysical jitter and also because of the telescope and instrument overhead times being somewhat longer than initially considered. Furthermore, with the launch of the Transiting Exoplanet Survey Satellite (TESS) mission in 2018 (Ricker et al. 2015), the CARMENES Consortium agreed to invest approximately 50 useful GTO nights in following up TESS transiting planet candidates with M-dwarf hosts (CARMENES-TESS follow-up programme). As a consequence of the new circumstances, it was decided that the survey should aim at acquiring a minimum of 50 observations per target, which would yield plenty of planet detections and provide meaningful constraints on planet occurrence rates. At the same time, we redefined the relative priorities of the sample to favour stars of spectral type M4 V and later to exploit optimally the CARMENES capabilities in a relatively unexplored range of stellar host masses. Such a decision implied that the faint end of the M2 V and M3 V targets in the sample would have lower chances of being scheduled because of the employed criteria (Garcia-Piquer et al. 2017). At the end of the GTO survey in 2020, the minimum number of 50 measurements had not been reached for all surveyed targets. The CARMENES DR1 therefore contains unequal number of observations, with a median of 30 observations per star. However, some of the targets, such as RV standards and stars with suggestive planetary signals, were observed up to a few hundred times. About two thirds of the targets have time series of at least three years, and for almost half of the targets the observations cover at least four years. Only 10% of the targets are observed for less than a year. The cadence is random and non-uniform, not only because of observability but also for scientific reasons (e.g. priority increased when a planet candidate signal required more detailed sampling). In 2020, a proposal was submitted to the competitive Calar Alto Legacy projects call, and an additional 300 nights were awarded to the CARMENES Consortium to complete the survey during 2021-2023 and, hence, fulfil the goal of attaining at least 50 observations per target. CARMENES GTO sample The CARMENES GTO sample of M dwarfs is generally composed of the brightest stars of every spectral subtype that are visible from Calar Alto (δ > −23 • ), as described in Alonso-Floriano et al. (2015). Effectively, this means that about 70 % of the full sky is observable by the CARMENES survey. We only excluded stars that are known members of visual binaries at separations closer than 5 . We explicitly did not bias our sample with regard to age, metallicity, or magnetic activity, nor did we exclude stars with planets that were already known. More information on the selection criteria was provided by Reiners et al. (2018b, hereafter Rei18b) and references therein. 
The sample described by Rei18b was composed of 324 stars. Throughout the survey we added nine additional targets as a result of supervening circumstances such as new exoplanet announcements, interesting targets (e.g. in the TESS continuous viewing zone), and revised spectroscopic classification. Furthermore, we added 18 targets from the CARMENES-TESS follow-up programme. As opposed to Rei18b, we also included in our current analysis double- and triple-line spectroscopic binaries and triples (SB2 and ST3, respectively) and some visual binaries. We found 17 such binaries in the sample, 11 of which are new additions to Rei18b, but six were present there because they had not yet been identified as SB2, ST3, or visual binaries. Table 1 presents a selection of relevant properties of the 362 targets in the CARMENES GTO sample. The different columns list basic stellar parameters (M, R, Teff), rotation periods (Prot), and the number of measurements in the release, both in the form of pipeline-produced RVs (NRVC) and zero-point-corrected RVs (NAVC). Descriptions of these two data products are provided in Sect. 4. The basic stellar parameters were taken from the latest version of Carmencita, which is the CARMENES input catalogue, and from the series of papers on the characterisation of the CARMENES GTO sample (Alonso-Floriano et al. 2015; Cortés-Contreras et al. 2017; Jeffers et al. 2018; Díez Alonso et al. 2019; Cifuentes et al. 2020; Perdelwitz et al. 2021). In the case of targets where more than one set of lines is visible in the spectra (SB2, ST3, and visual binaries), the basic parameters are not listed (as they are ill-defined) and the column NRVC provides the total number of CARMENES observations released. The penultimate column indicates if the target is part of the blind GTO survey or if it is a TESS exoplanet candidate. An asterisk marks targets already tabulated by Rei18b. We are not discussing here the statistical distribution of the target sample regarding brightness and spectral type. The general properties are equivalent to those in Figs. 2 and 3 of Rei18b, which already comprised most of the sample presented here (>90 %). The volume completeness of the CARMENES GTO sample can be investigated by comparing distance distributions with the Gaia Catalogue of Nearby Stars (GCNS; Gaia Collaboration et al. 2021), which is assumed to be complete at the brightness cuts and spectral types of interest. In Fig. 1, we show a collection of histograms as a function of distance out to 20 pc for several spectral type and Gaia G-band absolute magnitude (MG) intervals. To allow for a comparison, spectral types of GCNS stars were estimated from MG following the corresponding relationship by Cifuentes et al. (2020). The ratio between the number of stars in the CARMENES sample and the number of known stars in the GCNS is also shown.

Fig. 1. Distribution of the CARMENES GTO target sample (excluding the SB2 and ST3 systems) as a function of distance (d < 20 pc) for different spectral types or absolute Gaia G-band intervals. Some stars in the sample are at greater distances, and this number is provided inside the right-pointing arrow. One of the targets at the K-M spectral type boundary has an MG value below 7.73 mag and is not included; hence, the total number of stars plotted is 344. The distance distribution of the GCNS for the same intervals is also shown, and the ratios between the two are depicted as black crosses with the scale on the right y axis.
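As an aside, the per-bin completeness discussed next is simply the ratio of two distance histograms. A minimal sketch of that calculation is given below; the file names, columns, and bin width are made-up placeholders, not the actual catalogue interface.

```python
import numpy as np

# Placeholder inputs: distances (pc) of CARMENES targets and of GCNS stars
# in one spectral-type / M_G interval (file names are assumptions).
d_carmenes = np.loadtxt("carmenes_distances_pc.txt")
d_gcns = np.loadtxt("gcns_distances_pc.txt")

bins = np.arange(0, 22, 2)                 # 2 pc wide bins out to 20 pc
n_car, _ = np.histogram(d_carmenes, bins=bins)
n_gcns, _ = np.histogram(d_gcns, bins=bins)

# Completeness per bin = CARMENES targets / known GCNS stars
for lo, hi, nc, ng in zip(bins[:-1], bins[1:], n_car, n_gcns):
    ratio = nc / ng if ng > 0 else float("nan")
    print(f"{lo:2d}-{hi:2d} pc: {nc:3d}/{ng:4d} = {ratio:.0%}")
```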
The ratio, that is, the sample completeness, decreases with M subtype (from 27 % for early Ms to 6 % for late Ms), as expected due to brightness limitations. The global completeness of the CARMENES sample at 20 pc, including all spectral types, is 15 %. If we consider distances to 10 pc, then the ratio of sample stars to known stars exceeds 50 % in all intervals except for the latest Ms, where the ratio is 28 %. Altogether, the CARMENES GTO sample contains nearly half (48 %) of all known M dwarfs within 10 pc of the Sun (Reylé et al. 2021), and about 70 % of those accessible from the Calar Alto Observatory. Most nearby M dwarfs that are not in the sample have close companions at less than 5 . Observations The observations of the CARMENES GTO survey were collected in a signal-to-noise (S/N) limited fashion. That is, using the number of counts from the exposure meter of the NIR channel (c EM ) and a calibrated relationship -S/N ∝ √ c EM , -the integration was continued until reaching a S/N ratio of 150 at order 50 (∼1200 nm) of the CARMENES NIR channel, or was interrupted after an integration of 1800 s to avoid excessive contamination from cosmic rays and line broadening due to Earth's rotation. According to the calculations by Reiners & Zechmeister (2020), a spectrum with S/N = 150 at 1200 nm for an earlyto mid-type M dwarf produces a typical uncertainty of 1 m s −1 in RV from photon shot noise, which was the required value for the survey. A total of 19 633 spectra were acquired as part of the GTO programme. However, a small fraction of them do not have sufficient quality for precise RV work and were not considered in our subsequent analysis. They were flagged by the processing pipeline because of low S/N, high S/N implying saturation risk, contamination by twilight, Moon, or stray light. The total number of spectra yielding useful RV measurements is 19 161. The discarded 472 spectra are still accessible from the Calar Alto archive 2 in raw format but are not part of the CARMENES DR1. The processing of the data was done automatically with a pipeline, including the reduction of raw frames, the extraction and calibration of spectra, the determination of RVs using a template-matching algorithm, and the calculation of crosscorrelation function (CCF) products. Full details on the applied procedure are provided below. The data for SB2 and ST3 targets were only processed up to the extraction and calibration of spectra, and were not analysed to determine precise RVs because our methodology is not suitable when more than one set of stellar lines is present in the spectra. Finally, we provide the full set of data products for 18 642 out of the 19 161 good spectra. Processing pipeline The observations were reduced with the caracal 3 pipeline, with the data flow being described by Caballero et al. (2016). The extraction pipeline is based on the reduce package of Piskunov & Valenti (2002) but many routines have been revised. In particular, we developed the flat-relative optimal extraction (FOX, Zechmeister et al. 2014) and wavelength calibration scripts, which combine spectra from hollow-cathode lamps (HCLs) and Fabry-Pérot (F-P) étalons (Bauer et al. 2015). The data release in this work is based on caracal v2.20. Radial velocities The RVs for the CARMENES DR1 were computed with serval 4 ) and raccoon 5 . Both software packages were specifically developed for data coming from the CARMENES instrument, although they can process spectroscopic data from other precise RV instruments as well (e.g. 
Stefánsson et al. 2020;Hoyer et al. 2021;Wang et al. 2022;Turtelboom et al. 2022). The serval code implements a data-driven approach, where both RVs and templates are derived from the observations themselves via a least-squares fitting procedure similar to the Template-Enhanced Radial velocity Re-analysis Application (TERRA; Anglada-Escudé & Butler 2012). The co-adding is performed by cubic B-spline regression. For the barycentric correction, the default option is the Python implementation barycorrpy 6 (Wright & Eastman 2014;Kanodia & Wright 2018). The RVs for each spectral order are produced, and the global RV of the spectrum is subsequently computed as a simple weighted mean over the spectral orders. By default, the ten bluest and the ten reddest spectral orders are not used. In those regions, the instrument efficiency decreases. Furthermore, the red end is strongly affected by telluric contamination and dichroic cutoff. For faint late M dwarfs additional blue orders may be omitted because of low S/N. Since the present data release contains the order-wise RVs, a more sophisticated recalculation of RV values (robust means, re-weighting using a posteriori information) employing a detailed chromatic analysis is also possible (e.g. Zechmeister et al. 2019). Finally, corrections for instrumental drift and secular acceleration (Kürster et al. 2003) are applied to the global RV, yielding the so-called RVC (Radial Velocity Corrected) velocities. The RV error bar is calculated as the weighted mean of the order-wise RVs (see Eq. 15 in Zechmeister et al. 2018) and takes into account photon noise, readout noise, and model mismatch. The last contribution quantifies the difference between the spectrum and the template (caused, for example, by cosmic rays, telluric contamination or detector artefacts), and the excess scatter of the averaged individual orders (caused, for example, by telluric contamination affecting specific orders or a chromatic trend). Thus, formal RV uncertainties are based on the quality of the template fit and not on estimates of any physical effects during observation or calibration (e.g. modal noise). The CARMENES instrument was designed to minimise all such effects (Seifert et al. 2012;Stürmer et al. 2014) but, if anyway present to some extent, they will result in excess noise (instrumental jitter). A further RV data product is provided, namely AVC (Average Velocity Corrected) velocities. These are obtained from RVCs by correcting for nightly zero points (NZPs; see Sect. 4.4). AVC RVs are not calculated if no instrumental drift value is available. The total error bar of each AVC RV considers the uncertainties of the RVC and the corrections added in quadrature. In addition to RVs, serval provides a further set of useful parameters. These include the chromatic index (CRX; a measure of the wavelength dependence of the RVs), the differential line width (dLW), and spectral line indices (e.g. Hα, Ca i, Ca ii IRT (infrared triplet), and Na i D), which are valuable activity indicators (Fuhrmeister et al. 2019b;Schöfer et al. 2019). A full description of these serval products and their calculation methodologies is provided in Zechmeister et al. (2018). The raccoon code is based on the CCF concept (Baranne et al. 1996), whereby a weighted binary mask is used to calculate the convolution with each observed spectrum. In our implementation, we derived the mask from the serval template of the target itself. 
One of the outputs is the RVs, which are known to be less precise than values coming from template matching for M dwarfs (Perger et al. 2017), but still allow for a crosscheck with serval. Other relevant CCF parameters produced by raccoon are the contrast (CON), the full width at half maximum (FWHM), and the bisector inverse slope (BIS). These parameters can be regarded as moments of the CCF that carry information on the characteristics of the stellar lines and, therefore, can be used to assess variability coming from astrophysical sources. Further details can be found in Lafarga et al. (2020). The RVs in the CARMENES DR1 may differ from RVs that have appeared in previous CARMENES publications. This is because serval and raccoon are steadily maintained and new upgrades are continuously made. In addition, all parameters are recalculated when new spectra of a target are considered (i.e. producing a new template) and, thus, slightly different values may result. Finally, the NZP corrections can vary when new data are considered, also impacting on the final velocities. In any case, any differences with published data are generally minor. Telluric contamination correction The Earth atmosphere imprints spectral features from its molecular and atomic components (mostly H 2 O and O 2 in the VIS domain), called telluric lines (or tellurics, for short), onto the stellar spectrum. serval handles this contamination by simply masking telluric lines when computing the RVs (during co-adding, telluric lines are strongly down-weighted, and severely contaminated template regions are masked as well during RV computation). Masking lines is a straightforward and robust first-order approach. The default mask of serval flags regions where the telluric line depth is typically about 5 % or greater. Various tests using different thresholds and resulting mask widths showed this value to provide optimal results by trading off wavelength coverage (i.e. RV precision) and systematic effects from telluric contamination. While the telluric mask is static in the detector frame, it moves in the stellar rest frame because of Earth's yearly barycentric motion. To ensure that identical spectral regions are used for RV determination throughout the observing season, an alternative approach would be to mask out the full barycentric velocity range around each telluric feature. However, we preferred not to use such a procedure because it significantly diminishes the available wavelength range and, thus, the amount of RV information. There may be cases where the residual telluric RV content (due to high airmass or micro-telluric contamination) may still be significant. Such residuals can most likely affect cases where the RV internal precision is very high (e.g. high S/N observations) and where the stellar RV signal is weak (e.g. fast rotators). Residual telluric contamination can result in spurious RV periodicities, mostly yearly signals or their aliases . Hence, caution is advised in the interpretation of those typically long-period, low-amplitude signals. Improvements may be made by re-weighting spectral orders, reprocessing with more conservative masks, or employing a more sophisticated telluric modelling scheme (e.g. Nagel 2019). 
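To illustrate the masking logic described above (regions fixed in the detector frame that move in the stellar rest frame with the barycentric motion), the following sketch flags masked pixels for one epoch and shows where they fall after the barycentric shift. The wavelength windows, values, and function are illustrative assumptions, not serval's actual implementation.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def telluric_mask(wave_obs, regions):
    """Boolean mask that is True where an observed-frame pixel lies inside
    one of the telluric windows (windows are static in the detector frame)."""
    mask = np.zeros_like(wave_obs, dtype=bool)
    for lo, hi in regions:
        mask |= (wave_obs >= lo) & (wave_obs <= hi)
    return mask

# Example: two made-up telluric windows near the O2 A-band, BERV of +15 km/s
wave = np.linspace(686.0, 690.0, 4000)        # nm, observer frame
regions = [(686.7, 687.2), (688.3, 688.9)]    # nm, illustrative only
berv_kms = 15.0

mask = telluric_mask(wave, regions)
wave_star = wave * (1.0 + berv_kms / C_KMS)   # non-relativistic shift to stellar frame
print(f"{mask.sum()} of {mask.size} pixels masked; in the stellar rest frame "
      f"they span {wave_star[mask].min():.2f}-{wave_star[mask].max():.2f} nm")
```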
Nightly zero points Although the CARMENES spectrograph is usually wavelength calibrated each afternoon and nightly instrumental drifts are measured with the F-P étalon, stellar RVs from the same night often share common systematic effects, which produce NZP offsets generally of a few m s −1 with a median error bar of 0.9 m s −1 (see Fig. 2). We employ RV-constant stars (rms < 10 m s −1 ) to calculate NZPs, with the exact procedure being described in more detail by Trifonov et al. (2018). The resulting values are subsequently subtracted from each of the serval RV measurements. To avoid self-biasing the measurements, the zero point of RV-constant stars is calculated by removing the target itself from the calibration pool (Tal-Or et al. 2019). Tests revealed that NZPcorrected RVs improve the statistical significance of the best-fit models of CARMENES exoplanet discoveries, thus illustrating the benefits of the correction procedure. The same algorithm was applied by Tal-Or et al. (2019) to archival HIRES (High Resolution Echelle Spectrometer) Keck RVs and by Trifonov et al. (2020b) to reprocessed RVs from HARPS (High Accuracy Radial velocity Planet Searcher) spectra. In both cases the studies revealed and corrected systematic effects in those instruments. Table 2 provides NZP values for all the CARMENES GTO nights. Reasons explaining the nightly offsets can be various, including a drift of the F-P, degraded quality of aged HCLs, strong instrument drifts during the ∼15 min calibration sequence (F-P and HCL calibration frames cannot be taken simultaneously), and different injection of calibration light coupled with insufficient scrambling. We were able to reduce some fraction of the night-to-night variability found during the initial CARMENES operations through hardware configuration changes and by employing a different strategy when acquiring the daily calibration sequences. As a result, the NZP scatter diminishes slightly after two years of operation (Fig. 2). In addition to the night-to-night offsets, we also performed a correction for intra-night drift. The correction was found to be significant early in the survey and related to a temperature effect of the F-P subsystem. A hardware upgrade on 6 September 2017 (BJD 2458003) greatly decreased the temperature coupling and eliminated the need for such a correction. In any case, the effects of self bias and intra-night drift correction are small. Further details on the instrument performance are provided in Bauer et al. (2020). Results The CARMENES DR1 provides raw spectroscopic data for the total sample of 362 targets but only full data products (including RVs, spectroscopic indices, and CCF parameters) for 345 targets, that is, excluding 17 SB2 and ST3 systems. Precise RVs of the components of 12 of these spectroscopic multiple systems and full orbital and physical analyses were presented by Baroch et al. (2018Baroch et al. ( , 2021. The remaining five binary systems, namely J05084−210, J06396−210, J09133+668, J16343+571 (CM Dra), and J23113+085, do not have a publication using CARMENES data yet. The procedure described in Sect. 4 was applied to the 18 642 suitable spectra and these produced the same number of RV determinations and associated data products. However, a fraction of those measurements lack a velocity drift calculation because of the poor quality of the simultaneous F-P spectrum. As a consequence, the number of drift-corrected RV measurements is 17 749, and these correspond to 344 targets. 
Only the faint target J16102−193 (K2-33) is not in the final sample because all spectra were taken without simultaneous F-P. All data products associated with the CARMENES DR1 and ancillary files are available online (https://carmenes.cab.inta-csic.es).

Notes to Table 2. Listed are the Julian date (valid from UT 12:00 to UT 12:00 the next day), the velocity of the nightly zero point (NZP), its uncertainty estimate σNZP, the number of RV-quiet star RVs used to calculate the NZP, and a quality flag (where 0 indicates no issue with the calculation and 1 means that the NZP could not be calculated, in which case the NZP is replaced by a moving NZP average from adjacent nights). This is a sample list; the full table can be downloaded from the CDS via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/.

In Fig. 3 we illustrate the distribution of the formal uncertainties of the RV measurements (internal precision). Targets are grouped into four spectral-type bins using the same criteria as in Fig. 1. Brighter targets have typical uncertainties of ∼1 m s−1, as their S/N at 1200 nm reached 150, but fainter targets have larger uncertainties due to the larger photon noise. The median value of the internal precision is 1.27 m s−1, with the maximum of the distribution (mode) at 0.91 m s−1. The distribution of observations and their dispersion are illustrated in Fig. 4, also grouped in spectral-type bins. The scatter is computed as a weighted rms around the weighted mean RV, where the epoch weights w_n include a jitter term σ_j, which is added in quadrature to the formal RV uncertainties σ_RV,n: w_n = 1/(σ_RV,n² + σ_j²). The σ_j and the re-weighted mean RVs were obtained self-consistently for each star via a maximum likelihood optimisation (see the function mlrms in https://github.com/mzechmeister/python/blob/master/wstat.py). For the calculation we used the RVs as measured, with NZP correction, and no known signals of any nature (activity, planets) were subtracted. The median and mode of the distributions are 3.9 m s−1 and 3.3 m s−1, respectively. These values can be compared to those characterising the internal precision in Fig. 3 to conclude that the RVs are most likely dominated by jitter (and signal) from astrophysical sources, which is statistically estimated to have a median contribution of ∼3.5 m s−1. No obvious rms trends as a function of spectral type are observed, except for a much higher rms (27.5 m s−1) for the latest bin due to a large fraction of low S/N measurements.

High-resolution spectroscopic time series data

We compiled data tables of the time series of the RVs as described in Sect. 4, as well as additional ancillary parameters, such as stellar activity indices, for each of the 345 M dwarfs (excluding SB2 and ST3) in the CARMENES sample. The dataset includes RV, CRX, dLW, and chromospheric line indices (Hα, Ca i, Ca ii IRT, and Na i D) from serval, and the CCF RV, BIS, FWHM, and CON obtained with raccoon. Data from each spectral order are provided separately. For the RVs, both the values produced by serval (RVC) and those obtained after applying NZP corrections (AVC) are included. Furthermore, the exposure time and airmass of the observations, and the instrumental drift, the barycentric Earth RV, and the secular acceleration corrections applied to calculate RVCs are also provided. Graphical representations of the time series of serval RVs corrected for NZPs as described in Sect. 4.4 have been produced for all targets with at least five valid NZP-corrected RV values and are available online. An example is provided in Fig. 5 for the target J00051+457 (GJ 2).
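Returning to the jitter-inflated weighted rms defined above, a minimal numerical sketch of such a self-consistent estimate is shown here. It is not the mlrms implementation itself, and the data are invented; it only illustrates the idea of choosing σ_j by maximum likelihood and re-weighting with w_n = 1/(σ_RV,n² + σ_j²).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weighted_rms_with_jitter(rv, erv):
    """Sketch of a self-consistent jitter estimate: for a trial jitter s,
    weights are w_n = 1/(erv_n**2 + s**2); s is chosen by maximising a
    Gaussian likelihood around the weighted mean RV."""
    def neg_log_like(s):
        var = erv**2 + s**2
        w = 1.0 / var
        mu = np.sum(w * rv) / np.sum(w)          # re-weighted mean RV
        return 0.5 * np.sum((rv - mu)**2 / var + np.log(2 * np.pi * var))
    res = minimize_scalar(neg_log_like, bounds=(0.0, 100.0), method="bounded")
    s = res.x
    w = 1.0 / (erv**2 + s**2)
    mu = np.sum(w * rv) / np.sum(w)
    wrms = np.sqrt(np.sum(w * (rv - mu)**2) / np.sum(w))
    return mu, s, wrms

rng = np.random.default_rng(1)
erv = rng.uniform(1.0, 2.0, 40)                        # formal errors, m/s
rv = rng.normal(0.0, 3.5, 40) + rng.normal(0.0, erv)   # invented RVs with jitter
mu, s, wrms = weighted_rms_with_jitter(rv, erv)
print(f"mean = {mu:.2f} m/s, jitter = {s:.2f} m/s, weighted rms = {wrms:.2f} m/s")
```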
Periodogram analyses of the RVs and several relevant activity indices are also presented. Before computing the periodogram, we applied a clipping criterion to the measured values of the RVs and indices to avoid obvious outliers and poor-quality measurements. All data points deviating by more than 3σ from the mean were eliminated, and so were measurements with error bars greater than the average value plus 3σ. Nightly averages were computed for the targets J00183+440 (GX And), J00184+440 (GQ And), and J07274+052 (Luyten's Star), as they had observations at higher cadence. We subsequently computed the generalised Lomb-Scargle (GLS) periodogram of the RVs, the CRX, dLW, Hα, and Ca ii IRT indices, and the CCF parameters BIS and FWHM. We grouped the indices and CCF parameters in pairs according to their expected sensitivity to the same activity phenomena, according to the analysis of Lafarga et al. (2021). Therefore, three panels with equivalent activity indicators are provided in Fig. 5: CRX & BIS, dLW & FWHM, and Hα & Ca ii IRT. We considered periods ranging from twice the time span of the observations (to identify long-term variations) to the Nyquist frequency as computed from the closest RV measurement pairs of each dataset. However, an upper frequency limit of 0.95 day−1 was set to avoid daily aliases, except for targets with known short-period periodicities (close-in transiting planets and very fast-rotating stars). We calculated the false alarm probability (FAP) by running 10⁵ bootstrap realisations of the datasets. From the bootstrapped data, we also computed the probability of each periodogram peak by assessing the number of times that the real periodogram at a given frequency is above all the realisations. This probability is related to the GLS power and is used in the graphical representation.

Exoplanets in the CARMENES GTO sample

The CARMENES sample was designed to preserve completeness as much as possible. Therefore, the initial target selection did not explicitly exclude known planet hosts. The inaugural CARMENES survey paper by Trifonov et al. (2018) analysed the CARMENES data for a sample of seven targets known to host 12 planets. In this study, a hitherto unknown second, long-period planet orbiting J11417+427 (GJ 1148) was reported, qualifying as the first exoplanet discovered by CARMENES. Shortly after, Reiners et al. (2018a) published the first exoplanet detected from data collected solely from the CARMENES survey. Since then, a succession of announcements has been made using data from the CARMENES blind survey, totalling 33 newly discovered planets in 28 planetary systems at the time of writing this paper. The new CARMENES planets are marked with a 'd' in column 'Type' of Table A.1. In some cases, CARMENES data were combined with precise RVs from other instruments (such as HARPS, HARPS-N, ESPRESSO, HIRES, IRD, MAROON-X, etc.) to enhance the statistical significance of the measurements. Moreover, over the course of the survey, five already announced exoplanets were re-analysed using CARMENES data. Together with the 12 known planets in Trifonov et al. (2018), this makes up a total of 17 planets that are marked with an 'r' in Table A.1. As explained in Sect. 1, the CARMENES Consortium decided to invest a fraction of the GTO time in following up on transiting planet candidates. Some of the targets came from the K2 mission (Howell et al. 2014), but most of them were provided by the ongoing TESS mission.
The campaign has been fruitful, and CARMENES has led or contributed to the confirmation of 26 such planet candidates and helped measure their masses. The CARMENES planets resulting from follow-up activities are marked with an 'f' in Table A.1. The TESS planet candidate around J11044+304 (TOI-1806) has been followed up and validated with CARMENES; however, its parameters are not listed in the table because they are not yet sufficiently significant. The columns in Table A.1 provide basic information on the targets and their planets. The parameters are taken from each of the quoted references. NCAR and Nother are the numbers of RVs from CARMENES and other instruments, respectively, that were used in the corresponding publication. NCAR may differ from the number of measurements in the DR1 release. Cases where NCAR is greater than the number in DR1 correspond to recent publications that include observations taken after 31 December 2020 as part of the new CARMENES Legacy+ survey, while cases where NCAR is below the number of measurements in DR1 are those where additional measurements within the CARMENES GTO were taken after the quoted publications. For four such planets we present new parameters in Table A. In addition to the publications using CARMENES RVs, the DR1 includes measurements of targets for which there have been exoplanet detections or claims in the literature but that do not have a specific publication using CARMENES data at the time of writing. These are listed in Table 3, along with the number of released CARMENES epochs. (Notes to Table 3: NCAR is the number of measurements in CARMENES DR1.) Besides planet detections and confirmations, there are some targets in our sample for which planets have been announced and are listed in exoplanet catalogues but could not be confirmed or are controversial given the data obtained with CARMENES or other instruments. The list of such planets is provided in Table 4. We are not including a planet around J00183+440 (GX And b) because the CARMENES observations now seem to support a planetary scenario for the 11.44-day signal (Trifonov et al., in prep.), in contrast to the initial CARMENES data, which cast doubt on its nature. Some of the challenged planets were already discussed in dedicated publications, as listed in Table 4. In two other cases, we carried out the analysis as part of the present work. In particular, the candidates announced around J02222+478 (GJ 96) and J09561+627 (GJ 373) can be quite confidently ruled out as planets. For GJ 96, Hobson et al. (2018) announced a planet candidate based on 72 SOPHIE RVs. The periodograms in Fig. 6 show that the 75-day signal present in the SOPHIE data is absent in the 53 RVs from CARMENES. Instead, the dominant signal is at 28.5 d, which Hobson et al. (2018) already attributed to stellar activity. Indeed, this period is also present in the dLW, Ca ii IRT, and Hα time series of the CARMENES data, thus ruling out its planetary nature. Phase-folded plots are shown in Fig. 7. The signal in GJ 373 at about 17.8 d, announced as a planet by Tuomi et al. (2019) and Feng et al. (2020), can most definitely be attributed to stellar rotation modulation since it appears strongly in CARMENES activity indicators such as dLW and Hα and in some CCF parameters, as can be seen in Fig. 8.
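The periodogram-based vetting applied above to GJ 96 and GJ 373 can be sketched generically as follows. This uses the astropy Lomb-Scargle implementation and synthetic data rather than the actual GLS code and CARMENES measurements, so it only illustrates the type of check (a peak plus its bootstrap false alarm probability), not the published analysis.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 1200.0, 80))            # epochs in days (synthetic)
rv = 3.0 * np.sin(2 * np.pi * t / 28.5) + rng.normal(0.0, 1.5, t.size)
erv = np.full(t.size, 1.5)                            # formal errors, m/s

ls = LombScargle(t, rv, erv)
freq, power = ls.autopower(maximum_frequency=0.95)    # avoid daily aliases
best_period = 1.0 / freq[np.argmax(power)]

# Bootstrap false alarm probability of the highest peak
fap = ls.false_alarm_probability(power.max(), method="bootstrap",
                                 maximum_frequency=0.95,
                                 method_kwds={"n_bootstraps": 1000})
print(f"best period = {best_period:.1f} d, bootstrap FAP = {fap:.3g}")
# A peak of similar significance in activity indicators (dLW, Halpha, Ca II IRT)
# would point to rotation rather than a planetary origin.
```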
The planet samples in the diagrams comprise those with CARMENES analyses (showing 'd', 'r', and 'f' separately in Table A.1) and those coming from the NASA Exoplanet Archive. The latter correspond only to RV-detected planets (i.e. planets discovered through photometric transits are excluded). Also, histogram distributions of each of these quantities for the planets analysed with CARMENES data are depicted as side plots. A few features in Fig. 9 are worth discussing. Regarding stellar mass, a majority of CARMENES planets have host stars with masses between 0.25 M⊙ and 0.65 M⊙, which constitute the bulk of the sample. Remarkably, half of the 24 RV planets with stellar hosts below 0.25 M⊙ known to date have been discovered by CARMENES, a testament to the advantage offered by a red-optimised RV spectrometer in the late-type host regime. In terms of planetary mass, the majority of the CARMENES planets are in the super-Earth to Neptune-mass domain, although several Earth-mass planets have been detected orbiting some of the lower-mass targets in our sample. Remarkable cases are two systems, one of them J02530+168 (Teegarden's Star; Zechmeister et al. 2019), each with two Earth-mass planets within the liquid-water habitable zones of their stars. Also, CARMENES has discovered six Saturn- and Jupiter-mass planets, some of them around very low-mass primaries, thus defying canonical planet formation models, which predict very low occurrence rates of giant planets around M-type dwarfs (e.g. Schlecker et al. 2022). As expected for detectability reasons, most of the CARMENES planets have orbital periods from a few days to a few tens of days. Although not discussed here, the CARMENES GTO survey also announced (Baroch et al. 2021) two brown dwarf candidates on very long-period orbits (P ≳ 3000 days), around J10504+331 (GJ 3626) and J23556−061 (GJ 912).

Planet occurrence rates

Using the CARMENES DR1 sample we calculated planet occurrence rates in a similar way as was done by Sabotta et al. (2021, hereafter Sab21). In that work, preliminary occurrence statistics were calculated using a subsample of 71 targets having at least 50 CARMENES RV measurements. The re-analysis in the present work applies similar target selection criteria. From the initial 362 targets in the CARMENES GTO sample we excluded 124 targets for several possible reasons: (i) they were added later for transit follow-up (mostly TESS candidates; 20 targets); (ii) they are spectroscopic binaries and triples (23 targets); (iii) they are part of the RV-loud sample as defined by Tal-Or et al. (2018) (52 targets); or (iv) we obtained fewer than ten RV measurements (29 targets). The sample therefore comprises a total of 238 targets, including 69 of the 71 targets in Sab21. For the two targets not included in the CARMENES DR1, one was excluded after being classified as a late-K dwarf (J18198−019, HD 168442), and the other one was subsequently classified as a resolved binary (J23113+085, NLTT 56083). For the planetary sample, we re-ran the signal retrieval and vetting algorithm from Sab21 (see the results in Table A.2). The only change that we made was the period limit used for the long-period planets. Sab21 included every signal if the time baseline was longer than two orbital periods, while here we include every signal with time coverage of at least 1.5 times the orbital period.
If we considered the more conservative period limit of Sab21, we would exclude several giant planets from the planet sample and that would therefore reduce the statistical soundness of our analysis. As a result, we regard the new criterion as a better balance between being too conservative but still making sure that the signal is indeed periodic. Using this criterion, we identified 37 planets that can be confirmed with CARMENES data alone and three additional planet candidates (around J05033−173, J17033+514, and J18409−133). We also include 13 planets with fewer than 50 RVs that were detected using data from other surveys (mainly HADES, HArps-n red Dwarf Exoplanet Survey, Pinamonti et al. 2022, andHARPS, Bonfils et al. 2013) if they induce an RV semi-amplitude of K > 2 m s −1 . We assume that we would have detected such planets if we had not terminated the observations because of our independent knowledge. We obtained those targets from a comparison with the two exoplanet databases on The Extrasolar Planets Encyclopaedia 11 (Schneider et al. 2011) and the NASA Exoplanet Archive 10 . In Table A.2, we mark planets that are listed in one of the databases and are well below our detection limits, planets with fewer than 50 RVs that are included in our planet sample, and archive planets that are not supported by CARMENES data. In this way we increase the planetary sample in Sab21 by 26 planets (from 27 to 53) for the recalculation of the occurrence rates. The total number of 53 planets reside in 43 planetary systems. We calculated individual planet detection maps for all targets following the procedure described by Sab21. The numerical and graphical results are available online 7 . The global detection probabilities across the period-mass plane and the planets mentioned above are shown in Fig. 10, with the colour scale indicating the average of all detection probabilities for the individual grid points. There are five planets in a low-probability region, which means that we can only detect such planets for a small fraction of our sample. Unsurprisingly, these are Teegarden's Star b and c, YZ Cet c and d, and Wolf 1069 b, all of which are Earth-mass planets with very low-mass stellar hosts. Using the same method as in Sab21, we obtained the powerlaw distribution in M pl sin i for the occurrence rate estimate (Fig. 6 in Sab21). The updated power-law with N pl, corrected = a (M pl sin i) α is only slightly shallower, with the slope changing from α = −1.14 ± 0.16 to α = −1.05 ± 0.01 for planets with masses below 30 M ⊕ , and from α = −0.26 ± 0.17 to α = −0.14 ± 0.25 for higher-mass planets (see Fig. 11). For our occurrence rate determination, we used this power-law as an initial assumption on the M pl sin i distribution, instead of assuming a log-uniform distribution 12 . The results are summarised in Table 5. We report both the number of planets per star (n pl ) and the frequency of stars with planets (F h ). To obtain the latter, we repeated the analysis but instead of including all planets, we reduced the planet sample and took only the single planet with the highest K amplitude in the system. We then inspected the complete period-mass plane with periods of 1 d to 1 000 d and M pl sin i of 1 M ⊕ to 1 000 M ⊕ . In this parameter range, we determined an overall occurrence rate of n pl = 1.44 ± 0.20 planets per star and F h = 94 +4 −9 % stars with planets. This means that the planet multiplicity in our sample is around 1.5 planets per system. 
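For orientation, the simplest way such detection maps turn a planet list into a planets-per-star number is the inverse-detection-efficiency estimate sketched below. The actual analysis follows the more elaborate procedure of Sab21 (a power-law assumption on Mpl sin i over a period-mass grid); the per-planet detection probabilities used here are invented placeholders.

```python
import numpy as np

def occurrence_rate(det_prob, n_stars):
    """Inverse-detection-efficiency sketch: each detected planet counts as
    1/p_i 'true' planets, where p_i is the sample-averaged probability of
    detecting a planet with its (P_orb, M_pl sin i)."""
    w = 1.0 / np.asarray(det_prob)
    n_pl = w.sum() / n_stars
    err = np.sqrt(np.sum(w**2)) / n_stars   # crude Poisson-style uncertainty
    return n_pl, err

# 53 detected planets in a sample of 238 stars (numbers from the text);
# the per-planet detection probabilities below are invented.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.9, 53)
n_pl, err = occurrence_rate(p, n_stars=238)
print(f"n_pl ~ {n_pl:.2f} +/- {err:.2f} planets per star (illustrative only)")
```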
The analysis of Sab21 yielded occurrence rates that are larger by a factor of two for planets with 10 M ⊕ < M pl sin i < 100 M ⊕ , and by 30 % for the low-mass planets with 1 M ⊕ < M pl sin i < 10 M ⊕ , with respect to the results obtained here for the full sample. The lower occurrence rates cannot be due to the looser requirement on orbital period coverage (1.5 instead of two orbital periods), since, if anything, this would result in larger occurrence rates. The smaller occurrence rates obtained for the full CARMENES sample thus illustrate how effective, in terms of planet discovery, the pre-selection of targets for more intensive observation was. Human intervention bias in this case leads to an over-estimation of occurrence rates. The survey sensitivity is higher for stars with planets because targets showing interesting signals that could be of planetary nature were observed more intensively. Sab21 pointed out this effect, and explicitly introduced the bias by rejecting all targets with fewer than 50 RVs, but it affects all targeted surveys that change the observing strategy based on acquired knowledge. In fact, by aiming at a specific number of observations for all of our targets, we minimised this effect. In the CARMENES DR1, we reach this number of 50 RVs for 42 % of our targets, which corresponds to 112 stars. We are continuing the survey as part of the CARMENES Legacy+ programme. Even if the planet detection efficiency may not be as high as in the early stages of the GTO, the statistical value of the sample will greatly increase. We compare our low-mass planet occurrence rates around M dwarfs to those of other surveys in Fig. 12. Our updated occurrence rates are consistent with the values obtained from the HARPS (Bonfils et al. 2013) and HADES (Pinamonti et al. 2022) surveys, but our results are based on a significantly larger statistical sample. The agreement is good despite the fact that both estimates by Bonfils et al. (2013) and Pinamonti et al. (2022) assumed a log-uniform distribution in planet mass, as opposed to our power-law relationship. If we also used a log-uniform distribution for our occurrence rate calculation, we would have obtained a lower occurrence rate of 0.58 +0.11 −0.09 low-mass planets per star in orbits of up to 100 d (indicated as the grey square in Fig. 12). In this parameter range, we obtained instead 1.06 ± 0.17 planets per star assuming a power-law distribution in M pl sin i. The difference occurs only in those regions of the period-mass plane with a strong sensitivity gradient, that is, below 10 M ⊕ . For higher planet masses, the choice of distribution does not affect our results significantly. The comparison to transit surveys is not as straightforward due to the lack of an exact correspondence between the observed parameters. The expected value of sin i assuming randomly oriented orbits is ∼0.8 (e.g. Hatzes 2019) and, therefore, our M pl sin i bin of 1-10 M ⊕ on average corresponds to a bin of 1.25-12.5 M ⊕ in true M pl . In this mass regime, planets could be rocky, water worlds, or puffy sub-Neptunes with very different densities (Luque & Pallé 2022). According to the mass-radius relation of Kanodia et al. (2019), this mass interval corresponds on average to the R pl interval of 1.3-3.7 R ⊕ . In log-space this is only 75 % of the radius interval of 1-4 R ⊕ that Sab21 used for comparison with transiting planet statistics. Thus, in Fig. 12, we plot lower occurrence rates for the transit surveys (75 % of those in Sab21). 
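The bin conversion quoted above is straightforward to reproduce. The short check below only re-derives the numbers already given in the text (the 1.3-3.7 R ⊕ endpoints come from the quoted Kanodia et al. 2019 relation and are taken from the text rather than recomputed); it makes no additional assumptions.

    import numpy as np

    mean_sini = 0.8                               # expectation value for randomly oriented orbits
    msini_bin = np.array([1.0, 10.0])             # M_Earth, the M sin i bin used for RV statistics
    true_mass_bin = msini_bin / mean_sini         # -> [1.25, 12.5] M_Earth on average

    radius_bin = np.array([1.3, 3.7])             # R_Earth, from the mass-radius relation quoted in the text
    transit_bin = np.array([1.0, 4.0])            # R_Earth, interval used by Sab21 for transit comparisons

    frac = np.log(radius_bin[1] / radius_bin[0]) / np.log(transit_bin[1] / transit_bin[0])
    print(true_mass_bin, round(frac, 2))          # [ 1.25 12.5 ] 0.75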
In any case, all occurrence rate estimates agree within a factor of two despite all the involved assumptions and the fact that we infer the occurrence rates from an overall detection sensitivity of 15 % (considering the full period-mass plane). The discussion above is relevant if one wants to find an absolute number of planets per star or to compare with transiting planet surveys or surveys targeting other stellar masses. Fig. 9 (caption). Scatter plots of the CARMENES DR1 exoplanet sample compared to the complete sample of catalogued planets in the NASA Exoplanet Archive detected via RVs (903; small dots). Different symbols indicate planets newly detected from the CARMENES blind survey (33; stars), planets confirmed from transit follow-up (26; circles), and known planets re-analysed with CARMENES data (17; triangles). The three panels correspond to pairs of different relevant parameters, with the complementary colour scale introducing a third dimension. The histograms along the axes show distributions of the corresponding parameters for the CARMENES planet sample. The blue shaded band in the top-right panel represents the liquid-water habitable zone with limits defined by the 'runaway greenhouse' and 'maximum greenhouse' criteria (Kopparapu et al. 2013). Moreover, these calculations also serve as a valuable benchmark for planet formation theories that aim to reproduce population-level trends of exoplanets (e.g. Ida & Lin 2004; Bitsch et al. 2015; Miguel et al. 2020; Izidoro et al. 2021; Schlecker et al. 2021a,b; Mishra et al. 2021). Using the results of Sab21 as input, Schlecker et al. (2022) compared the observed planet population with synthetic planet populations (Mordasini et al. 2012; Emsenhuber et al. 2021) and found three main discrepancies. The first one is the observational finding of an excess of giant planets around lower-mass stars compared to the theoretical prediction: the simulations do not produce any giant planets around host stars with masses below 0.5 M⊙. As was done by Sab21, we split the full CARMENES sample at a stellar mass of 0.337 M⊙ and calculated giant planet occurrence rates. The median stellar masses of the two subsamples are 0.24 M⊙ and 0.45 M⊙. Using a strict limit for the giant planet mass of M pl > 100 M ⊕ , we obtained rates of 0.021 +0.018 −0.011 planets per star and 0.045 +0.021 −0.016 planets per star for the low-mass and the high-mass stellar subsamples, respectively. The resulting occurrence rate ratio, f high−mass / f low−mass = 2.14, is marginally consistent with the giant planet frequency as a function of stellar mass published by Ghezzi et al. (2018), assuming similar stellar metallicities, [Fe/H], in both samples. The second discrepancy between model and observation concerns the shape of the planet mass distribution. The distribution of M pl sin i in the synthetic population is bimodal, whereas its counterpart in the observed sample is consistent with a power law. In fact, our planet mass distribution does not deviate significantly from that in Sab21 (see Fig. 6 therein). A third mismatch between the observed and model-predicted planet demographics, as identified by Schlecker et al. (2022), is the orbital period distribution around stars with masses higher than 0.4 M⊙: short-period planets (P orb < 10 d) are found to be significantly scarcer in the observed population compared to the synthetic one. The drop in occurrence rates at periods of less than 10 d, which was previously observed for stars with different stellar masses, does not hold for targets with masses below 0.4 M⊙, for which the observed and synthetic period distributions show a good match. 
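The split-sample comparison above boils down to a simple ratio of the two quoted giant-planet rates. The check below only reproduces that division, with the central values copied from the text and the asymmetric uncertainties ignored.

    low_mass_rate = 0.021    # giant planets per star, hosts below 0.337 M_sun
    high_mass_rate = 0.045   # giant planets per star, hosts above 0.337 M_sun
    print(round(high_mass_rate / low_mass_rate, 2))   # 2.14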
One possible explanation is a migration barrier that is more efficient in protoplanetary disks around early M dwarfs and that is not adequately accounted for by current models. For targets with M⋆ < 0.337 M⊙ we calculate 0.56 +0.15 −0.14 and 0.63 +0.23 −0.18 low-mass planets per star for the intervals 1-10 d and 10-100 d, respectively. Fig. 12 (caption fragment). ... (Gaidos et al. 2016; Hsu et al. 2020), and rates from the HARPS, HADES, and CARMENES RV surveys are represented as squares (Bonfils et al. 2013; Sabotta et al. 2021; Pinamonti et al. 2022, and this work). The grey square shows the occurrence rate from this work with the assumption of a log-uniform distribution in M pl sin i. Conclusions The CARMENES GTO survey ran from 1 January 2016 to 31 December 2020 and obtained 19 633 spectroscopic measurements of a sample of 362 targets. The sample was designed to be as complete as possible by including M dwarfs observable from the Calar Alto Observatory with no selection criteria other than brightness limits and visual binarity restrictions. To best exploit the capabilities of the instrument, variable brightness cuts were applied as a function of spectral type to increase the presence of late-type targets. This effectively leads to a sample that does not deviate significantly from a volume-limited one for each spectral type. The global completeness of the sample is 15 % of all known M dwarfs out to a distance of 20 pc and 48 % at 10 pc. The present paper accompanies the release of a large dataset, the CARMENES DR1. Raw data, pipeline-processed data, and high-level data products are provided, including 18 642 precise RVs for 345 targets (excluding double- and triple-line systems). After correction of an NZP (nightly zero-point) offset, the median internal precision for early and intermediate M-dwarf types is ∼1.2 m s −1 . This value increases to ∼5.4 m s −1 for late M spectral types due to their intrinsic faintness. The median rms of the RV time series of all the targets in the sample is ∼3.9 m s −1 , with no signals subtracted. A comparison between the internal and external precisions indicates that the RV variance has a contribution of ∼3.5 m s −1 on top of the instrument error when treated as uncorrelated random noise. This external noise component is unlikely to be of instrumental origin. It is instead believed to arise from astrophysical effects, including Keplerian signals from planets but, most importantly, RV variability arising from stellar activity (e.g. active region rotation and evolution). The CARMENES time series data have been analysed in the search for RV signals of a planetary nature. So far we have identified 33 new planets from the blind survey observations, which are complemented by 17 planets that we have re-analysed with CARMENES data and 26 planets from transit search space missions that we have confirmed and measured. The number of blind survey planets is in good agreement with the initial estimates considering the properties of the stellar sample, the survey design, and the assumed planet occurrence rates (Garcia-Piquer et al. 2017). The new planets cover a broad region of the parameter space in terms of stellar host mass, planetary mass, and orbital period. A remarkable result is that CARMENES has discovered half of the RV planets known to orbit stars of masses below 0.25 M⊙. 
This fact illustrates the prime 'hunting ground' of CARMENES thanks to the competitive advantage of the optimised red-sensitive design and the possibility of undertaking a massive survey with a large fraction of dedicated 4-m-class telescope time over five years. With the CARMENES DR1 data, we have calculated new planet occurrence rates around M dwarfs to update the results already presented by Sab21. We have employed a subsample of 238 stars that fulfil a set of specific requirements. We still find a high long-period giant planet occurrence rate of around 3 %, a high number of low-mass planets (1.06 planets per star in periods of 1 d to 100 d), and an overabundance of short-period planets around the lowest-mass stars of our sample compared to stars with higher masses. For our complete period-mass parameter space, we determine an overall occurrence rate of n pl = 1.44 ± 0.20 planets per star and a fraction of F h = 94 +4 −9 % stars with planets. We calculate the overall CARMENES survey sensitivity to be 15 % and find planets around 43 of 238 targets (i.e. 18 % of the stars), which again shows that nearly every M dwarf hosts at least one planet. In the present description of the CARMENES GTO data, we have focused on their use for precise RV work in the field of exoplanet detection and characterisation. Nevertheless, we have shown in a number of publications that these data are also of high value to a variety of science cases within stellar astrophysics, such as studying atmospheric parameters (T eff , log g, and chemical abundances; Passegger et al. 2018, 2019, 2020, 2022; Fuhrmeister et al. 2019a; Marfil et al. 2020, 2021; Abia et al. 2020; Shan et al. 2021), determining fundamental properties (M, R, and magnetic field; Schweitzer et al. 2019; Shulyak et al. 2019; Reiners et al. 2022), and analysing magnetic activity (Tal-Or et al. 2018; Fuhrmeister et al. 2018, 2019b, 2020, 2022; Schöfer et al. 2019; Hintz et al. 2019, 2020; Baroch et al. 2020; Lafarga et al. 2021; Jeffers et al. 2022). CARMENES VIS channel data have also proved useful in addressing the study of exoplanet atmospheres via transit transmission spectroscopy (Yan et al. 2019, 2021; Casasayas-Barris et al. 2020; Sánchez-López et al. 2020; Khalafinejad et al. 2021; Czesla et al. 2022) and the Rossiter-McLaughlin effect (Oshagh et al. 2020; Sedaghati et al. 2022). The CARMENES GTO survey is now complete. In terms of exoplanet RV detection, the survey has provided about 60 planet discoveries and confirmations, some of which are of very high scientific relevance and which, as a sample, are of great statistical value, thus contributing to a complete census of the planetary population in the solar neighbourhood. The initial goals of the survey have therefore been fulfilled. The CARMENES sample continues to be observed within the CARMENES Legacy+ programme. The ultimate goal is to reach 50 measurements for all suitable targets (i.e. excluding multiples, RV-loud stars, etc.). The CARMENES Legacy+ extension of the survey is expected to run at least until the end of 2023 and, eventually, to lead to a second release of CARMENES survey data with 50 measurements or more for about 300 nearby M dwarfs. Through the present release and future additions, the CARMENES data will continue to yield new exoplanet discoveries and enable abundant studies in other domains within stellar astrophysics and exoplanetary science. 
N other is the number of measurements from other spectrographs used in the analysis. The stellar mass column, M⋆, lists the value used by the quoted publication and is consistent with the planet's minimum mass, M pl sin i. Small differences may exist with respect to the values tabulated in Table 1.
13,591.2
2022-12-20T00:00:00.000
[ "Physics" ]
A Markov Switching Approach in Assessing Oil Price and Stock Market Nexus in the Last Decade: The Impact of the COVID-19 Pandemic We revisit the oil price and stock market nexus by considering the impact of major economic shocks in the post-global financial crisis (GFC) scenario. Our breakpoint unit root test and Markov switching regression (MRS) analyses using the West Texas Intermediate (WTI) oil price and the Standard & Poor's 500 (S&P 500) market index show that among the major economic events, the recent coronavirus (COVID-19) pandemic is the most significant contributor to market volatilities. Furthermore, our MRS results show that the relationship between oil price and the stock market is regime-dependent; the stock market experiences substantial and positive shocks in a volatile oil price regime. Our results provide valuable insights to investors and policymakers regarding risk management and financial market stability during economic crisis periods, specifically during the COVID-19 pandemic. Introduction Stock markets are often linked to economic performance, and oil price is one of the commonly accepted economic phenomena that affect stock returns (Tchatoka et al., 2019). The relationship between oil prices and the stock market has long been a subject of interest, especially today, given the dynamic nature of the oil price and stock market nexus. Interest in stock market movements reemerged after the financial crisis of 2008 to 2009. Subsequent events, such as the ongoing European debt crisis, the oil price collapse in 2014, the United Kingdom's Brexit referendum in 2016, and the United States (US) decision to leave the Paris Climate Agreement, are among the major global events of the last decade (Antonakakis et al., 2013; Bachmann et al., 2013; Baum et al., 2010; Bloom, 2009; Caggiano et al., 2017). More recently, the oil and stock markets have been crisis-ridden, due mainly to the breakdown in negotiations between the Organization of the Petroleum Exporting Countries (OPEC) and non-OPEC members led by Russia and the outbreak of the coronavirus (COVID-19) pandemic. Although the COVID-19 risk is somewhat transmitted to economic activities (Hasan, Mahi, Sarker, & Amin, 2021), the oil market appears to have been the primary receiver of volatility spillovers, along with the financial markets, due to the dramatic collapse of oil prices during the pandemic (Arafaoui & Yousaf, 2022; Yousaf, 2021). Also, Goldman Sachs highlighted the rapid drop in global storage availability and disruptions in physical distribution networks in the current scenario, placing continued pressure on prices (Masson & Winter, 2020). The rapid spread of the pandemic adversely impacted the global financial markets and created an unprecedented level of risk, irrespective of the type of stock market (Hasan, Mahi, Hassan, & Bhuiyan, 2021), causing investors to suffer significant losses in a brief period. The EU and US equity markets dropped by as much as 30% (Gormsen & Koijen, 2020). Market volatility has shown a significant positive link with new COVID-19 infection announcements (Albulescu, 2021). Notably, the announcements of government social distancing measures had a direct negative effect on stock market returns due to their adverse impact on economic activities (Nadeem, 2020). Baker et al. (2020) quantified the impact of COVID-19-related news on the stock market and concluded that it has had a much greater impact on the market than other similar disease outbreaks, such as Ebola. 
The economic and financial implications of the COVID-19 pandemic are so substantial that some researchers compare it to the global financial crisis of 2008 (Ozili & Arun, 2020; Sharif et al., 2020). The pandemic also weakened the transmission of monetary policy to financial markets to a more significant degree (Wei & Han, 2021), making policy intervention ineffective for a substantial period. The interconnectivity of the global economy and financial markets renders both susceptible to crises, as crises affect international investors' decisions on asset allocation (Liu et al., 2020). In such market dynamics, one particular stock market that plays a pivotal role is the US stock market. Previous studies showed that the US stock market had a strong contagion effect during earlier crises, particularly during the 2007 to 2009 global financial crisis (GFC), which is regarded as the first truly major global crisis since the Great Depression of 1929 to 1932 (Bekaert et al., 2014; Jin & An, 2016). During the crisis, the US stock market plummeted by 43%, the emerging markets by 50%, and the frontier markets by 60% (Samarakoon, 2011). Hence, the US stock market's co-movement with oil price fluctuations carries special significance for the global economy and financial market in other crises, including the COVID-19 pandemic. Due to the lack of scholarly endeavor to comprehend the impact of the COVID-19 pandemic over a broader time horizon, this paper assesses the relationship between oil prices and stock markets in the post-GFC period to understand the impacts of different events on the market dynamics, notably including the COVID-19 crisis period. The study enables us to capture a unique aspect that the existing literature has yet to address: it compares the shocks generated in the market by the COVID-19 pandemic with those generated by other events (e.g., the European debt crisis and Brexit). Previous studies of the impacts of COVID-19 mainly restrict their analysis to the pandemic period, thus failing to clarify the differential impact of the COVID-19 pandemic relative to other crises. Accordingly, we can concurrently observe the effects of adverse changes in oil prices, variations in market conditions over time, and the pandemic's intensity. To this end, we first aim to identify the structural changes in the oil price and stock market index using a breakpoint unit root test to understand the relationship. The recent economic crisis induced by COVID-19 has caused significant market fluctuations, which we capture as a structural break in the time series. Using the Chow test, we further confirm the structural break date (4/27/2020). After confirming the structural break in the series, we investigate the oil-stock relationship using the MRS model. The MRS model results show that the oil-stock relationship is state-dependent. Notably, the association is significant in the ''high oil price fluctuation'' state: an unstable oil market creates a positive shock in stock market returns. Accordingly, our study makes a three-fold contribution. First, we contribute to the existing literature by providing a comparative assessment of the impact of the COVID-19 pandemic on oil-stock dynamics and offering the latest empirical insights. As the COVID-19 pandemic has a time trend, the interactions between the various economic variables entailed in this process change over time (Wen et al., 2022). 
Second, from the methodological aspect, we offer empirical evidence using a nonparametric approach, while most of the earlier studies used parametric models to examine oil market shocks and stock market volatilities (Kilian & Park, 2009; Lv et al., 2020; Mollick & Assefa, 2013; Phan et al., 2015; Ready, 2018). However, one fundamental limitation of the studies employing parametric models is that these models may not uncover the underlying relationship and how it has changed over time. Notably, due to the many (known and unknown) events that may have had significant impacts on the oil-stock price relationship, the parametric model appears too restrictive for capturing the nature, and the extent, of changes in the underlying relationship (Silvapulle et al., 2017). Finally, our analysis examines how the market dynamics during the COVID-19 pandemic and other global politico-economic phenomena have affected the oil-stock relationship in the post-GFC scenario. Accordingly, our study offers the prospect of contrasting the market turmoil induced by the recent COVID-19 pandemic and plummeting oil prices with other noteworthy events disrupting the market equilibrium. Hence, the findings are essential for policymakers at the onset of ongoing critical economic conditions and the volatile oil market induced by the COVID-19 pandemic and the oil price war. The remaining part of this article is organized into several sections: Section 2 discusses the methodological aspects of the study, Section 3 presents the relevant findings and discussion, and Section 4 summarizes the results and highlights the conclusions of our research. Oil Price and Stock Market Nexus Oil is considered a strategic commodity for economies, as oil price fluctuations are not limited only to petroleum or other commodity markets but notably affect financial markets, particularly stock markets (Mensi et al., 2022). Mensi et al. (2022) highlighted the oil-stock market linkages through two specific channels: microeconomic and macroeconomic. From the microeconomic theoretical perspective, cash flow and discount rate are two crucial factors that affect stock values in the market. Generally, a higher oil price is linked with a higher rate of inflation in the economy, which leads to a higher interest rate, thus resulting in a higher discounting factor in valuing stocks (Basher & Sadorsky, 2006). Consequently, through the second, macroeconomic, theoretical channel, oil price fluctuations are expected to induce central banks to adjust interest rates to control inflation, leading to a decrease in stock prices (Basher & Sadorsky, 2006; Mensi et al., 2022). However, there has been little agreement regarding the effects of oil price changes on stock performance, as the relationships are not straightforward. Existing studies mainly highlight the heterogeneity of the impact of oil prices on stock prices, owing to the nature of the business, that is, oil-producing or oil-consuming companies (Gomes & Chaibi, 2014; Kumar et al., 2012; Lv et al., 2020), as well as the oil status of the economy, that is, oil-exporting or oil-importing economies (Filis et al., 2011; Salisu & Isah, 2017; Tchatoka et al., 2019). The oil price effects are different for the markets in countries that are oil exporters compared to those that are oil importers (Degiannakis et al., 2018; Guesmi & Fattoum, 2014; Salisu & Isah, 2017). An oil price increase will likely affect an oil-exporting country positively, as its income will increase. 
The income increase is expected to result in a rise in expenditure and investment, which creates greater productivity and lower unemployment. Therefore, stock markets tend to respond positively to such an event (Arouri & Rault, 2012; Park & Ratti, 2008; Wang et al., 2013). On the contrary, an oil price increase for an oil-importing country will lead to higher production costs, as oil is one of the most critical production factors (Arouri & Nguyen, 2010). This cost is transferred to consumers, leading to lower demand and thus lower consumer spending (Hamilton, 1996) and lower production (Lardic & Mignon, 2006). In such a case, stock markets would react negatively (Filis et al., 2011; Sadorsky, 1999). Earlier Empirical Studies The US remained a net crude oil importer until 2020 (EIA, 2022). Previous studies used different methodologies to examine the relationship between oil prices and the stock market in the US. One of the leading studies investigating the impact of oil price on the US stock market is that of Kilian and Park (2009), who considered the crude oil price as an exogenous shock in their structural VAR model and reported that the effects differ substantially depending on the underlying causes of the oil price increase, that is, whether the change in the price of oil is driven by demand or supply shocks in the oil markets. However, irrespective of the sources, their study estimated that the demand and supply shocks in the global crude oil market jointly account for 22% of the long-run variation in US real stock returns. Their findings confirm the strong impact of oil price shocks on the US stock market. Tsai (2015) examined US stock returns due to oil price shocks in the pre-crisis, crisis, and post-crisis periods using firm-level stock returns. Estimating OLS with panel-corrected standard errors, the author found that stock returns in response to an oil price shock for most sectors within the crisis period were generally positive and heterogeneous, concluding that the crisis period and structural break have substantial impacts on the oil-stock price nexus. An earlier study by Sadorsky (2008) used generalized least squares (GLS), which corrects for autocorrelation and heteroskedasticity in panel data sets, to assess the impact of oil prices on firms of different sizes listed in the S&P 1500 and found a significant relationship between oil price movements and stock prices, to varying extents, for firms of various sizes. Many studies have also investigated the oil-stock price nexus using GARCH models to examine market volatilities. Several such studies based on the US stock market employed different versions of GARCH models, including GARCH(1,1) (Falzon & Castillo, 2013; Phan et al., 2015), MGARCH-DCC (Mollick & Assefa, 2013), BEKK-GARCH (Lv et al., 2020), and MRS-GARCH (Ready, 2018). Other studies used different methodological stances to investigate the relationship more meticulously, mainly considering the asymmetry in the relationship. For example, Sim and Zhou (2015) examined the relationship between oil prices and the US stock market using the quantile-on-quantile approach. They reported two key findings: large and negative oil price shocks (i.e., low oil price shock quantiles) affect US equities positively when the US market is performing well (i.e., at the high return quantiles); and, while negative oil price shocks impact the US stock market, the influence of positive oil price shocks is weak. 
Bašta and Molnár (2018) analyzed different time frequencies using wavelet transformation, considering both the implied and realized volatilities in the US market. They reported different results for the two types of volatility: the implied volatility of the stock market leads the implied volatility of the oil market, but no such relationship was observed for realized volatilities. Bahmani-Oskooee et al. (2019) utilized an asymmetric Granger causality test and failed to find a significant long-run causal relationship between oil prices and the stock returns of nine different sectors of the US economy. Alternatively, Bu et al. (2020) employed the copula-MIDAS-X model to capture low-frequency to high-frequency data to examine the relationship between oil prices and the US stock market (S&P 500 index). Their results show that the relationship is asymmetric and that the dependence between the oil and stock markets is influenced by aggregate demand and stock-specific negative news. Although studies have started considering the nonlinear or asymmetric aspects of the oil-stock nexus, specifically in the US market, they lack empirical evidence on how recent significant events played a role in the relationship. In particular, existing studies failed to accommodate the impacts of a structural break (or breaks) in the relationship and its potential influence on the asymmetry. Hence, the literature lacks a comprehensive interpretation, a gap that gains critical momentum in light of the sudden shock brought about by the COVID-19 pandemic. Materials and Methods This section explains the empirical approach taken to investigate the oil-stock nexus. We first describe the empirical approach to identify the potential structural breaks due to the different significant events over the last decade that we focus on in this study. Then, we explain the methodological aspects of the MRS model. We subsequently present the data and variables of the study. Breakpoint Unit Root Test The empirical analysis of a time series usually begins with investigating the variables' order of integration by applying unit root tests. However, time series components, including seasonality, trends, cyclical, and irregular changes (Mahi et al., 2020), as well as sudden economic or financial market shocks, can create structural break(s) in the series. When there is a structural break(s) in the time series, the power of conventional unit root tests is unstable (Sun et al., 2017). Moreover, Perron (1989) argues that breakpoints and unit roots are related, and conventional unit root tests are biased when determining the correct order of integration. Therefore, this study utilizes the Perron (1989) breakpoint unit root test to determine the presence of unit roots while accommodating the structural break(s) in the time series of the variables under consideration. To determine a break in the intercept of the time series variable, a break dummy of the general form DU_t = 1(t > T_b) can be used, where the indicator 1(·) takes the value 1 if the argument is true (i.e. after the break date T_b) and 0 otherwise. Accordingly, we specify a Dickey-Fuller regression to identify a unit root in the time series with an intercept break; the model yields a test of a random walk against a stationary model with an intercept break. Also, as with the conventional Dickey-Fuller unit root test, to eliminate the effect of the error correlation structure on the asymptotic distribution, the k-lagged differences of y are included in the equation. 
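The regression equation itself does not survive in the text above. As a hedged sketch only, the standard augmented Dickey-Fuller specification with a single intercept break, written here in its usual textbook form rather than necessarily the exact specification estimated by the authors, is

\Delta y_t = \mu + \theta \, DU_t(T_b) + \alpha \, y_{t-1} + \sum_{i=1}^{k} c_i \, \Delta y_{t-i} + \varepsilon_t, \qquad DU_t(T_b) = \mathbf{1}(t > T_b),

where T_b is the break date and the unit root null hypothesis corresponds to \alpha = 0.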
Also, we consider the innovational outlier model, which assumes that the break takes place gradually, following the same dynamic path as the innovations. Markov Switching Model Regression analysis is a standard tool for exploring the correlations between continuous variables. There are three main types of multiple regression, namely simultaneous regression, hierarchical regression, and stepwise regression, which are used to examine the association of the variables and to predict a particular outcome. However, time series components such as linear trends, irregular patterns, and seasonal changes can render such findings imprecise and inaccurate (Phoong et al., 2019). Accordingly, the MRS framework is advantageous, as the dataset incorporates several economic, financial, and geopolitical events (as outlined earlier) relevant to the oil and stock market dynamics. Considering the analytical value of nonlinearity in the oil-stock nexus, we investigated the relationship using the Markov switching regression (MRS) model. This technique is advantageous compared to conventional linear regression, as the nonlinear nature of the time series might otherwise result in findings that lack precision and accuracy. A switching regression helps us detect the existence of nonlinearity in the relationship. The MRS model can capture asymmetry or nonlinearity in economic/financial time series relationships. This approach allows model parameters to switch between different regimes, while other regime-dependent parameters can be estimated (Uddin et al., 2018). Also, the MRS framework has proven useful when the adjustment seems to be mainly driven by exogenous events (Basher et al., 2016). Therefore, the technique helps detect whether and how oil market fluctuations affect returns in the stock market. The regression model without switching is y_t = a x_t + e_t, with e_t i.i.d., where x_t is a 1 × m vector of exogenous variables and a the corresponding coefficient vector. The evolution of the variable s_t may depend on s_{t−1}, s_{t−2}, . . . , s_{t−n}, and hence the process of the discrete variable s_t is called an n-th order Markov switching process. A first-order Markov switching process is characterized by the transition probabilities P[s_t = 0 | s_{t−1} = 0] and P[s_t = 1 | s_{t−1} = 1], which are parameterized through p_0 and q_0, unconstrained parameters. The transition probabilities for a two-state Markov switching process are then iterated to obtain P[s_t = j | s_{t−1} = i], i, j = 0, 1 (Kim & Nelson, 2007; Phoong et al., 2020). Estimating the transition probabilities is essential, as it provides information about the expected duration of each state of the switching model (or economic condition), and hence about the asymmetric properties of the business cycle. Generally, the Markov switching regression model used in this study assumes that a different regression equation is associated with each regime. For a two-regime model, the conditional mean of y_t in regime m (m = 1, 2) can be written as E(y_t | X_t, s_t = m) = φ_m + X_t β, where φ_m and β are (vectors of) coefficients. In this study, we consider the dynamics of West Texas Intermediate (WTI) and Standard & Poor's 500 (S&P 500) returns as state-dependent (the variable descriptions are provided in Section 3.3). The coefficients may differ for each state, since a state can correspond to low or high volatility, recession, or expansion. 
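A minimal sketch of how such a two-regime switching regression can be estimated in Python with statsmodels is shown below. This is an illustration of the technique only, not the authors' estimation code (the paper reports Marquardt steps and EViews-style output), and the file and column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical DataFrame with daily log returns of WTI and the S&P 500.
    df = pd.read_csv("returns.csv", parse_dates=["date"], index_col="date")

    # Two-regime Markov switching regression of stock returns on oil returns,
    # with regime-dependent intercept, slope, and error variance.
    model = sm.tsa.MarkovRegression(
        endog=df["sp500_ret"],
        k_regimes=2,
        exog=df["wti_ret"],
        switching_variance=True,
    )
    result = model.fit()
    print(result.summary())
    print(result.expected_durations)   # expected duration of each regime, 1 / (1 - p_jj)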
The MRS framework is memoryless within each state (de Martino, 2018); the switching mean can be written as μ_i = μ_1 when the process is in state 1 (i = 1) and μ_i = μ_2 when it is in state 2 (i = 2). Transition probabilities for a two-state model are p_ij = P[s_t = j | s_{t−1} = i], with p_11 + p_12 = 1 and p_21 + p_22 = 1. We can obtain each regime's average duration from the persistence probability p_jj (j = 1, 2); precisely, the average duration of regime j is 1 / (1 − p_jj), where j represents the state or regime. Data and Variables We use daily WTI crude oil price and S&P 500 stock market index data to proxy the oil price and the stock price, respectively. The WTI oil price is one of the most widely recognized international benchmarks for crude oil pricing. The S&P 500, in turn, is commonly regarded as the best single gauge of large-cap equities in the US, including 500 leading companies and covering 80% of the available market capitalization. The data cover the period from January 1, 2010, to December 31, 2021, for a total of 3,118 data points. Both data series are obtained from the Refinitiv Datastream database (Table 1). Figure 1 shows the evolution of the daily prices of the WTI and S&P 500 data series. The standard deviation of the oil price is high, as is that of the S&P 500; the larger the standard deviation, the more significant the change in the time series. We noticed that our post-GFC sample shows some fluctuations over the last decade. As mentioned above, the oil price collapse since 2014, the Brexit vote in the 2016 UK referendum, and the US decision to leave the Paris Climate Agreement are among the major global events of this decade. However, the oil and stock markets suffered a massive shock recently, particularly in early 2020. The sharp spikes in both data series during the same period indicate that the oil-stock market nexus could suffer from a significant structural break; hence, the relationship needs a proper empirical revisit. Therefore, in the following section (Section 4), we investigate the relationship from alternative perspectives with the relevant econometric techniques described subsequently. For the empirical analysis, following Jiang et al. (2014), we calculated the log returns of the time series data as the difference between the natural logarithms of two consecutive values, r_t = ln(P_t) − ln(P_{t−1}). Test of Structural Break and Unit Root To begin the investigation of the oil-equity nexus, we first run the unit root test allowing for a possible break in each series. The results are presented in Table 2. Based on Table 2, the suggested break dates are 4/27/2020 and 2/04/2010 for the WTI return and the S&P 500 return, respectively. However, considering the influential transmission of shocks from the oil market to the stock market, we selected 4/27/2020 as the break date and then employed the Chow test to investigate the impact of this break date on the regression between WTI and S&P 500 returns. The null hypothesis of the Chow test is that there is no break at the considered date (i.e., 4/27/2020). The Chow test fundamentally examines whether a single regression line or two distinct regression lines better fit the dataset (Chow, 1960). 
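A brief sketch of the two preparatory steps just described (computing log returns and running a Chow test at a candidate break date) is given below. The file and column names are hypothetical, and the Chow statistic is computed in its standard F-test form rather than with the software used by the authors.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy import stats

    prices = pd.read_csv("wti_sp500.csv", parse_dates=["date"], index_col="date")
    returns = np.log(prices).diff().dropna()            # r_t = ln(P_t) - ln(P_{t-1})

    def chow_test(y, x, break_date):
        # Standard Chow F-test: pooled OLS versus separate OLS fits before/after the break.
        X = sm.add_constant(x)
        rss_pooled = sm.OLS(y, X).fit().ssr
        pre, post = y.index < break_date, y.index >= break_date
        rss_split = sm.OLS(y[pre], X[pre]).fit().ssr + sm.OLS(y[post], X[post]).fit().ssr
        k = X.shape[1]
        f_stat = ((rss_pooled - rss_split) / k) / (rss_split / (len(y) - 2 * k))
        p_value = 1 - stats.f.cdf(f_stat, k, len(y) - 2 * k)
        return f_stat, p_value

    print(chow_test(returns["sp500"], returns["wti"], pd.Timestamp("2020-04-27")))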
The results in Table 3 show that the test statistics are significant at the 5% significance level for all the variables, thus confirming the structural break at the considered break date in the time series variables under consideration. Therefore, a time series regression that does not account for the specified break date would result in biased estimation. Figure 2 illustrates the considerable changes in the linear regression slope before and after the break date, which further justifies the specified break date. Accordingly, we estimated the dynamics between WTI and S&P 500 returns using a two-state MRS model, and the results are discussed in the following section. Results From Markov Regime Switching Model We investigated the possibility of a nonlinear relationship between oil price and stock price using the MRS model. The results are summarized in Table 4. We consider the two regimes of the MRS model suggested by previous studies (Brunner, 1992; Neftçi, 1984). For the MRS outputs, the magnitude of volatility is indicated by the overall size of each regime's standard deviation (σ): the regime with the higher (lower) coefficient standard deviations is the high (low) volatility regime. A Marquardt step is used in the Markov switching regression model to estimate the parameters of the unobserved states for the MRS estimation in Table 4. Accordingly, we define regime 1 as the ''low oil price fluctuation'' state and regime 2 as the ''high oil price fluctuation'' state. For regime 1, the impact of WTI on the S&P 500 is significant at the 5% level. For regime 2, the impact of WTI on the S&P 500 is positive and significant at the 1% level. This positive association in regime 2 indicates that high price fluctuation in the oil market creates a positive stock market shock; in other words, the transmission of oil price shocks is significant under volatile market conditions. Our findings align with previous studies showing that the oil-stock relationship is unstable and varies across different phases over time (Balcilar et al., 2015; Lee & Chiou, 2011). We also noticed that the standard error of the regression coefficient in regime 2 (0.04929) is around seven times larger than that of the coefficient in regime 1 (0.006882). The standard error measures the variability of the coefficient and is always positive; the smaller the standard error, the more precise the estimate. Although the standard error in regime 2 is higher than in regime 1, both values are close to 0, indicating little random error, which also suggests that the data sample is sufficient and the estimated values are close to the true values. The Durbin-Watson statistic ranges from 0 to 4, with an acceptable range of 1.50 to 2.50: values below 1.5 indicate positive autocorrelation, while values above 2.5 indicate negative autocorrelation. The Durbin-Watson statistic here is 2.21, within the acceptable range, meaning that no first-order autocorrelation is present in this regression model. Table 5 presents the probability of transition from one regime to another. The transition probability from regime 1 to regime 2 (p_12) is lower than that from regime 2 to regime 1 (p_21), which indicates that regime 1 is relatively permanent; the probability of transitioning from regime 1 to regime 2 is very low. 
Moreover, the expected duration of regime 1 is close to 41 days, while the expected duration of regime 2 is close to 4 days. This confirms that regime 1 is more stable than regime 2. Furthermore, p_11 and p_22 have high values; thus, we reject the null hypothesis of no regime shifts. Robustness Test To check the robustness of the MRS analysis, we used an alternative proxy for stock market return: the Dow Jones Industrial Average (DJIA) index return, which we regressed against the WTI return series. Unlike the S&P 500, the DJIA represents 30 large-cap companies. The index is not weighted by market capitalization; rather, it is calculated by summing the listed stock prices and dividing by a factor called the ''Dow divisor'' to smooth over infrequent changes such as stock splits and new index constituents (Langley, 2020). Note (table footnote). The number of states: 2. Initial probabilities obtained from the ergodic solution. Common standard errors and covariance using a numeric Hessian. Random search: 25 starting values with 10 iterations using 1 standard deviation (rng = kn, seed = 798,178,604). Convergence was achieved after 18 iterations. p values are reported in parentheses. SE = standard error; SD = standard deviation; LL = log-likelihood. Hence, the choice of the DJIA offers a distinctive prospect to investigate stock market movements due to changes in oil prices or shocks generated in the oil market. A Bai-Perron breakpoint test was used to investigate the potential structural change or breaks in the variables' series. The results are presented in Table 6. The Bai-Perron test is an algorithm for determining structural breaks in a linear regression model by trimming 15% of the data; the purpose of trimming 15% at the beginning and end of the sample is to avoid serial correlation or heterogeneity in the data or errors across the segments. Based on the results in Table 6, there is one structural break (2/25/2020) in the model. Next, a two-state MRS regression was used to examine the correlation between oil price and stock price. The sample data range from 1/1/2010 until 12/31/2021, the same period as for the S&P 500, and a total of 3,118 observations were used in the MRS model; the results are reported in Table 7. The presence of two regimes is evident in Table 7. We defined regime 1 as the ''high oil price fluctuation'' state and regime 2 as the ''low oil price fluctuation'' state based on the standard deviation (SD) values of the respective regression coefficients. Similar to our main findings, the robustness MRS confirms a significant effect on the stock market when oil price fluctuations are high. Moreover, the relationship remains significant and negative in the low oil price fluctuation state, similar to the earlier findings. Besides that, the Durbin-Watson statistic is 2.21, comparable with the findings in Table 4, indicating that no autocorrelation occurs in the regression model. Thus, the results confirm switching behavior in the oil-stock nexus and a positive shock transition in the stock market when oil price volatility is high in the market. Furthermore, the transition probabilities and expected duration results presented in Table 8 show comparable values. Regime 1 is more permanent than regime 2, as the anticipated duration for regime 1 is 232 days, while the expected time of being in regime 2 is 1 day. This confirms that regime 1 is more stable than regime 2, which agrees with the findings for the S&P 500. 
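The expected durations quoted above follow directly from the regime persistence probabilities via D_j = 1 / (1 − p_jj). The snippet below only illustrates that formula; the persistence values are hypothetical placeholders chosen to yield durations of roughly 41 and 4 days, and are not the estimates reported in Table 5.

    def expected_duration(p_stay):
        # Average number of periods spent in a regime with persistence probability p_jj.
        return 1.0 / (1.0 - p_stay)

    # Hypothetical persistence probabilities, used only to illustrate the formula.
    print(round(expected_duration(0.9756), 1))   # ~41 days
    print(round(expected_duration(0.75), 1))     # 4.0 days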
Consequently, we demonstrated that the oil-stock relationship is subject to rapid regime changes due to price shocks. Conclusions The relationship between oil price and stock price is critical, and the significance of this study stems from the economic and policy relevance of the oil-stock nexus. The relationship dynamics are also subject to change over time owing to sudden changes in the economy, the financial market, or the oil market, and this study aims to revisit this relationship against the backdrop of several global events in the post-GFC scenario. The recent COVID-19 pandemic crisis has created a substantial shock in the economy and markets, and to analyze the effect of the pandemic on the relationship, we used daily WTI crude oil price and S&P 500 stock market index data to proxy the oil and stock prices. We considered the possibility of a structural break in the time series and employed the breakpoint unit root test. We uncovered evidence of a significant structural break on 4/27/2020 and confirmed it using the Chow breakpoint test. We also examined the oil-stock relationship considering the nonlinearity in the time series. We employed a two-state MRS model and discovered significant variations in the relationship between the two regimes. Notably, the relationship is significant in the ''high oil price fluctuation'' state; that is, stock market returns are significantly affected by shocks generated in a volatile oil price regime. Hence, we empirically confirmed that the oil-stock relationship is nonlinear and asymmetric, and that the stock market is susceptible to fluctuations when oil prices are unstable. Note (table footnote). The number of states: 2. Initial probabilities obtained from the ergodic solution. Common standard errors and covariance using a numeric Hessian. Random search: 25 starting values with 10 iterations using 1 standard deviation (rng = kn, seed = 12,346,587,827). p values are reported in parentheses. SE = standard error; SD = standard deviation; LL = log-likelihood. Our findings are of particular significance for investors and policymakers. First, policymakers can formulate appropriate strategies to keep oil prices stable, which will help prevent market contagion. Specifically, the US, the producer of WTI crude oil, is expected to prioritize dealing with risks induced by the pandemic. Second, our results suggest that the most significant break occurred during the COVID-19 period compared with other events, including the recent oil price war. Policymakers should also concentrate on alternative energy sources, such as renewable energy, to decrease the high dependence of economic activities on oil. Third, stock market investors can also monitor changes in oil prices while investing and constructing portfolios. We suggest that achieving a well-diversified portfolio should involve the consideration of oil price shocks, which, as a consequence, could also help improve the accuracy of hedging against the risks generated by high oil price fluctuations. Finally, the findings can be helpful to investors and financial market regulators: they need to be vigilant to non-economic crises besides economic ones in order to adapt their investment strategies and minimize financial losses. Overall, these findings provide valuable insights to investors and policymakers on spillovers across non-economic (i.e., health) crises, oil price fluctuations, diversification, and risk management strategies in the stock market. 
Our study is not without limitations, which can be addressed in future research with careful modeling and estimation. One such limitation is that we used only two variables; however, the oil-stock relationship can be affected by other factors as well. Future research can expand the model to a multivariate framework that includes other financial or economic variables, such as the exchange rate, the stock market volatility index (VIX), the oil volatility index (OVX), and the like. Further studies using this method can also be extended to a three-regime MRS model to measure the oil-stock nexus in three states: a high oil price fluctuation regime, a stable oil price regime, and a low oil price fluctuation regime.
7,796.6
2023-01-01T00:00:00.000
[ "Economics", "Business" ]
Unraveling T cell exhaustion in the immune microenvironment of osteosarcoma via single-cell RNA transcriptome Abstract Osteosarcoma (OS) represents a profoundly invasive malignancy of the skeletal system. T cell exhaustion (Tex) is known to facilitate immunosuppression and tumor progression, but its role in OS remains unclear. In this study, single-cell RNA sequencing data were employed to identify exhausted T cells within the tumor immune microenvironment (TIME) of OS. We found that exhausted T cells exhibited substantial infiltration in OS samples. Pseudotime trajectory analysis revealed a progressive increase in the expression of various Tex marker genes, including PDCD1, CTLA4, LAG3, ENTPD1, and HAVCR2, in OS. GSVA showed that apoptosis, fatty acid metabolism, xenobiotic metabolism, and the interferon pathway were significantly activated in exhausted T cells in OS. Subsequently, a prognostic model was constructed using two Tex-specific genes, MYC and FCGR2B, which exhibited exceptional prognostic accuracy in two independent cohorts. Drug sensitivity analysis revealed that OS patients with a low Tex risk were responsive to Dasatinib and Pazopanib. Finally, immunohistochemistry verified that MYC and FCGR2B were significantly upregulated in OS tissues compared with adjacent tissues. This study investigates the role of Tex within the TIME of OS and offers novel insights into the mechanisms underlying disease progression as well as potential treatment strategies for OS. Graphic Abstract Supplementary Information The online version contains supplementary material available at 10.1007/s00262-023-03585-2. Introduction Osteosarcoma (OS) is a malignant, highly aggressive, and highly heterogeneous tumor that can seriously endanger the health and physical function of children and adolescents [1]. Despite various treatments for OS, including surgical resection, chemotherapy, immunotherapy, and targeted therapies, it continues to have a poor prognosis [2]. The pathophysiology of OS remains elusive; however, several studies have demonstrated a strong correlation between its pathogenesis and genetic characteristics [3]. Simultaneously, the intricate molecular heterogeneity and consequential functional perturbations within the tumor immune microenvironment (TIME) facilitate tumor progression and the emergence of chemoradiation resistance [4]. Hence, understanding the intricate TIME is important for discovering novel therapeutic targets. 
The T cells residing within the TIME play a central role in cancer immune surveillance by impeding tumor progression. Intracellular pathogens and malignant cells are countered and eradicated by the influx of immune cells, most notably the CD8+ cytotoxic T lymphocytes, which directly kill tumor cells, thus forming the cornerstone of cancer immunotherapy [5]. A reduction in the number or functionality of CD8+ T cells within the host portends a decline in antitumor immunity, thereby increasing the risk of neoplastic growth and metastasis [6]. However, as the cancer progresses, the T cells that infiltrate the TIME may experience late-stage exhaustion due to sustained stimulation by tumor antigens, a phenomenon called T cell exhaustion (Tex), which decreases the number or functionality of effector T cells, thereby enabling tumor immune evasion and progression [7]. Exhausted T cells (ExTs) isolated from advanced tumors exhibit the characteristics of tumor-infiltrating lymphocytes: they fail to secrete effector cytokines or cytotoxic molecules in response to tumor cells and express multiple inhibitory receptors [8]. Compared with functional effector cells and memory T cells, exhausted CD8+ T cells display a plethora of hallmark features, such as a gradual loss of effector function, sustained upregulation of multiple co-inhibitory receptors, and changes in epigenetic regulation and metabolism [9]. Mounting evidence suggests that co-inhibitory checkpoint molecules, including programmed cell death-1 (PD-1), cytotoxic T lymphocyte-associated protein-4 (CTLA-4), and lymphocyte-activation gene-3 (LAG-3), are involved in the development of Tex [10]. Reversal of Tex within the TIME may therefore represent a feasible strategy for controlling cancer progression. However, the precise contribution of Tex to the pathogenesis and progression of OS remains ambiguous, and understanding its role clearly will be crucial in identifying novel therapeutic targets and prognostic biomarkers for OS, which may improve clinical management and decision-making. Conventional transcriptomic approaches fail to reliably characterize the complex TIME of OS because they investigate overall gene transcription in tumor samples and thus lack adequate resolution to identify specific cell types. The advent of single-cell genome sequencing has enabled the determination of rare cellular subsets and corresponding functional changes in the TIME [11]. In this study, we have integrated single-cell and bulk RNA-seq data to explore the role of Tex in the TIME of OS. We have also developed and validated a prognostic prediction model based on Tex-associated biomarkers, while predicting target drugs for OS patients. These findings shed light on the oncogenesis and progression of OS as well as its potential therapeutic strategies. 
Datasets utilized for analysis and preprocessing of the data Single-cell RNA sequencing (scRNA-seq) datasets GSE169396 and GSE162454 were obtained from the Gene Expression Omnibus (GEO) database. GSE169396 comprised four healthy human bone tissue samples, while GSE162454 comprised six specimens obtained from osteosarcoma (OS) patients. The Seurat package was employed to manage both datasets. Additionally, bulk RNA sequencing (bulk RNA-seq) data and associated clinical details from the TARGET dataset and the GSE21257 database were used as the training and validation sets. The clinical characteristics of the OS patients from the TARGET dataset and the GSE21257 database are presented in Table S1. For quality control purposes, genes expressed in at least five cells were retained, while cells exhibiting either fewer than 250 genes or more than 5000 genes were eliminated. Furthermore, cells exhibiting more than 10% mitochondrial reads were excluded. The NormalizeData and ScaleData functions were applied to normalize and scale the gene expression matrix, respectively. The top 3000 highly variable genes were identified using the FindVariableFeatures function and served as input for principal component analysis (PCA). Batch effects across the ten samples were corrected using the R package ''Harmony''. Following this, the FindNeighbors and FindClusters (resolution = 0.2) functions were executed to detect cell clusters. Then, the RunTSNE function was executed to achieve further dimensionality reduction for cluster visualization. Additionally, the T cell expression matrix was extracted and reclustered using the FindClusters (resolution = 0.6) function. Subsequently, CD8+ T cells were identified via recognized marker genes. Identification of cell populations and assessment of exhaustion in CD8+ T cells The cell clusters in the TIME were annotated based on well-established cell-specific markers from the previous literature [12, 13]. The CD8+ T cells were identified using the CD2, CD3D, CD3E, CD3G, CD8A, and CD8B markers. The DotPlot function and the ggbeeswarm package were employed to graphically depict the expression of marker genes in each cluster. The prop.table function was used to determine the proportions of CD8+ T cell subtypes, and the stat_compare_means function was used to assess the significance of differences by t test. To calculate exhaustion scores, the AddModuleScore function was utilized, based on the average expression of five exhaustion-associated genes (CTLA4, PDCD1, ENTPD1, LAG3, HAVCR2) across all CD8+ T cells of the normal bone and OS samples. Then, the exhaustion score of each cell was visualized in t-distributed Stochastic Neighbor Embedding (tSNE) plots. Furthermore, the VlnPlot function was used to compare exhaustion scores between normal bone and OS, and a t test was conducted to determine the significance of the difference in exhaustion scores between the two groups. Pseudotime trajectory analysis of CD8+ T cells The analysis of pseudotime trajectories of CD8+ T cells was conducted using the Monocle package (version 2.26.0) in R. The plot_cell_trajectory function was employed to represent the differentiation trajectory. To ascertain the expression levels of exhaustion-associated genes along the trajectory, the plot_genes_in_pseudotime function was utilized. Furthermore, identification of cluster-specific genes was achieved through the FindAllMarkers function. The expression levels of the top 15 genes were then depicted in a pseudotime heatmap using the plot_pseudotime_heatmap function. 
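For readers working in Python rather than R/Seurat, an analogous quality-control, clustering, and exhaustion-scoring pipeline can be sketched with scanpy as below. It mirrors the thresholds stated above (genes in at least 5 cells, 250-5000 genes per cell, <10% mitochondrial reads, 3000 highly variable genes, clustering resolution 0.2, and a five-gene exhaustion score), but it is an illustrative translation rather than the authors' Seurat code, and the file and batch-key names are hypothetical.

    import scanpy as sc

    adata = sc.read_h5ad("os_and_normal_bone.h5ad")     # hypothetical merged object

    # Quality control mirroring the thresholds described in the text
    adata.var["mt"] = adata.var_names.str.startswith("MT-")
    sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True)
    sc.pp.filter_genes(adata, min_cells=5)
    sc.pp.filter_cells(adata, min_genes=250)
    adata = adata[(adata.obs["n_genes_by_counts"] <= 5000) & (adata.obs["pct_counts_mt"] < 10)].copy()

    # Normalisation, highly variable genes, scaling, PCA, batch correction, clustering
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=3000)
    sc.pp.scale(adata)
    sc.tl.pca(adata)
    sc.external.pp.harmony_integrate(adata, key="sample")      # hypothetical batch key
    sc.pp.neighbors(adata, use_rep="X_pca_harmony")
    sc.tl.leiden(adata, resolution=0.2)
    sc.tl.tsne(adata, use_rep="X_pca_harmony")

    # Exhaustion score: AddModuleScore analogue over the five Tex-associated genes
    sc.tl.score_genes(adata, ["CTLA4", "PDCD1", "ENTPD1", "LAG3", "HAVCR2"], score_name="exhaustion")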
Gene set variation analysis (GSVA) in CD8+ T cell subtypes We procured hallmark gene sets from the Molecular Signatures Database, which encapsulate well-defined biological states, processes, or tumorigenesis-related programs. The differential signatures of cellular pathways in CD8+ T cell subtypes were ascertained by employing the GSVA package in R. The GSVA heatmap was generated with the pheatmap package.

Identification of Tex-specific marker genes and screening of prognostic genes Tex-specific markers were identified using the FindAllMarkers function, and a heatmap of the top ten markers was generated with the DoHeatmap function. Candidate genes for our prognostic risk model were screened through univariate Cox regression, least absolute shrinkage and selection operator (LASSO) regression, and multivariate Cox regression analysis based on two independent cohorts. The risk score for each OS sample was calculated as the sum, over the n model genes, of Coef_i × X_i, where Coef_i is the regression coefficient of gene i and X_i its expression level. The patients were then grouped into high- and low-risk categories according to the median risk score. The survival probability of the two risk groups was compared with Kaplan-Meier (KM) curves, while the predictive accuracy of the risk score model was evaluated by receiver operating characteristic (ROC) curves. Finally, univariate and multivariate Cox regression analyses were conducted based on the risk score and clinical features.

Immunofluorescence To confirm the expression levels of the candidate genes, we gathered a total of 10 pairs of paraffin-embedded OS tissues and corresponding normal tissues for immunofluorescence and immunohistochemical analyses. This research was approved by the Institutional Review Board of Xijing Hospital, Fourth Military Medical University, and informed assent/consent was obtained. Sections were subjected to a 20-min treatment with 0.3% Triton X-100, followed by a 1-h blockade with 5% BSA blocking solution at ambient temperature. The corresponding primary antibody was added to the sections and incubated overnight at 4 °C. Subsequently, the sections were incubated with the secondary antibody (dilution 1:200) at room temperature for an hour in darkness. After a 10-min PBS wash, nuclei were stained with DAPI. The antibodies employed were as follows: anti-MYC (dilution 1:100, GB13076, Servicebio, China), anti-FCGR2B (dilution 1:500, GB114833-100, Servicebio, China), and anti-CD8 (dilution 1:50, GB12066, Servicebio, China).

Construction and validation of a nomogram for OS patients We utilized a nomogram to visualize the Cox proportional hazards model, predicting the 3- and 5-year overall survival rates of OS patients. Furthermore, we employed ROC, calibration, and decision curve analyses to gauge the predictive accuracy of the model.

Drug susceptibility prediction In order to predict drug susceptibility in the two Tex risk groups, we employed the "pRRophetic" package. The half-maximal inhibitory concentrations (IC50s) of the drugs were analyzed and presented in box plots, with differences assessed by the Wilcoxon signed-rank test.

Immunohistochemistry Following standard protocol, all tissue sections were dewaxed and subjected to antigen retrieval. Subsequently, they underwent blocking and incubation with primary and secondary antibodies (MYC, 1:200, Bioss, bs-4963R; FCGR2B, 1:500, Servicebio, GB114833-100). The slides were then developed using a DAB kit (CWBIO, CW2035S) and counterstained with hematoxylin before being observed and recorded under a microscope. Images were analyzed with the ImageJ software.
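As an illustration of the screening pipeline described above (univariate Cox regression, then LASSO, then multivariate Cox regression and the risk score summed over the selected genes), a minimal R sketch might look as follows. The 'expr' matrix (genes × samples) and 'surv' data frame are hypothetical stand-ins for the TARGET training data, and gene symbols are assumed to be syntactically valid R names; this is not the authors' code.

```r
# Minimal sketch: univariate Cox -> LASSO -> multivariate Cox -> risk score.
library(survival)
library(glmnet)

y <- Surv(surv$time, surv$status)

# Univariate Cox screen: keep genes with p < 0.05
candidates <- rownames(expr)[sapply(rownames(expr), function(g) {
  summary(coxph(y ~ expr[g, ]))$coefficients[1, "Pr(>|z|)"] < 0.05
})]

# LASSO-penalized Cox regression on the univariate hits
# (glmnet >= 4.1 accepts a Surv response when family = "cox")
x <- t(expr[candidates, , drop = FALSE])
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1)
beta <- coef(cvfit, s = "lambda.min")
lasso_genes <- rownames(beta)[as.numeric(beta) != 0]

# Multivariate Cox on the LASSO-selected genes, then the per-sample risk score
df <- as.data.frame(x[, lasso_genes, drop = FALSE])
mv <- coxph(y ~ ., data = df)
risk_score <- as.numeric(as.matrix(df) %*% coef(mv))   # sum of Coef_i * X_i
risk_group <- ifelse(risk_score > median(risk_score), "high", "low")
```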
Statistical analysis For data analysis, we employed a range of software tools, including R (version 4.2.3), SPSS (version 21.0), and GraphPad Prism (version 8). To compare data between two groups, we used either the t test or the Mann-Whitney U test, as appropriate. All differences among and between groups were considered statistically significant at p values of <0.05 (*p < 0.05; **p < 0.01; ***p < 0.001).

Preprocessing of single-cell RNA sequencing (scRNA-seq) data and annotation of distinct cellular subpopulations To discern the cellular composition of the immune microenvironment of OS, we conducted scRNA-seq analysis on four healthy bone tissues and six primary OS specimens obtained from patients who had not undergone neoadjuvant chemotherapy. Figure S1 shows the results both pre- and post-filtering of cells and features. Following quality control procedures, 25,264 features and 58,617 cells remained out of the original 33,538 features and 85,212 cells (Figure S2A, B). Figure S2C displays the top 3000 variable features. Consequently, the cells were classified into eight distinct subclusters: osteoclasts, endothelial cells, plasmocytes, B cells, osteoblastic cells, NK/T cells, pericyte cells, and myeloid cells (Fig. 1A). The scaled relative expression of the corresponding cell-specific marker genes and their respective percentages of expression are visualized in Fig. 1B.

Identification of CD8+ T cells and pseudotime trajectory analysis To elucidate the exhaustion of CD8+ T cells in OS, we isolated and classified CD8+ T cells into three distinct subtypes. The tSNE results were split by tissue origin (Fig. 2A). The expression patterns of specific marker genes within the CD8+ T cell subsets were visually represented using a bee swarm plot (Fig. 2B). All three subtypes exhibited positive expression of CD2, CD3D, CD3E, CD3G, CD8A, and CD8B, thereby validating the accuracy of our cellular annotation. The CD8+ T cell subtypes were distributed markedly differently between normal bone and OS samples, with a significant elevation of C3 in the latter (P value = 0.0056; Fig. 2C, D). The Monocle algorithm evinced a marked divergence in the differentiation trajectory of CD8+ T cells between normal bone tissue (Fig. 2E) and OS samples (Fig. 2F).

CD8+ T cell exhaustion within normal bone tissue and OS The plot_genes_in_pseudotime function was used to discern the correlation between the relative expression of 13 Tex-associated genes (extracted from the literature) [14] and the pseudotime trajectories in both the normal bone tissue (Figure S5A) and OS (Figure S5B) groups. Five genes, CTLA4, PDCD1, ENTPD1, LAG3, and HAVCR2, were upregulated towards the end of the trajectory in OS (Fig. 3A) compared with normal bone tissue (Fig. 3B). Subsequently, we employed a scoring framework based on these five Tex-associated genes to compute exhaustion scores for both normal bone tissue and OS specimens, which were projected onto the corresponding tSNE plots (Fig. 3C, D). Furthermore, we employed violin plots to highlight the disparity in Tex levels between the two groups (Fig. 3E). Remarkably, a significant distinction in Tex levels was observed between the two groups (P value < 0.01; Fig. 3F), indicating a greater prevalence of exhausted CD8+ T cells in OS. The three distinct CD8+ T cell subtypes exhibited divergent paths along the pseudotime trajectories, with C3 CD8+ T cells predominantly occupying the terminal position (Fig. 4A, B).
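A minimal Monocle 2 sketch of the trajectory analysis described in the Methods is given below; 'cd8' denotes a hypothetical Seurat object containing the CD8+ T cells and 'top15_markers' a hypothetical vector of cluster-specific genes. It is an illustrative sketch under these assumptions, not the authors' code.

```r
# Minimal Monocle 2 sketch of the pseudotime analysis described in the Methods.
library(monocle)
library(Seurat)

cds <- newCellDataSet(
  as.matrix(GetAssayData(cd8, slot = "counts")),
  phenoData   = new("AnnotatedDataFrame", data = cd8@meta.data),
  featureData = new("AnnotatedDataFrame",
                    data = data.frame(gene_short_name = rownames(cd8),
                                      row.names = rownames(cd8))),
  expressionFamily = negbinomial.size()
)
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

# DDRTree-based trajectory, ordering of cells, and trajectory plots
cds <- reduceDimension(cds, max_components = 2, method = "DDRTree")
cds <- orderCells(cds)
plot_cell_trajectory(cds, color_by = "seurat_clusters")
plot_cell_trajectory(cds, color_by = "State")

# Exhaustion-associated genes along pseudotime and a heatmap of cluster markers
tex_genes <- c("CTLA4", "PDCD1", "ENTPD1", "LAG3", "HAVCR2")
plot_genes_in_pseudotime(cds[tex_genes, ], color_by = "seurat_clusters")
plot_pseudotime_heatmap(cds[top15_markers, ], num_clusters = 3, show_rownames = TRUE)
```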
Figure 4C, D illustrate the inferred states at different points along the pseudotime continuum. Subsequently, the FindAllMarkers function was employed to identify cluster-specific genes within the CD8+ T cell subtypes, and their expression patterns were depicted in a pseudotime heatmap (Fig. 4E), which showed that the C3 CD8+ T cell subtype encompassed a considerable proportion of cells positioned towards the terminus of the trajectory. These results indicate that C3 represents an exhausted population of CD8+ T cells that is significantly elevated in OS.

Gene set variation analysis (GSVA) of CD8+ T cells in OS We employed GSVA to further investigate the molecular signaling pathways involved in the CD8+ T cell subtypes isolated from OS samples. The GSVA scores for hallmark pathways are recorded in Table S3, and a GSVA heatmap was generated to visualize all hallmark pathways (Fig. 4F). We found that the apoptosis, fatty acid metabolism, xenobiotic metabolism, and interferon (IFN) pathways were highly activated in the exhausted T cell (ExT) cluster of OS samples. These results provide critical insights into the molecular mechanisms that contribute to Tex and ultimately support the proliferation, invasion, and metastasis of OS cells.

Fig. 4 Pseudotime analysis and GSVA. The pseudotime trajectory colored by cell type in normal bone tissue (A) and osteosarcoma (B). The pseudotime trajectory colored by state in normal bone tissue (C) and osteosarcoma (D). E The expression of specific genes in different CD8+ T cell subpopulations along the pseudotime trajectory. F The heatmap of GSVA of hallmark pathways between CD8+ T cell subpopulations.

Screening of Tex-specific prognostic genes and construction of the prognostic model We identified a set of Tex-specific genes using the FindAllMarkers function (Fig. 5A, Table S4) and screened prognostic genes in two independent cohorts using univariate Cox regression analysis. The Venn diagram shows the 20 promising candidates (Fig. 5B, Table S5). Additional screening was conducted using LASSO followed by multivariate Cox regression analysis (Fig. 5C, D), ultimately providing us with two genes, MYC and FCGR2B. Double immunofluorescence showed the co-expression of these two genes in CD8+ T cells (Fig. 5E). The Tex risk score was defined as 0.569 × MYC - 0.937 × FCGR2B. Using this model, we stratified patients in the TARGET training cohort into low- and high-risk groups and observed that the low-risk group had a longer survival time than the high-risk group (Fig. 6A). Kaplan-Meier survival analysis revealed a significantly poorer prognosis in the high-risk group (P value < 0.05; Fig. 6B). The area under the curve (AUC) values at 1, 3, and 5 years were 0.904, 0.718, and 0.651, respectively, in the TARGET training cohort (Fig. 6C). These findings indicated that the Tex risk score accurately predicted the prognosis of OS patients, which was validated in the GSE21257 validation cohort (Fig. 6D-F). The AUC values in the validation cohort were 0.776, 0.855, and 0.806 at 1, 3, and 5 years, respectively. Then, we evaluated the clinical prognostic value of the Tex risk score by integrating it with the clinical characteristics of OS patients in the training cohort. The Tex risk score differed significantly between patients in the M0 and M1 stages, but not by gender or age, indicating a higher metastatic risk in the high-risk group (Fig. S7A-C).
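For illustration, the published two-gene score could be applied to an expression matrix and evaluated with Kaplan-Meier and time-dependent ROC analyses roughly as follows. This is a minimal sketch, not the authors' code: 'expr' (genes × samples) and 'clin' (survival time in years, status) are hypothetical inputs, and only the coefficients are taken from the reported formula.

```r
# Minimal sketch: apply the reported Tex risk score and evaluate it.
library(survival)
library(survminer)
library(timeROC)

risk_score <- 0.569 * expr["MYC", ] - 0.937 * expr["FCGR2B", ]
df <- data.frame(time   = clin$time,
                 status = clin$status,
                 group  = ifelse(risk_score > median(risk_score), "high", "low"))

# Kaplan-Meier comparison of the high- and low-risk groups
fit <- survfit(Surv(time, status) ~ group, data = df)
ggsurvplot(fit, data = df, pval = TRUE)

# Time-dependent ROC curves at 1, 3, and 5 years
roc <- timeROC(T = df$time, delta = df$status, marker = risk_score,
               cause = 1, times = c(1, 3, 5))
roc$AUC
```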
Univariate and multivariate Cox regression analyses revealed that the Tex risk score was an independent prognostic indicator for OS patients (Fig. S7D, E).

Construction and validation of a prognostic nomogram model We employed the rms package in R to construct a nomogram for predicting patient survival time. Using Cox regression based on survival time, survival status, and five clinical characteristics, we achieved a C-index of 0.788 (95% confidence interval: 0.704-0.872; Fig. 7A). To evaluate the prognostic accuracy of our model in the training and validation cohorts, we performed calibration, decision, and receiver operating characteristic curve analyses. The calibration and decision curves demonstrated excellent predictive ability in both cohorts (Figs. 7B, C, S6). Moreover, the AUCs for predicting 3- and 5-year survival were 0.811 and 0.762 in the training cohort and 0.913 and 0.925 in the validation cohort, respectively, demonstrating robust prognostic accuracy (Fig. 7D, E).

Drug susceptibility analysis and immunohistochemistry validation The low-risk group demonstrated markedly lower half-maximal inhibitory concentrations for Dasatinib and Pazopanib than the high-risk group (Fig. 8A). Furthermore, we used immunohistochemistry to confirm the expression of MYC and FCGR2B, both of which were significantly upregulated in OS tissues relative to adjacent normal tissues (Fig. 8B, C).

Discussion Tumorigenesis, progression, drug resistance, and immune escape are promoted by defective antitumor responses of immune cells within the highly heterogeneous and complex TIME of OS [15]. T cell exhaustion (Tex) has been demonstrated to exert immunosuppressive effects in the TIME and to constrain T cell-based immunotherapies [16]. However, its specific role in the context of OS remains unclear. A comprehensive understanding of Tex within the immune microenvironment of OS holds the key to overcoming it and enhancing clinical checkpoint blockade immunotherapies. Targeting Tex has emerged as a promising approach in the field of cancer immunotherapy [17]. Because targeting co-inhibitory checkpoints such as PD-1 and CTLA-4 alone is insufficient to fully restore T cell function, identifying other molecules involved in Tex is imperative. The advent of single-cell sequencing technologies has enabled interpretation of the TIME of OS at single-cell resolution. This approach facilitates the identification of rare subpopulations within the TIME and the exploration of their pro-tumorigenic functions as well as their phenotypic plasticity.

In the current study, we performed single-cell clustering analysis of single-cell RNA transcriptome data obtained from OS samples and normal bone tissues, and successfully identified eight distinct cell types within the integrated dataset. To unravel the heterogeneity of CD8+ T cells within the TIME of OS, a secondary clustering analysis was conducted to extract and categorize CD8+ T cells into three distinct subtypes in both the OS and healthy bone tissue groups. We found a substantially higher infiltration of C3 cluster cells in the OS samples. By assessing the expression levels of Tex marker genes across all cells, we observed varying degrees of Tex in the three CD8+ T cell clusters in OS compared with healthy bone tissue. Enrichment of Tex in the TIME has been associated with poor prognosis in various tumors.
Shen et al. demonstrated that patients with high Tex in chronic obstructive airway disease exhibited poorer prognosis, whereas patients with low Tex responded better to chemotherapy and immunotherapy [18]. Similarly, the Tex risk score was significantly associated with survival prognosis in esophageal adenocarcinoma [19]. Subsequently, we conducted pseudotime analysis on CD8+ T cells, revealing that the C3 cluster occupied the terminal state in the pseudotime trajectory. This finding confirmed the functional exhaustion characteristics of the C3 CD8+ T cell cluster. Furthermore, we examined the expression patterns of Tex marker genes along the pseudotime trajectory. In comparison to healthy bone tissue, we found that the expression of various Tex marker genes (PDCD1, CTLA4, LAG3, ENTPD1, HAVCR2) gradually increased with pseudotime in OS.

PD-1, a pivotal co-inhibitory receptor on activated T cells, interacts with overexpressed programmed cell death ligand 1 (PD-L1) on cancer cells, tumor-infiltrating lymphocytes, and stromal cells, detrimentally affecting the cytotoxicity of CD8+ T cells and consequently mediating immunosuppressive responses [20]. The upregulation of PD-L1 and PD-1 has been shown to correlate with adverse prognosis as well as relapse or metastasis in OS patients [21]. During tumor treatment, blockade of the PD-1 pathway can reactivate exhausted CD8+ T cells by reprogramming metabolism, promoting proliferation, and enhancing the expression of effector molecules [22]. However, in comparison to other tumors, OS is associated with low PD-L1 expression, poor immune infiltration, and limited response to checkpoint blockade [23]. CTLA-4 is an inhibitory receptor predominantly expressed on T cells that binds to CD80/CD86 on antigen-presenting cells, leading to impaired T cell function. In vivo studies have demonstrated that OS patients exhibit enhanced anti-tumor activity of cytotoxic T cells following treatment with CTLA-4 antibodies, revealing the potential of CTLA-4 inhibitors in the treatment of OS [24]. LAG-3 is highly expressed by tumor-infiltrating lymphocytes in cancer and represents an inhibitory receptor lacking a classical immunoreceptor tyrosine-based inhibitory motif; it negatively regulates the cell cycle and cellular functions through its KIEELE motif [25]. ENTPD1 (CD39) is a cell surface-expressed enzyme that hydrolyzes extracellular ATP. The binding of extracellular ATP to P2X receptors on T cells induces cytokine production and proliferation. Therefore, by hydrolyzing extracellular ATP, ENTPD1 impairs the function of effector T cells and mediates Tex [26]. HAVCR2 can interact with PD-1 and galectin-9, thereby regulating Tex and the efficacy of immunotherapy [27]. In conclusion, we identified five co-inhibitory molecules that mediate Tex in OS, thus providing valuable insights and novel immunotherapy targets.

Fig. 5 Identification of the Tex-specific genes associated with the prognosis of OS patients. A The heatmap of Tex-specific genes. B Venn plot of overlapping genes between prognostic genes and Tex-specific genes. C, D LASSO regression analysis to further screen candidate genes. E Representative micrographs of osteosarcoma sections stained by double immunofluorescence showing the co-expression of MYC, FCGR2B, and CD8 in CD8+ T cells. Scale bar indicates 20 μm.
The functional alterations of CD8+ T cell subpopulations in OS were investigated using GSVA. Apoptosis, fatty acid metabolism, xenobiotic metabolism, and the IFN pathway were found to be significantly activated in ExTs in OS. The microenvironment of OS is primarily hypoxic and acidic. Tumor cells predominantly depend on aerobic glycolysis and fatty acid metabolism to meet their energy demands, a metabolic strategy shared by highly efficient effector T cells [28]. Consequently, the competition for glucose as well as for other fuel sources, such as fatty acids and oxygen, may detrimentally affect the proliferation and activation of effector T cells, culminating in a state of exhaustion within the TIME. Simultaneously, the immunosuppressive metabolic by-products generated by the tumor itself may impede the functionality of T cells. The PD-1 signaling pathway is intricately connected to T cell metabolic pathways, and the impact of PD-1 blockade on the metabolism of ExTs has been investigated. The PD-1 signaling pathway inhibits the activation of protein kinase B, thereby suppressing the activity of mammalian target of rapamycin and eventually inhibiting T cell glycolysis [29]. Blocking the PD-1 signaling pathway reactivates the synthetic metabolism of ExTs and enhances glucose uptake in a mammalian target of rapamycin-dependent manner, which may contribute to the improvement of tumor-infiltrating lymphocyte function and tumor regression [30]. These findings suggest that cellular metabolic reprogramming may represent a crucial strategy for Tex reversal during immunotherapy. We also noted a pronounced activation of the IFN pathway in ExTs in OS. IFN-α/β are crucial pro-inflammatory cytokines that exhibit dual roles in tumors. They can suppress tumor growth by inducing anti-tumor activity within the immune system and activating innate immune cells [31]. The IFN-α/β signaling pathway is indispensable for T cell development and the generation of effector and memory T cells [32]. However, during cancer, IFN-α/β levels may rise, inducing the expression of PD-L1, a negative regulator of the immune system [33]. IFN-α/β can also promote the functional exhaustion of activated T cells through Fas/FasL-mediated T cell death [34]. Highly activated IFN-α/β signaling can also promote the terminal exhaustion of functional T cells by interfering with the transcription factor T cell factor-1, thus antagonizing the reservoir of progenitor ExTs [35]. Targeting these molecules or their corresponding receptors represents a promising strategy to reverse the impact of the tumor microenvironment on T cell function.
To uncover the prognostic role of Tex in OS, we assessed the prognostic value of Tex-specific genes using two independent OS bulk RNA-seq datasets. Subsequently, two Tex-specific biomarkers (MYC and FCGR2B) were selected to construct a Tex risk model. MYC, an oncogene encoding a nuclear phosphoprotein, plays a role in cell cycle progression, apoptosis, and cellular transformation. Amplification of this gene is frequently observed in many human cancers [36]. Previous studies have demonstrated that c-Myc is necessary for the S100A9-induced upregulation of PD-1/PD-L1 [37]. The overexpression of MYC induces the expression of CD47 and PD-L1 in tumor cells, allowing them to evade immune surveillance. Therapies aimed at inhibiting MYC expression and activity may potentially restore immune responses against human cancers [38]. The hypoxic state of the tumor microenvironment induces mitochondrial defects and promotes Tex in the TIME via the MYC regulatory pathway [39]. FCGR2B encodes a low-affinity receptor for the Fc region of immunoglobulin gamma complexes. The aberrant expression of tyrosine kinases is intricately associated with the invasiveness, metastasis, and angiogenesis of tumors [42]. Consequently, diverse tyrosine kinase inhibitors have been used to treat various solid malignancies, significantly enhancing the survival and quality of life of patients [43]. Hence, we posit that the Tex risk score carries promising potential to predict the effectiveness of targeted therapeutic interventions.

To the best of our knowledge, this is the first investigation of the role of Tex in the TIME of OS patients. Nonetheless, we must acknowledge certain limitations of this study. Firstly, the inherent heterogeneity among OS patients may limit the generalizability of findings obtained through single-cell RNA-seq studies. These results must be validated in large-scale cohorts to ensure their broader applicability. Secondly, the limited number of single-cell OS samples resulted in a relatively small population of ExTs, which restricts the depth of our understanding of their precise functions within the TIME of OS. Finally, the molecular mechanisms that promote Tex in the TIME of OS via MYC and FCGR2B require further investigation.

Conclusions This study explored the role of T cell exhaustion in the immune microenvironment of OS. The single-cell sequencing data revealed a notable increase in functionally exhausted T cells within OS samples, accompanied by the upregulation of exhaustion marker genes such as PDCD1, CTLA4, LAG3, ENTPD1, and HAVCR2. Simultaneously, we observed heightened activation of apoptosis, fatty acid metabolism, xenobiotic metabolism, and the IFN pathway within the ExT population in OS samples. Finally, we developed a prognostic model based on two Tex-related signatures that accurately predicted the clinical outcomes of OS patients. These findings offer novel perspectives for clinical decision-making and the formulation of treatment strategies in the context of OS.

Fig. 1 Results of cell clustering in OS and healthy bone tissues. A tSNE plots colored by cell clusters. B The expression patterns of marker genes for each cell cluster within the TIME.

Fig. 2 Reclusters of CD8+ T cells in OS and healthy bone tissues. A UMAP plot showing CD8+ T cell subclusters. B Bee swarm plot showing the expression of marker genes in CD8+ T cell subclusters. C Relative ratio of each cell cluster in OS and bone tissues.
Fig. 6 Evaluation and validation of the prognostic model based on Tex-specific genes in both the training and validation cohorts. Risk plot distribution (A), KM curve (B), and ROC curve (C) in the training cohort. Risk plot distribution (D), KM curve (E), and time-dependent ROC curve (F) in the validation cohort.

Fig. 8 Drug sensitivity analysis and validation of candidate genes by IHC. A Drug sensitivity analysis in different risk groups. B, C The expression levels of candidate genes in osteosarcoma and adjacent normal tissues.
6,500.2
2024-01-27T00:00:00.000
[ "Medicine", "Biology" ]
Phytocomplex of a Standardized Extract from Red Orange (Citrus sinensis L. Osbeck) against Photoaging Excessive exposure to solar radiation is associated with several deleterious effects on human skin. These effects vary from the occasional simple sunburn to conditions resulting from chronic exposure such as skin aging and cancers. Secondary metabolites from the plant kingdom, including phenolic compounds, show relevant photoprotective activities. In this study, we evaluated the potential photoprotective activity of a phytocomplex derived from three varieties of red orange (Citrus sinensis (L.) Osbeck). We used an in vitro model of skin photoaging on two human cell lines, evaluating the protective effects of the phytocomplex in the pathways involved in the response to damage induced by UVA-B. The antioxidant capacity of the extract was determined at the same time as evaluating its influence on the cellular redox state (ROS levels and total thiol groups). In addition, the potential protective action against DNA damage induced by UVA-B and the effects on mRNA and protein expression of collagen, elastin, MMP1, and MMP9 were investigated, including some inflammatory markers (TNF-α, IL-6, and total and phospho NFkB) by ELISA. The obtained results highlight the capacity of the extract to protect cells both from oxidative stress—preserving RSH (p < 0.05) content and reducing ROS (p < 0.01) levels—and from UVA-B-induced DNA damage. Furthermore, the phytocomplex is able to counteract harmful effects through the significant downregulation of proinflammatory markers (p < 0.05) and MMPs (p < 0.05) and by promoting the remodeling of the extracellular matrix through collagen and elastin expression. This allows the conclusion that red orange extract, with its strong antioxidant and photoprotective properties, represents a safe and effective option to prevent photoaging caused by UVA-B exposure. Introduction Skin is the organ with the largest contact area between the human body and the external environment against which it acts as a barrier. It not only protects the body from external environmental damage and prevents water loss, but it also has a certain cosmetic effect. As the largest organ of the human body, the skin shows obvious signs of aging due to various causes including environmental pollutants. Ultimately, the events that characterize skin aging can be mainly classified into chronological aging events and photoaging events [1]. Chronological aging, due to advancing age, is strongly influenced by ethnicity, the individual, and the skin site. It generates internal changes in the skin due to a decrease in the functionality of stem cells with a consequent slowdown in the keratinization process and therefore in skin repair. Photoaging, on the other hand, is caused by prolonged sun exposure particularly Extract Specifications The Red Orange Complex H extract (ROC H ® ) was obtained from the juice of three red orange (Citrus sinensis (L.) Osbeck) varieties: Moro, Sanguinello, and Tarocco with batch number 03202004-01. It was provided by Bionap S.R.L. (Belpasso, Catania, Italy) and standardized to the content of the following bioactive constituents: ferulic acid (2.2% w/w), narirutin (0.4% w/w), hesperidin (9.0% w/w), total flavanones (9.4% w/w), cyanidin 3-O-glucoside (3.1% w/w), and ascorbic acid (6.5% w/w). Quantification and identification of the secondary metabolites were conducted using HPLC-DAD analysis, except for cyanidin-3-O-glucoside quantified by UV spectroscopy. 
The extract was authenticated by DNA barcoding, and molecular diagnostic tests were performed by TRU-ID Ltd. (TRU-ID Ltd. Research, Guelph, ON, Canada). The certificate of authentication, including further product specification on purity, is available in the Supplementary Materials Section. SOD-Like Activity and DPPH Antioxidant Assay The scavenger effect of C. sinensis extract (0.08-0.160-0.250-0.320 µg/mL) on superoxide anion (SOD-like activity) was recorded as a decrease in absorbance at λ = 340 nm, as previously reported by Malfa et al. [17]. Results are expressed as the percentage of inhibition of NADH oxidation; as a reference compound, superoxide dismutase (SOD) (80 mU) was used. Results represent the average ± S.D. of three independent experiments. The free radical scavenging capacity, analyzed through the ability of the extract to bleach stable 1,1-diphenyl-2-picryl-hydrazyl radical (DPPH), was compared to Trolox (30 µM), a water-soluble derivative of vitamin E, used as a reference compound. After 10 min at room temperature, the absorbance at λ = 517 nm of the DPPH reaction mixture containing different concentrations of C. sinensis extract (20-40-80-160 µg/mL) in 1 mL of ethanol was recorded [15]. The results were obtained from the average of three independent experiments and are expressed as the % decrease in absorbance ± S.D. Cell Culture and Treatments Human foreskin fibroblasts (HFF-1) were obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA). HFF-1 were cultured in Dulbecco's Modified Eagle's Medium supplemented with 15% fetal bovine serum, 4.5 g/L glucose, 100 U/mL penicillin, and 100 mg/mL streptomycin. Immortalized human keratinocytes (NCTC 2544), isolated from the epidermis, were provided by Interlab Cell Line Collection, Genoa, Italy. NCTC 2544 were cultured in MEM medium (Minimal Essential Medium) containing 10% fetal serum and 1% penicillin/streptomycin in a 5% CO 2 atmosphere at 37 • C. After 24 h of incubation in a humidified atmosphere of 5% CO 2 at 37 • C to allow cell attachment, the cells were treated for 4 h with different concentrations (0.1-1-10-50-100-200-400 µg/mL) of C. sinensis extract previously dissolved in the medium and subsequently irradiated with UVA and/or UVB at different doses and times. Cell Culture Irradiation with UVA and UVB The irradiations were performed using 2 different types of photoreactors. For the UVA, the Rayonet was used equipped with two PRP lamps with emission centered at λ = 350 nm (UVA) and λ = 300 nm (UVA) (~1 mW/cm 2 ), while for the UVB, the photoreactor was used with reflecting wall equipped with a lamp with emission centered at λ = 280-300 nm (UVB) (~2 mW/cm 2 ) [18]. The doses of UVA and UVB used for the experiments were those capable of simulating sun exposure. Irradiation doses were calculated using the formula below: irradiation dose (mJ/cm 2 ) = exposure time (s) × irradiance (mW/cm 2 ). NCTC 2544 and HFF-1 cells were irradiated with UVA at different doses (1, 5, 10, and 15 J/cm 2 ) and with UVB at 5, 10, 20, 25, and 50 mJ/cm 2 . Five independently repeated experiments were performed to evaluate the effect of irradiation and combined treatment with C. sinensis extract. For all other experiments, the UVA dose used for irradiation of NCTC 2544 and HFF-1 cells was 15 J/cm 2 , while that of UVB was 25 and 50 mJ/cm 2 . 
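A worked example of the quoted dose formula (dose in mJ/cm² equals exposure time in seconds multiplied by irradiance in mW/cm²), assuming the approximate irradiances given for the two photoreactors, is sketched below; the exposure times are illustrative back-calculations, not values reported by the authors.

```r
# Worked example of the quoted formula:
#   irradiation dose (mJ/cm^2) = exposure time (s) x irradiance (mW/cm^2)
exposure_time <- function(dose_mJ_cm2, irradiance_mW_cm2) {
  dose_mJ_cm2 / irradiance_mW_cm2   # returns seconds
}

exposure_time(15000, 1)   # 15 J/cm^2 of UVA at ~1 mW/cm^2 -> 15000 s (about 4.2 h)
exposure_time(25, 2)      # 25 mJ/cm^2 of UVB at ~2 mW/cm^2 -> 12.5 s
exposure_time(50, 2)      # 50 mJ/cm^2 of UVB at ~2 mW/cm^2 -> 25 s
```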
MTT Bioassay Cell viability was performed on a 96 multiwell plate (8 × 10 3 cells/well) using MTT assay that measures the conversion of tetrazolium salt to yield colored formazan in the presence of metabolic activity. The amount of formazan is proportional to the number of living cells [15]. The optical density was measured with a microplate spectrophotometer reader (Titertek Multiskan, Flow Laboratories, Helsinki, Finland) at λ = 570 nm. Results are expressed as percentage cell viability with respect to control (untreated cells). Total Thiol Group Determination Total thiol groups (RSH) were measured using a spectrophotometric assay that was done following the procedure previously described [19]. Briefly, 200 µL of lysate supernatant was added to 2,2-dithio-bis-nitrobenzoic acid (DTNB), and the reaction of thiols with DTNB was measured at λ = 412 nm. Results are expressed as µmol/mg protein. Reactive Oxygen Species Assay ROS levels were measured by dichlorofluorescein diacetate (DCFH-DA) assay as previously described [20]. The fluorescence intensity was detected by fluorescence spectrophotometry (excitation, λ = 488 nm; emission, λ = 525 nm). Results are expressed as fluorescence intensity/mg protein, and for each sample, the total protein content was determined using the Sinergy HTBiotech instrument by measuring the absorbance difference at λ = 280 and λ = 260. RNA Extraction and Reverse Transcription-Polymerase Chain Reaction (RT-PCR) HFF1 and NCTC 2544 cells (1.5 × 10 5 cells/mL) were pretreated with C. sinensis extract (25 and 50 µg/mL) followed by UVB irradiation in 6-well plates and further incubated for 24 h. Total RNA was isolated using TRIzol (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. A total of 1 µg of total RNA was used to prepare the complementary DNA (cDNA) using QuantiTect Reverse Transcription Kit (Qiagen Inc., Valencia, MD, USA). RT-PCR were performed with the QuantiNova SYBR Green RT-PCR Kit according to the manufacturer's instructions on Rotor-Gene Q5PLEX using various predesigned and bioinformatically validated primers sequences (QuantiTect Primer Assays, Qiagen Inc., Valencia, MD, USA) for COLA1A1, ELN, MMP1, and MPP9, according to previously described methods [15]. mRNA expression levels were normalized to GAPDH (glyceraldehyde-3-phosphate dehydrogenase), and all experiments were repeated three times. Statistical Analysis The experimental data are presented as the mean ± standard deviation (S.D.). The statistical analysis of the data was carried out either by one-way analysis of variance (ANOVA) or Kruskal-Wallis test depending on the results of the normality analysis (Shapiro-Wilk normality test) followed by Tukey's post hoc test using the Graph Prism version 5. Differences were considered significant when p ≤ 0.05. Effects of C. sinensis Extract on HFF-1 and NCTC 2544 Cell Viability The treatment of HFF-1 and NCTC 2544 with C. sinensis extract (0.1-1-10-50-100-200-400 µg/mL) for 24 h and 5 days was unable to affect cell viability ( Figure S1, Supplementary Material). Since C. sinensis extract did not induce toxicity, in this research, 100 µg/mL was used as the maximum concentration of extract in order to avoid unnecessarily high concentrations. Effects of UVA and UVB Irradiation on Cell Viability of HFF-1 and NCTC 2544 Pretreated with C. sinensis Extract HFF-1 and NCTC 2544 cells were irradiated with UVA at 1, 5, 10, and 15 J/cm 2 and with UVB at 5, 10, 20, 25, or 50 mJ/cm 2 ( Figure 1A-D). 
Cell viability was determined 24 h after irradiation. In HFF-1 and NCTC 2544 cells, irradiation with UVA at a dose of 1 J/cm 2 did not modify cell viability, unlike doses of 5, 10, and 15 J/cm 2 , which significantly decreased it in a dose-dependent manner (Figure 1A,B). The dose of 15 J/cm 2 resulted in approximately 35% and 65% cell viability in HFF-1 and NCTC 2544 cells, respectively. Figure 1A,B show that keratinocytes were much more resistant than fibroblasts to UVA radiation. Figure 1C shows that the cell viability of UVB-irradiated fibroblasts decreased in a dose-dependent manner, reaching approximately 70% and 60% at doses of 25 and 50 mJ/cm 2 , respectively. UVB irradiation of NCTC 2544 induced a reduction in cell viability in a dose-dependent manner, reaching 40% and 30% at 25 mJ/cm 2 and 50 mJ/cm 2 , respectively (Figure 1D). Pretreatment with C. sinensis extract (0.5-5-50-100 µg/mL) for 4 h was able to increase, in a dose-dependent manner, the cell viability of HFF-1 cells irradiated with a dose of 15 J/cm 2 of UVA (Figure 2A,B). At concentrations of 5, 50, and 100 µg/mL, the extract protected HFF-1 and NCTC 2544 cells from radiation-induced damage by increasing their cell viability in a dose-dependent manner (Figure 2A,B). Figure 2B confirms that keratinocytes are much more resistant than HFF-1 cells to UVA irradiation, and pretreatment with C. sinensis extract slightly increased viability compared to untreated irradiated cells. Cell viability of HFF-1, reduced by UVB irradiation at 25 mJ/cm 2 and 50 mJ/cm 2 , was increased by pretreatment with the C. sinensis extract (Figure 3). In particular, the protective effect was greater in cells irradiated at 50 mJ/cm 2 (Figure 3A,B).
Figure 3C,D confirm that the cell viability of NCTC 2544 cells irradiated at 25 and 50 mJ/cm 2 decreased by about 60% compared to the control and that pretreatment with the extract (50 and 100 µg/mL) was able to increase viability in a dose-dependent manner with respect to irradiated, untreated cells. The scavenger activity of the extract on the superoxide anion was determined using a method that excluded Fenton-type reactions and the xanthine/xanthine oxidase system. The extract was found to have a significant, dose-dependent scavenger effect, and its effect at 0.320 µg/mL was comparable to that of the reference control superoxide dismutase (SOD 80 mU) (Figure 4A). DPPH Radical Scavenging Activity Assay The scavenger activity of the extract was also confirmed by testing its quenching effect using the DPPH test. The extract showed a significant dose-dependent quenching effect, reaching an efficacy of almost 90% at a concentration of 160 µg/mL; this effect was comparable to that of 30 µM Trolox (Figure 4B). Therefore, the results from this assay demonstrate that the quenching activity of C. sinensis extract was obtained at concentrations higher than those effective against the superoxide anion. This is probably due to the steric hindrance of the radical itself and, therefore, to the smaller size of the superoxide radical compared to the DPPH radical. In fact, it is known that the steric accessibility of the DPPH radical represents the determining factor for the reaction; thus, small molecules, with better access to the radical site, show a higher antioxidant power. To clarify the protective mechanism of the C. sinensis extract, we measured the levels of RSH in HFF-1 and NCTC 2544 cells pretreated for 4 h with 50 µg/mL or 100 µg/mL of extract and subsequently irradiated with UVA and/or UVB. The determinations were carried out 3 h after irradiation, in order to assess the cell response properly. In HFF-1 and NCTC 2544 cells, irradiation with UVA at 15 J/cm 2 and with UVB at 25 mJ/cm 2 or 50 mJ/cm 2 induced a decrease in RSH levels compared to the non-irradiated controls. In HFF-1 cells, the treatment with 50 and 100 µg/mL in irradiated cells preserved RSH levels and counteracted their decrease (Figure 5A). On the other hand, the C. sinensis extract was able to bring the levels of RSH back to control values in NCTC 2544 cells (Figure 5B). UVB irradiation, by reducing RSH levels, makes fibroblasts sensitive to cell damage and therefore susceptible to cell death. The results obtained confirm that C. sinensis, by significantly counteracting the decrease in RSH levels, may protect irradiated cells (Figure 5C). A significant decrease in RSH levels was also detected in the NCTC 2544 cells irradiated with UVB and, consistent with the results obtained in the MTT assay, the pretreatment with the extract, although it countered the reduction, was not able to restore control RSH levels (Figure 5D). C. sinensis Extract Reduced Reactive Oxygen Species Levels To define the role of oxidative stress in photoaging, we measured the ROS levels in HFF-1 and NCTC 2544 cells irradiated with UVA and UVB. The irradiation resulted in increased ROS levels in both cell lines. Consistent with the RSH results, pretreatment with C. sinensis extract reduced ROS levels in both HFF-1 and NCTC 2544 cells irradiated with UVA and UVB, confirming the antioxidant ability of the extract (Figure 6).
C. sinensis Extract Counteracted DNA Damage Induced by UVA and UVB Irradiation To confirm the protective effect of the extract, DNA damage was also evaluated in cells using the comet assay. UVA and UVB irradiation of both cell lines resulted in DNA damage that was counteracted by pretreatment with C. sinensis extract in a dose-dependent manner (Figure 7). Protective Effects of C. sinensis Extract on Extracellular Matrix Remodeling: Collagen I, Elastin, MMP-1, and MMP-9 We examined whether the C. sinensis extract can modulate ECM remodeling biomarkers by evaluating the transcriptional and translational expression of elastin (ELN), type I collagen (COL1A1), and matrix metalloproteinases (MMP1 and MMP9) by RT-PCR and immunoblotting assay. As shown in Figure 8 for HFF-1 cells, UVB treatment markedly downregulated the transcriptional and translational levels of type I collagen and, to a lesser extent, those of elastin, while the pretreatment with C. sinensis extract restored their expression. Unlike HFF-1 cells, no expression of type I collagen and elastin was detected in human NCTC 2544 keratinocyte cells (Figure 9D). Moreover, UVB exposure induced the expression of MMP9 and MMP1 in both cell lines, and pretreatment with C. sinensis extract significantly reduced the UVB-stimulated MMP mRNA expression and protein levels in a concentration-dependent fashion (Figures 8B,E-G and 9A-D). Positive Effects of C. sinensis Extract on the Expression of Inflammatory Markers: TNF-α, IL-6, and NFκB To verify the involvement of inflammation in photoaging, medium from unirradiated or UVB-irradiated cells, with or without C. sinensis extract pretreatment, was collected 24 h post-irradiation and assayed for TNF-α, IL-6, and total and phospho-NFκB by ELISA.
TNF-α was undetectable in unirradiated fibroblasts and keratinocytes, while irradiation enhanced TNF-α release in a dose-dependent manner in both cell lines. This UVB effect was especially evident in human keratinocytes. With C. sinensis extract pretreatment, TNF-α levels were reduced, and the 50 µg/mL concentration was the most efficient, with a drop of nearly 50% (Figure 10A,B). Upon UVB irradiation, IL-6 showed a trend similar to TNF-α, with a drastic increase in IL-6 release. Only pretreatment with 50 µg/mL of C. sinensis extract decreased the IL-6 amount, by about 35-40%, in both cell lines exposed to 25 and 50 mJ/cm 2 UVB (Figure 10C,D). The activation of NFκB p65 by phosphorylation was significantly raised by UVB but lessened by C. sinensis extract in a concentration-dependent manner (Figure 10E-H). Discussion Among the different types of oranges, the varietal group of red oranges has the particularity of containing a high quantity of polyphenols, flavonoids, ascorbic acid, and anthocyanins, both in the pulp and in the epicarp [16].
Many experimental results have shown that red orange extracts demonstrate potent antioxidant activity, anti-inflammatory properties, and cytoprotective effects, thereby preventing different chronic pathological conditions [23][24][25][26][27][28]. Recently, it was reported that polyphenols may act as a sunscreen from within, by reducing inflammation and oxidative stress and by protecting DNA from the harmful effects of solar radiation [29]. Photoaging is mainly due to exposure to UVA and UVB radiation, which, through different molecular mechanisms, alters the cellular redox state, DNA integrity, and skin extracellular matrix homeostasis and induces proinflammatory processes [30]. One of the causes of photoaging is certainly the oxidative stress induced by ultraviolet radiation, which can cause a decrease in the levels of endogenous antioxidants present in the skin [31][32][33]. The protective effects of plant extracts are closely related to their active ingredient content; in fact, there is a strong correlation between polyphenol content and the antioxidant power of a drug [12]. Results from cell-free antioxidant assays (Figure 4) confirm that the phytocomplex present in the extract, composed of hydroxycinnamic acids, flavanones, anthocyanins, and vitamin C, has significant antioxidant properties, capable of counteracting radical chain reactions and preventing possible damage from oxidative stress. The antioxidant effects were validated in both cell lines irradiated with different doses of UVA and UVB. The increased levels of ROS due to radiation exposure were neutralized by the extract, which, at a concentration of 100 µg/mL, was able to maintain ROS levels comparable to those measured for control cells (Figure 6). In addition, the RSH content, the most important intracellular redox buffer, with glutathione as the main representative nonprotein thiol [34], was completely restored by the pretreatment at 100 µg/mL, thus maintaining a reduced cellular environment (Figure 5). Low intracellular nonprotein thiol levels are frequently considered a biomarker of aging as well as of pathological conditions [35]. The prevention of oxidative damage induced by UVR in the skin is strictly associated with DNA integrity. In fact, UVR affects DNA directly via photon absorption and indirectly by ROS production, either way inducing genotoxic effects [36]. Among the secondary metabolites of red oranges, flavonoids are a main component, capable of genoprotective activities directly related to their antioxidant action [37]. In fact, flavonoids and other secondary metabolites exerted a protective action in a dose-dependent manner in both cell lines irradiated with UVA-B, as demonstrated by the results of the comet assay. This further confirms the positive effect of the pretreatment with the extract on the redox state and on the integrity of the DNA (Figure 7). According to other studies, the prevention of oxidative damage in the skin through a balanced diet and exogenous supplementation with antioxidants could help reinforce the endogenous skin protection systems, representing an effective strategy for the protection of the skin from oxidative damage mediated by UVR [31,32,38]. UVR acts on the dermis of the skin, accelerating the hydrolysis of skin collagen and promoting the production of matrix metalloproteinases (MMPs), responsible for the destruction of tissues and the progressive degeneration of the extracellular dermal matrix [6,7,39].
In our experimental model, the pretreatment with C. sinensis extract was able to reduce the deleterious effects of UVB radiation by increasing mRNA and protein expression of type 1 collagen and elastin in HFF1 cells and concomitantly decreasing, in both cell lines, the MMP1 and MMP9 protein and mRNA expression levels with respect to irradiated control cells (Figures 8 and 9). Previous studies demonstrated that in addition to vitamin C, polyphenols, particularly flavonoids, also increased collagen and elastin content and inhibit MMP expression in cell cultures [11,40]. The resultant oxidative stress induced by UVR with the generation of ROS, induces proinflammatory factors by altering the NFκB signal transduction pathways. Therefore, activated skin cells release soluble cytokines that promote the process of aging, inflammation, apoptosis, and carcinogenesis [10,41]. Red orange extracts showed marked antiinflammatory effects both in vitro and in vivo [16,24]. Moreover, hesperidin, narirutin, and cyanidin were demonstrated to suppress proinflammatory cytokine genes including TNF-α, IL-1β, and IL-6 [24,42]. Reported results also show that pretreatment with C. sinensis extract limited the release of inflammatory mediators such as TNFα and IL-6 after exposure to UVB radiation (Figure 10), thus limiting the inflammatory cellular response in both cell lines. These results further confirm the antioxidant and anti-inflammatory properties of the extract. Moreover, the polyphenol content of the extract was also able to counteract the activation of NFκB p65 by phosphorylation, as demonstrated by the obtained values regarding the ratio between NFkB and p-NFkB ( Figure 10). It has been shown that several flavonoids modulate the inflammation pathway by inhibiting NFkB phosphorylation and preventing its translocation into the nucleus, which is essential for the transcription of proinflammatory mediators responsible for long-lasting inflammatory processes [43]. The results obtained in the present study further confirm that by reducing phosphorylation of NFκB, C. sinensis extract may prevent and/or counteract the activation of the inflammation pathway, thus contributing to the inhibition of photoaging. Conclusions The present results demonstrate that phytochemicals contained in red orange standardized extract, mainly represented by flavanones and anthocyanins, exerted a photoprotective action on fibroblasts and keratinocytes exposed to UVA-B radiation, by preventing oxidative stress, DNA damage, extracellular matrix degradation, and remodeling and inflammatory responses related to photoaging. In fact, the phytocomplex confers photoprotection through the inhibition of cellular oxidative damage counteracting skin injuries. According to recent literature data in the field, the use of botanical extracts appears to be an effective additional strategy to reduce the incidence of UV-mediated oxidative photodamage. Certainly, further preclinical and clinical research should be carried out to better confirm the photoprotective potential and the effectiveness of the oral administration of nutraceutical doses of the C. sinensis standardized extract.
6,629
2022-04-25T00:00:00.000
[ "Biology" ]
An association study of FOXO3 variant and longevity Abstract Human longevity is a polygenic and multifactorial trait. Pathways related to lifespan are complex and involve molecular, cellular, and environmental processes. In this analytical observational study, we evaluated the relationship between environmental factors, oxidative stress status, DNA integrity level, and the association of FOXO3 (rs2802292), SOD2 (rs4880), APOE (rs429358 and rs7412), and SIRT1 (rs2273773) polymorphisms with longevity in oldest-old individuals from southeastern Brazil. We found an association between the FOXO3 GG genotype and gender. While lifestyle, anthropometric, and biochemical characteristics showed significant results, DNA damage and oxidative stress were not related to lifespan. We found that long-lived individuals with the FOXO3 GT genotype had low levels of triglycerides. This study is the first to demonstrate that FOXO3 could be a candidate gene for longevity in the Brazilian population. These results are important for the provision of health care for age-related diseases and lifespan, and provide insight for further research on epigenetics, gene regulation, and expression in oldest-old individuals.

Introduction Life expectancy in the world has more than doubled in the last two centuries. People aged 85 years or more, often designated the "oldest-old", are the fastest-growing age group. Longevity is a multifactorial condition, affected by environmental and genetic factors, as well as by oxidative and genomic damage. Several hypotheses have been postulated to explain aging and lifespan, which have attracted widespread scientific and public interest (Brooks-Wilson, 2013; Simm and Klotz, 2015). The reactive oxygen species (ROS) theory of aging relates aging to oxidative stress and macromolecule damage. Twin-based research has shown that the genetic contribution to longevity is approximately 25% and becomes more profound after the age of 85 (Perls et al., 2000). Single nucleotide polymorphisms (SNPs) are commonly used in human longevity studies to investigate common variants associated with lifespan.

The Forkhead box O3 (FOXO3) gene mediates responses to metabolic and oxidative stress and participates in the insulin/insulin-like growth factor-1 signaling (IIS) pathway. Because of this, the FOXO3 rs2802292 variant has been associated with lifespan (Soerensen et al., 2015). Also involved in longevity is the SOD2 (Superoxide Dismutase 2) gene, which encodes a manganese-dependent superoxide dismutase enzyme (Mn-SOD) and is implicated in oxidative stress. Among the SNPs in SOD2 related to longevity, rs4880 is the most studied (Gentschew et al., 2013). Another gene largely studied in relation to longevity and diseases that affect older people is the Apolipoprotein E gene (APOE). Two of its SNPs, rs429358 and rs7412, affect a protein of relevance in lipid metabolism (Zhong et al., 2016). Sirtuin 1 (SIRT1, Silent Information Regulator Type 1) is a NAD+-dependent deacetylase that belongs to the family of SIR proteins. SIRT1 rs2273773 has been shown to be involved in DNA repair, resistance to oxidative stress, and lifespan (Howitz et al., 2003).
Studies on the association between longevity and genetic, oxidative, and genomic damage markers have significant clinical importance (Fragoso et al., 2015). However, these works have often yielded conflicting results, and not all genetic variants have been replicated. Therefore, the main objective of this work was to study the association of FOXO3 (rs2802292), SOD2 (rs4880), APOE (rs429358 and rs7412), and SIRT1 (rs2273773) polymorphisms with longevity, oxidative stress status, and DNA integrity level in oldest-old individuals from southeastern Brazil. Subjects This is an observational and analytical study of 452 unrelated individuals. The sample of long-lived individuals (LLI) included 220 participants aged ≥85 years. The control group had 232 elders aged between 70 and 75 years, which is close to the 73.5-year average lifespan of the Brazilian population (Instituto Brasileiro de Geografia e Estatística, 2010). The chosen age range of controls is in accordance with studies claiming that elderly people with ages close to the average lifespan of a given population are influenced more by genetic factors than by environmental factors (Willcox et al., 2008; Anselmi et al., 2009; Flachsbart et al., 2009). Moreover, there are no data on mortality controls in Brazil. Likewise, recent predictions by the Instituto Brasileiro de Geografia e Estatística (2013), in the Complete Mortality Table for Brazil, state that the Brazilian elderly population has a life expectancy of 84.7 years at the exact age of 70 and of 86.7 years at the exact age of 75. Therefore, only a small portion of the controls admitted in our study is expected to reach the age established for the LLI group, since they presented a mean expected survival time of 13.2 years (the average of the remaining 14.7 and 11.7 years implied by the 70-84.7 and 75-86.7 intervals). In each group, sex and age were matched. All the selected participants were from the metropolitan region of Espírito Santo, Grande Vitória, in Southeast Brazil. A geriatrician assisted all participants for 20 years in the Geriatric Unit of the Santa Casa de Misericórdia de Vitória Hospital resting home (Abrigo à Velhice Desamparada Alta Loureiro Machado, ES, Brazil). This study was approved by the Committee of Human Research of the Universidade Federal do Espírito Santo, Health Sciences Center, Brazil, and performed in accordance with The Code of Ethics of the World Medical Association. To participate in the study, elders or their relatives gave written informed consent. Each individual answered a questionnaire adapted from the International Commission for Protection against Environmental Mutagens and Carcinogens (Carrano and Natarajan, 1988), the Program Gênesis-Gravataí (Flores et al., 2013), and the Survey on Quality of Life Short Form 36 (SF-36), adapted to the Brazilian population (Ware and Sherbourne, 1992). Questions covered demographic and socioeconomic characteristics, such as age, sex, ethnicity, education and income, smoking and alcohol consumption, physical activity, diet, and medical issues such as vaccinations, medication, chronic diseases, mental health, and functional ability. A face-to-face interview with the participants was conducted by trained researchers. Medication and the presence of disease were confirmed through the patients' records.
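The expected-survival figure above is simple arithmetic over the life-table values quoted from the Instituto Brasileiro de Geografia e Estatística; the short Python sketch below merely restates that calculation (only the two endpoint values come from the text, everything else is illustrative).

```python
# Remaining life expectancy of controls, using the life-table values quoted in the text:
# 84.7 years expected at exact age 70, and 86.7 years expected at exact age 75.
remaining_at_70 = 84.7 - 70   # 14.7 years remaining for a 70-year-old control
remaining_at_75 = 86.7 - 75   # 11.7 years remaining for a 75-year-old control

mean_survival = (remaining_at_70 + remaining_at_75) / 2
print(f"Mean expected survival of controls: {mean_survival:.1f} years")  # 13.2
```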
Blood sampling The blood samples were collected in 2014 and 2015. Eight milliliters of peripheral blood were collected into a 5% ethylene diamine tetraacetic acid (EDTA) tube (Vacuette, Greiner Labortechnik, Germany). For genotyping, samples were stored at 4 °C prior to analyses. For the oxidative stress and genomic damage analyses, 4 mL of blood were reserved. Genomic DNA was extracted according to previous methodology (Miller et al., 1988). The concentration and purity of genomic DNA were measured using a NanoDrop 2000 Spectrophotometer (Thermo Fisher Scientific, Delaware, USA). Anthropometric, physical and biochemical data Measurements of body mass index (BMI) and waist-hip ratio (W/H) were taken from the subjects. Height and weight were assessed with a stadiometer and a scale (Filizola, São Paulo, Brazil), and the hip and waist measurements were taken with a non-extendable anthropometric tape (Sanny, São Paulo, Brazil). Subjects were categorized into BMI groups according to Pan American Health Organization (Organização Pan-Americana da Saúde, 2003) criteria. For the waist-hip ratio classification, the following cut-off values were adopted: <1.0 for men and <0.85 for women (World Health Organization, 2000). Biochemical and physical data were defined using the specific criteria of Xavier et al. (2013) and Oliveira et al. (2015) and assessed through patients' records. Oxidative stress analysis and comet assay Participants selected for malondialdehyde (MDA) analysis, an oxidative stress parameter, and the alkaline comet assay had to be non-smokers, not be taking antioxidant supplements, not consume alcohol, not have recently undergone x-ray scans, and not have recently undergone surgeries. A total of 100 participants, 50 LLI and 50 controls, were evaluated for MDA levels. Plasma samples were analyzed for MDA levels by high-performance liquid chromatography with diode-array detection (HPLC-DAD) (Antunes et al., 2008) and run in duplicate. Average values were reported as mmol/L of plasma. To investigate genotoxic damage to peripheral blood cells, 15 LLI and 15 controls were evaluated using an alkaline comet assay, as previously described (Singh et al., 1988; Tice et al., 2000). DNA damage was determined by analysis of 100 randomly selected cells from each individual, and tail size, scored visually, was divided into four categories, ranging from no tail (no damage) to maximally long tails (maximum damage). Statistical analysis Statistical analysis of the data was performed using SPSS software v 23.0 for Windows (IBM Corporation, Armonk, New York, USA), and p<0.05 was considered significant. To test the association between longevity and the rs2802292 FOXO3, rs4880 SOD2, rs429358 and rs7412 APOE, and rs2273773 SIRT1 polymorphisms, Pearson's Chi-square or Fisher's exact tests, with odds ratios (OR) and 95% confidence intervals (CI), were carried out. Moreover, conformity to Hardy-Weinberg Equilibrium (HWE) was assessed (p<0.05).
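As a rough, self-contained illustration of the association testing and Hardy-Weinberg check described in the statistical analysis above, the sketch below uses SciPy rather than the authors' SPSS workflow, and the genotype counts are hypothetical placeholders (the real counts are reported in Table 2).

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: genotype carriers vs. non-carriers in LLI and controls.
table = np.array([[60, 160],    # LLI:      carriers, non-carriers
                  [45, 187]])   # controls: carriers, non-carriers

chi2, p_chi2, _, _ = stats.chi2_contingency(table)   # Pearson's Chi-square test
odds_ratio, p_fisher = stats.fisher_exact(table)     # Fisher's exact test

# Wald-type 95% confidence interval for the odds ratio.
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(f"OR={odds_ratio:.3f}, 95% CI={ci_low:.3f}-{ci_high:.3f}, "
      f"Chi-square p={p_chi2:.3f}, Fisher p={p_fisher:.3f}")

# Hardy-Weinberg equilibrium check from hypothetical genotype counts (GG, GT, TT).
observed = np.array([50, 110, 60])
n = observed.sum()
p_g = (2 * observed[0] + observed[1]) / (2 * n)       # frequency of the G allele
expected = n * np.array([p_g**2, 2 * p_g * (1 - p_g), (1 - p_g)**2])
hwe_chi2 = ((observed - expected) ** 2 / expected).sum()
print(f"HWE p={stats.chi2.sf(hwe_chi2, df=1):.3f}")   # 1 df for a biallelic SNP
```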
Frequencies of demographic and socioeconomic characteristics were compared for each gender and for both groups (controls and long-lived) using Pearson's Chi-square or Fisher's exact tests for categorical variables, and Student's t-tests for continuous variables. For the comparison of biochemical, physical, and anthropometric characteristics and oxidative stress status between the LLI and control groups, Student's t-test was used for continuous variables. The normality of the data was verified. For the genomic damage analysis, the comparison between LLI and controls was performed using a Mann-Whitney test. To evaluate the distribution of the oxidative damage product, DNA damage, and clinical, anthropometric, and biochemical characteristics according to FOXO3 genotypes within long-lived individuals and controls, we used one-way analysis of variance (ANOVA) followed by Tukey's test for continuous variables, and Pearson's Chi-square test for categorical variables; a code sketch of these comparisons is given below. To test the association between FOXO3, SOD2, APOE, and SIRT1 genotypes and the health status of long-lived individuals, Pearson's Chi-square or Fisher's exact tests were used. "Healthy" was defined as the absence of chronic diseases (cardiovascular disease, diabetes, cancer, neurodegenerative, and respiratory diseases) and good functional ability (Willcox et al., 2008; Ware and Sherbourne, 1992). Results Demographic, socioeconomic, anthropometric, biochemical, and physical characteristics of the groups, as well as oxidative stress status and genomic damage, are shown in Table 1. The average ages are 72.4 ± 1.7 years for controls and 89.3 ± 4.6 years for LLI. Female participants are predominant in the sample, as are Caucasian individuals. Most individuals live in their own homes. Among the long-lived individuals, 63.2% were 85-89 years old, 33.6% were nonagenarians, and 3.2% were centenarians. A significant difference in marital status was observed for married controls (50.9%) and LLI widowers (57.6%) (p<0.001). Most participants had one to four years of formal education, with a significant difference between groups (p=0.015); the LLI group had a higher number of individuals who lacked formal education. They also showed lower triglycerides (p=0.009), lower glucose (p=0.017), and lower BMI (p=0.003). We observed that the LLI had a higher fruit intake per day, although this had borderline significance (p=0.052). In the sample as a whole, the average BMI was 26.00 ± 4.57 kg/m². According to the W/H ratio, 19.4% of men and 45.4% of women presented risk for metabolic disease (data not shown). Most individuals did not consume alcohol. We observed a significant difference in alcohol consumption between groups (p=0.019). Men and women in the control sample drank less alcohol (p<0.001). Concerning smoking, we found a higher proportion of women who never smoked, of men who quit smoking, and of men who currently smoke, both in the long-lived (p=0.008) and control (p<0.001) groups. Most individuals reported that they did not exercise. Comparing the groups, there was a difference in the proportion of people who did not exercise (p=0.004). In LLI, the difference in proportion between men and women was marginally significant (p=0.047).
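The group comparisons referred to in the statistical analysis above (one-way ANOVA with Tukey's post-hoc test for continuous variables, and the Mann-Whitney test for the comet-assay scores) can be sketched as follows; the data here are randomly generated stand-ins rather than the study measurements, and SciPy/statsmodels are used in place of SPSS.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Stand-in triglyceride levels (mg/dL) for the three FOXO3 genotype groups within LLI.
gg = rng.normal(120, 30, 40)
gt = rng.normal(110, 30, 90)
tt = rng.normal(150, 30, 50)

f_stat, p_anova = stats.f_oneway(gg, gt, tt)                # one-way ANOVA
values = np.concatenate([gg, gt, tt])
groups = ["GG"] * len(gg) + ["GT"] * len(gt) + ["TT"] * len(tt)
print(pairwise_tukeyhsd(values, groups))                     # Tukey post-hoc comparisons

# Stand-in comet-assay damage categories (0 = no tail ... 3 = maximal tail), 15 per group.
lli_damage = rng.integers(0, 4, 15)
ctrl_damage = rng.integers(0, 4, 15)
u_stat, p_mw = stats.mannwhitneyu(lli_damage, ctrl_damage)   # Mann-Whitney U test
print(f"ANOVA p={p_anova:.3f}; Mann-Whitney p={p_mw:.3f}")
```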
As for medical family history, cancer was more frequently observed in controls (60.5%), whereas heart disease was more frequently observed in LLI (44.8%). The difference in proportions between controls and LLI was significant (p=0.006). In both the malondialdehyde and DNA damage analyses, average levels were higher in LLI than in controls, although the differences were not significant. The genotype and allele frequencies of FOXO3 (rs2802292), SOD2 (rs4880), APOE (rs429358 and rs7412), and SIRT1 (rs2273773) for both genders, LLI, and controls are shown in Table 2. In the LLI group, an association was observed between the FOXO3 GG genotype and gender (OR=0.348; 95% CI=0.139-0.873; p=0.02). However, no association was found between this genotype and longevity when stratifying the sample into elderly men (OR=0.414; 95% CI=0.150-1.142; p=0.088) and elderly women (OR=1.137; 95% CI=0.666-1.941; p=0.639) within the two groups. No significant difference was observed for the other SNPs. All polymorphisms were in HWE in both the LLI and control groups. Comparing the distribution of biochemical variables across the FOXO3 genotypes, there was a significant difference in average triglyceride levels among LLI (p=0.036): individuals with the GT genotype showed lower levels of triglycerides, while individuals with the TT genotype showed higher levels. For the other variables (clinical variables, and oxidative and genomic damage), no significant difference was found (Table 3). The distribution of FOXO3, SOD2, APOE, and SIRT1 genotypes between healthy and frail LLI was also assessed, and the data are shown in Table 4, though no significant difference was observed. We did not find an association between protective genotypes, health status, and longevity. Discussion The present study aimed to evaluate the association of FOXO3 (rs2802292), SOD2 (rs4880), APOE (rs429358 and rs7412), and SIRT1 (rs2273773) polymorphisms with longevity, and the relationship between genomic damage and oxidative stress status, in elderly people of southeastern Brazil. A polymorphism of FOXO3 was associated with gender, and anthropometric and biochemical characteristics showed significant results. No relationship was found between longevity and DNA damage status or oxidative stress. Environmental factors may play a role in age-related diseases and longevity, but the relative importance of these factors remains unclear. Previous studies have demonstrated that lifestyle, diet, and socioeconomic, biochemical, and anthropometric characteristics affect the development of age-related diseases as well as the health and lifespan of the general population (Praticò, 2002; Britton et al., 2008). Dutta et al. (2011) found no relationship between years of education and lifespan. In our study, we noted that 86.8% of controls and LLI had up to four years of education. We believe that this is because, in the 1930s, these individuals had less access to education in Brazil. In addition, the typical monthly income of an elderly person in our sample was $200.00, which could also explain the lower education levels. A majority of the LLI are widowed and, of the controls, married. However, the Dutta et al. (2011) study, which followed elderly individuals from 65 to 85 years of age, showed that, on average, marital status did not influence participants' survival. Analysis of other biological factors in our study showed low levels of triglycerides and glucose, low BMI, low alcohol consumption (93.7%), and a tendency, in the LLI group, to eat more fruit per day. Dutta et al.
(2011) also showed that participants with a BMI of 30.0 kg/m² were less likely to achieve longevity. In a longitudinal study, Hodge et al. (2014) noted that longevity was correlated with both fruit consumption and moderate alcohol consumption. In the Multinational MEDIS Study, the oldest-old had lower BMI levels and a prevalence of dyslipidemia, but there was no difference between controls and the oldest-old with respect to education or marital status (Tyrovolas et al., 2016). We believe these parameters are related to successful aging in our study. DNA damage and products of oxidative stress have also been studied in relation to longevity. No difference in the levels of these two biomarkers was observed in our sample. Lower levels of malondialdehyde may not have been found because most of the studied individuals, in both the control and LLI groups, had diseases related to aging that could promote oxidative imbalance. As for DNA integrity, studies show that individuals aged ≥85 years have levels of genomic lesions either higher than or similar to those of younger elders (Franzke et al., 2015). However, our results characterize the frailty of aging per se, as discussed in Taufer et al. (2005). Moreover, we cannot rule out other defense pathways against reactive oxygen species and/or biomarkers that were not studied in the present work and which may affect longevity (Praticò, 2002; Saeed et al., 2005). The hypothesis of oxidative stress resistance states that the increase of genomic damage with age is accompanied by efficient antioxidant and repair mechanisms in successful aging (Franzke et al., 2015). FOXO3 is located on chromosome 6q21 and belongs to a subfamily of transcription factors that target longevity regulators implicated in the insulin and insulin-like growth factor signaling pathways (Martins et al., 2016). The FOXO3 protein is involved in diverse cellular and physiological processes, including cell proliferation, apoptosis, cellular responses to oxidative stress, cancer, cell cycle regulation, metabolism, and longevity (Tzivion et al., 2011). The rs2802292 SNP in FOXO3 is a G > T change. A study of the Danish population investigated 15 SNPs in the FOXO3 gene and involved 1088 participants (Soerensen et al., 2015). This research showed a positive association of FOXO3 rs2802292 and 4 other SNPs of this gene with phenotypes shown to predict survival in a combined sample of male and female oldest-old individuals. No association between FOXO3 and type 2 diabetes was found in an elderly Indian population study, which included a sample of 994 type 2 diabetic individuals and 984 normoglycemic controls (Nair et al., 2012). In our sample, we found a significant gender-related difference for the GG genotype in the LLI group, although lifespan was not associated with the FOXO3 GG genotype in either men or women. Unlike our study, Willcox et al. (2008) and Anselmi et al. (2009) found that the FOXO3 GG genotype was associated with longevity in long-lived Japanese and Italian men, respectively. Willcox et al.
(2008) also observed that the G allele and GG genotype frequencies tended to be increased in long-lived vs. control men. A possible reason for the FOXO3 SNP results found in our study is the small sample size (particularly for gender-specific effects), which may decrease the statistical power to detect associations. FOXO3 may play a role in determining longevity, probably by enabling those who have the protective genotype to be shielded in some way from oxidative stress, cell death, and alterations in glucose metabolism (Soerensen et al., 2015). Moreover, in our work the oldest individuals carrying the G allele (in the GT genotype) of FOXO3 had lower levels of triglycerides compared to individuals who were homozygous for the T allele. These results corroborate Willcox et al.'s (2008) work, which found that lower levels of triglycerides may be a phenotype related to healthy aging, and that individuals with at least one G allele have a higher protection factor for longevity compared to individuals homozygous for the T allele. Lower triglyceride values (≤150 mg/dL), like other such variables, are inversely correlated with the increase of visceral adipose tissue and thus associated with a lower risk of developing metabolic disease (Xavier et al., 2013), which may otherwise culminate in an unsuccessful aging process. However, we did not identify an association between the G allele of FOXO3 and lifespan in our work. The SOD2 protein (Mn-SOD) is involved in oxidative stress regulation, a pathway related to longevity. It is a major defense against ROS in the mitochondria and acts in the matrix, converting superoxide radicals into hydrogen peroxide (da Cruz, 2015). The rs4880 SNP in SOD2 (chromosome 6q25.3) is a T > C change that replaces valine with alanine, which may disturb SOD2 protein activity and unbalance the oxidant-antioxidant equilibrium in the mitochondria (Shimoda-Matsubayashi et al., 1996). Although rs4880 is the most studied SNP in SOD2 (Gentschew et al., 2013), results from different studies are inconsistent. One of these works tested the association of the rs4880 SNP with longevity in a sample of 1650 long-lived individuals from Denmark and observed that individuals with the C allele had decreased mortality (p=0.002) (Soerensen et al., 2009). A study conducted in the south of Brazil tested for age-related mortality with 489 volunteers divided into three groups (newborns, 21-79-year-old adults, and 80-105-year-old elders), but no association was found (Taufer et al., 2005). Gentschew et al. (2013) found no association in a study of 1612 long-lived individuals (>95 years old) and 1104 controls (60-75 years old) in a German population. Similarly, we demonstrated that SOD2 is not associated with human longevity in our population. APOE, located on chromosome 19q13.2, has three different isoforms: e2 (cys112, cys158), e3 (cys112, arg158), and e4 (arg112, arg158), designated by two SNPs, rs429358 (T>C) and rs7412 (C>T). Because of its involvement in cholesterol transport processes, this protein can influence lipid trafficking and can lead to oxidative stress, neuronal damage, and inflammation (Huebbe et al., 2011). No association was found between APOE and hypertension in a population of 1406 elderly individuals from Bambuí, Brazil (Fuzikawa et al., 2008). However, the e4 allele proved to be a risk factor for premature death in a GWAS of a Canadian population of healthy oldest-old individuals (Tindale et al., 2014). We did not see a link between APOE and longevity in our population.
SIRT1 is a candidate gene for longevity and health promotion, located on chromosome 10q21.3. The SIRT1 protein resides in a nuclear compartment and is a member of a class I family of seven proteins. The activity of this protein depends on the NAD+/NADH ratio, a key indicator of oxygen consumption, suggesting that this protein has a physiological role in regulating metabolic homeostasis (Giblin et al., 2014). Because of SIRT1's potential role as a mediator of lifespan, SIRT1 polymorphic variants, such as the rs2273773 T>C SNP, have been studied previously (Flachsbart et al., 2006). However, only a small number of SIRT1 SNP studies are related to lifespan in humans. In recent years, these polymorphic variants have been investigated in the context of metabolism or calorie restriction and have been associated with aging-disease-related phenotypes (Nogueiras et al., 2012). This association is also supported by a study showing that SIRT1 genetic variation affects lipid profiles in a sample of 382 Ashkenazi Jews (Han et al., 2015). Flachsbart et al. (2006) found no association for this SNP in a sample of 1245 long-lived German individuals. Comparably, our findings demonstrated no association between SIRT1 variants and longevity in older individuals. The present work shows a lack of association of FOXO3 (rs2802292), SOD2 (rs4880), SIRT1 (rs2273773), and APOE (rs429358 and rs7412) with longevity. A possible reason for this result may be the small size of our sample. Longevity association studies frequently use large samples of around 1000 individuals, for instance (Di Bona et al., 2014; Broer et al., 2015). However, this is the first longevity study in the state of Espírito Santo, Brazil, and we aim to expand the sample population. Another potential reason may be the different age ranges of the individuals in the LLI and control groups. Some studies that have found an association with longevity used different age ranges for LLI and controls (Kilic et al., 2015). Additionally, ethnicity may mask certain genetic markers of longevity, considering that Brazilian populations are a mixture of Iberian Caucasians, West Africans, and Native Americans (Pena et al., 2011). Each population has its own ethnic features, and allele and genotype frequencies can vary between different regions. No relationship was observed between healthy oldest-old individuals and FOXO3, SOD2, APOE, and SIRT1 genotype frequencies. This result can be explained in the context of data from the Global Burden of Disease Study 2013 (Murray et al., 2015), which showed that the healthy life expectancy of Brazilians, considering disability-adjusted life-years (DALYs) and healthy life expectancy (HALE), is 65 years. Among the LLI of our sample, 61.9% (135) had chronic-degenerative diseases and functional disabilities. Taking into consideration that all individuals in our study were at least 85 years old, our results are consistent with this information, as they were 20 years or more beyond the healthy life expectancy and thus at risk for morbidities and disability.
In conclusion, although longevity is the result of multiple and complex features, our work suggests that environmental factors and FOXO3 could have an intricate effect on human longevity. Our research contributes to the characterization of the complex mechanisms of aging and lifespan. It may also support the development of better treatments and offer opportunities for the diagnosis and prevention of age-related diseases, thus postponing aging and/or prolonging healthy lifespan, and help establish more effective public health strategies. Other approaches, such as epigenetic control and gene regulation and expression, also warrant investigation because they can help in understanding the mechanisms that regulate lifespan (Kilic et al., 2015; Benayoun et al., 2015). Overall, we believe that our study helps to pave the way to a promising future of genomic geriatrics and personalized medicine.
Table 1 - Characteristics of the study sample. Data are presented as mean ± SD or SEM (standard deviation or standard error of the mean) or as the number of subjects with the percentage in parentheses. Abbreviations: n - number of individuals; LLI - Long-Lived Individuals; a.u. - arbitrary units; * comparison between genders; ** comparison between Controls and LLI; p-values are uncorrected; the adjusted residuals of the χ² test were used for significant data.
Table 2 - Distribution of genotypes and alleles in the study groups. Abbreviations: n = number of individuals; LLI = Long-Lived Individuals; OR = Odds Ratio; CI = Confidence Interval; * comparison between genders; ** comparison between Controls and LLI; *** p-values are uncorrected; for APOE, n = 213 (LLI) and n = 213 (Controls). FOXO3 (rs2802292:G>T, RefSeq NM_001455.3), SOD2 (rs4880:T>C, RefSeq NG_008729.1), SIRT1 (rs2273773:T>C, RefSeq NM_001142498.1), and APOE (rs429358:T>C and rs7412:C>T, RefSeq NG_007084.2).
Table 3 - Biochemical, anthropometric, and clinical variables and oxidative and genomic damage according to FOXO3 genotypes. Data are presented as mean ± SD or SEM or as the number of subjects with the percentage in parentheses. Abbreviations: n - number of individuals; LLI - Long-Lived Individuals; a.u. - arbitrary units; * comparison between mean values of parameters for genotypes within LLI and controls; ** Odds Ratio was not calculated because p > 0.05; a: p < 0.05 (Tukey test); for heart disease and diabetes, values refer to carriers of that morbidity. FOXO3 (rs2802292:G>T, RefSeq NM_001455.3).
5,695.2
2018-06-11T00:00:00.000
[ "Biology" ]
On the normative equivalence paradigm in cyberspace. The constant evolution of communication technology and the expansion of Cyberspace have had a pervasive effect on all areas of human life, activities, and interactions, which law has unsuccessfully tried to regulate. Cyberspace was for a long time considered uncharted territory, an unlimited and open space outside the control of States, and the limits on the admissible or accepted conduct of States and other stakeholders were blurred. In this context, the most important challenge and pressing need is to identify normative guidelines applicable in this environment, considering its specific features (unlimited scope, worldwide availability, anonymity). The aim of the paper is to challenge the elements of the so-called normative equivalence, which was developed by several international bodies first in relation to human rights safeguards and then extended as generally applicable; some special approaches at the level of the European Union (protection of personal data, the strategy on cybersecurity) will not be addressed. Unlike state territory, which has a material and physical dimension, Cyberspace is entirely human-made [4] and constitutes a complex global network: a logical space, unlimited, imperceptible, non-materialized, time-dependent [5], and constantly changing, which cannot exist without the physical support of infrastructure [6]; it is an interconnected information system created by non-state actors, with no borders [7], parallel to the physical territory. All these features make it a sui generis phenomenon [8] and probably one of the greatest creations of humanity. At the same time, we should highlight that Cyberspace is not only a civil and commercial space; it is also a military space, used for military purposes and activities. In order to justify regulation of Cyberspace, it has been characterised as chaotic, anarchic [9], and asymmetric in relation to resources and capabilities in cyberspace [6]. While such regulation would protect the interests of States in relation to other States, affirming this principle does not resolve all the subsequent issues: state responsibility and the consequences for the interests and rights of private persons and non-state actors. The complexity of the environment itself and the diversity of its users (States, non-state actors, private companies, private persons) have in fact produced multiple layers and different types of relationships between those users, depending on their status and on the nature of their acts. Hence, the environment can be divided, technically and legally, into several parts: public networks, accessible to everyone, not restricted by internal borders, and with free access; territorial networks, such as military intranets or government networks, with limited access; and exclusive networks (for e-government services, business, and finance), where access is limited to authorised persons [10].
Due to the lack of a standard or objective definition of Cyberspace, the term is generally used in relation to any aspect of the use of computer and internet networks [5], and the term "cyber" has been used to describe almost anything that has to do with networks and computers, especially in the security field. Etymologically, the term Cyberspace comes from 'cybernetics', which in turn is derived from the Ancient Greek 'kybernētēs', meaning 'steersman, governor, pilot, or rudder'. Norbert Wiener defined the term 'cybernetics' in the title of his book "Cybernetics: Or Control and Communication in the Animal and the Machine". The idea that humans can interface with machines and that the resulting system can provide an alternative environment for interaction provides a foundation for the concept of Cyberspace [5]. Legally defining Cyberspace may be impossible due to its dynamic nature and the unpredictability of its future evolution. However, the environment cannot be considered unregulated or uncharted; rather, it is regulated under a multistakeholder model [11]. Attempts to define and regulate Cyberspace States have not succeeded in the task of completely regulating Cyberspace and of clarifying the norms and obligations incumbent on them and on other stakeholders in this environment. There are very few treaties or regional legally binding instruments governing issues of Cyberspace: the Convention on Cybercrime [12], its Additional Protocol [13], the Shanghai Cooperation Organisation's International Information Security Agreement [14], the International Telecommunication Union Constitution and Convention [15], and the International Telecommunication Regulations [16]. All these instruments are limited in scope and terms and do not contain a definition of Cyberspace or of the meaning of responsible State behaviour in this environment. The Budapest Convention on Cybercrime is limited in scope to the criminal behaviour of persons through information systems and does not clarify jurisdictional issues. The Shanghai Cooperation Organisation agreement (led by Russia and China) [17] defines cyberwarfare but does not tackle other elements of State behaviour or the applicable rules of International Law [18].
It also proposed, in 2015, the International Code of Conduct for Information Security [19], with very little effect and reaction from other States. From the perspective of the terms used, the International Code for Information Security is interesting, yet it has only a declarative purpose. At least partially and formally, the purpose of this code of conduct is similar to the recommendations established within the UN, as it "is to identify the rights and responsibilities of States in the information space, promote constructive and responsible behaviour on their part and enhance their cooperation in addressing common threats and challenges in the information space, in order to establish an information environment that is peaceful, secure, open and founded on cooperation, and to ensure that the use of information and communications technologies and information and communications networks facilitates the comprehensive economic and social development and well-being of peoples, and does not run counter to the objective of ensuring international peace and security." [20] It refers to the Charter of the United Nations and especially to the principles of sovereignty, territorial integrity, and political independence [21]. Overall, it refers to all the rules of International Law that may be related to the cyber activity of States. As opposed to other instruments, this Code uses and focuses on the notion of the information society, not on cybersecurity [22]. A relevant example of a regional effort in identifying rules applicable to cyber operations is the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (the Tallinn Manual 2.0) [23], considered to be the most comprehensive academic publication on the subject of cyber activities [24]. It contains useful findings concerning the principles of sovereignty and non-intervention in Cyberspace; yet it does not provide an answer to all questions and focuses on issues related to the use of force in peacetime, preventive self-defence, cyber-attacks, and the evaluation of the imminence of cyber-attacks. As has been estimated, the Tallinn Manual proved highly influential in matters concerning the activity of States in Cyberspace [25]. There are also several private initiatives on rules applicable in Cyberspace. The Paris Call for Trust and Security in Cyberspace [26] was launched in 2018 as a multistakeholder initiative and formulated nine principles; principle number 9, on international norms, promotes the widespread acceptance and implementation of international norms of responsible behaviour, as well as confidence-building measures, in Cyberspace. A Digital Geneva Convention, proposed by Microsoft in 2017, underlines the importance of international humanitarian law in cyberspace without giving details on how and to what extent it is applicable, and it has been both appreciated [27] and criticised [28]. Consensus that International Law applies in Cyberspace As a result of the work of different bodies within the United Nations on norms applicable in Cyberspace, it is generally accepted that International Law is applicable to cyber operations [29,30]. This consensus puts an end to the controversy in this regard and to the idea that States should not be involved in regulating this environment so as not to affect the free internet. One legal consequence of this finding is that States should comply with the correlative rights and obligations in their activities therein [31].
The conclusions found in recent soft-law acts adopted by the United Nations through different working groups [32] refer to voluntary, non-binding norms, rules, and principles of responsible State behaviour in cyberspace; to the finding that International Law rules apply in cyberspace; to the need to implement confidence-building measures to build trust between States; and to the need to increase global capacity for ensuring cybersecurity [29]. These reports are considered to be among the first United Nations initiatives in this regard [33]. The common feature of all legal instruments related to State activity in Cyberspace is that they all recognize that existing International Law provides the framework for regulating the conduct of States in cyberspace [34]. However, none of them provides any indication of exactly how this conclusion may be translated into practice, and they cannot be considered elements of international custom. State practice continues to be ambiguous or silent, although States enjoy the prerogative of formally adopting rules of conduct applicable in Cyberspace [24]. Overview of the relevant conclusions of the UN GGE Within the United Nations, the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (hereinafter UN GGE) was created in 2004 [35] as an exclusive body [24], having at the beginning 15 member States: Belarus, Brazil, China, France, Germany, India, Jordan, Malaysia, Mali, Mexico, Russia, South Africa, South Korea, the United Kingdom, and the United States of America [36]. Later, 25 States were members of the UN GGE [37]. The GGE reports to the General Assembly of the United Nations. Several sessions were held; the most important ones are those of 2013 [38] and 2015, when it was stated that the principles of the United Nations Charter apply to States' conduct and operations in Cyberspace [41]. Although a small group, its work is generally considered the first step in identifying cyber norms and addressing the question of responsible behaviour of States in Cyberspace [39]. One of the most relevant conclusions of the 2013 Report of the UN GGE [40] is "that State sovereignty and the international norms and principles that flow from it apply to States' conduct of ICT-related activities and to their jurisdiction over ICT infrastructure within their territory".
All in all, the report contains recommendations and uses the notions of principle, norm, and rule interchangeably [39]. Despite and beyond this vagueness, the report directly and repeatedly links State sovereignty and international norms to ICT-related activities and to the jurisdiction of States over ICT infrastructure within their territory. It mentions that it "offers the following non-exhaustive views on how International Law applies to the use of ICTs by States: (b) In their use of ICTs, States must observe, among other principles of international law, State sovereignty, sovereign equality (...)" and comply with their international obligations, which corresponds to general rules of international law [41]. The 2015 UN GGE Report confirms, in similar terms, the rules and principles of International Law found applicable by the previous report. Moreover, it asserts that it considered how International Law applies to the use of ICTs by States [42], but a thorough reading and analysis does not reveal clearer answers than the previous ones. The 2015 Report also contains a series of recommendations [43], framed as voluntary norms, rules, and principles for States' behaviour in Cyberspace. That was the maximum result achieved by the UN GGE, because in 2017 it ended in deadlock, unable to adopt a consensus report [44]. The points of divergence among States concerned in particular the right to self-defence and the applicability of international humanitarian law to cyber conflicts [45]. The deadlock of the UN GGE and the creation of the OEWG From a critical point of view, the findings of these reports are actually limited: they affirm the application of International Law rules and enunciate fundamental principles and the need for cooperation between States and other stakeholders in Cyberspace, but they also highlight the lack of consensus on how these rules apply [46], which is the most important issue to be addressed. Moreover, the findings of the UN GGE have at most the legal nature of recommendations or soft law, and the fact that the Group is currently experiencing deadlock cannot lead to the conclusion that there is opinio juris among States. As a consequence, in 2018 the UN created an Open-ended Working Group (OEWG) [47] with similar prerogatives. Also in 2018, the UN General Assembly established a new GGE to work on these issues starting from 2019 [48]. In contrast with the UN GGE, the OEWG is inclusive [29], as it is open to all interested United Nations member States, thus creating the possibility for a larger number of States to participate and express their opinions. Hence it is a more open and accessible framework for discussion. The objective expressed in resolution A/RES/73/266 "was to provide all Member States the opportunity to engage in interactive discussions and share their views on issues under the GGE's mandate" [48]. The first substantive meeting took place in December 2019; it found "broad agreement … that the cumulative outcomes of the previous GGE reports should serve as a basis for discussions in both the GGE and the OEWG" and established that "the new GGE should focus on moving beyond what was already agreed in 2015" and that the 2015 recommendations should be respected [49].
The fact that the UN GGE failed to give clear guidance on how the principles of International Law apply in Cyberspace has been considered either a "blessing in disguise" or a major setback [45]. Yet this situation does not automatically render Cyberspace unregulated. All these actions reflect a genuine concern and interest on the part of States and the United Nations, yet it is rather unlikely that the future work of international bodies within the United Nations or other international organizations will be clearer on issues connected to cyber sovereignty. The existence of two working groups with similar mandates may prove inefficient and highly politically influenced. Instead of achieving transparency and predictability, the positions of States could remain ambiguous. The failure to reach consensus on how International Law applies is determined by the political nature [44] of these special working groups. However, it must be stressed that the role of the reports of the GGE and OEWG is not a normative one, because they may not adopt cyber norms [44], a term that has itself been criticised and considered inappropriate [50]. Legal uncertainty on the extent to which rules of International Law are applicable in Cyberspace The norms regarding responsible State behaviour in Cyberspace, identified by the work of the two working groups, are generally considered norms voluntarily accepted by States [42]. This characterisation is partially inadequate and presents the great disadvantage of their non-binding force; therefore, they are not part of hard law and no international obligations may derive from them. Such a characterisation is highly relevant, for example, from the perspective of applying the sovereignty principle in Cyberspace, for establishing jurisdiction and other legal consequences. Therefore, these conclusions lead to legal uncertainty. However, the work of the OEWG highlights the essential role of the 2015 Report of the UN GGE and the need to implement the voluntary norms identified in order to protect the public core of the internet, all through a human rights approach [51]. Furthermore, on 30 October 2020, the OEWG document The future of discussions on ICTs and Cyberspace at the UN [52] proposed the establishment of a Programme of Action for advancing responsible State behaviour in Cyberspace, with a view to ending the dual-track discussions (GGE/OEWG) and establishing a permanent United Nations forum to consider the use of ICTs by States in the context of international security, taking into account the acquis of the previous GGEs' and the OEWG's work, in order to ensure a secure, stable, accessible, and peaceful Cyberspace. The positions expressed within the OEWG are in perfect congruence with the concept of global digital cooperation proposed by the United Nations Secretary-General in the 2019 Report The Age of Digital Interdependence [53], referring to "ways of working together to address the societal, ethical, legal and economic impacts of digital technologies in order to maximise benefits to society and minimise harms" and to a means of exchanging knowledge and ideas. The approach of the UN in this regard is in accordance with the idea of an open Cyberspace, with access for every person to this environment, intended to support an inclusive digital economy and society [53].
All forms of rules proposed on the cyber activities of States and international organizations (reports, statements or guides of best practices, codes of conduct, scholarly works) are not binding and, generically, could all be included in the category of soft law [54] or quasi-norms [55]. The main legal consequence of this situation is that their violation does not engage the international responsibility of States (in the sense of the Draft Articles on State Responsibility) [56] and does not involve the same legal remedies [57]. For example, as a general rule, a breach of an international obligation gives rise to reparations [58]. Applying this principle to cyber operations, if a State's cyber activity violates another State's sovereignty, the victim State has the right to reparations [57]; yet this particular issue is still unclear. The regulation of Cyberspace appears to be in its beginning phase. In the future process of clarifying the content of the rules applicable to States in Cyberspace, the evolutive interpretation of existing norms and frameworks may prove useful. In this regard, the opinion of the International Court of Justice expressed in the Advisory Opinion on Namibia may be relevant; the Court held that an "international instrument has to be interpreted and applied within the framework of the entire legal system prevailing at the time of interpretation" [59], thus taking into consideration the development of the legal system. What is the meaning of normative equivalence? Affirming that the same rules and principles are applicable in the offline and online environments, and to the same extent, is one way to resolve the dilemma of an uncharted Cyberspace. However, it is not a complete solution and also raises many questions and uncertainties, given the special features of this environment. As previously mentioned, equivalence was also promoted in the human rights field by several international bodies, both jurisdictional and non-jurisdictional. Moreover, it was anticipated by international legal instruments providing that human rights and fundamental freedoms must be respected regardless of frontiers. This is the case of freedom of expression, enshrined in several universal and regional instruments that are legally binding or part of soft law. For example, Article 19 of the Universal Declaration of Human Rights provides that "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." The same terms, "regardless of frontiers", are used by the International Covenant on Civil and Political Rights in its Article 19, by the European Convention on Human Rights and Fundamental Freedoms in its Article 10, and by the American Convention on Human Rights in its Article 13. Therefore, States have the obligation to respect freedom of expression in any environment, bearing a negative and a positive obligation at the same time [60]. Since 2012, the Human Rights Council has stressed "that the same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one's choice, in accordance with articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights;" [61] and at the same time "2.
Recognizes the global and open nature of the Internet as a driving force in accelerating progress towards development in its various forms; 3. Calls upon all States to promote and facilitate access to the Internet and international cooperation aimed at the development of media and information and communications facilities in all countries;" The UN Human Rights Council reiterated these conclusions in 2014 [62]. Previously, the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, had underlined the implications of the internet for freedom of expression and access to information [63]. Also at the universal level, UNESCO has worked assiduously, usually in cooperation with other entities, to support Internet freedom and access to information, and has published a dedicated series on Internet Freedom in connection with media freedom and pluralism [64]. These works are particularly relevant given the moment they were adopted, and they run counter to the idea of applying cyber sovereignty as a form of control over the national territory. For example, the Declaration of Principles Building the Information Society: a global challenge in the new Millennium [65], drafted in collaboration with the International Telecommunication Union, highlights the importance of access to information for the information society and for the development of persons and of society in its entirety. Restrictions on the exercise of freedom of expression and access to information are admissible if they are in accordance with the requirements of the limitation clause provided by paragraph 3 of Article 19. The 2011 General Comment No. 34 on freedom of expression of the Human Rights Committee [66] also underlines this conclusion in the following wording: "Any restrictions on the operation of websites, blogs or any other internet-based, electronic or other such information dissemination system, including systems to support such communication, such as internet service providers or search engines, are only permissible to the extent that they are compatible with paragraph 3. Permissible restrictions generally should be content-specific; generic bans on the operation of certain sites and systems are not compatible with paragraph 3. It is also inconsistent with paragraph 3 to prohibit a site or an information dissemination system from publishing material solely on the basis that it may be critical of the government or the political social system espoused by the government." Free access to the Internet and digital networks without any barriers, whether technical, structural, or educational, is also supported by the OSCE [67]. The Council of Europe has taken the same approach of applying the standards recognized for the offline environment to the online one [68]. Not least, even the 2015 UN GGE Report expressly mentions respect for fundamental rights, including the right to freedom of expression, and the resolution and report adopted by the Human Rights Council and the General Assembly [42]. As a consequence, there must be a balance between the admissible actions of States that constitute interferences with the normal exercise of this right and its content. Establishing full State control over the infrastructure and the flow of data and information, as an effect of State cyber sovereignty, would impact the right of access to information in a disproportionate and illegitimate way.
The general opinion on this issue gave rise to the idea of fundamental digital rights [69], which actually shows that the equivalence paradigm does not completely solve the problem of the framework applicable to human rights in Cyberspace. The situation is the same with respect to the rules and principles of International Law applicable in Cyberspace. Scholars and the works of different international bodies have revealed that the existing framework may not be completely adequate to address the complications created by the special features of Cyberspace, especially with regard to human rights. Exploring the efforts and works of different bodies within the United Nations clearly shows that the challenges persist and that the topics to be addressed have become more complex over time, which indicates that a specialized approach is needed. At the United Nations level, following the limited success of the previous working groups, which are still active and continue their mandates (in 2020, the OEWG was renewed for the period 2021-2025 [70]), a Programme of Action on cybersecurity (PoA) was established at the initiative of over 40 States; it will become a permanent, more inclusive mechanism after the 2021-2025 OEWG and will address comprehensively the issue of international norms in Cyberspace [71]. Conclusions Applying the same normative framework to the online environment is clearly the best solution against considering Cyberspace unregulated. However, the challenges posed by this continuously and rapidly developing environment must be addressed, and at the moment this is one of the most sensitive topics of International Law. Normative equivalence must also be analysed through a critical lens, which emphasizes the need for new human rights norms and implementation strategies specifically designed for application in cyberspace. Further work by States and international bodies is needed to define the content, scope, and limits of International Law principles and rules, by reference to those already established. The fact that different bodies within the United Nations have not been able to successfully develop International Law rules through diplomatic means and channels underscores this need and the inadequacy of the equivalence paradigm for this environment. Although its work and findings have been incomplete and sometimes criticized, the United Nations continues to be the most appropriate and legitimate forum, open to all States, for discussing all types of issues related to cyberspace. Reaching a common view is nevertheless more difficult, but discussions and debates in the General Assembly would offer the premises for opinio juris on State behaviour and for a global governance of Cyberspace.
The interests of the multiple stakeholders in Cyberspace are interrelated and interconnected. As an example connected to International Law issues, the concept of cyber sovereignty, founded on the principle of sovereignty, is opposed to the idea of global governance of Cyberspace and does not take into account the interests of other stakeholders, such as private companies and private persons, to whom Cyberspace is open. There is no obstacle in International Law to regulating Cyberspace and the conduct of States and other stakeholders by reference to principles and rules already established, which may be adapted to the special features of this environment. As it appears, State sovereignty in Cyberspace is limited and does not have the same scope as over the physical territory between a State's borders. The best approach in this regard would be to consider that State sovereignty in Cyberspace relates to the elements of physical infrastructure on which the existence of Cyberspace depends; yet the issues related to State jurisdiction and extra-territorial effects, and what constitutes a breach of sovereignty in Cyberspace, should be clarified. Most States are silent and do not publicly express their positions on how International Law applies. A possible interpretation is that they are reluctant to bind themselves to new rules until they are sure of how those rules will apply and how the technology and the cyber environment will develop in the foreseeable future. Another possible interpretation is that silence leaves room for any type of activity. This makes finding common ground on how International Law applies more difficult, but not an impossible task.
6,106.8
2023-01-01T00:00:00.000
[ "Computer Science" ]
TGFβ/BMP immune signaling affects abundance and function of C. elegans gut commensals The gut microbiota contributes to host health and fitness, and imbalances in its composition are associated with pathology. However, what shapes microbiota composition is not clear, in particular the role of genetic factors. Previous work in Caenorhabditis elegans defined a characteristic worm gut microbiota significantly influenced by host genetics. The current work explores the role of central regulators of host immunity and stress resistance, employing qPCR and CFU counts to measure the abundance of core microbiota taxa in mutants raised on synthetic communities of previously isolated worm gut commensals. This revealed a bloom, specifically of Enterobacter species, in immune-compromised TGFβ/BMP mutants. Imaging of fluorescently labeled Enterobacter showed that TGFβ/BMP-exerted control operated primarily in the anterior gut and depended on multi-tissue contributions. Enterobacter commensals are common in the worm gut, contributing to infection resistance. However, disruption of TGFβ/BMP signaling turned a normally beneficial Enterobacter commensal pathogenic. These results demonstrate specificity in gene-microbe interactions underlying gut microbial homeostasis and highlight the pathogenic potential of their disruption. All animals harbor complex communities made of diverse microbes, and those of the gut are the most extensive ones. Gut microbes are often referred to as commensals, that is, causing no harm and having no benefit, and in any given condition some may indeed be just so; but overall, gut microbiotas are beneficial, contributing to features as diverse as development, metabolism, immunity, fecundity, and even behavior [1][2][3][4][5]. Furthermore, abnormal microbiota composition (or dysbiosis) is associated with pathology, and in some cases (e.g., obesity and potentially aging) has been shown to play causal roles 6,7. In determining the factors that shape microbiota composition, work in vertebrates has been instrumental in revealing a significant impact of diet 7,8. Environmental factors, such as geography or lifestyle, were also shown to contribute [9][10][11]. Less is known about the role of genetic factors, which were suggested to have a relatively modest effect size on the microbiota 12. Nevertheless, one might expect that advantages provided by beneficial microbes to a host over its peers should promote selection of genes and gene variants that enable colonization by such microbes, resulting in host-specific microbiotas shaped to varying degrees by genetic factors. Consistent with this, species-specific gut microbiotas have been identified in various organisms, including apes, bees, termites, and Caenorhabditis elegans [13][14][15][16]. In a few instances, the composition of these microbiotas was shown to be associated with host evolution. For example, in bees, the appearance of specific core gut bacterial lineages coincides with the emergence of eusocial bees from solitary ancestors 14, and the composition and functional impact of microbiotas were found to track phylogenetic relatedness between species of several clades, including deer mice, Drosophila, mosquitoes, and wasps 17. Our own analysis of nematode microbiotas identified a significant contribution of host genetics to microbiota composition 18. However, the specific genes behind such contributions are mostly unknown. Results from human studies demonstrated that host metabolism and immunity can shape the human gut microbiota.
Human twin studies identified the lactase gene locus as associated with the abundance of Bifidobacterium 19, and innate immune genes, such as C-type lectins, have been shown to contribute to shaping human gut microbiota function and composition 20. Whereas human studies rely on associations between natural genetic variation and microbiome composition, studies using model organisms, such as mice, can directly test the effect of a specific host function. In one such study, mice lacking CARD9, an adaptor protein required for innate immune responses, were found to harbor an altered microbiota, compromised in the production of aryl hydrocarbon receptor ligands, which led to increased susceptibility to colitis 21. However, for the most part, distinguishing gene effects from inter-individual variation in vertebrate models is not trivial, hindering the ability to identify influences of host factors on microbiota composition. An alternative is offered by invertebrate models such as Drosophila melanogaster and C. elegans. Studies in Drosophila have identified mechanisms enabling immune tolerance of gut microbes and determining the abundance of gut commensals 22,23. C. elegans offers the additional advantage of working with self-fertilizing, genetically homogeneous populations, averaging out inter-individual variation to discern gene effects. C. elegans has been used extensively for studying molecular mechanisms of innate immunity 24,25, but decades of growth on monoxenic cultures, typically of an Escherichia coli strain unable to colonize healthy worms, have left a gap in the understanding of its biology and its interactions with benign microbes. This is now changing. Studies of C. elegans interactions with different food bacteria provide insights into metabolic regulation and aging [26][27][28][29], and recent work defined a characteristic C. elegans gut microbiota and showed that its composition was conserved across different strains and geographical locations 13,30,31. Moreover, this composition bore functional significance for worms, with positive impact mainly on development and on immunity, provided typically by Pseudomonadaceae and Enterobacteriaceae bacteria, including host-specific contributions (reviewed in 32). Taking advantage of the availability of C. elegans mutants, we examined the contribution of host genes to shaping the C. elegans gut microbiota. RNAseq identified genes involved in digestion and in innate immunity as those upregulated during interactions with complex microbiotas. Analysis of mutants for genes central to these processes, using synthetic communities composed of previously isolated worm gut commensals and providing a defined environment, identified a role for Transforming Growth Factor (TGF)β/Bone Morphogenetic Protein (BMP) signaling in controlling bacterial abundance of Enterobacter commensals and in determining their contributions to the host.

Results

Genes modulated during interactions with complex microbiotas. RNAseq analysis was performed to identify C. elegans genes and processes involved in host-microbiota interactions, comparing gene expression in worms grown on complex environmental microbiotas to that in worms grown on E. coli. Two comparisons were performed: one in composted soil microcosms (autoclaved compost reconstituted either with the microbiota from an unautoclaved batch of the same soil, or with a saturated E. coli culture); the second on plates, seeded either with E. coli or with synthetic microbiotas prepared with equal portions of 30 C.
elegans gut isolates representing the main core microbiota families (SC1, see Methods). Analyses were performed in age-matched adult worms from synchronized populations; three independent populations were analyzed per group. Measurements were obtained for 28,555 unique RNA transcripts (measured in at least one sample), representing 18,873 genes (see Data availability). In worms raised on the synthetic community, 127 genes were significantly upregulated and 163 genes were significantly downregulated compared with worms raised on E. coli (false-discovery-rate-corrected q-value < 0.05, likelihood ratio test, Supplementary Data file 1). Enrichment was found among the upregulated genes for immune genes, as well as for hydrolases (peptidases in particular) (Fig. 1a, Supplementary Data file 2). This was despite the lack of any indication that SC1 included pathogens that compromised worm survival (Supplementary Fig. 1a), suggesting that elevated immune activity underlies normal host-microbiota interactions. Among the downregulated genes, only broad gene ontology (GO) terms, such as catabolic processes, were enriched. Many more genes were affected in worms raised in composted soil microcosms: 1269 genes were significantly upregulated and 1815 downregulated, compared with worms grown on E. coli-supplemented soil (Supplementary Data file 1). The larger number of differentially expressed genes in soil-grown worms compared with those raised on SC1 indicates that C. elegans does not respond to complex microbiotas in a stereotypical way and suggests that the extent of changes in gene expression may depend on microbial diversity. Among both upregulated and downregulated genes, we found enrichment for genes associated with developmental programs, and to a lesser degree (and specific for downregulated genes) with reproduction (Supplementary Data file 2). In agreement with this, gravid worms harvested from normal soil microcosms held fewer eggs in their uterus compared with those raised on E. coli, either on plates or in soil (one row of eggs versus two). Using SC1 plates, we followed worm development and reproduction more closely. Worms raised on SC1 started laying eggs 1-2 h earlier than worms raised on E. coli, but laid in total 25% fewer eggs (Supplementary Fig. 1b). This correlated with the identified expression trends, and suggested that exposure to a diverse microbiota modulated host development, resulting in a trade-off between development and fecundity. Another prominent trend emerging in microcosm-raised worms was enrichment for upregulated genes involved in response to external stimuli, many of which were immune genes (p = 5E-15, Bonferroni corrected, Supplementary Data file 2). This corroborated the trends observed in worms raised on SC1. The overlap between genes significantly upregulated on microcosm microbiotas or on SC1 included 25 genes (Table 1). The great majority of these (21/25) are reported to be expressed in the intestine (Wormbase [http://www.wormbase.org]). These included C-type lectins, saposins, peptidases, as well as enzymes involved in sphingomyelin metabolism, and were significantly enriched for genes associated with defense and immune responses (p = 6E-5, Bonferroni corrected) and for genes encoding hydrolytic enzymes (p = 4E-5), including several lysosomal enzymes (p = 2E-6), pointing to immune functions as a common denominator among host factors that interact with the microbiota.
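As a side note on the enrichment statistics quoted above: the hypergeometric test named in the Methods underlies this kind of overlap enrichment. The sketch below is illustrative only; apart from the 127 upregulated genes and the ~18,873 detected genes mentioned in the text, all counts (annotated genes, overlap size, number of categories tested) are assumed placeholders, and the original analysis used GoTermFinder rather than this code.

```python
from scipy.stats import hypergeom

# Background and draw sizes taken from the text; the annotation counts below
# are hypothetical placeholders chosen only to illustrate the calculation.
N_background = 18873   # genes detected in the RNAseq experiment
K_annotated = 350      # background genes carrying the annotation (assumed)
n_upregulated = 127    # genes upregulated on the SC1 community
k_overlap = 15         # upregulated genes carrying the annotation (assumed)

# P(X >= k): chance of drawing at least k annotated genes in n draws,
# without replacement, from a background of N containing K annotated genes.
p_raw = hypergeom.sf(k_overlap - 1, N_background, K_annotated, n_upregulated)

# Bonferroni correction over the number of categories tested (assumed).
n_categories = 200
p_corrected = min(p_raw * n_categories, 1.0)
print(f"raw p = {p_raw:.2e}; Bonferroni-corrected p = {p_corrected:.2e}")
```

The same calculation applies to the overlap between the 25 microbiota-upregulated genes and published target lists of immune regulators (discussed below), with the target list taking the place of the annotated set.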
Expression patterns for several of the genes were confirmed by quantitative (q)RT-PCR (see Supplementary Fig. 2). Among the genes upregulated by the exposure to diverse microbiotas were genes previously described to respond to environmental bacteria, specifically to a Comamonas isolate 33 found in similar environments, which is in agreement with previous reports of Comamonas species being part of the C. elegans gut microbiota 13,31.

Fig. 1 Genes affected by host-microbiota interactions and involved in shaping the gut microbiota. a Numbers of, and annotations enriched among, genes differentially expressed in worms raised on complex environmental microbiotas compared with those raised on E. coli (detailed in Supplementary Data files 1 and 2 and Supplementary Table 1). b-e Bacterial load in worms of the designated strains raised on the SC1 community (in pg 16S rDNA, see Methods): b All Eubacteria, c Enterobacteriaceae, d Pseudomonadaceae, e Bacillaceae. Shown are averages ± SD of 2-4 independent experiments. Measurements were performed on a pool of 30 worms per experiment. f Relative abundances of each group were calculated based on the values shown in a-d; light gray bars represent relative abundance of all 'other' bacterial groups not directly measured. *p < 0.05 (analysis of variance (ANOVA)) compared with N2. g Colony-forming units (CFUs) representing live bacteria extracted from worm guts, cultured and counted on Enterobacteriaceae-selective media plates (Ent) or on rich media plates (LB), which following subtraction of Ent stands for non-Enterobacteriaceae bacteria. Shown are averages ± SD of counts from four plates (n = 10 worms) per group from a representative experiment of four showing similar results. *p = 0.006, t-test.

Further in agreement with enrichment for immune genes, a significant overlap was observed between the 25 microbiota-upregulated genes and targets of central immune regulators, including the p38 pathway 34, the DAF-16/FOXO transcription factor 35, and TGFβ/BMP signaling 36 (Fig. 1a, Table 1 and Supplementary Table 1). However, how these regulators contributed to responses to, and interactions with, complex microbiotas was not immediately obvious, as their targets appeared also among downregulated genes. An exception to this broad overlap was the specific enrichment of p38 targets among genes upregulated on SC1, potentially associated with the presence of a Pseudomonas mendocina isolate previously shown to prime p38-dependent immune responses 37.

TGFβ/BMP signaling is involved in shaping the gut microbiota. Since immunity emerged as a common denominator in worm responses to complex microbiotas, and immune regulators as potential players in shaping these responses, we tested the significance of disrupting such regulators for the composition of the worm gut microbiota. Mutants examined included pmk-1(km25) and tir-1(qd4), disrupted for the p38 MAPK ortholog and its SARM adaptor, respectively 38; dbl-1(nk3) mutants, lacking the TGFβ/BMP-like ligand DBL-1 39; daf-16(mu86) mutants, disrupted for a FOXO transcription factor central for stress resistance, immunity, and longevity, along with mutants for one of its targets, the ctl-2 catalase 35; and tol-1(nr2033) mutants, disrupted for the sole toll receptor homolog in C. elegans 40. As a control, we examined tnt-3(ok1011) mutants, which are defective in grinding ingested bacteria, allowing more intact bacteria into the intestine 41.
Beyond factors that operate in the intestinal niche, we examined mutants for osm-6 and ttx-3, which are involved in feeding behavior and might therefore also affect microbiota structure 42. All worm strains were raised on plates with the SC1 synthetic community, and their gut microbiota size (total bacterial load) and composition were evaluated in the first day of adulthood, using qPCR with eubacterial and taxa-specific primers, calibrated to known quantities of a full-length amplicon of the 16S ribosomal RNA gene (rDNA) from the respective taxa (see Methods). Measured taxa included Enterobacteriaceae and Pseudomonadaceae, which are the most abundant families in the C. elegans gut, as well as the less common Bacillaceae, which are nevertheless part of the worm core gut microbiota 13. The tnt-3 control strain showed a significantly greater gut bacterial load than wild-type animals (Fig. 1b). This was expected, as it is impaired in the grinding of bacteria. In contrast, most tested mutants showed only small or insignificant changes in their total bacterial load. The exception was dbl-1 mutants, which demonstrated a threefold increase. This was associated with increases in the abundance of Enterobacteriaceae, and more variably, of Pseudomonadaceae (Fig. 1c, d). The abundance of Bacillaceae family members in wild-type or mutant animals was variable to the extent that no clear differences could be discerned (Fig. 1e). Although total bacterial load in tnt-3 mutants was much larger than in wild-type animals, microbiota composition, represented by the relative abundance of measured taxa, did not significantly change, indicating proportional increases in different microbiota members (Fig. 1f). A similar trend in taxa relative abundance was observed in feeding behavior mutants, suggesting that at least in the context of the plate microbiota, food preference was not a significant factor shaping the gut microbiota. In contrast, mutants for stress and immunity regulators, which had the same total bacterial load as wild-type animals (excluding dbl-1 mutants), showed significantly altered composition: pmk-1, tir-1, and daf-16 mutants showed relative expansion in taxa (one or more) for which we did not have specific primers, i.e., Comamonas, Aeromonas, or others (bundled as "other"), at the expense of the taxa that in wild-type animals are the dominant ones, Enterobacteriaceae and Pseudomonas (Fig. 1f). dbl-1 mutants, on the other hand, harboring an expanded gut microbiota, showed a significant increase in the relative abundance of Enterobacteriaceae. Both the increase in total bacterial load in dbl-1 mutants, and the particular expansion of Enterobacteriaceae, were confirmed with colony-forming unit (CFU) counts, evaluating the number of live bacteria in the intestine of wild-type and dbl-1 animals (Fig. 1g).

BMP/DBL-1 signaling specifically affects Enterobacter abundance. Focusing on the prominent effects of dbl-1 disruption, we examined whether microbiota expansion in dbl-1 mutants was a general trend in these mutants, or unique to the particular composition of SC1. To this end, we raised wild-type worms and dbl-1 mutants on a different synthetic community (SC2), in which approximately two-thirds of SC1 strains were replaced with distinct isolates, while keeping the same genera represented (Supplementary Table 2). Both on SC1 and on SC2, dbl-1 mutants showed a significantly greater bacterial load compared with wild-type worms (p < 0.001, Fig. 2a), as well as threefold more Enterobacteriaceae (Fig. 2b).
Furthermore, when grown in composted soil microcosms, providing natural-like microbial diversity, dbl-1 mutants showed only a modest increase in total bacterial load, but a significant twofold increase in the abundance of Enterobacteriaceae (Fig. 2c-f). Together, these observations support the notion that gut microbiota expansion in dbl-1 mutants, and in particular an increase in the abundance of Enterobacteriaceae, represented a general feature of these mutants. To determine whether any specificity could be discerned in the effects of dbl-1 disruption on the gut microbiota, we created modified versions of SC1 with reduced diversity. The first modified community configuration, SC1R, excluded all Pseudomonas isolates, as well as most of the Enterobacteriaceae (Supplementary Table 2). While dbl-1 mutants raised on SC1 have an expanded microbiota compared with wild-type animals, those raised on SC1R did not show this expansion (Fig. 3a), indicating that disruption of dbl-1 had somewhat restricted effects, allowing a bloom of Pseudomonas and/or Enterobacteriaceae, but not of other members of the community. In contrast, tnt-3 mutants raised on SC1R demonstrated an expanded microbiota similar to when raised on SC1, supporting the indiscriminate effects of defective grinding on microbiota expansion. A subtler modification of SC1, SC1R*, which excluded only Enterobacteriaceae species (though not all of them), also abolished the increase in microbiota size in dbl-1 mutants, further indicating specificity toward certain Enterobacteriaceae species. Finally, eliminating only Enterobacter isolates from the synthetic community (SC1R**), leaving in several other Enterobacteriaceae species (e.g., Escherichia sp., Buttiauxella sp.), was sufficient to abolish microbiota expansion in dbl-1 mutants, pointing at Enterobacter isolates as those affected by dbl-1 disruption. Supporting this, re-examination of the microbiota in dbl-1 mutants raised on SC1, using primers specific for the Enterobacter hsp60 gene, revealed an increase in the Enterobacter load in dbl-1 mutants that could account for the entire increase in bacterial load observed in these animals (Fig. 3b, as compared with Fig. 3a). The observed increase in Enterobacter abundance in dbl-1 mutants could be due to direct effects of host TGFβ/BMP signaling on Enterobacter colonization and/or proliferation, or caused indirectly by changes in microbial interactions affecting the balance between Enterobacter and its competitors. To examine which of the two alternatives is more likely, we raised worms on a tdTomato-expressing derivative of one of the SC1 isolates, a previously characterized Enterobacter cloacae commensal of C. elegans, CEN2ent1 (shortened here to CEent1) 18. Fluorescent microscopy of adult dbl-1 and wild-type animals demonstrated a significant increase in the abundance of these bacteria in dbl-1 mutants compared with wild-type animals (Fig. 3c). As no other bacteria are present to affect CEent1 abundance, these results indicate that dbl-1 disruption directly affected Enterobacter abundance.

Enterobacter bloom is not due to mutants' impaired development. TGFβ/BMP signaling involves ligand binding and activation of heterodimer receptors, downstream SMAD transcriptional regulators (Sma and Mothers against decapentaplegic homologs), and co-activators 43. In C. elegans, this pathway contributes to immunity, but more visibly to development, and all known mutants have small body size.
Although the smaller dbl-1 mutants showed a greater bacterial load than wild-type animals, we nevertheless wished to ascertain that this bloom was not caused (perhaps indirectly) by altered development. Disruption of either the TGFβ type I receptor gene, sma-6, or the R-SMAD (Receptor-regulated SMAD) gene, sma-3, led to an Enterobacteriaceae expansion, either slightly smaller than in dbl-1 mutants (sma-6) or larger (sma-3) (Fig. 4a-c). In contrast, mutants for the sma-9 co-activator, which are as small as dbl-1 mutants 44, showed no bacterial expansion whatsoever (Fig. 4a, b). Fluorescent imaging, using the tdTomato-expressing CEent1 to follow Enterobacter colonization, corroborated these results (Fig. 4d, e), further diminishing the likelihood that the Enterobacter bloom in TGFβ/BMP mutants was due to altered development. This in turn leaves impaired immunity as the more likely cause for the Enterobacter bloom. In line with this, while sma-6, sma-3, and dbl-1 are central immune regulators, sma-9 was reported to be redundant for immune gene induction 45.

TGFβ/BMP-exerted control acts primarily in the anterior gut. To examine the opposite case of dbl-1 disruption, we employed a strain overexpressing dbl-1 from an integrated genomic fragment (dbl-1 o/e) 39. As observed in animals raised to adulthood on tdTomato-CEent1, Enterobacter load in dbl-1 o/e animals was overall comparable to that in wild-type animals (Fig. 5a, b). However, the distribution of bacteria in the gut was very different: although wild-type animals showed prominent accumulation of Enterobacter in the anterior gut, this was mostly missing in dbl-1 o/e animals (Fig. 5a, b). This suggested that TGFβ/BMP signaling exerted its control over Enterobacter colonization/proliferation mainly in the anterior gut. DBL-1 is expressed primarily in neurons, but its receptors and downstream mediators are expressed in the epidermis, pharynx, and intestine 43. To gain insight into where TGFβ/BMP signaling might function to delimit Enterobacter accumulation, we took advantage of sma-3(wk30) mutants, which are heavily colonized with Enterobacter, and a panel of transgenic strains employing tissue-specific promoters to rescue sma-3 expression in different tissues 46. Expression of sma-3 from its endogenous promoter effectively restored accumulation of tdTomato-CEent1 to wild-type levels (Fig. 5c). Expression of sma-3 from the epidermal dpy-7 promoter also delimited Enterobacter accumulation, although not quite to wild-type levels, and expression from the pharyngeal myo-2 promoter showed Enterobacter accumulation to levels intermediate between those of wild-type and those in sma-3 animals. In contrast, worms with intestinal sma-3 expression (relying on the vha-6 promoter) showed Enterobacter accumulation that was not significantly different from that seen in sma-3 mutants. Similar trends were observed in measurements performed on the anterior gut or on its posterior parts (Supplementary Fig. 3). These results suggest multi-tissue contributions of TGFβ/BMP signaling to controlling intestinal Enterobacter, with the epidermis providing the more dominant input, whereas local TGFβ/BMP signaling in the intestine appeared to be mostly redundant.

TGFβ/BMP disruption turns an Enterobacter commensal pathogenic. CEent1 was previously shown to be beneficial for C. elegans, accelerating development compared with worms raised on the standard E. coli food, and enhancing resistance to the pathogen Enterococcus faecalis 18 (Fig. 6a).
A 4-h exposure of worms to the commensal late in development (at the L4 stage) was sufficient to increase resistance to subsequent infection (Supplementary Table 3). However, dbl-1 mutants developing on CEent1 (or exposed to it late in development) no longer showed enhanced pathogen resistance and instead were significantly more susceptible (Fig. 6b and Supplementary Table 3). sma-3 mutants showed an even greater susceptibility (Fig. 6c, d). Furthermore, sma-3 mutants raised on tdTomato-CEent1 and shifted late in development to the pathogen were still colonized 24 h after the shift (9 out of 24 examined) (Fig. 6f), and even 48 h after the shift, at which point colonization was further observed in three out of five cadavers. In contrast, wild-type animals showed no CEent1 colonization persisting after the shift. These results indicate that impaired TGFβ/BMP signaling results in a more persistent Enterobacter colonization, turning the normally beneficial commensal pathogenic. Interestingly, dbl-1 overexpressing worms, which were more resistant to E. faecalis to begin with, showed no significant increase in their resistance when initially exposed to CEent1 instead of E. coli (Fig. 6e).

Discussion

With the microbiome linked to many aspects of host health, understanding the interactions between a host and its microbes is imperative for the development of therapeutic strategies aiming to alter the microbiota. An important aspect of this is understanding how host genes determine abundance and function of gut commensals. This understanding has been slow to emerge, to some extent due to inter-individual variation in microbiota composition in vertebrate models, which masks gene effects. Here, we demonstrate the utility of C. elegans as a model organism facilitating the identification and characterization of such gene effects, including the distinction between effects on microbiota size (total microbial load) and microbiota composition. Using this model, and screening candidate regulators, we identified a role for TGFβ/BMP immune signaling in controlling the abundance of members of the genus Enterobacter, common inhabitants of the worm gut, which affect host development and immunity. TGFβ/BMP-exerted control was focused on restricting Enterobacter accumulation in the anterior gut, but full effects depended on contributions from several tissues. Disruption of TGFβ/BMP signaling resulted in an Enterobacter bloom, and turned an otherwise useful commensal pathogenic. Gut dysbiosis is typically considered a condition that enables invasion and proliferation of environmental opportunistic pathogens. Our results demonstrate that given impairment in host immunity and ensuing dysbiosis, pathogenicity can emerge directly from otherwise beneficial members of the gut community. Current understanding of how host genes and processes shape microbiota composition highlights the importance of mucosal structure and immunity. Genes identified as affecting abundance of gut bacteria include antimicrobial peptides, pattern recognition receptors, cytokines, and genes affecting IgA antibody production [47][48][49], as well as enzymes glycosylating barrier mucins, which provide sugars harvested by gut bacteria 50,51. Our RNAseq analysis indicated a similar trend in C. elegans, as worms interacting with a diverse microbiota had elevated expression of genes, mostly intestinal, which are associated with host immunity.
Some of these genes encode hydrolytic enzymes, which may serve immune functions, but could also be relevant for releasing nutrients from susceptible ingested bacteria for the benefit of resistant gut commensals. Mutant analysis supported the importance of host immunity in shaping the gut microbiota. Since many of the identified overexpressed genes are members of multi-gene families, implying potential functional redundancy, we focused instead on central upstream immune regulators. The contribution of immune regulators to microbiota composition was conspicuous, and in the case of dbl-1 mutants was recapitulated with different synthetic communities, as well as (with regard to the main feature of Enterobacteriaceae expansion) in worms raised in a natural-like compost microcosm. In contrast, disruption of daf-16, which encodes a general stress resistance regulator and contributes also to immunity, had no effect on microbiota composition 52. Feeding behavior may also be thought to affect microbiota composition. However, disruption of osm-6 and ttx-3 did not show any effect. This cannot completely rule out involvement of food sensing and feeding behavior in determining microbiota composition, as the synthetic microbiota used is solely made of gut isolates (i.e., commensals) and is thoroughly mixed; under more natural conditions regulators of feeding behavior may contribute by identifying pockets of desirable bacteria in a non-homogeneous environment. Nevertheless, the finding that impaired grinding in tnt-3 mutants affected microbiota size but not composition supports an important role for post-feeding mechanisms, likely those defining the intestinal niche, in shaping microbiota composition. TGFβ/BMP signaling regulates diverse processes in C. elegans, including growth, male tail development, and immunity 43. Our results point to its roles in immune regulation as those affecting gut bacterial load. To date, not much is known about how TGFβ/BMP signaling controls gut commensals. Our results indicate that in controlling commensal colonization it relied on multi-tissue contributions, primarily from the epidermis, but also from the pharynx. Contributions of TGFβ/BMP signaling in the intestinal niche itself were marginal at best (although it seemed to affect Enterobacter accumulation in the posterior gut more than it did in the anterior, see Supplementary Fig. 3). This suggests that TGFβ/BMP signaling may affect the Enterobacter niche indirectly, potentially involving an additional signaling pathway that targets the gut. This mode of regulation may also apply to TGFβ/BMP involvement in resistance to intestinal pathogens. At the same time, results from experiments with dbl-1 o/e worms suggest that the Enterobacter niche in question is the anterior gut, as overexpression of the ligand and the presumed over-activation of the pathway prevented Enterobacter accumulation particularly in that region. It is further possible that colonization of this region is essential for the protective effects of Enterobacter, explaining the inability of CEent1 to enhance infection resistance in dbl-1 o/e animals. The role of immunity in shaping the gut microbiota may be thought to be nonspecific, especially considering the reliance of C. elegans solely on innate immunity (as in all invertebrates). If so, disruption of any immune pathway will cause a relatively indiscriminate proliferation of gut commensals.
However, this was not the case, as bacterial proliferation was selective and specific to the disrupted pathway: disruption of TGFβ/BMP signaling led to blooming only of Enterobacter species, not affecting other members of the microbiota, including other isolates of the Enterobacteriaceae family; and disruption of p38 signaling did not affect the abundance of major examined taxa (Pseudomonadaceae and Enterobacteriaceae), and only caused an increase in relative abundance of other taxa, yet to be defined. It might be speculated that such specificity depends on the profile of immune effectors regulated by each of the pathways, and differential susceptibility of different gut microbes to components of these profiles. TGFβ signaling is highly conserved, regulating development and immunity also in vertebrates 55. Whereas TGFβ/BMP signaling was shown to be associated with immune responses both in Drosophila and in vertebrates 56,57, it is the role of TGFβ signaling in vertebrate T-cell differentiation and mucosal homeostasis that is better known 58. Of particular relevance is the role of TGFβ signaling in production of IgA antibodies, which delimit microbiota proliferation in the gut 59. Early results showed that TGFβ1 deficiency correlated with ulcerative colitis-associated colon cancer, and studies in mice suggested that colitis and cancer might depend on gut microbes 60. More recently, it has been shown that deficiency in TGFβ signaling in intestinal dendritic cells led to changes in epithelial structure, Enterobacteriaceae-driven dysbiosis, and colitis 61. Together, these reports describe a central role for TGFβ signaling in controlling gut microbial proliferation. Our results in C. elegans suggest that this role is conserved, even without specialized immune cells or antibodies, and further suggest particular importance, with ancient origins, for TGFβ signaling in controlling gut commensals of the Enterobacteriaceae family.

Methods

Worm growth in soil and harvesting. Fresh local soil was supplemented with chopped over-ripe apples. The soil-produce mixture was allowed to decompose for 2 weeks in the lab, cleared of garden-variety worms by autoclaving, and reinoculated with a microbial extract from the original batch of unautoclaved soil 24 h prior to addition of worms (extracts were the supernatants (1800 rpm) from soil resuspended and vortexed in M9 salt solution) 13. Initially germ-free L1 larvae, obtained by bleaching gravid worms to release eggs and hatching them on NGM plates without food, were transferred to soil and grown at 25°C for 3 days. One batch of prepared soil was split into separate 20 mL glass beakers (5 g per vial), and independent worm populations were raised in each (four replicates per genotype). Approximately 100-200 gravid worms were harvested from each population using a Baermann funnel lined with two layers of tissue paper, washed extensively (6×), and surface sterilized on plates containing 100 μg/ml gentamicin prior to DNA extraction, using the MO BIO PowerSoil DNA isolation kit (#12888) 13.

Synthetic community and worm growth. Bacteria were previously isolated from worms grown in soil, and identified by 16S sequencing or multi-locus sequencing 13,18,37. Enterobacteriaceae isolates were cultured on an Enterobacteriaceae-selective medium (Violet Red Bile Glucose, VRBG), whereas all others were cultured on Lysogeny Broth (LB) agar plates. A total of 30 isolates were selected for the synthetic community, representing most core taxa of the previously characterized C.
elegans gut microbiota taxa 13 (Supplementary Table 2). Several configurations of this community were used, including SC1 and SC2, which include the same genera, but differ in the specific strains used; SC1R resembles SC1, but lacks Enterobacteriaceae and Pseudomonas isolates; SC1R* lacks most of the Enterobacteriaceae isolates; and SC1R** contains all the SC1 isolates, except for the Enterobacter isolates. For each microbiota, isolates were grown in 1 mL of LB broth at 37°C overnight with shaking. Saturated cultures were combined in equal proportions, concentrated 10×, and seeded onto NGM plates approximately 30 min before the addition of eggs, obtained from bleaching gravid worms. For each worm strain, 2-3 independent populations of worms, synchronized at L1, were grown on the synthetic microbiotas at 25°C for 3 days. Adult worms were washed, surface sterilized, and used for DNA extraction, as described for worms raised on soil.

DNA extraction. For each plate, 30 washed worms were transferred to a lysis buffer of a constant volume (per 30 worms: 6 μl of 10X PCR buffer (Invitrogen #11304011), 3.6 μl of Proteinase K (Fisher Scientific #EO0491), and 50.4 μl of PCR-grade water), and were lysed at 60°C for 1 h, followed by a 15-min incubation at 95°C to inactivate the proteinase K. Samples were stored at -20°C until use. To rule out measurement biases introduced by the DNA extraction method, DNA was extracted from synthetic communities mixed in different ratios either using the proteinase K protocol, or the MO BIO PowerSoil DNA isolation kit (#12888). Comparisons (demonstrating similar measurements of microbiota composition) are shown in Supplementary Fig. 4.

RNAseq library preparation and analysis. Synchronized worm populations, initiated with eggs obtained from bleached gravid worms, were grown either on NGM plates seeded with E. coli, on NGM plates seeded with the synthetic community, in autoclaved soil supplemented with E. coli, or in composted soil, as described above (three independent populations per condition). RNA was extracted from gravid worms using a modified CTAB protocol with aluminum ammonium sulfate and PEG precipitation 62, followed by purification with the Qiagen AllPrep kit (#80204) to separate RNA from DNA. Sequencing libraries were prepared from total RNA using the Illumina TruSeq RNA Library Kit v2 (RS-122-2001), which synthesizes complementary DNA (cDNA) from mRNA fragments using random primers, and provides 24 indexed adapters for multiplex sequencing. Paired-end sequencing was performed on an Illumina HiSeq4000, generating 100 base-pair reads, from which adapter sequences, as well as low-quality reads, were removed prior to analysis, providing 30,270,407 reads/sample on average. Both forward and reverse sequences were used for analysis, employing Kallisto to identify and quantify transcripts, and Sleuth to identify differentially expressed genes, using a false discovery rate-corrected p-value (q-value) cutoff of 0.05, as previously described 63. GoTermFinder was used to identify enrichment among identified genes for representatives of annotated processes or functions. Enrichment for gene targets of various regulators was calculated using the hypergeometric test.

RNA extraction and qRT-PCR. RNA was extracted from 100 to 200 worms per group using Trizol (Invitrogen).
Genomic DNA contamination was removed with Turbo DNase (QIAGEN), cDNA was synthesized using iScript (Bio-Rad), and quantitative real-time PCR was carried out using Bio-Rad's SsoAdvanced Universal SYBR Green Supermix on a StepOnePlus system (Applied Bio). Ct values were normalized to the respective actin values for each sample, and are presented as fold change over the wild-type worm expression value. See Supplementary Table 4 for primer sequences.

Quantifying bacterial load (microbiota size) and microbe abundance. Quantitative PCR (qPCR) was used to estimate taxa abundance by quantifying their respective DNA (in pg). Eubacterial 16S rDNA was quantified using the conserved primers 806f and 895r (Supplementary Table 4), and taxa-specific primers were used to quantify Enterobacteriaceae 16S (Ent_MB_F and R), Pseudomonadaceae 16S (Pse435F, Pse686R), and Bacilli 16S (BacilliF, BacilliR) in DNA extracts from worm samples obtained either using standard proteinase K-based lysis, or PowerSoil, as described above. In addition, taxa-specific primers targeting the Hsp60 gene were used to quantify Enterobacter cloacae (Ent-Hsp60f, Ent-Hsp60r). Cycling parameters: 95°C for 5 min; 45 cycles of 95°C for 15 s, 60°C for 30 s, 72°C for 15 s [30 s for Bacilli]; 72°C for 5 min. Specificity of each primer pair used was confirmed by PCR on each individual member of the synthetic communities, further ensuring no amplification of C. elegans mitochondrial DNA, which can be amplified with some 16S primers. All measurements were performed in duplicate with 1 μl of DNA extract as template, prepared from 30 worms in 60 μl of lysis buffer (for worms grown on synthetic microbiotas) or from 100 worms in 30 μl (DNA extraction from worms grown on soil), to enable comparison of values between worm populations. DNA quantities were estimated using standard curves created for each taxon from qPCR measurements of 10-fold serial dilutions of an appropriate full-length 16S bacterial rDNA in known concentrations, ranging from 25 ng to 0.2 pg/μl (amplified using the same primers as described above); E. coli 16S was used for calibrating both eubacterial and Enterobacteriaceae measurements, Pseudomonas mendocina 16S for Pseudomonadaceae measurements, Bacillus subtilis 16S for Bacilli measurements, and a 500-bp region of the Hsp60 gene from Enterobacter cloacae for Enterobacter Hsp60 measurements. Relative abundance was estimated by dividing the quantity of DNA from each bacterial group by the quantity measured with the universal primers (a worked numerical sketch of this conversion is given at the end of the Methods). The ability to estimate relative abundance of bacterial families in a gut community was validated by measuring abundance of members of the different families in plated synthetic communities prepared with different ratios of members of these families. Measured relative abundances reflected the expected mix ratios and were unaffected by the method used for DNA extraction (Supplementary Fig. 4).

CFU counts. Following harvesting and washing, worms were ground (10 worms in 250 μl of M9), releasing live bacteria, which were serially diluted. Aliquots were plated onto either LB or VRBG plates, which were incubated at 37°C overnight before counting.

Fluorescently tagged Enterobacter cloacae. A plasmid that constitutively expresses tdTomato from the Enterobacter cysB promoter (fragment 132a) 64, with kanamycin resistance as a selection marker, was constructed at the UC Berkeley MacroLab. The plasmid was introduced into a C.
elegans commensal, CEN2ent1, which is naturally ampicillin resistant 18, using a triparental mating approach 65. Briefly, the tdTomato plasmid was transformed into competent DH5α E. coli cells using a standard heat shock protocol. CEN2ent1, DH5α-tdTomato, and the DH5α E. coli helper strain, which harbors the pRK2073 F conjugative plasmid with streptomycin resistance, were separately streaked out onto LB plates with either ampicillin, kanamycin, or streptomycin, respectively, and incubated at 37°C overnight. Cells were scraped off from each plate, mixed together on a new LB plate, and incubated overnight at 37°C. Colonies were streaked onto a new LB plate with ampicillin and kanamycin to select for CEN2ent1-tdTomato cells. The isolate was confirmed to not have the conjugative plasmid by testing for streptomycin sensitivity and was confirmed to be CEN2ent1 by sequencing of the full-length 16S gene.

Imaging. CEN2ent1-tdTomato was grown overnight in LB broth with 100 μg/ml kanamycin. Cells were concentrated 10× and seeded onto NGM with 100 μg/ml kanamycin before eggs, obtained by bleaching, were added. After 3 days at 25°C, worms were washed off the plates, and washed three times more with M9 buffer to remove external bacteria. Fluorescent images were captured using a Leica MZ16F equipped with a QImaging MicroPublisher 5.0 camera. Worm colonization was scored based on degree of colonization: "high colonization", for colonization throughout the intestine; "moderate colonization", for colonization in no more than half the length of the intestine; "light colonization", for faint colonization; or "no colonization". Alternatively, colonization was quantified using ImageJ, by drawing a line selecting the gut area for each worm and quantifying background-subtracted average intensity in this area. When relevant, signal intensities in the anterior and posterior parts of the intestine were quantified separately.

Survival analyses. Worms were exposed to CEent1 (or to control E. coli OP50) from the egg stage until the L4 larval stage, or shifted to such plates as L4 larvae for a 4-h exposure. Subsequently, worms were transferred to plates (Brain Heart Infusion agar, 20 µg/mL gentamicin) pre-plated with the pathogen Enterococcus faecalis strain V583 66. Dead worms were counted to assess infection resistance. All experiments were carried out at 25°C.

Statistical analyses. Comparisons of worm fecundity, qRT-PCR measurements, and CFU counts were evaluated using two-sided Student's t-tests. qPCR measurements of microbiota size/total bacterial load and taxa-specific bacterial abundance were evaluated using a one-way analysis of variance (ANOVA) for repeated measures, to account for within-experiment replication (lme function from the nlme R package), with host genotype as a fixed effect and within-experiment replication as a random effect 67. Differences in worm colonization by CEent1::GFP were evaluated using ANOVA (aov function in R) followed by Tukey's Honest Significant Difference (HSD) post hoc test when all comparisons were relevant, or a linear model when one condition could serve as a reference for comparisons. Kaplan-Meier analysis was used to evaluate survival experiments, followed by a two-sided log-rank test, or a two-sided Wilcoxon test, which assigns greater weight to early time points, representing the bulk of the population.

Reporting summary. Further information on experimental design is available in the Nature Research Reporting Summary linked to this article.
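To make the survival comparison concrete, the following is a minimal sketch of a Kaplan-Meier estimate and a two-sided log-rank test of the kind described under Statistical analyses. It uses the Python lifelines package rather than the R tools used in the study, and the strain labels, survival times, and event flags are illustrative placeholders, not measured data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival records: hours until death on E. faecalis, with
# event=1 for observed deaths and event=0 for censored worms.
wt   = pd.DataFrame({"hours": [48, 60, 72, 72, 84, 96], "event": [1, 1, 1, 1, 1, 0]})
dbl1 = pd.DataFrame({"hours": [24, 36, 36, 48, 48, 60], "event": [1, 1, 1, 1, 1, 1]})

# Kaplan-Meier survival curves for each group.
kmf = KaplanMeierFitter()
kmf.fit(wt["hours"], event_observed=wt["event"], label="N2 (wild type)")
ax = kmf.plot_survival_function()
kmf.fit(dbl1["hours"], event_observed=dbl1["event"], label="dbl-1(nk3)")
kmf.plot_survival_function(ax=ax)

# Two-sided log-rank comparison of the two survival curves.
res = logrank_test(wt["hours"], dbl1["hours"],
                   event_observed_A=wt["event"], event_observed_B=dbl1["event"])
print(f"log-rank p = {res.p_value:.3g}")
```

A Wilcoxon-type weighting of early time points, as mentioned in the text, would be obtained with a weighted variant of the same test; the choice matters when most deaths occur early in the experiment.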
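Returning to the quantification described under "Quantifying bacterial load (microbiota size) and microbe abundance", the sketch below illustrates the standard-curve logic: Ct values from 10-fold serial dilutions define a line in Ct versus log10(template), which is inverted to convert sample Ct values into pg of 16S rDNA, and relative abundance is then the taxon-specific quantity divided by the eubacterial quantity. All Ct values and quantities are assumed for illustration and are not the study's measurements; the study also built a separate standard curve per taxon, whereas a single curve is shown here for brevity.

```python
import numpy as np

# Standard curve: 10-fold serial dilutions of a full-length 16S amplicon (pg)
# and the Ct values measured for them (assumed values).
std_pg = np.array([25000, 2500, 250, 25, 2.5, 0.25])
std_ct = np.array([11.2, 14.6, 18.0, 21.4, 24.9, 28.3])

# Linear fit of Ct against log10(template quantity).
slope, intercept = np.polyfit(np.log10(std_pg), std_ct, 1)

def ct_to_pg(ct):
    """Invert the standard curve: Ct -> template quantity in pg."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical sample Ct values from one 30-worm lysate.
eub_pg = ct_to_pg(19.5)          # universal eubacterial primers
ent_pg = ct_to_pg(21.0)          # Enterobacteriaceae-specific primers

# Relative abundance: taxon-specific signal as a fraction of the total.
rel_abundance = ent_pg / eub_pg
print(f"total ~{eub_pg:.1f} pg, Enterobacteriaceae ~{ent_pg:.1f} pg "
      f"({rel_abundance:.0%} relative abundance)")
```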
Data availability. All relevant data are available from the corresponding author. RNAseq raw data are available at: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE97934.

Received: 21 March 2018. Accepted: 20 December 2018.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.